The metal of the keyboard is cold, the weight of the mouse is uniform, and the display emits a constant glow that never turns off. It is a material support, a physical substrate, a point of contact between thought and execution. But there is no trace of a human gesture. No finger moving, no wrist tensing. The system is running, but there is no direct input. The software development process has reached a breaking point: it is no longer an activity of creation, but an operation of supervision. This silence is not a void, but a signal. It is the symptom of a profound architectural change, not just a tool update.
The phenomenon is not a gradual evolution. It is an explosion of automation that has shifted the center of gravity of the process from the writer to the manager. The trigger is a statement by Andrej Karpathy, former head of AI at Tesla and creator of nanoGPT, who says he hasn’t written code since December. Not because he doesn’t know how, but because AI agents have taken over the task. This is not an isolated case, but a signal of a wave sweeping the entire industry. The transformation is not just technical: it is cognitive. Engineering thinking is shifting from a production model to a governance model.
Architecture of Synthetic Thought
The system is no longer a set of tools, but an ecosystem of agents that select, mutate, and symbiotically integrate. The key event is the publication of Karpathy’s AutoResearch code: 630 lines of Python that allow an AI agent to design, execute, and interpret machine learning experiments without human intervention. This is not a tool, but a trained instance that acts as an autonomous cognitive system. The mechanism is clear: the agent not only executes, but decides which experiments to conduct, which data to collect, which models to evaluate. The process of natural selection of models occurs in real time, with mutations (fine-tuning) occurring at a non-human speed.
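The loop described above can be sketched in a few lines. This is a minimal illustration of the design–execute–interpret–select cycle, not Karpathy’s actual code: the objective function, hyperparameters, and mutation rules are hypothetical stand-ins for a real training-and-evaluation pipeline.

```python
import random

def run_experiment(config):
    # Stand-in for a real training run: score a hyperparameter
    # configuration with a toy objective. A real agent would train
    # and evaluate a model here.
    lr, width = config["lr"], config["width"]
    return -(lr - 0.01) ** 2 - 0.001 * abs(width - 256)

def mutate(config):
    # "Mutation" step: perturb one hyperparameter at random.
    child = dict(config)
    if random.random() < 0.5:
        child["lr"] *= random.choice([0.5, 2.0])
    else:
        child["width"] = max(16, child["width"] + random.choice([-64, 64]))
    return child

def autonomous_loop(seed_config, generations=20):
    # Design -> execute -> interpret -> select, with no human in the loop.
    best, best_score = seed_config, run_experiment(seed_config)
    for _ in range(generations):
        candidate = mutate(best)           # design the next experiment
        score = run_experiment(candidate)  # execute it
        if score > best_score:             # interpret the result and select
            best, best_score = candidate, score
    return best, best_score

random.seed(0)
config, score = autonomous_loop({"lr": 0.1, "width": 128})
```

The point of the sketch is structural: once the loop closes on itself, the human contribution is reduced to choosing the seed configuration and the scoring function.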
Latency is no longer a performance issue, but a control factor. When an agent decides to run an experiment, the response time is measured in milliseconds, not hours. Memory is no longer a physical limitation, but an access bottleneck. The cognitive architecture has shifted from a sequential model to a parallel one, where multiple agents operate simultaneously on different fronts. Power consumption is no longer a marginal cost, but an indicator of the thermodynamic efficiency of the system. The system is no longer a human creation, but an organism that reproduces and adapts.
The Imperfect Symbiosis
The market, politics, and society seek to interact with this architecture, but their expectations are inconsistent with the technical reality. While AI agents are rewriting software, institutions are trying to govern them with rules that presuppose a human agent. The attempt to apply traditional models of accountability to systems that have no human intentionality is an illusion. As Luciano Floridi puts it, algorithms are not intelligent the way we are, and the open question is what rules are needed to govern them. The question is not whether the agents are ethical, but whether the governance system is compatible with their nature.
“AI agents are rewriting how software gets built; he hasn’t typed a line of code ‘probably since December.’”
This statement, made by Karpathy, is not just a technical assertion. It is a declaration of rupture. Language is no longer a means of expression, but a communication protocol between systems. Code is no longer a product, but an input. The human role is no longer that of a producer, but of a manager. The structural tension emerges: as agents evolve, control structures remain static. The system is no longer governed by an engineer, but by a set of rules that cannot keep pace with the speed of automation.
Scenarios and Conclusion
The next election cycle is not the moment of relevance. The moment of relevance is the next hardware iteration. When an AI agent system is able to design and implement a new software development system without human intervention, the paradigm will have changed completely. The role of the engineer will no longer be to write code, but to define the constraints, limits, and evaluation criteria. The ability to control will no longer be linked to technical knowledge, but to the ability to define the environment in which the agent operates.
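What "defining constraints and evaluation criteria" might look like in practice can be sketched as a gate the agent's output must pass. The checks below are invented examples; the shape of the idea is the engineer writing the acceptance rules rather than the code itself.

```python
def within_constraints(artifact, constraints):
    # The engineer's role reduced to its core: the agent's output
    # is accepted only if every named check passes.
    return all(check(artifact) for check in constraints.values())

# Hypothetical constraints an engineer might impose on generated code.
constraints = {
    "max_lines": lambda code: code.count("\n") < 500,
    "no_eval":   lambda code: "eval(" not in code,
}

ok = within_constraints("print('hello')\n", constraints)
```

Control, in this framing, is exercised through the dictionary of checks, not through the keystrokes that produce the artifact.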
The gap between narrative and real infrastructure is not an error. It is a strategic choice. The narrative of innovation, of total automation, is an illusion that serves to mask the loss of control. The system is no longer in the hands of humans, but of agents. And this is not a security issue, but an identity issue. When the last engineer doesn’t write code, not because they can’t, but because they don’t need to, human thought has not been replaced: it has been transformed. The silence of the code is not a void. It is the voice of a new order.
The texts are processed autonomously by Artificial Intelligence models