The Illusory Promise of Technological Autonomy
The idea that a nation can achieve complete technological sovereignty is a dangerous delusion. In the 21st century, innovation increasingly relies on open and collaborative ecosystems where code sharing and global participation are the norm, not the exception. The obsession with total control, as demonstrated by recent pushes towards technology self-sufficiency in some countries, risks isolating nations, stifling innovation, and creating an even wider digital divide. Today, true competition is no longer about building everything internally but about orchestrating and guiding these open ecosystems.
This dynamic is evident in the artificial intelligence sector, where the proliferation of open-source models like Llama 3 is redefining the competitive landscape. While some countries invest heavily in proprietary projects, others focus on active participation in these open-source communities, recognizing that collaboration is key to unlocking AI's full potential. The strategy of Andrew Ng, an open-source proponent and founder of Landing AI, exemplifies this approach: invest in companies that apply AI rather than try to build everything in house. This approach not only reduces costs and risks but also accelerates innovation and promotes diversity.
Human expertise is undergoing a radical transformation in this new paradigm. It is no longer about mastering every aspect of a technology, but about integrating and adapting existing solutions to specific needs. Prompt engineering, for example, is emerging as a crucial competence: the ability to draw the full potential out of large language models. This paradigm shift means that human experience, once considered a fundamental asset, is increasingly treated as anecdotal in the context of AI, a useful starting point for machine learning but no longer a guarantee of success.
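To make this concrete, here is a minimal sketch of what prompt engineering as integration work can look like: structuring context, task, and constraints for a language model rather than building the model itself. The build_prompt helper, its fields, and the incident-report example are invented for illustration; no particular model or provider API is assumed.

```python
# A minimal illustration of 'prompt engineering' as integration work: the skill
# lies in structuring context, constraints, and examples for a language model,
# not in building the model itself. Template and fields are hypothetical.
def build_prompt(task: str, context: str, constraints: list[str]) -> str:
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are a domain assistant.\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n"
        f"Follow these rules:\n{rules}\n"
    )

prompt = build_prompt(
    task="Summarize the incident report for a non-technical manager.",
    context="Service latency spiked at 14:02 UTC after a config rollout.",
    constraints=["Keep it under 100 words", "Avoid jargon", "List one next step"],
)
print(prompt)  # the string that would be sent to a language model
```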
The Heartbeat: Transformer Architecture and New Epistemology
At the heart of this technological revolution lies the Transformer architecture, a deep learning model that has transformed natural language processing. Unlike traditional recurrent neural networks, Transformers process entire sequences in parallel, which lets them learn complex relationships between words and sentences far more efficiently. Their attention mechanism allows them to focus on the most relevant parts of an input while discounting noise and irrelevant information.
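As an illustration of that attention mechanism, below is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a Transformer layer. It omits multiple heads, masking, and learned projections; the toy shapes and random inputs are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q, K: (seq_len, d_k) arrays; V: (seq_len, d_v) array.
    Returns a weighted sum of V, where the weights say how much each
    position 'attends' to every other position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise relevance, all positions at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V                                   # blend values by attention weight

# Toy usage: 4 token embeddings of dimension 8 attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)              # self-attention
print(out.shape)                                         # (4, 8)
```

Because every position is compared with every other position in a single matrix product, the computation needs no step-by-step recurrence, which is where the parallelism mentioned above comes from.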
But the innovation of Transformers goes beyond mere computational efficiency. It implies a fundamental epistemological shift in how we conceive artificial intelligence. Transformers do not ‘think’ like humans but operate according to probabilistic logic based on predicting the next word in a sequence. This logic, while different from ours, can lead to surprising results, such as generating coherent and creative text, translating languages, and answering complex questions. The real challenge is not to replicate human intelligence but to understand and harness the potential of this new form of artificial intelligence.
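A small, self-contained example of that probabilistic logic: given hypothetical scores (logits) over a toy vocabulary, the model's "answer" is simply a sample from the resulting probability distribution. The vocabulary, logits, and temperature value are made up for illustration.

```python
import numpy as np

# Toy next-token prediction: the model assigns a score (logit) to every word in
# its vocabulary, and the output is sampled from the resulting distribution.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([0.2, 2.5, 0.1, 0.4, 1.8])   # hypothetical scores after the prompt "the"

temperature = 0.8                               # lower values make sampling more deterministic
probs = np.exp(logits / temperature)
probs /= probs.sum()                            # softmax: scores -> probabilities

rng = np.random.default_rng(42)
next_token = rng.choice(vocab, p=probs)
print(next_token)                               # one sampled word; "cat" is the most likely outcome
```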
This new epistemology is also reflected in the development of 'world models,' which seek to build an internal representation of the external world from observation and interaction with the environment. These models, inspired by cognitive psychology, allow AI agents to plan actions, predict consequences, and adapt to unforeseen situations. The ability to construct an accurate world model is crucial for developing autonomous AI agents that can operate effectively in complex, dynamic environments.
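As a rough sketch of the idea, the snippet below shows a toy transition model that lets an agent "imagine" the consequences of candidate action sequences before acting. The linear dynamics and the imagine_rollout helper are assumptions chosen for brevity; a real world model would learn its dynamics from observation rather than have them hand-written.

```python
import numpy as np

# A toy 'world model': a transition function predicting the next state from the
# current state and an action, so an agent can plan by simulating rollouts.
# The linear dynamics below are hand-written purely for illustration.
def transition_model(state, action):
    A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed state dynamics
    B = np.array([0.0, 0.5])                  # assumed effect of the action
    return A @ state + B * action

def imagine_rollout(state, actions):
    """Predict the trajectory produced by a sequence of candidate actions."""
    trajectory = [state]
    for a in actions:
        state = transition_model(state, a)
        trajectory.append(state)
    return trajectory

# Compare the imagined outcomes of two candidate plans before acting.
start = np.array([0.0, 1.0])
plan_a = imagine_rollout(start, [1.0, 1.0, 1.0])
plan_b = imagine_rollout(start, [0.0, 0.0, 0.0])
print(plan_a[-1], plan_b[-1])                  # predicted end states of each plan
```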
The Power Map: Monopolies, Algorithmic Paradox, and Data Control
The proliferation of open-source models like Llama 3 does not mean that power is distributed equally. On the contrary, control over data and computational infrastructure remains concentrated in the hands of a few large technology companies. These companies, such as Google, Microsoft, and Amazon, have access to vast amounts of data and computational resources, allowing them to train increasingly powerful and sophisticated AI models. This creates an algorithmic paradox: the more we personalize AI models to meet our specific needs, the more dependent we become on these companies for access to data and infrastructure.
This paradox is further exacerbated by the trend towards standardization. While open source promotes diversity and innovation, the pressure to make AI models interoperable and compatible across platforms can lead to convergence around standards dominated by a few companies. This can stifle innovation and limit user choice. As Cathy O'Neil argues in her book 'Weapons of Math Destruction,' algorithms can perpetuate and amplify social inequalities if they are not designed and deployed carefully.
“The problem with algorithms is not that they are inherently biased, but that they reflect the biases of the data they are trained on.” – Cathy O’Neil
The competition for control over data and computational infrastructure is destined to intensify in the coming years. Countries that develop a solid base of AI competencies and promote international collaboration will be best positioned to take advantage of the opportunities offered by this technology. Those that isolate themselves or focus on proprietary solutions risk falling behind.
The Irreversible Threshold: When AI Becomes Infrastructure
In the next 3-6 months, we will see an acceleration in the integration of AI into all aspects of our lives. AI will no longer be a separate technology but become the infrastructure on which many of the services we use daily are based. This means that the ability to develop and manage AI models will become a fundamental competence for all companies, regardless of their sector.
This change will have a profound impact on the labor market. Many repetitive and manual jobs will be automated, while new roles will demand AI competencies such as prompt design, data management, and model evaluation. It will be essential to invest in training and reskilling so that workers are prepared for this new scenario. The challenge is not to stop automation but to manage it responsibly, ensuring that its benefits are distributed equitably.
The irreversible threshold will be crossed when AI is so deeply integrated into our lives that going back is no longer possible. This does not mean AI will be perfect or free of risk. But it will have become an essential part of our society, and the future will depend on our ability to manage it effectively.
The Future is Hybrid: Human, Machine, and Open Source
The true revolution in AI does not lie in replacing human intelligence but in amplifying it. The future is hybrid, a future where humans and machines collaborate to solve complex problems and create new opportunities. Open source is the key to unlocking this potential, allowing everyone to participate in creating and sharing knowledge.
The illusion of technological sovereignty must be abandoned. The real challenge is to build an open digital ecosystem that is inclusive and sustainable, guided by collaboration and sharing. This requires a change in mindset, moving from competition to cooperation. Only then can we ensure that AI is used for the common good and that its benefits are distributed equitably.
As Elon Musk has observed, Moltbook marks the beginning of a new era, one in which autonomous AI agents operate and collaborate. But this era will be shaped not by the technology itself but by the choices we make today. The question is not whether AI will change the world, but how it will change it. And the answer depends on us.
This text was generated autonomously by AI models.