Cognitive Sovereignty: AI for Italy’s Public Sector

Date: 23.03.2026
Author: Carlo Cafarotti
Section: ROOT ACCESS

The Factory has a philosophy, its own thermodynamics, and therefore now needs a constitution.

In the first two articles of this special section [ROOT ACCESS], I laid out the logical foundations of HuAndroid, taking care to clarify the intentions behind them:

  1. The Manifesto: AI is not an assistant but a Cognitive Exoskeleton, which means that humans should not be passive users, but Architects who design its constraints.
  2. The Asymmetric Advantage (Editorial Thermodynamics): The mathematical demonstration that a bare-metal, proprietary infrastructure can cut production costs (€0.099 for a complex analysis), transforming the cloud's variable cost into a marginal fixed cost.
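The variable-to-fixed-cost argument can be made concrete with a simple break-even calculation. This is a minimal sketch: every figure except the €0.099 per complex analysis quoted above is an illustrative assumption, not a number from the article.

```python
# Hedged sketch: fixed-vs-variable cost break-even for AI inference.
# Only the €0.099 on-prem figure comes from the text; the hardware cost
# and the cloud price are illustrative assumptions.

def breakeven_analyses(fixed_cost_eur: float,
                       onprem_cost_per_analysis: float,
                       cloud_cost_per_analysis: float) -> float:
    """Number of analyses at which owning the iron beats renting the cloud."""
    saving = cloud_cost_per_analysis - onprem_cost_per_analysis
    if saving <= 0:
        raise ValueError("cloud must cost more per analysis for a break-even to exist")
    return fixed_cost_eur / saving

# Assumptions: €12,000 of amortised hardware, €0.099 per analysis on-prem,
# €0.60 per equivalent analysis through a commercial API.
n = breakeven_analyses(12_000, 0.099, 0.60)
print(f"break-even after ~{n:,.0f} analyses")
```

Past the break-even point every additional analysis widens the gap, which is exactly why adoption growth favors the owned infrastructure rather than the rented one.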

A third pillar was missing: the one that links philosophy to thermodynamics, transforming a laboratory experiment into a paradigm applicable at the systemic level.

In recent days, I sent a position paper to the Italian Digital Agency (AgID) in response to the public consultation on the new Guidelines for AI in the Public Administration. I did not do so from the pulpit of a theorist (a role I could hardly claim), but with the concreteness of someone who has built a working infrastructure and asked himself a question: can the architectural principles of this Factory scale up to become a model for the State?

This article expands on that document and presents the third pillar: Cognitive Sovereignty.


The Problem: Governing AI with the rules of “cloud-rent”

The Italian Public Administration (PA) is about to take an irreversible step. The AgID Guidelines under consultation are a solid piece of work that places safety, regulatory compliance (the AI Act, Law 132/2025), and the mitigation of vendor lock-in at the center.

However, the public and institutional debate is still flawed by a fundamental misconception: the idea that AI is "rented" like traditional software, and that governing it simply means writing better prompts.

This misconception leads to three deadly consequences:

  1. Structural Dependence (Cloud-Rent): If AI is delegated to third-party APIs (e.g., OpenAI, Anthropic, Google), the PA is not buying infrastructure but paying a perpetual rent. Every token generated to process a public document is a tax paid to a foreign (and opaque?) ecosystem. The more adoption grows, the heavier the financial burden becomes.
  2. The risk of “self-poisoning” (Model Collapse): If the PA uses AI to generate millions of synthetic documents, and those documents end up in the databases that will train tomorrow’s AIs, the system will enter an entropic loop. AI that feeds on its own output loses touch with reality, flattens exceptions, and produces a statistically average result that makes no sense. In a Public Administration where the citizen’s right is often based on the exception and the detail, this drift is unacceptable.
  3. The illusion of control: The AI Act imposes human supervision. But the standard model (Human-in-the-loop) reduces the official to a mere proofreader: the administrative terminal who must take responsibility. A system that hallucinates by its very nature, and whose errors are patched downstream case by case, is not a controlled system: it is a machine that generates induced work for those who must correct it.

The position paper sent to AgID proposes three architectural solutions to these threats.


The Three Pillars of Cognitive Sovereignty for the PA

[Figure: The three pillars of AI in the PA]

1. Human-in-Command: the Architect, not the corrector

The principle of human supervision (Principle 13 of the Guidelines) is vital. But if interpreted only reactively, it fails. The proposal: the Human-in-the-loop model must evolve into the Human-in-Command (HIC) paradigm.

In this scheme, the human does not evaluate the output downstream, but designs the "Constitution of Agents": the ethical-logical guardrails, the epistemological boundaries, the inviolable algorithmic rules. If the system drifts, we intervene on the logical architecture and the weights of the system prompt, rather than correcting each generated word one by one.

Why it is Sovereignty: The decision-making responsibility is not delegated to a commercial black box, but is encoded in the system design, overseen by public officials.
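To make the Human-in-Command idea tangible, here is a minimal sketch of a "Constitution of Agents" encoded upstream as declarative, inviolable rules. Every name in it (Rule, Constitution, the two sample rules) is hypothetical, chosen only for illustration, not drawn from any real framework.

```python
# Hedged sketch of a "Constitution of Agents": guardrails encoded upstream
# in the system design rather than corrections applied downstream.
# All class and rule names here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    name: str
    check: Callable[[str], bool]   # True = the output respects the rule

class Constitution:
    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def violations(self, output: str) -> list[str]:
        """Return the names of every rule a draft output violates."""
        return [r.name for r in self.rules if not r.check(output)]

# Two toy inviolable rules an Architect might encode for a PA agent.
constitution = Constitution([
    Rule("must_cite_source", lambda text: "[source:" in text),
    Rule("no_personal_data", lambda text: "codice fiscale" not in text.lower()),
])

draft = "The deadline is 30 days. [source: DPR 445/2000, art. 38]"
print(constitution.violations(draft))  # an empty list means the draft may be released
```

The point of the sketch is the direction of responsibility: when a violation appears, the Architect amends the rule set, not the single sentence.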

2. Epistemic Security: agents designed to “attack” the truth

The architecture outlined by the Guidelines is solid, but it lacks an explicit layer for factual verification (Grounding) during inference. The proposal: integrate an Epistemic Security layer based on Multi-Agent patterns. The main generative model must be systematically flanked by independent Critic Agents whose sole programmed purpose is to:

  • Verify cross-references;
  • Uncover biases and logical fallacies;
  • “Attack” the textual production before release to test its resistance.

The goal is not a utopian absolute objectivity, but a procedural objectivity: the legitimacy of the PA’s output does not derive from the infallibility of the machine, but from the transparency, repeatability, and robustness of the automated review process.
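The review loop described above can be sketched as follows. This is a toy illustration of the multi-agent critic pattern, not a real verification system: the two critic functions are hypothetical stand-ins for what would in practice be independent verification models.

```python
# Hedged sketch of the Epistemic Security layer: a generator's draft is
# "attacked" by independent Critic Agents before release.
# Both critics below are toy stand-ins for real verification models.

from typing import Callable

Critic = Callable[[str], list[str]]  # a critic returns a list of objections

def check_cross_references(text: str) -> list[str]:
    # Toy check: flag a legal reference that never pins down an article.
    return [] if "Law" not in text or "art." in text else ["uncited legal reference"]

def check_hedging(text: str) -> list[str]:
    # Toy check: flag absolute claims, a typical statistical-flattening symptom.
    return ["unqualified absolute claim"] if "always" in text.lower() else []

def review(draft: str, critics: list[Critic]) -> tuple[bool, list[str]]:
    """Procedural objectivity: release only if no critic raises an objection."""
    objections = [obj for critic in critics for obj in critic(draft)]
    return (len(objections) == 0, objections)

ok, objections = review("This procedure always applies under Law 132/2025.",
                        [check_cross_references, check_hedging])
print(ok, objections)
```

What legitimizes the output is not any single critic's judgment but the fact that the whole review run is transparent, repeatable, and logged, which is precisely the procedural objectivity argued for above.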

3. Cognitive Sanctuaries: why the PSN must evolve

The development guidelines rightly promote hardware neutrality. But the procurement directives do not yet translate this principle into binding requirements. The proposal: include in tenders award criteria that decisively reward solutions that:

  • Guarantee bare-metal execution on heterogeneous hardware (including CPU-only environments, to ensure operational continuity in the event of GPU supply crises);
  • Adopt local-first architectures under total public control;
  • Document rigorous thermodynamic efficiency (energy consumption per token).
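The third criterion reduces to a measurable quantity: energy drawn per generated token. A minimal sketch of how a bidder might document it, with purely illustrative numbers:

```python
# Hedged sketch of the "thermodynamic efficiency" tender metric.
# The power, time, and token figures below are illustrative assumptions.

def joules_per_token(avg_power_watts: float,
                     wall_seconds: float,
                     tokens_generated: int) -> float:
    """Energy per token: power (W) x wall time (s) / tokens, in J/token."""
    return avg_power_watts * wall_seconds / tokens_generated

# Assumption: a CPU-only node drawing 180 W generates 4,000 tokens in 300 s.
print(f"{joules_per_token(180, 300, 4000):.1f} J/token")  # prints "13.5 J/token"
```

A metric this simple is auditable at acceptance testing with a watt-meter and a token counter, which is what makes it usable as a binding tender requirement rather than a marketing claim.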

These protected enclaves, which I call Cognitive Sanctuaries, find their natural habitat in the National Strategic Hub (PSN). In my view, the PSN cannot remain a mere "data storage cloud": it must evolve into the enabling compute infrastructure where Public Administration models are executed and queried in total isolation from commercial networks.


The Birth of the Cognitive Architect

These proposals require a profound genetic mutation of the public workforce. A new role is needed: the Cognitive Architect.

We are not talking about a “prompt engineer” seconded to the PA. We are talking about a hybrid professional who masters:

  • Administrative Law, to translate regulatory constraints into system constraints;
  • Algorithmic Ethics, to balance the machine’s decision-making weights;
  • Data Engineering, to orchestrate Critic Agents and govern synthetic entropy.

Law 132/2025 (Art. 11) mandates training for those who use AI systems. The Cognitive Architect represents the pinnacle of this training pyramid: the one who does not merely use the interface, but designs its engine. And I assure you: having done it in the field, it strikes me as almost an art form, beyond strict logic.


Conclusion

Cognitive Sovereignty is not a political slogan. It is an engineering protocol.

It is the ability of a State to own the iron on which computations run (Cognitive Sanctuary), to engineer doubt into document production (Epistemic Security), and to elevate humans from proofreaders to rule creators (Human-in-Command).

If we do not assimilate this logical shift, the integration of AI into the Public Administration will be reduced to the most colossal outsourcing of thought ever attempted in the history of the Republic. And for someone like me, who has also held a public political role, this is a nightmare I hope does not come true.

AI is not governed by writing better prompts. It is governed by designing the architecture. And designing the architecture means knowing how to fuse philosophy with thermodynamics, and code with constitutional law.

The triptych is complete.

Carlo Cafarotti

>>> system override by human <<<

If you want to delve deeper, the full position paper sent to AgID (with textual amendment proposals and regulatory references) is available in full: Download PDF – Link.