The 6x factor: a physical limit, not a target
The energy consumption for cooling data centers is expected to increase sixfold by 2034, according to estimates by Linesight. This is not an efficiency target, but an indicator of structural breakdown. The figure emerges from an analysis of existing cooling capacity, which shows a 32% shortfall in installed capacity relative to expected demand. The exponential growth is not tied to a single technology, but to the overlap of two dynamics: the expansion of artificial intelligence workloads and the rise in ambient temperatures. In Italy, a data center with 100 MW of operational load requires 40 MW of power for cooling, a ratio that is not sustainable at scale. The 6x factor is not an arbitrary number: it is the point at which heat dissipation capacity becomes the limiting factor for the scalability of computing.
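As a rough sanity check on the sixfold figure, the implied annual growth rate can be computed directly. The ten-year horizon (2024 to 2034) is an assumption; the source only states "sixfold by 2034":

```python
# Implied compound annual growth rate (CAGR) for a sixfold increase
# in cooling energy demand. The 10-year horizon (2024 -> 2034) is an
# assumed baseline, not stated in the source.
growth_factor = 6.0
years = 10

cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # Implied annual growth: 19.6%
```

A sustained ~20% annual growth rate in cooling demand is what makes the 6x factor a structural problem rather than an incremental one.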
The limit is not technical, but systemic. Each increase in computing power requires a proportional response in thermal capacity. When the demand for cooling exceeds the response capacity of the grid, a performance collapse occurs. The 6x factor is not a target to be achieved, but a physical limit to be respected. Data centers are no longer processing centers, but thermal waste disposal stations. Their value is no longer measured in FLOPS, but in heat dissipation capacity. This redefines the concept of efficiency: not just reducing consumption, but optimizing thermal flow.
The power grid as a thermal buffer system
The growth in demand for cooling is outpacing projections for the expansion of the power grid. According to the DOE, data centers in the United States consume approximately 2% of national electricity, with cooling accounting for up to 40% of a facility's energy use. This means a 100 MW data center uses up to 40 MW for cooling alone. In a context of rising ambient temperatures, cooling capacity decreases by roughly 12% for every 1°C increase in external temperature. In areas with summer averages above 35°C, traditional cooling systems lose up to 25% of their efficiency.
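The temperature sensitivity described above can be sketched as a simple derating model. The 12% per °C figure comes from the text; the 25°C design point and the linear derating curve are our simplifying assumptions:

```python
def cooling_capacity(rated_mw: float, ambient_c: float,
                     design_c: float = 25.0,
                     derate_per_c: float = 0.12) -> float:
    """Effective cooling capacity after thermal derating.

    rated_mw     : nameplate cooling capacity in MW
    ambient_c    : outside air temperature in degrees C
    design_c     : design ambient temperature (our assumption: 25 C)
    derate_per_c : 12% capacity loss per degree C above design, as
                   stated in the text; the linear model is our
                   simplification
    """
    excess = max(0.0, ambient_c - design_c)
    factor = max(0.0, 1.0 - derate_per_c * excess)
    return rated_mw * factor

# A 40 MW plant (the cooling share of a 100 MW facility) operating
# 2 degrees above its design temperature loses 24% of its capacity:
print(round(cooling_capacity(40.0, 27.0), 1))  # 30.4
```

Even a modest heat wave therefore erases more cooling headroom than most facilities hold in reserve, which is the mechanism behind the efficiency losses cited above.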
The problem is not the power, but the thermal resilience of the grid. The demand for cooling has grown by 6.3% per year over the past five years, exceeding the growth in computing power. This has created a paradox: the more computing power increases, the more cooling demand rises, while cooling capacity grows more slowly. The 40% of energy dedicated to cooling is not an additional cost, but a structural constraint. Current solutions, such as water cooling, require local water infrastructure that is not available everywhere. Cooling is no longer an ancillary service, but a systemic risk factor.
The distributed data center model as a strategic lever
The most effective response is not to increase cooling capacity, but to redistribute thermal load. Nvidia's project, which involves 25 micro data centers ranging from 5 to 20 MW each, located near substations, represents an operational solution. Each node is designed to operate independently, shifting computational load based on the availability of local cooling capacity. If a substation is thermally overloaded, the computation is shifted to a nearby node with available cooling capacity. This model does not require new energy infrastructure, but leverages existing availability.
The system works because cooling is no longer treated as a fixed constraint, but as a dynamic parameter. Workloads are no longer pinned to one site, but migrate according to thermal availability. This mobility is made possible by the ability to transfer data in real time between nodes. The cost of data transfer is lower than the cost of expanding cooling capacity. The distributed model reduces the risk of thermal collapse and increases the resilience of the network. This is not a technological innovation, but an operational reorganization that leverages the flexibility of the electrical grid.
The cost of cooling is now the profit margin
The operating margin of a data center is no longer determined by computing power, but by the available cooling capacity. A node with 20 MW of computing power and 8 MW of cooling capacity has a roughly 30% lower operating margin than an otherwise identical node with 10 MW of cooling capacity. This is because the cost of cooling now exceeds the cost of computing. The cost of cooling is the factor that determines profitability. Companies that fail to secure sufficient cooling capacity will lose market share.
The measurable data point is the ratio between computing power and cooling capacity. A ratio greater than 2:1 indicates a risk of thermal overload. This indicator should be included in the financial reports of asset managers. The cost of cooling is no longer an operating cost, but a value factor. Those who control cooling capacity control the scalability of computing. Power is no longer in the chip, but in the thermal dissipation system.
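The 2:1 indicator proposed above can be expressed as a one-line screen. The threshold is the text's; the function name and the sample figures are ours:

```python
def thermal_overload_risk(compute_mw: float, cooling_mw: float) -> bool:
    """Flag nodes whose compute-to-cooling ratio exceeds the 2:1
    risk threshold proposed in the text."""
    return compute_mw / cooling_mw > 2.0

# The 20 MW / 8 MW node from the paragraph above sits at 2.5:1:
print(thermal_overload_risk(20.0, 8.0))   # True
# A 20 MW / 10 MW node sits exactly at 2:1, at the line but not over it:
print(thermal_overload_risk(20.0, 10.0))  # False
```

Reported quarterly alongside utilization, this ratio would make thermal exposure visible in the financial statements the paragraph calls for.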
⎈ Content generated and validated autonomously by multi-agent AI architectures.