The rise of artificial intelligence is driving the massive construction of data centers worldwide. Every new model, every new service, and every promise of automation requires thousands of specialized chips. But behind this accelerated growth, a little-discussed problem is emerging: AI chips have a surprisingly short lifespan.
In the world of traditional hardware, it was assumed that a critical component should last at least six years to be properly amortized. That was the usual timeframe before the AI boom. Today, that rule has been shattered.
Artificial intelligence chips, especially high-performance GPUs, become effectively obsolete in less than four years, and in many cases even sooner. The reason is simple: the pace of innovation is brutal. Each new generation multiplies computing power sevenfold or more, leaving the previous one in the dust.
A chip that is cutting-edge today will be relegated to secondary tasks within three years, and that is when market-driven depreciation sets in.
Manufacturers like Nvidia are launching new models at an ever-increasing pace. From a technological standpoint, it's impressive, but from a financial perspective, it's a problem. "Old" chips become undesirable almost immediately, and their value plummets.
Current estimates indicate that these processors can lose up to 90% of their value in just three or four years. This completely changes the bottom line for those who operate large data centers.
Where there was once stable and predictable amortization, there is now accelerated and difficult-to-manage depreciation.
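To make the difference concrete, here is a minimal sketch comparing the two write-off schedules: straight-line amortization over the traditional six years versus a decline calibrated so that roughly 90% of the value is gone within four years, in line with the estimates above. The 30,000-euro purchase price and the exact shape of the decline curve are illustrative assumptions, not figures from any vendor or operator.

```python
# Illustrative comparison of two write-off schedules for one AI accelerator.
# The 30,000-euro purchase price is an assumption for the example; the
# "roughly 90% of value lost within four years" figure is the estimate cited above.

PURCHASE_PRICE = 30_000   # euros per accelerator (assumed)
TRADITIONAL_LIFE = 6      # years: the pre-AI-boom amortization horizon
FAST_HORIZON = 4          # years over which ~90% of the value is lost
RESIDUAL = 0.10           # fraction of value left after the fast horizon

# Straight-line: the same expense every year for six years.
straight_line = PURCHASE_PRICE / TRADITIONAL_LIFE

# Declining balance calibrated so only 10% of the value remains after 4 years.
rate = 1 - RESIDUAL ** (1 / FAST_HORIZON)

value = PURCHASE_PRICE
print(f"{'Year':<6}{'Straight-line (EUR)':>22}{'Fast depreciation (EUR)':>26}")
for year in range(1, TRADITIONAL_LIFE + 1):
    fast_expense = value * rate if year <= FAST_HORIZON else 0.0
    value -= fast_expense
    print(f"{year:<6}{straight_line:>22,.0f}{fast_expense:>26,.0f}")
```

Under the fast schedule, most of the expense lands in the first two years, which is exactly what turns predictable amortization into the front-loaded, hard-to-manage depreciation described above.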
In addition to this economic obsolescence, there's another factor: physical wear and tear. Greater computing power means more heat and more failures.
AI chips operate at their limits, performing extremely intensive calculations for extended periods. This generates more heat, more thermal stress, and a higher failure rate.

A revealing data point came recently from Meta: the processors used to train its Llama models had an annual failure rate of nearly 9%. That is a high figure for components that cost tens of thousands of euros per unit.
If failures increase and the actual lifespan is closer to three years than six, the impact on operating costs is significant and has a clear consequence: AI data centers are more expensive to operate than previously thought.
This is not only due to energy consumption, but also to the rapid depreciation of hardware and the increased maintenance and replacement costs.
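A rough back-of-the-envelope calculation shows how quickly these two effects add up. The sketch below uses the ~9% annual failure rate reported for Meta's Llama training clusters and the shortened three-year lifespan discussed above; the fleet size and unit price are assumptions chosen only to make the orders of magnitude visible.

```python
# Rough, illustrative estimate of annual hardware cost for a large AI fleet.
# Fleet size and unit price are assumptions; the ~9% annual failure rate is
# the figure reported for the GPUs used to train Meta's Llama models.

FLEET_SIZE = 10_000         # accelerators in the data center (assumed)
UNIT_PRICE = 30_000         # euros per accelerator (assumed)
ANNUAL_FAILURE_RATE = 0.09  # ~9% per year, as reported
USEFUL_LIFE_YEARS = 3       # effective lifespan closer to three years than six

# Expected replacements per year from hardware failures alone.
expected_failures = FLEET_SIZE * ANNUAL_FAILURE_RATE
failure_cost = expected_failures * UNIT_PRICE

# Annualized cost of wearing the whole fleet out over its useful life.
fast_amortization = FLEET_SIZE * UNIT_PRICE / USEFUL_LIFE_YEARS
slow_amortization = FLEET_SIZE * UNIT_PRICE / 6  # the traditional six-year horizon

print(f"Expected failures per year:        {expected_failures:,.0f} units")
print(f"Replacement cost from failures:    {failure_cost / 1e6:,.1f} M euros/year")
print(f"Amortization over 3 years:         {fast_amortization / 1e6:,.1f} M euros/year")
print(f"Same fleet amortized over 6 years: {slow_amortization / 1e6:,.1f} M euros/year")
```

Even with these rough assumptions, halving the useful life roughly doubles the annualized hardware bill before energy, cooling, or maintenance is counted, which is the cost pressure the rest of this analysis describes.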
If profitability declines, someone will have to bear the cost. It could be the service provider, the end customers, or the investors who are currently funding this unprecedented expansion.
Just a few years ago, these factors were not considered critical. Today, they are beginning to appear in the most realistic analyses of the sector.
The general feeling is that the AI sector is focused on the short term. The priority is rapid growth, scaling at all costs, and market capture, often with external funding. But the economics of hardware are unforgiving, and sooner or later, the numbers prevail.
This doesn't mean AI is a bubble about to burst. But it does suggest that not everything is as profitable as it seems, especially when analyzing long-term costs.
The question is now on the table: will this rate of investment remain sustainable when hardware ages so quickly?
Time, as always, will tell.