In just a few years, artificial intelligence has gone from being an academic curiosity to becoming a central component of our digital lives. It's present in search engines, social networks, virtual assistants, creative tools, and, of course, conversational models like ChatGPT and others. But as its use grows, so has an increasingly pressing question: how much energy does AI really consume?
When we talk about the energy consumption of artificial intelligence, we must distinguish between two phases: training and inference. Training a model like GPT-4 or GPT-4o requires enormous amounts of computational power. It involves weeks (or months) of intensive work on supercomputers with thousands of GPUs running at full capacity, processing terabytes of data. This phase has a very high energy cost.
But once the model is trained, inference begins: what we users do when we interact with an AI to answer questions, translate texts, or generate code. Is this also energy-intensive?
OpenAI CEO Sam Altman recently offered a point of comparison: a typical ChatGPT query consumes roughly the same energy as keeping a light bulb on for a few minutes. On its own, that doesn't seem like much. But scaled to billions of daily queries worldwide, the impact is far from negligible.
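That scaling effect is easy to sketch with a back-of-envelope calculation. The figures below are illustrative assumptions, not measured values: roughly 0.3 Wh per query (about a 10 W LED bulb running for two minutes) and one billion queries per day.

```python
# Back-of-envelope estimate of AI query energy at scale.
# Both constants are assumptions for illustration only.
WH_PER_QUERY = 0.3               # assumed watt-hours per query (~LED bulb for 2 min)
QUERIES_PER_DAY = 1_000_000_000  # assumed one billion daily queries

daily_wh = WH_PER_QUERY * QUERIES_PER_DAY
daily_mwh = daily_wh / 1_000_000  # watt-hours -> megawatt-hours

print(f"Per query: {WH_PER_QUERY} Wh")
print(f"At {QUERIES_PER_DAY:,} queries/day: {daily_mwh:,.0f} MWh/day")
# -> 300 MWh/day under these assumptions
```

Trivial per user, yet hundreds of megawatt-hours per day in aggregate: that is the tension the article describes.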
Fortunately, the history of technology is also a history of increasing efficiency. Just as Moore's Law predicted for decades that the number of transistors on a chip would double every 18-24 months, in the world of AI we are seeing a progressive reduction in the energy cost per operation.

New AI chips, such as those designed by Nvidia, Google, and Microsoft, are optimized to improve performance per watt. Furthermore, newer models like GPT-4o are designed from the ground up to be more efficient, with architectures that make better use of resources and enable faster response times with less power consumption.
One of the most visible consequences of this improvement in efficiency is the drop in prices for AI services. The "price war" between providers like OpenAI, Anthropic, Google, Mistral, and the newcomer DeepSeek is significantly reducing the cost per million tokens (tokens being the basic units of text these models process).
OpenAI and other major players have lowered their prices to ranges between $2 and $10 per million tokens. And DeepSeek, which has made a strong entrance in 2024, has reduced this range to levels as low as $0.50 per million.
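To see what those price points mean in practice, here is a minimal sketch comparing the cost of a hypothetical 5-million-token workload at the ranges quoted above. The prices are taken from the figures in the text and are illustrative, not current quotes from any provider.

```python
# Cost of a workload at different per-million-token prices.
# Price tiers are illustrative, based on the ranges cited above.
PRICES_PER_MILLION = {            # USD per million tokens
    "higher-end tier": 10.00,
    "lower-end tier": 2.00,
    "DeepSeek-level pricing": 0.50,
}

tokens = 5_000_000  # hypothetical batch job: five million tokens

for tier, price in PRICES_PER_MILLION.items():
    cost = tokens / 1_000_000 * price
    print(f"{tier}: ${cost:.2f} for {tokens:,} tokens")
# -> $50.00, $10.00, and $2.50 respectively
```

A twentyfold spread on the same workload: this is why the price war matters for making AI accessible to smaller players.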
Competitive pressure and technical improvements are making AI increasingly accessible, with new specific applications for sectors such as healthcare, education, design, legal, and even agriculture.
Despite their success, companies like OpenAI continue to invest more than they earn. In 2024, the company reported losses of around $5 billion, although it projects revenues of around $12.7 billion this year. Even so, its valuation has continued to rise, approaching $300 billion. This apparent paradox is typical of disruptive technologies: a lot of money is spent scaling and conquering the market before achieving sustained profitability.
One of the major concerns today is not only how much AI consumes, but what it can do and who controls it. The speed of development far exceeds the capacity of governments to react. While large technology companies innovate and deploy increasingly powerful tools, regulatory authorities are lagging behind, trying to catch up with ethical, privacy, security, and transparency standards.
Yes, artificial intelligence consumes energy. But it may not be the "energy monster" some imagine, especially when compared to industries like transportation, intensive agriculture, or even video streaming. Modern AI is computationally intensive, but it's also improving in efficiency at a remarkable rate.
We are probably facing one of the most transformative technologies of our time. Using it judiciously, demanding transparency in its operation, and moving toward smart regulation will be key to ensuring its development is sustainable, accessible, and ethical.