In one of the most surprising and controversial moves of recent months, the United States Department of Defense has announced the closure of a key office dedicated to evaluating the security and operational performance of military systems that incorporate artificial intelligence (AI). The decision, confirmed by the Secretary of Defense himself, has been officially justified as a cost-saving and budget-efficiency measure.

But many experts and analysts have already spoken out: is this really the time to cut back on controls when it comes to the military use of a technology as disruptive as AI?

It's worth putting the numbers on the table to understand the potential impact of this decision. The Pentagon is, in practical terms, the largest defense buyer on the planet, with annual purchases exceeding $167 billion. From communications systems and drones to combat platforms, logistics, and intelligence analysis, the presence of AI is growing and cuts across every domain. The goal, naturally, is to achieve systems that are more effective, faster, and more capable, with less human intervention.

In this scenario, the functional safety of these systems is not a technical detail, but an absolute priority. A failure in a conventional weapon can have serious consequences; a failure in an AI-based autonomous military system can be downright catastrophic.

Until now, the Department of Defense had specific units tasked with evaluating and testing these systems before their operational deployment. One of these was precisely the office now being eliminated: the Joint Artificial Intelligence Center (JAIC), which played an essential role in validating autonomous systems.

Reducing waste is desirable (and, in the case of the US military budget, necessary), but efficiency cannot become an excuse for strategic imprudence. In the field of military AI, the consequences of skipping adequate testing are not limited to operational errors. We are talking about automated decisions that can determine the life or death of people, both our own personnel and civilians.

It's disturbing to think that a technology that has yet to reach full maturity and continues to raise ethical and technical questions is being massively incorporated without rigorous evaluation. Because we're not just talking about predictive software or image analysis: we're talking about systems capable of making autonomous decisions in combat situations.

What's most worrying isn't just the cuts themselves, but the reasoning behind them. Citing economic reasons in such a massive budget environment sounds more like a political or ideological gesture than a sound technical assessment. Meanwhile, populist rhetoric proliferates where technology is presented as an infallible panacea, ignoring the real risks of hasty implementation.

We need more control, not less, in an era of automated warfare. Recent history is littered with technological failures that have cost lives, and in every case a lack of verification, rigorous testing, and quality control was a determining factor. If this is true for a nuclear power plant or a commercial airliner, it is even more so for an armed drone or an autonomous defense system that uses AI to select targets.

In this context, the Pentagon's decision not only seems irresponsible, but also representative of a broader trend: a technological policy driven by immediate interests, simplistic rhetoric, and leaders more concerned with appearing efficient than ensuring real security.

When one reads decisions like this, it is inevitable to ask: what kind of society are we building? Who is in charge? What kind of critical thinking are we promoting among citizens? Every day it becomes more difficult to distinguish between real technological advancement and political marketing.

And when a lethal error occurs in a poorly evaluated weapons system, because it will, no one will take responsibility. It will be blamed on unforeseen failures, changing operating environments, or human error, and some unfortunate soul will pay with their life for a decision made to "save costs."

As a society, we should demand the opposite: more control, more accountability, and less demagoguery. Because if we allow military AI to advance unchecked and untested, we will not only be compromising future security, but also the ethics of the present.

By Amador Palacios

Reflections by Amador Palacios on social and technological news; opinions different from mine are welcome.
