More and more robots (drones and other unmanned systems) are being used on the battlefield, and until now the final, definitive decision to kill another person has always been made by a person. But with the rapid evolution of AI and the need for rapid response in combat, it is possible that in the not-too-distant future robots will be able to make the decision to kill other people.
That would be a point of no return that I believe should never be crossed, but if an agreement among nations is not reached (and it does not look like it will be), it will end up happening sooner rather than later.
Militaries have been working on this issue for several years, carrying out tests about which very little is known, although news occasionally surfaces in the specialized media. It is logical that they prepare for any eventuality, with the added aim of maintaining technological and military predominance.
Today almost all military information is gathered by sensors, fixed or mobile, and then analyzed by complex computer systems that suggest courses of action. In the end it is the human operator who decides what to do. Taking the next step and removing humans from decision-making entirely is extremely dangerous. A sketch of this human-in-the-loop pattern follows.
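To keep this concrete, here is a minimal Python sketch of the human-in-the-loop pattern just described. Everything in it (the names, the confidence threshold, the stub `analyze` function) is a hypothetical illustration, not any real military system:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    HOLD = "hold"      # take no action
    TRACK = "track"    # keep observing
    ENGAGE = "engage"  # lethal action

@dataclass
class Assessment:
    target_id: str
    confidence: float  # the system's confidence in its own classification
    suggested: Action  # what the analysis system recommends

def analyze(sensor_data: dict) -> Assessment:
    """Stand-in for the complex analysis systems described above."""
    score = sensor_data["score"]
    suggested = Action.ENGAGE if score > 0.95 else Action.TRACK
    return Assessment(sensor_data["id"], score, suggested)

def decide(assessment: Assessment, operator_approves) -> Action:
    """The machine only suggests; a human makes the final lethal call."""
    if assessment.suggested is Action.ENGAGE:
        # Lethal force requires explicit human confirmation.
        return Action.ENGAGE if operator_approves(assessment) else Action.HOLD
    return assessment.suggested

# Example: the system recommends engaging, but the operator vetoes.
action = decide(analyze({"id": "T-01", "score": 0.97}),
                operator_approves=lambda a: False)
assert action is Action.HOLD
```

Note how fragile the safeguard is: removing the human amounts to replacing `operator_approves` with an unconditional yes, a one-line change with enormous consequences.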

Delegating the responsibility for killing to machines introduces a series of moral and practical problems:
- Moral responsibility: Who is responsible for the robot's actions? The programmers, the developers, the commanders, or the soldiers who operate it? Accountability becomes diffuse and complex in a scenario where machines make life-or-death decisions.
- Algorithmic bias: The algorithms that control these robots may be biased, which could lead to discriminatory or unjust decisions. These algorithms must be transparent and auditable so that biases with potentially fatal consequences can be detected and corrected.
- Human supervision: Human oversight remains essential to ensure that robots operate within ethical and legal bounds. Soldiers must be able to override a robot's decisions whenever they deem it necessary. However, this raises the question of how much to trust robotic systems and how quickly humans can actually react in dynamic combat situations (a sketch of the fail-safe default this implies follows this list).
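To make the reaction-time tension in that last point concrete, here is a minimal sketch (again hypothetical, with invented names) of the fail-safe default that meaningful human supervision implies: if the operator cannot answer within the decision window, the system holds fire rather than fires.

```python
import queue

def await_override(verdicts: "queue.Queue[str]", timeout_s: float) -> str:
    """Wait for a human verdict; if none arrives in time, fail safe."""
    try:
        return verdicts.get(timeout=timeout_s)  # "approve" or "abort"
    except queue.Empty:
        # No human answer within the window => do NOT engage.
        # A fully autonomous design would default to the machine's own
        # choice here, which is exactly the step argued against above.
        return "abort"

# Example: no operator input arrives, so the system holds fire.
assert await_override(queue.Queue(), timeout_s=0.1) == "abort"
```

The hard question is how long `timeout_s` can realistically be in dynamic combat, and whether a human can meaningfully evaluate the situation within it.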
Allowing robots to make lethal decisions would mark an irreversible turning point in warfare. Beyond ethical concerns, lethal autonomy could have serious consequences:
- Conflict escalation: The ease with which robots could make lethal decisions may lower the threshold for the use of force, increasing the risk of escalation and weakening deterrence.
- Runaway autonomous weapons: The possibility of autonomous robots being hacked or misused represents a serious threat to global security. These systems could fall into the hands of non-state actors or even individuals, increasing the risk of indiscriminate attacks and atrocities.
- Dehumanization of war: Lethal autonomy further dehumanizes war, distancing soldiers from the consequences of their actions and eroding norms governing the use of force.
I believe the international community must come together to ban the development, production and use of lethal autonomous weapons. This would require binding agreements between nations, overcoming political differences and pressures from the military industry.
At the same time, responsible AI must be developed, focused on applications that benefit humanity rather than its destruction. Clear ethical principles for AI development must be established, prioritizing transparency, responsibility, and accountability.
Will we do it? I hope so.