Not long ago I read in the news about the meeting held in mid-November between the presidents of the USA and China, Biden and Xi Jinping. At that meeting they discussed, among other topics, AI and its use in matters related to defense, and it seems they agreed to have experts from both countries reflect on the subject.

Talking is, in principle, almost always positive, and if they are able to reach an agreement, even better. But the “problem” is that AI is developing so much and so quickly that it is being introduced into every area, including security and defense.

And it is clear that a tool like AI is too useful not to be used in military and defense systems. Another matter is how it is used and, above all, how it is controlled.

Until now, most military officials have based their “security” on keeping somebody (a physical person) in the control process from start to finish, to ensure that it is ultimately a person, not a machine, who makes the final decision to attack, who decides to kill someone.

We have seen many times how the president of the United States is accompanied by an aide carrying the “nuclear briefcase.” And this has been the case for more than 50 years.

Today, artificial intelligence is revolutionizing many aspects of our lives, from the way we work to the way we have fun. However, there is one area where its use is particularly problematic: making difficult decisions.

In the event of a possible nuclear attack, the decision to launch a counterattack is one of the most difficult a leader can make. There are many variables to consider, such as the probability of an attack, the potential damage it could cause, and the possible consequences of a counterattack.

In theory, AI could help make these decisions more efficiently and accurately. Machines can process large amounts of information much faster than humans, and can identify patterns and trends that humans might miss.

However, there are also a number of potential problems associated with using AI in this context. One of the main problems is bias. Machines are trained on data that reflects the biases of the people who produced it. If the training data is biased, the AI will also be biased, and its suggestions could be wrong.
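A minimal sketch of how this happens, using an invented toy dataset (the features, values, and labels here are hypothetical, purely for illustration, and have nothing to do with any real defense system):

```python
# Sketch: bias in training labels propagates into a model's suggestions.
from sklearn.linear_model import LogisticRegression

# Toy training set: each row is (signal_strength, source_region).
# Suppose the historical labels were produced by analysts who tended to
# flag region 1 as a "threat" regardless of the actual signal.
X_train = [
    [0.2, 0], [0.3, 0], [0.8, 0], [0.9, 0],  # region 0: label follows signal
    [0.2, 1], [0.3, 1], [0.8, 1], [0.9, 1],  # region 1: always labeled threat
]
y_train = [0, 0, 1, 1, 1, 1, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# The same weak signal is judged differently depending on its region:
# the model has learned the analysts' bias, not just the evidence.
print(model.predict([[0.2, 0], [0.2, 1]]))  # typically prints [0 1]
```

The model is doing exactly what it was trained to do; the problem is upstream, in the data. That is why audits of training data matter at least as much as audits of the model itself.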

Another potential problem is the lack of transparency. It is often hard to understand how machines reach their decisions, which makes it difficult to evaluate their results. This could lead to mistakes, or to decisions that are not in the best interest of humanity.

Not long ago, I heard a technology expert say that AI is like a child who is still learning, and as such it can make mistakes and be deceived. That person warned (and I agree) that we are overvaluing AI and its capabilities. I believe some investment experts have said something similar.

It is true that AI has enormous capabilities and, like most technologies, it does more good than harm, but nothing should operate without limits and without control.

Let's hope that experts from the USA and China (and other countries) talk among themselves and reach points of agreement, so that AI becomes a technology that helps the development of humanity. It has enormous potential to do so.

And now that this year 2023 is ending, I take this opportunity to send everyone my Best Wishes for the New Year 2024!!!

By Amador Palacios

Reflections by Amador Palacios on current social and technological topics; opinions different from mine are welcome

