Artificial Intelligence (AI) is more fashionable than ever. While some companies (Microsoft, OpenAI, Google, etc.) rush to make advances and bring them to market as soon as possible to get ahead of their competitors, many experts issue well-founded calls for prudence, urging that this new technology be brought under control before it is too late.

Those of us who are not experts sit in the middle of all this hubbub, listening to both sides, but common sense tells us that there should indeed be some prior control over a technology that can do a lot of good, but also a lot of harm.

The statements of one of the leading experts in the field, Geoffrey Hinton, who until recently was one of Google's top specialists in this area, seemed very important to me, and I think he has behaved very honestly. He left his job at Google and has since expressed his views freely and graciously.

Geoffrey Hinton is a computer scientist and a pioneer in the field of deep learning and artificial intelligence (AI). In several recent interviews, he has warned about the risks of AI and has spoken of the need to set ethical limits on its development.

One of the main risks Hinton identifies is that AI can be misused. If it falls into the wrong hands, it can be used for surveillance, social control, and warfare. AI can also be used to create malicious bots programmed to perform harmful actions such as spamming or spreading fake news. Hinton argues that regulation is needed to prevent these risks and protect society.

Another risk Hinton identifies is over-reliance on AI. If AI becomes too advanced, people may find it difficult to make decisions without it, which could have negative consequences for human problem solving, decision making, and creativity. According to Hinton, we must maintain a balance between AI and human intelligence.

Hinton has also stressed the importance of transparency in the development of AI, and of ensuring that its decisions can be explained to humans. If people cannot understand how AI works, they will not be able to trust it.

He has also pointed out that AI can have unintended consequences. For example, an AI system designed to minimize a company's costs may not take into account the environmental or social costs of its decisions.

In conclusion, Geoffrey Hinton warns about the risks of AI and calls for ethical limits on its development. These risks include misuse of AI, human over-reliance on it, regulators' lack of understanding of its algorithms, and a general lack of transparency.

AI can bring us many good things, but we must make sure they are used for the benefit of the majority of society.

Not long ago, the European Union proposed a series of regulatory measures along these lines, but it remains to be seen whether they will be applied. I hope they are put into force soon.

By Amador Palacios

Reflections by Amador Palacios on social and technological news; opinions different from mine are welcome
