The cybersecurity landscape has changed forever. What until recently seemed like science fiction is now a tool available to almost anyone: the creation of deepfakes. This technology, powered by artificial intelligence (AI), has perfected the art of impersonation, in both audio and video, raising the risk of digital scams to unprecedented levels.
Forget robotic voice recordings. Today, AI's ability to recreate a person's voice is astonishing. A sample of just a few seconds is enough for deep-learning models to generate a convincing clone, capable of fooling even the most attentive ear, and the more audio provided, the better the impersonation.
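To make concrete how low the barrier has become, here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model. The file names are placeholders, and the exact model identifier and API may differ between library versions.

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS library.
# File paths are placeholders; the model identifier may vary by version.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice captured in a few seconds of reference audio and
# synthesize arbitrary speech in that voice.
tts.tts_to_file(
    text="This sentence was never spoken by the person you are hearing.",
    speaker_wav="reference_clip.wav",  # short recording of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```

A handful of lines like these, run on consumer hardware, is all it takes to produce the kind of clone described above.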
This has immediate implications for daily life. When you receive a phone call, the voice of a friend, a family member, or even a manager at your company could be simulated by a machine. The line between the real and the artificial blurs, making it almost impossible for the average citizen to tell whether they are speaking to a person or to a sophisticated AI program.
This scenario is fertile ground for phone fraud. Deception is far more effective when a trusted voice is the one asking for sensitive information or a money transfer.
In the visual realm, the situation is equally serious. Advances in video synthesis and face-swapping make it possible to fabricate recordings of a person saying or doing things they never did, and current tools achieve near-perfect lip-syncing with the generated audio, reinforcing the sense of authenticity.

The speed of this technology compounds the danger. AI already allows voices and gestures to be recreated in real time. This not only facilitates scams but also makes them faster and harder to trace, since the attacker can operate from any country, beyond the reach of local jurisdictions.
In the face of this leap in the capacity for deception, the feeling of vulnerability is palpable. While deepfake technology advances at a dizzying pace, the response from the authorities often seems slow.
The technical capacity to track down and dismantle these fraud networks exists: digital forensic analysis and tools for tracing calls and online posts can all be brought to bear. Yet a lack of coordinated action, and at times institutional neglect, lets fraudsters run rampant.
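As a toy illustration of the kind of signal forensic analysis looks for, the Python sketch below uses the librosa audio library to measure how much of a recording's spectral energy lies above a cutoff frequency; heavily band-limited audio can be one weak hint, among many, of synthetic origin. The file name and the 0.5% threshold are assumptions for illustration, and a real detector would rely on far more robust features.

```python
# Toy forensic heuristic: measure how much of a recording's spectral
# energy lies above a cutoff frequency. Some synthesis pipelines produce
# band-limited audio, so an unusually empty high band can be one weak hint.
# NOTE: ordinary telephone audio is band-limited too, so this heuristic
# alone proves nothing; real forensic tools combine many stronger features.
import numpy as np
import librosa

def high_band_energy_ratio(path: str, cutoff_hz: float = 7000.0) -> float:
    """Return the fraction of total spectral energy above cutoff_hz."""
    signal, sr = librosa.load(path, sr=None)      # keep the native sample rate
    power = np.abs(librosa.stft(signal)) ** 2     # power spectrogram
    freqs = librosa.fft_frequencies(sr=sr)        # frequency of each STFT bin
    return float(power[freqs >= cutoff_hz].sum() / power.sum())

# "suspect_call.wav" and the 0.5% threshold are illustrative assumptions.
ratio = high_band_energy_ratio("suspect_call.wav")
if ratio < 0.005:
    print(f"High-band energy {ratio:.4%}: consistent with band-limited synthesis")
else:
    print(f"High-band energy {ratio:.4%}: no obvious band-limiting artifact")
```

The point is not this particular heuristic but that investigators already have programmable, automatable signals at their disposal; what is missing is the coordinated will to deploy them.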
The development of AI demands a commensurate institutional effort against cybercrime. It is unacceptable for citizens to be sitting ducks in a shooting gallery, easy targets for a handful of digital criminals.
It is time for governments and law enforcement to use the technical means at their disposal to protect citizens from this new and growing digital threat. Technology has given us the problem; the law and due diligence must provide the solution.