
Voice deepfakes and AI: the leap in quality of computer fraud

Deepfakes of images and even videos are becoming a concrete reality, with some very worrying implications. Beyond those, there are other applications of the same technology that create a very dangerous environment for cybersecurity.

We’re talking about voice deepfakes: the ability of some AI models to render a text prompt as a human voice through audio APIs.
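To make the mechanics concrete, here is a minimal sketch of how a text prompt becomes synthetic speech through an audio API. It uses the OpenAI Python SDK's text-to-speech endpoint as one example; the model and voice names shown are stock options, and any modern TTS service follows the same request-and-save pattern.

```python
# Minimal text-to-speech sketch using the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment; "tts-1" and "alloy"
# are illustrative stock choices, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.audio.speech.create(
    model="tts-1",   # general-purpose text-to-speech model
    voice="alloy",   # a stock voice; the API does not clone arbitrary voices
    input="Hi, it's me. I need you to wire the money today.",
)

# Save the generated audio to disk as an MP3 file.
with open("synthetic_voice.mp3", "wb") as f:
    f.write(response.content)
```

Note that this produces only a stock voice. The danger the article describes begins when the same pattern accepts a short sample of a real person's voice and imitates it.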

Although this technology can revive great singers of the past or otherwise be put to benign use, its potential applications in cybercrime are worrying security experts quite a bit. And even though OpenAI's model, at least in its current form, cannot reproduce voices on users' instructions, in the near future it could become a very dangerous tool.

Today, there are virtually no tools capable of producing a high-quality deepfake voice that is indistinguishable from real human speech. Even so, in recent months several tools have demonstrated that the technology is moving in that direction. And the first signs have already appeared online.

The case of Tim Draper, an American venture capital expert, should give us pause.

In mid-October, in fact, Draper warned his followers on X that scammers might have used his voice in the context of some frauds, all of it, obviously, manipulated via AI.

How to protect yourself from voice deepfakes

Even if the voice deepfake phenomenon may still look like a simple AI "oddity", in reality it is anything but, and it is important not to be caught unprepared.

As things stand now, the best way to protect yourself is to listen carefully to what the other person tells you on the phone. If the audio is of poor quality, has background noise, or the voice sounds robotic, it is good to be on alert.

Another good test, when the interlocutor seems strange, is to ask out-of-the-box questions. For example, asking about their favorite dish, or anything else irrelevant to the call itself, could disorient the AI. In that case, a delay in the response should be more than suspicious.

Finally, there is hope in the parallel development of defensive technologies. Although no system today can detect voices created with deepfake tools with 100% accuracy, the hope is that such software will soon be commercialized.
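As a rough illustration of what such defensive software could look like internally, the sketch below trains a tiny classifier on spectral features (MFCCs) to separate real from synthetic clips. It is a toy under strong assumptions: the `real/` and `fake/` folders and the `suspicious_call.wav` file are hypothetical, and production detectors rely on far richer features and models.

```python
# Toy deepfake-voice detector sketch: MFCC features + logistic regression.
# Assumes labeled training audio in ./real/ and ./fake/ (hypothetical paths).
import glob

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    """Average MFCCs over time to get one fixed-size vector per clip."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Build the training set: label 0 = real voice, label 1 = synthetic.
X, labels = [], []
for label, folder in enumerate(["real", "fake"]):
    for path in glob.glob(f"{folder}/*.wav"):
        X.append(mfcc_features(path))
        labels.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(labels))

# Score a new call recording: probability that the voice is synthetic.
p_fake = clf.predict_proba([mfcc_features("suspicious_call.wav")])[0, 1]
print(f"Estimated probability the voice is synthetic: {p_fake:.0%}")
```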

Source: zawya.com
