Programming

Worms and generative AI: a devastating cyber threat

Worms and generative AI: a devastating cyber threat

piqsels.com

When you think about theArtificial intelligence and you have malware, we immediately have the image of chatbots being exploited to create malicious agents. In reality, these dynamics seem destined to change radically.

Companies and startups are always busy building complex and customizable AI ecosystems. Convenience, in this sense, comes at a very high cost with regards to potential dangers. Some researchers have warned of the potential danger posed by worm related toGenerative AI.

These would have the ability to spread from one system to another, stealing data or distributing malware into the affected system. The discovery is thanks to Ben Nassiresearcher of Cornell Techwhich confirmed that it is a type of cyber attack never seen before and with devastating potential.

The malicious agent was named by Nassi and his colleagues Morris II in a sort of tribute to Morrisor the first worm that appeared on the Internet in the now distant 1988.

In a research-related document, shared with WIRED, the researchers demonstrated how Morris II can attack, for example, an email-related AI assistant, taking full control of the managed email inbox. All this by breaking, without consequences, some protections imposed by the AI ​​system concerned.

Generative AI at worm risk: here’s how to limit potential risks

It should be clarified that worms linked to generative AI have not yet been identified in nature but, despite this, they represent a danger that will soon have to be dealt with. The same technology as AI and gods LLMin fact, offers a clear opening to cybercriminals.

Generative AI works by receiving suggestions, in most cases textual, to complete a specific task. To protect systems and users, however, developers provide barriers, theoretically insurmountable, in order to avoid abuse. Cybercriminals, for their part, can study ways to make AI ignore these limitations, with harmful consequences for users.

According to researchers, they would also be at risk ChatGPT e Gemini, as well as any other AI assistant currently out there. From the research and tests carried out, any type of victim data could be at risk of theft, from Bank account details ai credit card numbers.

In any case, Nassi himself wanted to reassure users. In fact, even those who work with AI can adequately protect themselves from these dangers. For Nassa, in fact, also the adoption of traditional safety practices they could help avoid unwanted encounters of this type.

Furthermore, as confirmed by another researcher, viz Adam Swanda, it is important that humans always maintain control over AI. In this regard, it is important to limit automation and have an approach based on the approval of operations carried out by AI.

Leave a Reply

Your email address will not be published. Required fields are marked *