Artificial Intelligence & Cybersecurity: the FBI raises the alarm

That artificial intelligence could be a very dangerous tool in the wrong hands was already known. The FBI's recent claims are a clear confirmation of these fears.

According to the American government agency, an alarming number of people are exploiting AI technology for phishing attacks and to develop malware, with disastrous consequences for cybersecurity.

Once their filters are bypassed, systems like ChatGPT can generate sophisticated malicious agents capable of evading the latest security systems without much difficulty. This has put the creation of polymorphic malware, which until a few months ago was the prerogative of more experienced programmers, within everyone's reach.

While models like ChatGPT and Claude 2 are, at least in theory, considered "walled gardens", law enforcement is focusing on the world of open source AI.

In this context, people can train AI models according to their specific needs, with completely unpredictable consequences. Cases like WormGPT, which gives hackers access to ChatGPT clones tailored to the black-hat industry, open up disturbing scenarios for the near future.

From polymorphic malware to WormGPT: all the risks of artificial intelligence

Attackers can use these tools to automate the entire process of creating web pages and email campaigns, drastically reducing the time it takes to set up a malware campaign.

The FBI's concerns, however, are not limited to computer viruses. Generative AI harnessed to produce deepfakes can be very dangerous: the ability to create videos and dialogues that look real in every respect has devastating potential for cybercrime and beyond.

To address this problem, leading AI companies such as OpenAI, Microsoft, Google and Meta have committed to introducing a "watermark" to flag content produced with these technologies. However, OpenAI recently shut down its dedicated detection tool, known as AI Classifier, which had not proved adequately effective.

The wide availability of open source generative AI technologies that can be used privately is now a reality to be reckoned with, and agencies such as the FBI are increasingly worried about the many illegal uses of these platforms.
