
Is ChatGPT capable of creating biological weapons? Here are the OpenAI tests

Research and development in the field of Artificial Intelligence are progressing at a dizzying pace. While this development offers new opportunities, it also raises some disturbing scenarios.

OpenAI recently published the results of a test designed to determine whether GPT-4 makes the creation and use of biological weapons easier. Based on the results, the company plans to build a system to prevent this type of abuse. The risk itself is nothing new: the executive order issued by Joe Biden on 30 October 2023 had already raised public awareness of the matter.

OpenAI, far from indifferent to the issue, is working on a system that should flag potential attempts to manipulate the AI into aiding the creation of biological weapons.

The company has partnered with Gryphon Scientific, a firm specialized in scientific consultancy. The aim of the collaboration is to build a system that effectively blocks attempts to abuse GPT-4.

Biological weapons with AI? Here’s how OpenAI is studying a prevention system

The collaboration included a test involving 50 biology researchers and as many students who, divided into groups, tried to push GPT-4 past its safeguards and walk through the creation of a hypothetical biological weapon.

From the data obtained, OpenAI concluded that “Artificial Intelligence risks making the development of biological weapons more efficient”. Sam Altman’s company noted that AI is progressing rapidly and that, in the near future, it will be necessary to limit its operation in certain contexts.

The company therefore stated that “Research on how to do this and how to prevent risks is extremely important”. Although the chatbot refuses to answer certain questions, those limits are easily circumvented.

To limit the dangers, OpenAI is also building a risk reduction system christened Preparedness. It should include several preventive measures, although it is still in its infancy.
