The progress of artificial intelligence has created many doubts and worries, in addition to many opportunities.
Faced with the dangers that accompany technological progress, however, the giants of the sector do not want to be caught unprepared. Sam Altman, CEO of OpenAI, spoke on exactly this point, stating that "regulation of AI is essential."
To that end, the company behind ChatGPT has created a new team, called Preparedness, to deal with potentially catastrophic scenarios caused by future AI models.
A post on the OpenAI blog reads: "We believe that frontier AI models, which will exceed the capabilities of today's most advanced existing models, have the potential to benefit all of humanity," while also underlining that these same models pose "increasingly severe risks."
The post, published on October 26, then poses some questions to evaluate potential future risks, such as:
- How dangerous are the most advanced AI models if misused, both now and in the future?
- How can we build a robust framework to monitor, evaluate, predict, and protect against dangerous capabilities in the most advanced AI models?
- If the weights of a frontier AI model were stolen, how could a malicious actor exploit them?
Catastrophic risks related to AI? OpenAI responds with two distinct initiatives
The Preparedness team is led by Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology.
The Preparedness team will study the AI models to be developed in the near future, focusing on their ability to deceive and persuade humans, and touching on other areas such as cybersecurity and capabilities in chemistry, biology, radiology, and the nuclear sector. It will help monitor, assess, predict, and protect against catastrophic risks in multiple contexts.
The team's mission also includes developing and maintaining a Risk-Informed Development Policy (RDP) and creating a governance structure for accountability and oversight throughout the AI development process.
Additionally, OpenAI is collaborating with AI companies such as Anthropic, Google, and Microsoft to create an industry body to promote AI safety, called the Frontier Model Forum, which officially opened its doors in July 2023. It was also announced that Chris Meserole, previously director of artificial intelligence and emerging technologies research at the Brookings Institution, will be the organization's first executive director.
In addition to monitoring the use of AI, the Frontier Model Forum plans to launch several initiatives, including funding to support researchers working in this field.