Privacy Guarantor wastes no time: opens investigation into OpenAI Sora

In recent weeks, the company led by Sam Altman presented a preview of OpenAI Sora, an artificial intelligence model that creates realistic videos from textual descriptions (prompts). The new tool, not yet publicly available, relies on a complex configuration of neural networks to generate very high-resolution video sequences, with detailed and complex scenes, lasting up to 60 seconds.

We had briefly described how OpenAI Sora works, highlighting the power of the platform. The use of deep neural networks (DNNs) organized in multiple stages, trained on a large corpus of video data, allows the system to understand, analyze and generate realistic videos. In addition to translating textual instructions, even very complex ones, into coherent visual sequences, Sora can narrate stories with a cinematic approach, offering a realistic and engaging visual result.
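OpenAI has not published Sora's architecture in detail, so the sketch below is only a conceptual illustration of the multi-stage text-to-video idea described above (encode the prompt, iteratively refine a noisy latent video, decode it into frames). Every function, shape and constant here is a hypothetical toy placeholder, not one of Sora's actual components.

```python
import numpy as np

# Conceptual sketch of a multi-stage text-to-video pipeline.
# All components are toy placeholders, NOT Sora's real model.

rng = np.random.default_rng(0)

def encode_text(prompt: str, dim: int = 64) -> np.ndarray:
    """Toy text encoder: maps a prompt to a fixed-size embedding."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def denoise_step(latent: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Toy denoising step: in a real model a neural network predicts and
    removes noise from the latent video, conditioned on the text."""
    guidance = text_emb.mean()  # stand-in for cross-attention conditioning
    return latent * 0.9 + guidance * 0.01

def decode_latent(latent: np.ndarray) -> np.ndarray:
    """Toy decoder: maps the latent tensor back to pixel-space frames."""
    return np.clip((latent - latent.min()) / (np.ptp(latent) + 1e-8), 0, 1)

def generate_video(prompt: str, frames: int = 16, h: int = 8, w: int = 8) -> np.ndarray:
    text_emb = encode_text(prompt)
    latent = rng.standard_normal((frames, h, w))  # start from pure noise
    for _ in range(50):                           # iterative refinement
        latent = denoise_step(latent, text_emb)
    return decode_latent(latent)                  # (frames, h, w) in [0, 1]

video = generate_video("a golden retriever surfing at sunset")
print(video.shape)  # (16, 8, 8)
```

The point of the sketch is only the division of labor between stages: the text encoder, the iterative refinement loop and the decoder are trained separately or jointly in real systems, and each stage in Sora is presumably orders of magnitude larger than these stubs.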

OpenAI Sora: video photorealism raises the concerns of the Italian Privacy Guarantor

We said that Sora stands out for the surprising results it can achieve with great simplicity. The videos produced may include virtual actors capable of expressing realistic emotions. Try visiting the website dedicated to the OpenAI Sora project: to date the company, in which Microsoft has heavily invested, has published nine demonstration videos generated by the AI. You can see what kind of video the generative model was able to produce for each prompt given as input (the prompt is shown at the bottom, immediately below each video…).

The new technology that OpenAI is developing opens up new perspectives in the generation of video content using artificial intelligence, while also highlighting the need for ethical and social considerations regarding media manipulation and security.

OpenAI Sora: safety and continuous training indicated as the solution

In presenting Sora, OpenAI itself declared that it is adopting various safety precautions. A team of red teamers is working on verifying and optimizing the model in order to combat uses that could lead to the generation of misinformation, inappropriate content, or material that can fuel hatred and prejudice.

OpenAI's engineers are thus developing ad hoc tools, such as an automatic classifier, to identify misleading content generated by Sora. For the moment, however, Sora can already leverage the safety methodologies developed for other models such as DALL·E 3. It is therefore possible to reject prompts that violate the usage policies or that may otherwise cause problems.
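OpenAI has not described the internals of Sora's classifier or prompt filter. As a purely illustrative sketch, the snippet below shows how a pre-generation prompt-rejection filter of this kind could be assembled with OpenAI's public Moderation endpoint; the check_prompt helper and its control flow are our assumptions, not Sora's actual safety stack.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_prompt(prompt: str) -> bool:
    """Illustrative pre-generation filter: returns True if the prompt
    passes moderation, False if it should be rejected. This mirrors the
    idea of rejecting policy-violating prompts before video generation;
    it is not Sora's actual implementation."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # Report which policy categories tripped (e.g. hate, violence).
        flagged = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Prompt rejected, flagged categories: {flagged}")
        return False
    return True

if check_prompt("A photorealistic video of a cat playing the piano"):
    print("Prompt accepted: safe to pass to the video generator.")
```

A filter of this kind only covers the input side; detecting misleading content in the generated videos themselves, as the classifier mentioned above is meant to do, is a separate and harder problem.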

OpenAI recognizes, however, that it is virtually impossible to correctly detect all legitimate uses or potential abuses of the technology. Continuous training, through interaction with the real world, is considered essential for creating and deploying AI-based systems that become increasingly secure over time.

Why the Privacy Guarantor is opening a new investigation into OpenAI

The Italian Privacy Guarantor, which had recently served OpenAI a formal notice of violation, deeming the previously implemented measures inadequate, has opened a new file. This time the Authority refers to the possible implications that the Sora service could have on the processing of the personal data of users located in the European Union, and in particular in Italy.

Within 20 days, OpenAI will have to specify whether the new artificial intelligence model is a service already available to the public and whether it is, or will be, offered to European users and, in particular, to residents of Italy. Furthermore, the Guarantor's office asks OpenAI to clarify in detail a series of elements on how Sora works: the training methods of the algorithm; the data collected and processed to train it, especially where personal data is involved; whether these include special categories of data (religious or philosophical beliefs, political opinions, genetic data, health, sexual life); and what sources are used.

If the service were offered to European users, the Guarantor has asked OpenAI in particular to indicate whether the methods envisaged for informing users and non-users, as well as the legal bases for processing their data, comply with the European Regulation (GDPR).
