AI Act, the new rules for artificial intelligence: here’s what they are

That a regulation of artificial intelligence was in the air had been known for some time. Now, however, the road to official approval of the so-called European AI Act finally seems clear.

The European Parliament's committees (Internal Market, IMCO; Civil Liberties, LIBE) have in fact approved the text of the legislation governing the design, development, updating and use of artificial intelligence solutions in the modern digital age.

What is the AI Act and what does it contain

The AI Act, originally proposed by the European Commission in April 2021, includes a list of prohibited uses of AI, together with specific rules for uses in high-risk areas (for example in education, health and employment). There are also obligations regarding the verification of the quality of data produced by generative models, along with risk assessment plans.

There are also provisions on transparency about how AI systems work, especially for chatbots and image and video processing tools.

At this point, despite the objections initially raised by countries such as France (which evidently feared damaging the business of some national companies…), the AI Act should become law within a few months. A vote in plenary session by Parliament is expected in the coming weeks; the final approval of the Council will then follow.

The European Union is oriented towards a gradual implementation of the new law: the legal requirements for the developers concerned will grow progressively between 2024 and 2027. There will be a two-year transition period, during which many companies that have invested heavily in the development of AI-based solutions will necessarily have to adapt.

The use of stochastic models carries greater risk

Bernd Greifeneder, CTO of Dynatrace, notes that, based on the established guidelines, the European Union has understandably focused its regulation on reducing the geopolitical risks of AI rather than on commercial issues.

"Most organizations will likely find themselves using what the EU calls 'little to no risk' AI models. These models do not fall directly under the scope of the AI Act, but the EU is encouraging organizations to adopt voluntary codes of conduct to better manage risk," argues Greifeneder. "In developing their codes of conduct, it is important for organizations to recognize that not all AI is the same. Some types, such as those based on non-deterministic approaches, like generative AI, carry greater risks than other models."

The use of stochastic models is, however, so to speak, the "spice" of the AI systems we all know: the "engine" that makes tools trained on large volumes of data work.

The AI Act "shakes up" traditionally used schemes, calling on companies to reconsider, says Greifeneder, "how AI makes decisions, whether it is transparent, and which processes it has access to and control over. Without a classification framework that clearly outlines these characteristics, organizations will struggle to use AI safely, regardless of whether it is compliant or not."

Criticism of the possibility of using facial recognition to combat crime

There is also no shortage of critical voices. Patrick Breyer, MEP and member of the LIBE committee, claims that the AI Act contains several gray areas that should be remedied as soon as possible. For example, Breyer (Pirate Party) argues that, in its current form, the AI Act would open the door to using artificial intelligence for mass facial recognition. In other words, to identify a few thousand people subject to arrest warrants (in relation to crimes indicated in the AI Act itself), the legislation would sanction the use of biometric surveillance on an unacceptable scale and on a permanent basis.

The AI Act bans the concept of "social score"

The European law currently being approved, however, strictly prohibits the use of AI for activities aimed at computing a so-called social score: a system for evaluating or classifying people based on a series of behaviors, actions or data associated with their activities in daily life.

Social scoring can involve several factors, including online interactions, financial transactions, workplace behavior, personal relationships and more. Such systems can be implemented by government agencies, private organizations or companies for various purposes, such as risk management, fraud prevention, or even social control.

Since this scheme and derived approaches raise clear ethical concerns and very serious privacy problems, the promoters of the AI Act decided to outlaw practices such as social scoring, protecting citizens from potential abuse.

Opening image credit: David Gyung
