
EU approves draft AI Act: a step towards regulating artificial intelligence

The European Parliament has approved the first draft of the Artificial Intelligence Act. The Union's push for new rules to regulate generative AI and advanced algorithms continues.

The European Union is moving ever faster towards the regulation of artificial intelligence. Proposed by the Commission in April 2021, the Artificial Intelligence Act has now had its latest draft approved with 84 votes in favour, 7 against and 12 abstentions. This is the first key step towards a future in which AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly.

As announced by the European Parliament's Internal Market and Civil Liberties committees, the proposal has passed and will now require approval by the full Parliament and negotiation with the Council; the dedicated plenary session is scheduled for 12-15 June 2023.

But what does the European Artificial Intelligence Act provide? According to the official document, the rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk that AI can pose. The systems that the European Union will strictly prohibit are already clear, i.e. those that pose an "unacceptable" risk to the safety of citizens. Specifically, the following are covered:

  • "real-time" remote biometric identification systems in publicly accessible spaces;
  • "post" remote biometric identification systems, with the sole exception of law enforcement for the prosecution of serious crimes and only with prior judicial authorisation;
  • biometric categorisation systems using sensitive characteristics such as gender, race, ethnicity, citizenship status, religion or political orientation;
  • predictive policing systems based on profiling, location or past criminal behaviour;
  • emotion recognition systems used in law enforcement, border management, the workplace and educational institutions;
  • indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, in violation of human rights and the right to privacy.

Similarly, AI systems capable of influencing voters' behaviour and the recommendation algorithms of large platforms (i.e. those with over 45 million monthly active users) are classified as "high risk".

Finally, the latest change to the text concerns generative AI such as ChatGPT: the companies that operate these systems will have to comply with specific transparency obligations.
