
Artificial Intelligence Act, the new EU law on AI

It guarantees safety and respect for fundamental rights and promotes innovation

The European Parliament has approved the law on artificial intelligence (AI), which guarantees safety and respect for fundamental rights and promotes innovation.

The aim is to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI systems, while promoting innovation and ensuring that Europe leads the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Prohibited applications

The new rules outlaw some AI applications that threaten citizens' rights.
These include biometric categorisation systems based on sensitive characteristics and the indiscriminate scraping of facial images from the internet or from CCTV footage to create facial recognition databases.

Emotion recognition systems in the workplace and in schools, social scoring systems, predictive policing practices (when based solely on profiling or on assessing a person's characteristics), and systems that manipulate human behaviour or exploit people's vulnerabilities will also be banned.

Exceptions for law enforcement

In principle, law enforcement agencies will not be able to use biometric identification systems, except in some specific situations expressly provided for by law.

"Real-time" identification may only be used if strict safeguards are met, for example if its use is limited in time and space and subject to judicial or administrative authorisation.
Permitted uses include, for instance, searching for a missing person or preventing a terrorist attack.

Using these systems after the fact is considered high risk; such use requires judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

There are also clear obligations for other high-risk AI systems (which could cause significant harm to health, safety, fundamental rights, the environment, democracy and the rule of law).

This category includes uses related to critical infrastructure, education and vocational training, employment, essential public and private services (e.g. healthcare and banking), certain law enforcement systems, migration and border management, and justice and democratic processes (for example, systems used to influence elections).

These systems must assess and reduce risks, maintain usage records, be transparent and accurate, and ensure human oversight. Citizens will have the right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Transparency obligations

General-purpose AI systems, and the models on which they are based, will have to meet certain transparency requirements and comply with EU copyright law during model training.

More powerful models, which may pose systemic risks, will also have to comply with other obligations, such as carrying out model evaluations, assessing and mitigating systemic risks, and reporting on incidents.
Furthermore, artificial or manipulated images and audio or video content (so-called "deepfakes") will have to be clearly labeled as such.

Measures to support innovation and SMEs

EU countries will need to establish regulatory sandboxes and real-world testing mechanisms at national level, so that SMEs and start-ups can develop and train innovative AI systems before bringing them to market.

Next steps

The regulation is still subject to a final check by lawyer-linguists and should be definitively adopted before the end of the parliamentary term (corrigendum procedure). The law also still needs to be formally endorsed by the Council.
It will enter into force twenty days after publication in the Official Journal of the EU and will start to apply 24 months after entry into force, except in respect of:
  • bans on prohibited practices, which will apply six months after entry into force;
  • codes of good practice (nine months later);
  • rules on general-purpose AI systems, including governance (12 months), and obligations for high-risk systems (36 months). (Source: https://www.europarl.europa.eu/ )


