
World’s most extensive AI rules greenlit by EU Parliament

(Source: ESCO)

The European Parliament has approved what is believed to be the world’s most comprehensive regulatory framework for the use and rollout of artificial intelligence.

The Artificial Intelligence Act, which was agreed in negotiations with EU member states in December of last year, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

“The regulation establishes obligations for AI based on its potential risks and level of impact,” the EU Parliament said in a statement.

The new rules ban certain AI applications, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Also outlawed are emotion recognition in the workplace and schools, social scoring, and predictive policing when it is based solely on profiling a person or assessing their characteristics, as well as AI that manipulates human behaviour or exploits people’s vulnerabilities.


Speaking after the vote, Internal Market Committee co-rapporteur Brando Benifei said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency.”

He stressed that “unacceptable AI practices will be banned in Europe,” adding that “the AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development.”

Also responding to the vote, Civil Liberties Committee co-rapporteur Dragos Tudorache shared with QA Financial that “we have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies.”

Tudorache, however, warned that “much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.


The use of biometric identification systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations.

“Real-time” remote biometric identification (RBI) can only be deployed if strict safeguards are met; for example, its use must be limited in time and geographic scope and subject to specific prior judicial or administrative authorisation, the European Parliament stressed in a statement.

“Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack,” it went on to explain.

“Using such systems post-facto, or ‘post-remote RBI’, is considered a high-risk use case, requiring judicial authorisation being linked to a criminal offence.”


Clear obligations are also foreseen for high-risk AI systems, due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.

Examples of high-risk AI uses include critical infrastructure; education and vocational training; employment; essential private and public services, including banking and insurance; certain systems in law enforcement; migration and border management; and justice and democratic processes, such as influencing elections.

“Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights,” the statement read.


General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training.

The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content, so-called ‘deepfakes’, need to be clearly labelled as such, Parliamentarians agreed.

Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature, through the so-called corrigendum procedure. The law also needs to be formally endorsed by the European Council.

It will enter into force twenty days after its publication in the Official Journal of the European Union and be fully applicable 24 months after its entry into force.
