20 March 2024

EU Approves World's First AI Act

March 14th, 2024 – The European Parliament passed the Artificial Intelligence Act, which aims to ensure safety and compliance with fundamental rights while promoting innovation.

The regulation, agreed upon in negotiations with member states in December 2023, was supported by MEPs with 523 votes in favor, 46 against, and 49 abstentions. It seeks to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also fostering innovation and positioning Europe as a frontrunner in the AI sector. The regulation sets out responsibilities for AI based on its potential risks and impact levels.

As Dragos Tudorache, a Romanian lawmaker, asserted, the AI Act has nudged the future of AI in a human-centric direction, one in which humans remain in charge of the technology and it helps us make new discoveries, drive economic growth, advance society and unlock human potential.

Major tech companies generally support the idea of regulating AI while also working to ensure that any regulations benefit them. OpenAI CEO Sam Altman caused some controversy last year when he suggested that the maker of ChatGPT might leave Europe if it couldn't comply with the AI Act, but later clarified that there were no actual plans to do so.

How does the AI Act function?

Similar to many EU regulations, the AI Act was originally created to serve as consumer safety legislation, employing a “risk-based approach” towards products or services utilising artificial intelligence.

The riskier an AI application is deemed to be, the more scrutiny it will undergo. The majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies have the option to adhere to voluntary requirements and codes of conduct.

Banned AI systems

The new Act prohibits specific AI applications that threaten individuals' rights, such as biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases. Also banned are emotion recognition in the workplace and schools, social scoring, systems that attempt to predict whether a person will commit a crime (unless the person is already known to be involved in criminal activity), and AI that exploits people's vulnerabilities due to their age, disability or socio-economic situation. Moreover, using biometric data to infer sensitive characteristics such as race or sexual orientation is prohibited.

Law enforcement exemptions

In general, law enforcement is not permitted to use biometric identification systems except in specific and limited circumstances. Real-time use of these systems must adhere to strict safeguards, such as limits on time and location and prior authorisation from a judicial or administrative authority. Examples of permitted use cases include searching for a missing person, preventing a terrorist attack, or identifying victims of sex trafficking.

Obligations for high-risk AI systems

Clear obligations also apply to other high-risk AI systems, meaning those with significant potential to harm health, safety, fundamental rights, the environment, democracy, or the rule of law. Examples of high-risk AI applications include critical infrastructure, education, employment, essential public and private services such as healthcare and banking, certain law enforcement systems, migration and border control, and justice and democratic processes such as influencing elections. These systems must assess and mitigate risks, keep usage records, be transparent and accurate, and ensure human oversight. Citizens will have the right to file complaints about AI systems and to receive explanations of decisions made by high-risk AI systems that affect their rights.

Fines for non-compliance will reach up to €35 million (around $38 million) or 7% of a violator's global annual revenues, whichever is higher.
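To illustrate how that cap works in practice, here is a minimal sketch, assuming a hypothetical company's worldwide annual revenue; the function name and the figures used are illustrative only and this is not legal advice.

# Minimal illustrative sketch of the headline fine cap described above:
# EUR 35 million or 7% of worldwide annual revenue, whichever is higher.
# The function name and the example revenue figure are hypothetical.

def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound of the fine for the most serious violations."""
    fixed_cap = 35_000_000                           # EUR 35 million
    revenue_cap = 0.07 * global_annual_revenue_eur   # 7% of worldwide revenue
    return max(fixed_cap, revenue_cap)

# A company with EUR 2 billion in worldwide revenue: the 7% limb applies.
print(max_ai_act_fine(2_000_000_000))  # 140000000.0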

Next steps

The regulation is currently undergoing a final check by lawyer-linguists and is expected to be formally adopted before the end of the legislative session through the corrigendum procedure. The law also needs to be formally endorsed by the Council. It will enter into force twenty days after publication in the Official Journal and will be fully applicable 24 months after its entry into force, with the exception of certain provisions: bans on prohibited practices will apply six months after entry into force; codes of practice nine months after; general-purpose AI rules, including governance, 12 months after; and obligations for high-risk systems 36 months after.
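Purely as an illustration of this staggered timeline, the following sketch derives indicative application dates from a hypothetical entry-into-force date; the actual dates depend on when the Regulation is published in the Official Journal.

from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months; the day of month is kept,
    # which is safe here because the example starts from the 1st.
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date (placeholder, not the real date).
entry_into_force = date(2024, 8, 1)

milestones = {
    "Bans on prohibited practices": 6,
    "Codes of practice": 9,
    "General-purpose AI rules (incl. governance)": 12,
    "Full applicability": 24,
    "Obligations for high-risk systems": 36,
}

for name, months in milestones.items():
    print(f"{name}: {add_months(entry_into_force, months)}")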

For further information and assistance, feel free to get in touch with Xenia Kasapi, Head of Intellectual Property and Data Protection at the firm.
