On March 13, 2024, the European Parliament approved the proposal for harmonized rules on artificial intelligence (AI), also known as the Artificial Intelligence Act, or the AI Act for short. Following a final lawyer-linguist check, the European Parliament published a corrigendum to the AI Act on April 16, 2024. The corrigendum corrected some of the language in the act ahead of its publication in the Official Journal of the European Union.
The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values while boosting innovation. The AI Act regulates AI based on its capacity to cause harm to society. The obligations will be enforced through a “risk-based” approach, whereby AI applications are categorized as posing unacceptable, high, limited or minimal risk. In essence, this means that the higher the risk, the stricter the rules.
Accordingly, if an AI application is deemed to pose an unacceptable risk, the application is banned. That is the case for AI systems that perform biometric categorization based on sensitive characteristics, and for the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. The same is true of AI that manipulates human behavior or exploits people’s vulnerabilities; in short, systems that threaten citizens’ rights. However, exemptions are put in place for law enforcement.
The AI Act will also impose clear obligations on systems categorized as high risk. High-risk AI includes systems used in relation to critical infrastructure and private and public services, e.g., in banking and healthcare. Providers of such systems will be obliged to assess and reduce risk, maintain use logs, adhere to transparency requirements, ensure accuracy and ensure human oversight.
General-purpose AI (GPAI) will have to meet transparency requirements, including compliance with EU copyright law. GPAI providers will also have to publish detailed summaries of the content used to train their models. More powerful GPAI models may face additional requirements, such as conducting model evaluations, assessing and mitigating systemic risks, and reporting incidents. In many respects, the rules set out requirements that AI service providers must comply with to ensure, among other things, transparency. Citizens will also have the right to submit complaints about AI systems through a dedicated complaints mechanism.
Returning to the impact of the regulation on businesses using and/or offering AI solutions: because of the risk-based approach, each AI system will need to be analyzed on a case-by-case basis. Given the EU’s stated aim of supporting small and medium-sized enterprises (SMEs), we can also expect vast differences between the obligations imposed on “big tech” and those imposed on, let’s say, a modest start-up.
Next steps
Once published in the Official Journal, the AI Act will enter into force twenty days after its publication and be fully applicable 24 months thereafter, with the exception of certain provisions that will apply at an earlier or later date. These include the bans on prohibited practices (applicable six months after entry into force), codes of practice (applicable nine months after entry into force), GPAI rules including governance (applicable 12 months after entry into force) and obligations for high-risk systems (applicable 36 months after entry into force).
Gulliksson will keep you informed of further developments. Please contact us if you would like to discuss the provision or use of AI solutions and their legal implications.