The EU AI Act: What it Means for Your Business

With the EU’s Artificial Intelligence Act soon to come into force, leaders around the world are considering the implications for business. BSR’s Technology and Human Rights teams discuss the key features of the Act, and who will be affected.


29.04.2024

Sponsored

Richard Wingfield, Asako Nagai, Hannah Darnton, and J.Y. Hoh, BSR

Key Points

  • With the EU’s Artificial Intelligence Act soon to come into force, leaders around the world are considering the implications for business.
  • The Act will have impacts well beyond the borders of the EU and will apply to any company developing, deploying, or using AI, meaning it is not limited to technology companies.
  • BSR’s Technology and Human Rights teams discuss the key features of the Act, and who will be affected.

The Artificial Intelligence Act (or “AI Act”) is a new piece of legislation that will regulate the development, deployment, and use of AI within the EU, but will impact businesses well beyond its borders. Its overall objectives are to ensure a well-functioning internal market for AI systems within the EU and to ensure a high level of protection of health, safety, human rights, and the environment.

The AI Act is ambitious, not only in the sense that it is the world’s first major piece of AI-related regulation, but also in its scope. The definition of “AI system” (in Article 3) is “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Defining a Risk-based Approach 

Within that broad scope, however, the approach taken is a risk-based one, with different rules and requirements depending on the level and nature of the risk. The AI Act categorizes five types:

1. Prohibited AI practices: The AI Act designates and prohibits a number of AI practices deemed to be particularly harmful, such as the use of deceptive techniques that distort behavior in ways that cause harm, or that exploit people’s vulnerabilities. These prohibitions will come into force in six months.

2. High-risk AI systems: The AI Act designates other types of AI systems as “high-risk”. Examples include biometrics and the use of AI in critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, and the administration of justice. For these, the Act sets out a number of requirements relating to their development, including establishing risk management systems, maintaining technical documentation, and ensuring accuracy and robustness, as well as human oversight. The Act also includes rules relating to placing these systems on the EU market, putting them into service, and using them, such as establishing quality management systems, documentation-keeping, cooperating with national authorities, and complying with conformity assessments. These requirements will come into force in three years.

3. General-purpose AI models (including large language models): The AI Act creates a number of requirements specific to general-purpose AI models, such as maintaining technical documentation, providing instructions on safe use, and ensuring that copyright law is respected. These requirements will come into force in twelve months.

4. AI systems requiring transparency: The AI Act also focuses on AI systems that are intended to interact directly with natural persons, and will require providers to ensure that people are aware that they are interacting with AI. In addition, deployers of AI systems that create “deep fakes” or that generate or manipulate text for general information purposes must disclose that the content is artificially generated or manipulated. These requirements will come into force in two years.

5. Low-risk AI systems: While the AI Act does not impose any requirements on AI systems that don’t fall into the above categories, it provides for the drawing up of voluntary codes of conduct for these lower-risk systems.

The AI Act also contains measures intended to support AI innovation, start-ups, and SMEs, including through AI regulatory sandboxes. It will be implemented and enforced both at an EU-wide level (including through the AI Office, a new body being developed within the European Commission, empowered to develop codes of practice and guidance and to take infringement action) and by designated national authorities in each EU member state. A range of potential fines and sanctions will be available (under Article 99), with the most serious breaches (non-compliance with the prohibited AI practices) punishable by fines of up to €35 million or seven percent of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
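To illustrate how the Article 99 cap scales with company size, here is a minimal sketch; the fine levels are those stated above, and the turnover figure is purely hypothetical:

```python
# Sketch of the Article 99 upper bound for the most serious breaches
# (prohibited AI practices): up to EUR 35 million or 7% of total worldwide
# annual turnover for the preceding financial year, whichever is higher.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a prohibited-practice breach."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical example: a company with EUR 1 billion in annual turnover
# faces a cap of EUR 70 million, since 7% of turnover exceeds EUR 35 million.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```

In practice, any fine actually imposed would depend on the circumstances of the breach; Article 99 sets only the upper bound.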

Who will the AI Act affect? 

The legislation sets out different requirements for different types of companies in the AI lifecycle: 

  • Companies placing AI systems on the market or putting them into service in the EU, or placing general-purpose AI models on the market in the EU (“providers”); 
  • Companies that deploy AI systems and have their place of establishment or are located within the EU (“deployers”); 
  • Companies, whether providers or deployers of AI systems, where the output produced by an AI system is used in the EU; 
  • Importers and distributors of AI systems; and 
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark. 

As such, the requirements are not limited to companies developing AI technology, but extend to any company using AI systems or their outputs in the EU, meaning that the AI Act will be important for a wide range of companies across all sectors (whether their use of AI is specific to a particular sector, such as AI-based creditworthiness assessments by financial services companies, or general, such as using AI for recruitment decisions).

Finally, it does not matter whether providers or deployers are established or located inside or outside the EU: the requirements will be relevant for companies across the world wherever an AI system or general-purpose AI model is placed on the EU market, or wherever its outputs are used in the EU.

With the AI Act now coming into force, understanding its requirements will be important for any company developing or using AI. The legal framework is novel and complex, and broad both in scope and in the range of companies to which it will apply, both inside and outside of the EU. By understanding the Act’s risk classifications and your company’s role within the AI lifecycle (provider, deployer, etc.), you can prepare for compliance. Our next blog post will delve into how to take a human rights-based approach to addressing some of the risks posed by AI that the AI Act seeks to mitigate.

For more information on the AI Act or to discuss its implications for your business, feel free to contact our Tech and Human Rights team.
