The EU AI Act: 11 Recommendations for Business

The EU AI Act applies to all AI systems impacting people in the EU across all sectors. Businesses can prioritize according to the order in which provisions take effect, starting with prohibited AI practices, followed by general purpose AI models, then AI systems that interact with people, and finally high-risk AI systems. BSR's Technology and Human Rights team discusses how a human rights-based approach will help ensure compliance.

Photo: artJazz on iStock

23.05.2024

Sponsored

Richard Wingfield, Hannah Darnton, Asako Nagai, and J.Y. Hoh, BSR

This is the second of a two-part series from BSR’s Technology and Human Rights team on the latest developments around the European Union Artificial Intelligence Act and its implications for business.

With the EU’s Artificial Intelligence Act (the AI Act) soon to come into force, leaders around the world are asking themselves what it will mean for their business. The Act is broad in scope, applying to a wide range of AI systems and to companies across the AI value chain, from development to use. It is therefore crucial for any company that develops or uses AI to understand which requirements of the Act will apply to it, an exercise that many legal and compliance teams are already undertaking.

Where prioritization is needed, companies might want to look at the order in which provisions will come into force, focusing initially on ensuring there are no prohibited AI practices (since these rules take effect in six months), then looking at requirements around general purpose AI models (twelve months), next AI systems that interact with people (two years), and finally high-risk AI systems (three years).

How a human rights-based approach to AI can help 

As with other pieces of tech-related regulation stemming from the EU (such as the Digital Services Act), the protection of human rights (termed “fundamental rights” within the EU’s legal system) weaves its way throughout many of the AI Act’s provisions.

While civil society has criticized the legislation for not going far enough, for example in prohibiting mass surveillance and facial recognition technologies, any company developing or deploying AI systems will need an understanding of the potential adverse impacts of those systems on human rights.

The most relevant provisions, which are particularly important for “high-risk” AI systems, are: 

1. Risk management systems: Companies will need to develop and implement risk management systems when developing high-risk AI systems, and these risk management systems will need to include the identification and analysis of potential risks to human rights.

2. Data and data governance: When developing high-risk AI systems that involve training models with data, companies will need to ensure that those models are validated and tested in line with appropriate data governance and management practices, which must include an examination of possible biases that are likely to adversely impact human rights. 

3. Transparency and provision of information to deployers: Companies that develop high-risk AI systems must design them in such a way as to ensure that their operation is “sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately”. This involves providing instructions for use that include, among other things, details of any potential adverse impacts on human rights.

4. Human oversight: Companies will need to design and develop high-risk AI systems in such a way that they can be effectively overseen by a human. Such human oversight must aim at preventing or minimizing any risks to human rights that may emerge when the system is used as intended, or where it is “misused” in ways that are reasonably foreseeable (i.e. where it might be used in ways which weren’t intended). 

5. Human rights impact assessments: Before certain high-risk AI systems are deployed (primarily when systems are to be used by the public sector), deployers must undertake an assessment of the impact on human rights that the use of the system may produce. This assessment must include details of the individuals and groups likely to be affected, the specific risks of harm, and what measures are to be taken to mitigate those risks. 

6. Reporting serious incidents: Providers of high-risk AI systems placed on the EU market must report any “serious incident” to the market surveillance authorities of the member states where that incident occurred, and “serious incidents” include breaches of obligations under EU law intended to protect human rights. 

The definition of “high-risk” means that these requirements of the AI Act will affect some businesses more than others, such as companies involved in the provision of key public services and infrastructure, or those in the financial services sector using AI to assess a person’s creditworthiness. But some “high-risk” AI systems may be relevant for companies’ internal uses of AI, regardless of sector, such as the use of AI-based emotion recognition technologies to determine an employee’s emotions (such as whether they are bored or unhappy), or the use of AI for recruitment purposes.

These different provisions all require companies to develop a strong understanding of the potential impacts that their development or use of AI may have on human rights, as well as to take steps to mitigate those impacts. As such, the AI Act has overlaps with other existing and upcoming regulations which require human rights due diligence. Companies will be well placed to meet these emerging regulatory requirements by taking a harmonized approach, grounded in human rights, and specifically the UN Guiding Principles on Business and Human Rights (UNGPs), across their business.

In addition to BSR’s support for companies in applying the UNGPs, we’re developing a range of resources specifically on the human rights impacts of AI, including guidance on responsible AI in the financial services, consumer, healthcare, and retail sectors, as well as our upcoming sector-wide human rights assessment of generative AI.

Our key recommendations on what companies can do now are: 

1. Put together an inventory of the existing and planned AI use cases within your company (a minimal sketch of one possible inventory entry follows this list).

2. Undertake human rights due diligence of those existing and planned uses of AI to identify high-risk areas.

3. Ensure you have a clearly defined purpose for each use of AI and consider establishing use limitations.

4. Establish a governance mechanism for the responsible use of AI within the company, such as internal cross-functional oversight committees and/or an external advisory council.

5. Given that many of AI’s risks are connected to privacy and data protection, ensure a high level of data protection within the company, reviewing existing measures, policies, and processes to ensure they address the additional risks created by AI.

6. Test AI models for bias and externalities to mitigate potential discriminatory impacts (see the bias-testing sketch after this list).

7. Undertake adversarial testing and red teaming (exercises where the AI system is stress tested to discover how the system might be misused or lead to harmful outcomes). 

8. Provide transparency to users, internally and externally, about how the AI models and systems work. 

9. Integrate feedback through a reporting channel where potential misuse and abuse of AI systems can be reported.

10. Engage in dialogue with other industry players in your sector. 

11. Engage with external stakeholders throughout the AI life cycle to help inform decisions around the development, sale and use of AI. External stakeholders could also be part of an external advisory council (see recommendation 4). 
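
As a minimal, hypothetical sketch of recommendation 1, the Python snippet below shows one way an entry in an AI use-case inventory could be structured. The field names and risk tiers are illustrative assumptions on our part, not terms defined by the AI Act, and the legal classification of any given system requires case-by-case analysis.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers loosely mirroring the AI Act's structure;
# actual legal classification requires case-by-case analysis.
RISK_TIERS = ("prohibited", "high-risk", "limited-risk", "minimal-risk")

@dataclass
class AIUseCase:
    """One entry in a company-wide inventory of AI use cases (illustrative)."""
    name: str                     # e.g. "CV screening for recruitment"
    owner: str                    # accountable team or individual
    purpose: str                  # clearly defined purpose (recommendation 3)
    status: str                   # "existing" or "planned"
    risk_tier: str                # assumed tier from RISK_TIERS
    affected_groups: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier}")

# Example: recruitment is one of the internal use cases the AI Act
# treats as high-risk, regardless of sector.
inventory = [
    AIUseCase(
        name="CV screening for recruitment",
        owner="HR Technology",
        purpose="Shortlist applicants for open roles",
        status="existing",
        risk_tier="high-risk",
        affected_groups=["job applicants"],
        mitigations=["human review of all rejections", "annual bias audit"],
    ),
]
```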
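
And as one narrow illustration of the bias testing in recommendation 6, the sketch below computes group-level selection rates and the demographic parity difference for a binary classifier, assuming hypothetical predictions and a single protected attribute. Real-world bias testing would cover additional metrics, intersectional groups, and qualitative review.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-outcome (selection) rates across groups."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = selected) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.4 = 0.2
```

A large gap here is a signal to investigate and mitigate, not by itself evidence of unlawful discrimination.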

For more information on the AI Act or to discuss its implications for your business, members can contact our Tech and Human Rights team.
