An Impact-Based Approach to Responsible AI
AI is evolving at speed, creating unprecedented opportunities and challenges across every industry and function. A responsible approach to AI goes beyond policies and principles to consider AI’s impacts on people and the environment across the company’s value chain over the short, medium, and long term. BSR’s tech team offers key considerations for navigating AI’s evolving landscape responsibly.
Businesses are racing to integrate artificial intelligence (AI) technologies into their operations, spurred by skyrocketing investment in generative AI, which reached US$25.2 billion by the end of 2023. Studies show that AI has the potential to improve productivity and enhance work quality by enabling employees to complete tasks more efficiently and by bridging the skill gap between low- and high-skilled workers, leading to reduced costs and increased revenue.
While many companies have adopted “Responsible AI” concepts—such as a focus on AI principles and governance, liability reduction, and avoidance of immediate risks—these approaches are insufficient to manage the profound impacts, risks, and opportunities posed by AI.
The development and use of AI technologies can bring environmental and societal risks, ranging from strain on the electricity grid and water supply to violations of the right to privacy and worker rights, and beyond. Responsible AI efforts are often not well integrated with the core work of sustainability teams in understanding, managing, and reporting on such social and environmental issues.
BSR believes that a responsible approach to AI considers the potential benefits and adverse impacts of AI on people and the environment across the company’s value chain over the short, medium, and long term.
Rather than centering on proximate liabilities and compliance, this approach treats understanding impacts as the way to identify and manage future risks. It also aligns with corporate sustainability regulations (e.g., the Corporate Sustainability Reporting Directive (CSRD) and the Corporate Sustainability Due Diligence Directive (CSDDD)) and with business and human rights standards (e.g., the UN Guiding Principles on Business and Human Rights and the OECD Guidelines). An impact-based approach can also help companies comply with the EU AI Act, which requires companies to identify and address the impacts of AI on fundamental rights.
While business leaders recognize the risks associated with AI, companies are largely unprepared to address them. According to Gartner, 66 percent of executives see the mass availability of AI as an emergent risk, but only 11 percent feel confident in their ability to provide effective oversight of AI risks. This points to a significant gap in current company practices.
BSR recommends five key considerations to effectively manage the social and environmental impacts of AI:
1) Social and environmental impacts are at the core of AI risks and opportunities. Common company practices for Responsible AI include AI principles, policies, compliance, and avoidance of legal risk. The actual and potential impacts of AI on people and the environment are often under-explored by companies. These impacts can be the main drivers of corporate risks and opportunities; addressing them is essential to build a meaningful approach to Responsible AI.
What are the social and environmental impacts of AI? Here are some illustrative examples.
Impacts on People:
- Bias in AI models leading to discrimination against underrepresented groups
- Violations of the right to privacy
- Implications for employment, labor markets, democracy, and information systems
- Precarious working conditions for AI data enrichment workers
Impacts on the Environment:
- Increased energy consumption and carbon emissions from training and running AI models
- Downstream environmental impacts from harmful use cases (e.g., using AI to speed up fossil fuel extraction or to increase consumption of unsustainable products)
- Increased water use and land conversion for data centers
- Increased pollution and e-waste from manufacturing the semiconductors and hardware that power AI models
2) Sustainability teams bring valuable tools to Responsible AI. Responsible AI practices are often led by cross-functional groups including data science, privacy, and compliance/legal teams. Sustainability teams can bring significant value to this work by coordinating with existing initiatives to understand the company’s environmental and social impacts, risks, and opportunities. Sustainability leads can also leverage existing toolkits for addressing these impacts, such as materiality assessments, human rights assessments, stakeholder engagement practices, and sustainability reporting.
Embedding AI-related impacts into existing sustainability structures and processes can make it easier for companies to manage both these fast-moving risk areas. Furthermore, leveraging the synergies between sustainability and AI can help create long-term business value.
3) Risks lie throughout the full AI value chain. Often when we think of AI impacts, we think of downstream impacts related to the use of AI, such as job displacement, bias and discrimination, surveillance, or misinformation. However, social and environmental impacts also occur in the upstream AI supply chain, such as worker rights issues related to data enrichment service providers, or increased water use and land conversion for the data centers that power AI systems. All these impacts should be considered as part of Responsible AI efforts. Importantly, sustainability regulations such as the CSRD and CSDDD require companies to conduct due diligence on and disclose impacts across their full value chain.
4) AI activities and risks differ by business function. AI-related impacts can occur across different functions depending on the company’s business model and AI use cases. Companies examining AI often focus on IT and engineering, but AI-enabled solutions are being used by a variety of teams, including human resources, marketing, sales, and customer service. Increasingly, boards and executive teams are also using AI-based solutions to support corporate decision making. Engineering and procurement are two key functions where Responsible AI measures should be integrated when developing or purchasing AI solutions, but impacts can occur anywhere in the company.
Below we provide examples of AI use cases and illustrative impacts across different business functions, all of which can lead to sustainability and business risks.
5) AI use will have impacts in the short, medium, and long term. Often, we focus on short-term impacts; however, AI technologies may also have longer-term effects, for example on labor markets and the economy, or on democracy and information systems. Note that sustainability regulations such as the CSRD require companies to disclose impacts across the short, medium, and long term.
Similarly, some long-term impacts cannot be attributed to any individual company; they are cumulative impacts of AI use across industries. It is important for companies to take collaborative approaches to explore and address these cumulative, long-term impacts.
As AI transforms industries, adopting a responsible approach is critical to harnessing its potential while managing its risks. By considering AI’s impacts on people and the environment throughout their value chains, businesses can position themselves for long-term success.
To learn more about how BSR helps companies adopt an impact-based approach to Responsible AI, and to discuss what’s right for your organization, please reach out to us at web@bsr.org.