Why Every Business Needs to Think about Responsible AI
It’s time for all companies to take a human rights-based approach to the deployment and use of artificial intelligence (AI). Upcoming regulations will require companies to take appropriate measures to identify current and potential impacts.
Key Points
- It’s time for all companies to take a human rights-based approach to the deployment and use of artificial intelligence (AI).
- Upcoming regulations will require companies to take appropriate measures to identify current and potential impacts arising from the use of AI in their operations.
- Our new industry briefs explore the potential human rights impacts of AI in four key industries: retail, extractives, financial services, and healthcare.
AI technologies are transforming the way we engage with the world and the way companies conduct business. From generative AI technologies like ChatGPT, to facial recognition, to AI solutions for hiring, distribution, or research and development, evolutions in AI are transforming business operations at a startling pace.
This transformation presents complex, system-wide human rights opportunities and risks.
Tech companies have been working to integrate responsible AI practices for some time. However, the risks and opportunities associated with AI relate not only to the design and development of these technologies, but also to how they are deployed and used by companies outside the tech sector.
It’s time for all companies utilizing AI in their products, services, and operations to take a human rights-based approach to the deployment and use of AI.
BSR has worked with member companies to explore the potential human rights impacts of AI in four key industries: retail, extractives, financial services, and healthcare. We focused on identifying current use cases of AI in these industries, assessing their potential human rights impacts, and recommending initial steps to address adverse impacts.
The findings are summarized in four industry briefs that we hope will serve as a starting place for companies.
The Use of AI in Different Industries
Retail, extractives, healthcare, and financial services companies are deploying and using AI systems in ways that may be connected to significant human rights risks. A few examples of AI use cases include:
- Retail: Personalization of the customer experience, improved product search, in-store assistance, demand forecasting, and inventory management.
- Financial Services: Facial and voice recognition for account access, fraud detection, and credit risk assessment.
- Healthcare: Patient care personalization, care delivery, and research and development.
- Extractives: Data collection and analysis, exploration, workplace management, and advanced monitoring.
The use of AI technologies can alleviate or exacerbate human rights impacts, including but not limited to:
- Violations of the right to privacy through the collection, storage, and use of customers' personal data;
- Discrimination by race, gender, age, disability, or other protected categories, which can arise from biases in training data or from the use of AI-generated insights in discriminatory ways;
- Positive impacts on labor rights through improved labor planning and better health and safety measures, though the use of AI technologies may also erode employee autonomy or privacy.
Regulatory Landscape
To date, there has been limited focus on the responsibility of non-tech companies to address the human rights impacts of their AI technologies. However, this is changing, driven in part by upcoming regulations. The EU Artificial Intelligence Act sets out a risk-based approach to assessing the potential risks AI solutions may pose to people's rights, and the Corporate Sustainability Due Diligence Directive will require companies to take appropriate measures to identify the actual and potential human rights impacts arising from their operations.
To help companies outside the tech sector respond to upcoming regulations and act in accordance with their responsibilities under the UN Guiding Principles on Business and Human Rights, BSR is working with members across different industries to help them identify their human rights impacts related to AI.
BSR’s Industry Briefs on AI and Human Rights
Over the next few months, we will publish briefs for specific industries setting out potential human rights impacts of AI solutions and recommendations to mitigate them. These briefs are intended to help companies bring a human rights-based approach to the way they design, develop, and deploy AI technologies.
Coming soon:
- AI and Human Rights in Extractives
- AI and Human Rights in Financial Services
- AI and Human Rights in Healthcare
For further information, including how BSR can support you with the responsible deployment of AI technologies, please contact the team.
This article was originally published on the BSR website under "Sustainability Insights." It was written by Lale Tekişalp, Associate Director, Technology Sectors; Hannah Darnton, Director, Technology and Human Rights; Richard Wingfield, Director; and Ife Ogunleye, Manager, Technology and Human Rights, all at BSR.