The Supply Chain Risk You Didn’t Know About: Navigating Responsible Sourcing in AI


20.10.2022

Sponsored

BSR

Key Points

  • Labor risk in AI supply chains is an emerging issue that all companies developing, using, or sourcing AI urgently need to consider and address.
  • Fully addressing these labor risks at scale will likely require sector collaboration, but individual company awareness and good practice remain nascent and require investment.
  • There are clear actions companies can take on an individual basis to begin assessing potential issues in their respective AI supply chains.

Every Company Is a Technology Company

It’s no secret that Artificial Intelligence (AI) is transforming business, and it’s doing so well beyond the boundaries of traditional technology sector companies. Five years ago, 71 percent of Fortune 500 CEOs agreed that “these days, their company is a tech company,” and 81 percent identified AI and Machine Learning (ML) as an “important” technology for investment.

This trend has only grown: corporate AI adoption accelerated further during the pandemic, and more companies now use AI not only to make business processes more efficient but also to fuel the creation of new business models, products, and services.

While the proliferation of AI creates significant opportunity for business and society, it also surfaces several social justice and human rights issues, including an often-overlooked but significant labor risk in the AI supply chain.

Protecting the Rights of the Invisible Workforce

Because AI often replaces, and nearly always transforms, tasks previously performed by humans, it is often assumed that labor risks are reduced or eliminated. In reality, Machine Learning (ML) depends on a large workforce performing the essential tasks of data enrichment (cleaning, labeling, and moderating vast troves of unstructured data) that make ML, and the plethora of products and services it enables, technically and commercially viable.
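
For readers closer to the engineering side, the dependence is easy to see even in a toy supervised-learning sketch. The example below is a minimal, hypothetical illustration in Python using scikit-learn, not drawn from any particular company’s pipeline: the model literally cannot be trained until a human annotator has supplied the labels.

    # Minimal, hypothetical sketch: a supervised classifier is unusable without
    # the labels that data enrichment workers supply for raw, unstructured text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Raw, unstructured data as it might arrive from a product or platform.
    texts = [
        "Great product, arrived on time",
        "Item broke after one use",
        "Exactly as described, would buy again",
        "Terrible support, still waiting for a refund",
    ]

    # Human judgments added during data enrichment (1 = positive, 0 = negative).
    # Remove this list and the .fit() call below has nothing to learn from.
    labels = [1, 0, 1, 0]

    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)          # cleaning / featurizing
    model = LogisticRegression().fit(features, labels)  # learning from human labels

    print(model.predict(vectorizer.transform(["arrived broken, no refund yet"])))

In production systems the raw examples number in the millions, and the labels are produced by a distributed workforce of data enrichment workers, which is precisely where the labor risks described below arise.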

While crucial to AI/ML development, data enrichment workers are an invisible workforce, most often crowdsourced via third-party platforms and contracted with few to no labor protections. They are likely to earn subminimum wages because the typical risks and costs of employment are shifted onto the individual, and they have little to no ability to file a complaint or access appropriate support.

Regulatory and private-sector approaches, where they exist, are insufficient, geographically disparate, and have not kept pace with the rate at which the sector is expanding globally. Failing to pay closer attention to this growing workforce and its precarious working conditions could create the next big supply chain emergency; thoughtful and collaborative approaches, by contrast, can protect and even promote the human rights of data enrichment workers.

Taking Individual Corporate Action

Recently, BSR and Partnership on AI (PAI) joined forces to identify opportunities for industry and sector collaboration on responsible sourcing for data enrichment.

Our key findings indicate that individual corporate awareness and good practice are still nascent and require action in the near term to establish a foundation for shared learning and ultimately scale up to effective collaborative efforts in the future.

Here are key steps companies can take to assess and address potential issues through individual action:

  • Raise internal awareness. Human rights and responsible sourcing professionals should understand if and how their company sources data enrichment services. Engage with technologists in relevant business units and product teams to understand processes for sourcing enriched data, including whether teams are sourcing directly via platforms or through vendors. Determine risk and identify steps that may need to be taken in line with existing human rights and responsible sourcing approaches.
  • Assess existing AI sourcing partnerships. Teams that source enriched data directly via platforms can familiarize themselves with the Fairwork Cloudwork Principles and check the Cloudwork ratings to learn how these platforms are performing. Companies that source enriched data via a third party, and those that source enterprise AI as a service, should ask their providers how they are managing responsible sourcing for data enrichment and make clear that the issue matters to them as a client.
  • Aim to engage and improve conditions, not retreat. Digitally distributed work has created new economic opportunities, including for many who were previously excluded from traditional work opportunities due to discrimination, caregiving responsibilities, and geographic location, among other reasons. Companies should not seek to eliminate this type of work from their supply chain, but rather engage and partner with suppliers and the sector to ensure labor rights are respected and that working conditions enable data enrichment professionals to thrive and advance in the field they help to build.
  • Help decision-makers take an informed approach. Teams that develop AI/ML internally are likely to source data enrichment services to support their models. They can leverage PAI’s whitepaper on Responsible Sourcing of Data Enrichment Services to understand and inform decisions they’re making throughout the sourcing process that impact working conditions.
  • Pilot responsible approaches and share learnings. Because formal responsible sourcing practices in this field are still nascent, thoughtfully planned pilots will help put existing recommendations to the test. Sharing learnings from pilots can help prove the feasibility and value of adopting responsible sourcing practices. Companies interested in demonstrating leadership by piloting the approaches laid out in PAI’s paper are invited to contact PAI directly.
  • Track opportunities for collective action. We know that fully addressing these labor risks at scale will require industry or sector collaboration. As we learn more about what companies are well-placed to address through piloting and individual approaches, we will also learn more about where collaboration is necessary for meaningful impact.

We invite businesses to bring their insights and ideas from these experiences to BSR’s Collaboration IDEAS Process, where we incubate game-changing collaborations.

Partnership on AI is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.

This article was originally published in the “Our Insights” section of the BSR website and was written by Jessica Custer and Nicole Pamani.
