While many organizations are beginning to seriously consider the environmental impact of AI, particularly given the growing energy demands of generative AI, the human costs embedded in the development of AI models still need far greater attention.
This International Workers’ Day celebrates labor movements and workers’ rights around the world, reminding us of our continued efforts to protect human dignity in the workplace and cultivate equality. Data enrichment workers, who perform the key tasks of training, validating, and fine-tuning AI models, are among the many groups of workers who remain overlooked, unrecognized, and unprotected.
It is clear that building responsible AI means building responsible AI data supply chains. Although the details of the EU Corporate Sustainability Due Diligence Directive (CSDDD) are still being worked out, the regulatory debate highlights the need for risk-based due diligence practices grounded in internationally accepted human rights principles. Specifically, strengthening human rights and environmental protections across the global supply chain is essential to improving conditions for data enrichment workers worldwide.
The responsible AI movement must include data workers
AI relies on data, but before that data powers an algorithm, it must first move through a vast global data supply chain built by data enrichment workers. These workers label, categorize, and moderate the data and information used in AI systems. Yet despite their essential role, data enrichment workers remain largely invisible and undervalued.
“Responsible AI cannot be achieved without the responsible treatment of those whose labor makes AI possible.”
Through PAI’s work on responsible data enrichment, including convening workshops, creating case studies, and developing resource libraries and targeted resources, we have found that ecosystem-wide accountability is necessary to address these issues at scale. Two central, practical challenges make it difficult for the industry to change its practices.
Internal fragmentation: the decisions that meaningfully influence workers’ conditions are distributed across multiple teams and functions within an organization. External complexity: the scale and complexity of the data supply chain mean that many different organizations shape the decisions that affect workers’ livelihoods.
These challenges complicate efforts to formalize and standardize practices, but they are not insurmountable. Other industries, such as apparel, minerals, and manufacturing, have faced similar supply chain dilemmas. Their experience demonstrates that existing human rights principles and governance frameworks can be adapted to the AI industry as well.
Bridging human rights and the data supply chain
We are seeing momentum build, with more human rights impact assessments now considering impacts on data enrichment workers. Our guidelines on responsible data enrichment practices have been incorporated into the OECD’s forthcoming due diligence guidance on trustworthy AI, demonstrating this global momentum. However, work is still needed to bridge the gap between responsible data supply chain practices and human rights principles.
Organizations including PAI, the OECD, and the United Nations have already been drawing connections between human rights due diligence frameworks and responsible data supply chains. To advance this dialogue, we held a webinar on human rights due diligence frameworks and ethical data enrichment supply chains, bringing together experts from the UN OHCHR B-Tech initiative, the OECD’s responsible business conduct group, and industry leaders from Cisco, Intel, and Google DeepMind. The discussion drew on diverse perspectives spanning human rights expertise, principles that bridge different supply chains, and hands-on experience implementing responsible AI practices. Five important insights emerged from that conversation, which we believe can help the field better understand and tackle the challenges of building a responsible AI data supply chain.
Human rights frameworks provide a strong foundation for protecting data enrichment workers. They help prioritize the rights of the most vulnerable and identify appropriate actions to prevent, mitigate, and remedy harm across the AI data supply chain.
Why is this important?
Data is a critical component of AI. When the people who curate, label, and enrich that data are treated unfairly, the integrity, safety, quality, and reliability of the resulting systems are all compromised. Ignoring the conditions of data enrichment workers is not merely a human rights failure; it is also a technical and business risk.
By leveraging existing human rights frameworks and intentionally embedding responsible data enrichment practices, we can build AI systems that are not only more ethical, but also safer and more reliable.
This International Workers’ Day, let us commit to building AI that respects all of its contributors, especially the workers whose labor is invisible but essential. The foundations of responsible AI already exist within human rights frameworks. Now it is up to us to build on them.