Ariana Aboulafia is an attorney with a strong background in public interest advocacy. Her expertise spans disability rights, technology, criminal law, and the First Amendment. She leads the Disability Rights in Technology Policy Project at the Center for Democracy and Technology, which focuses on addressing technology-facilitated disability discrimination. Discrimination against people with disabilities is not new, but AI and algorithmic technologies pose new challenges. Ariana, who was a guest speaker at PAI’s 2024 Partner Forum, discussed how AI, algorithmic tools, and related technologies increasingly shape the experiences of people with disabilities in employment, education, healthcare, housing, transportation, and more, and how this can further entrench discrimination. As AI and other technologies permeate every aspect of our lives, the risks they can pose to people with disabilities and other marginalized communities grow with them.
So why does this problem exist? And what can we do about it? We spoke with Ariana about inclusive design, responsible data collection, and her hopes for how these risks can be addressed in the future.
Thalia K: In your talk at PAI’s Partner Forum, you discussed the real harms that AI systems are causing to people with disabilities such as Crohn’s disease, diabetes, and ADHD in housing, health care, and education. What steps can organizations take to audit existing systems for disability discrimination and ensure this technology works for everyone?
Ariana A: For organizations already using AI tools and algorithmic systems, it’s important to leverage post-implementation audits to test for any kind of bias or skewed impact on users, including people with disabilities. One concern with post-implementation audits (and this also applies to pre-implementation audits) is that they may test for other types of bias, such as racial or gender bias, without including disability. When that happens, organizations may genuinely believe their systems are free of bias when in fact they are not (depending on the results of their audits, of course). It is also very important that auditing not be a mere box-checking exercise. That means organizations must actually incorporate audit results into decisions about whether to keep using a system or change course.
TK: Your work focuses on integrating inclusive design principles into the creation of AI and algorithmic tools that reduce risks while maximizing potential benefits for people with disabilities. What are some inclusive design practices that developers should consider when creating technology that is accessible to all users, especially users with disabilities?
AA: One of the main principles of inclusive design is that by creating spaces and systems in ways that include people with disabilities, designers can create systems that are more inclusive for everyone, including other marginalized groups. The benefits of inclusive design are sometimes illustrated through the so-called “curb cut effect”: physical spaces with curb cuts are helpful not only for people who use wheelchairs, but also for parents with strollers, travelers with suitcases, and more. This same effect can be seen in the thoughtful, inclusive design of algorithmic systems and AI tools. One inclusive design practice AI developers should use is ensuring that the product is human-centered and that users are in control of their experience. People with disabilities need to be involved not only as users, but also in the design, implementation, auditing, and procurement of AI, algorithms, and AI-integrated tools.
TK: In your talk, you said that to reduce discrimination in these systems, it is essential to involve people with disabilities not only in technology policy but also in the development, deployment, auditing, and procurement of all these technologies. How can developers and policymakers include people with disabilities in these processes? And how can these relationships be sustained to ensure consistent engagement?
AA: There are many people with disabilities who bring unique experience, not just because of their disability, but because of their expertise in the field. It is important for developers and policymakers to consider people with disabilities when engaging with stakeholders and when hiring talent to help build technology and develop technology policy. And this cannot be done as a box-checking exercise. Instead, it requires authentic, sustained relationships that respect both the lived and learned experiences of people with disabilities over many years.
TK: You mentioned that data collection is essential to creating inclusive AI and algorithmic systems, but there are many challenges to getting this right. What are the major risks when collecting data about people with disabilities and other marginalized groups, and how can they be mitigated?
AA: Last year, I co-authored a report explaining some of the reasons why collecting accurate and inclusive disability data is difficult. Differing definitions of disability, social stigma, the difficulty of making data collection mechanisms accessible, and other issues all contribute to an exclusionary data environment. Furthermore, for people with disabilities and other marginalized groups, it is important to ensure that data collection takes place with fully informed consent. To make this possible for participants with disabilities, plain language and other accessible resources should be available throughout the collection process. It is also important that data be collected in a way that protects individual privacy, especially if the data is sensitive or identifiable in some way. By implementing policies such as data minimization, purpose limitation, and deletion, data collectors can build inclusive datasets while mitigating some of the privacy-related concerns of people with disabilities and other marginalized populations.
TK: How do communities like the Partnership on AI support efforts to make AI more accessible and fair for people with disabilities?
AA: Ensuring that AI is fully accessible and fair to people with disabilities requires recognizing the issues that affect people with disabilities when they interact with technology, and it requires every sector involved in the use and development of AI to do its part to address those issues. The Partnership on AI community is comprised of people from academia, civil society, and industry who bring together their perspectives to create practical guidance for the responsible use of AI. By bringing together people from these different fields and fostering conversations about disability inclusion in technology development and policy, the Partnership on AI creates a valuable opportunity to raise awareness of these issues among the very people with the skills and resources to address them.
TK: Can you tell us about any ongoing efforts or initiatives you are particularly excited about in your work at CDT?
AA: In 2024, I co-authored three major reports at CDT: one on disability data collection, one on the impact of AI-powered employment tools on disabled workers, and one evaluating the quality of chatbot responses to questions about voting with a disability. This year, I plan to continue producing work, including reports and short-form opinion pieces, that demonstrates the myriad ways technology can impact people with disabilities across employment, voting rights, transportation, health care, and other areas. As you mentioned at the beginning of this conversation, AI and algorithmic tools are ubiquitous and impact every aspect of the lives of people with disabilities. In 2025, my work, in collaboration with many talented colleagues, will continue to reflect that.