For the third and final time this year, Partnership on AI brought together a cross-disciplinary community of partners and collaborators to share expertise, insights, and reflections.
PAI’s 2024 Partner Forum, held on December 4th at the Museum of Pop Culture in Seattle, provided an opportunity to look back at this year’s milestones in AI and beyond, and to look ahead at what’s next for the technology and the efforts to manage it as AI continues to evolve.
Throughout the forum, three themes emerged across the sessions, keynotes, lightning talks, and panels: the need for greater inclusivity, especially for communities impacted by AI; the coming era of agentic AI; and opportunities for public engagement and education.
The need for greater inclusion
Bringing diverse voices together is at the heart of PAI’s mission. Throughout the day, speakers at the Partner Forum emphasized the importance of inclusion in AI development. Jeffrey Jimenez-Kurlander, Program Officer at Surdna Foundation, emphasized the need to engage and empower communities affected by the development and use of AI.
“Decisions that affect millions of people are made in boardrooms, and the process must involve the community.” – Jeffrey Jimenez-Kurlander
One of the communities that faces risks as a result of AI systems is the disability community. Ariana Aboulafia, who leads the Disability Rights in Technology Policy project at the Center for Democracy & Technology, called for including the voices of people with disabilities throughout the AI lifecycle, from system design to data collection. “It is important for the technology community to consider these people first,” she emphasized.
Dr. Nicole Turner-Lee, author of Digitally Invisible: How the Internet Is Creating the New Underclass, pointed to several examples of harm caused by AI, such as a woman who was wrongly arrested because of facial recognition technology, and identified a common thread: all of the victims were “digitally invisible.” If representation and data are inadequate, how can these communities be included in conversations about the future of AI? Turner-Lee’s call to action is clear: “Digital equity is a precursor to AI equity.”
The coming era of agentic AI
From algorithmic recommendation systems to generative AI applications, AI is already impacting the way we live and work. Throughout the day, speakers commented on how profound this technology is. “I don’t think any technological advancement will have a greater impact on our grandchildren than artificial intelligence,” said Eric Horvitz, Microsoft’s chief scientific officer.
One of the advances that many in the PAI community are excited about is AI agents. The level of autonomy of these agents can differ significantly from existing AI applications such as chatbots. According to William Bartholomew, director of public policy at Microsoft, agents “don’t just react to instructions, but can decide when to take an action and be proactive in carrying out tasks and completing plans.”
Agentic AI’s capabilities offer hope for new applications that will benefit society. Lama Nachman, director of the Intelligent Systems Lab at Intel Labs, cited her past work with Stephen Hawking and shared her optimism that agentic AI “will allow people with disabilities to act with voice,” automating actions by working “at an intention level rather than an action level.”
Opportunities for public engagement
Once a niche interest of researchers and academics, over the past two years AI has become widely available to the general public. This creates “a huge opportunity in science communication to inform people about how the technology works,” said Polina Zvyagina, director of AI policy and governance at Meta.
This is especially true for those working at the intersection of AI and media who are trying to communicate what is true and what is not. “Misconceptions about AI are still happening,” said Sam Gregory, executive director of human rights group WITNESS. “We are not consistently communicating what is possible and what is not possible.”
As AI is adopted and deployed by brands with wide customer bases, there is also an opportunity to communicate responsible AI practices. For example, according to Kathy Baxter, principal architect of Salesforce’s Ethical AI practice, Salesforce has created “guidelines for how AI is developed and for customers’ use” that align with the company’s values.
What’s next
As 2024 draws to a close, we are grateful for the many opportunities we have had to host our partner community throughout the year. From discussing AI’s potential to advance philanthropy to highlighting the value of multistakeholder input on AI policy, we have engaged in important dialogues with the academic, civil society, and industry leaders and experts who are actively shaping the responsible development and governance of AI.
We look forward to even more connections and collaborations with our partner community in 2025 as we continue our mission to bring together diverse voices to ensure AI benefits people and society. If you would like to learn more about PAI, our work, and our events, please sign up to stay connected.