Read other speaker Q&As from PAI’s AI Policy Forum here.
In September 2024, Partnership on AI (PAI) hosted the AI Policy Forum in New York City, bringing together global thought leaders, industry leaders, and policymakers to discuss the evolving landscape of global AI governance. From ethical AI practices to global interoperability, these conversations reflect our efforts to shape inclusive policies around the world. This new Q&A series highlights some of the leaders who spoke at the forum, providing deeper insight into their AI and governance efforts.
Today we’re sharing an interview with David Wakeling from A&O Shearman, who gave a lightning talk at the AI Policy Forum.
David Wakeling is a partner at A&O Shearman and global head of the firm’s Market Innovation Group (MIG) and AI Advisory practice. He has advised some of the world’s largest companies on the safe implementation of AI into their businesses and is considered one of the world’s leading AI lawyers. He also leads the development and implementation of A&O Shearman’s proprietary AI strategy and chairs the firm’s AI Steering Committee. He has played a central role in the firm’s generative AI deployment, beginning with the December 2022 rollout of Harvey, an LLM fine-tuned for legal work. With this move, the firm became the first law firm to deploy generative AI at the enterprise level.
Under David’s leadership, the Market Innovation Group (MIG) developed ContractMatrix, an AI-powered contract drafting and negotiation tool, in partnership with Microsoft and Harvey. This made A&O Shearman the first law firm in the world to collaborate with Microsoft on developing an AI tool. ContractMatrix is used extensively within A&O Shearman and is also licensed to clients. Additionally, David leads the firm’s AI advisory practice. As part of this, he led the establishment of the AI Working Group, through which the firm now advises on the key legal, risk management, and adoption challenges posed by generative AI. David previously served as co-chair of A&O Shearman’s Race and Ethnicity Committee.
Speaking at PAI’s AI Policy Forum, David highlighted the challenges associated with navigating the currently fragmented and opaque AI policy and regulatory landscape. He discussed the impact of companies making decisions related to AI adoption without clear guidance and spoke about the need for a unified global high-level AI policy framework. He emphasized the importance of international cooperation between regulators and industry to establish what this high-level AI policy framework and national rules and regulations mean in practice. He argued that effective international multi-stakeholder collaboration supports the development of interoperable consensus-based standards, thereby enabling responsible innovation.
David Wakeling’s talk at PAI’s AI Policy Forum
Talia Khan: What is the biggest misconception the public has about AI and how can we better educate them?
David Wakeling: One of the misconceptions that comes to mind is that AI will take all our jobs. This is problematic because it naturally instills fear in people. Currently available AI systems have many limitations, with hallucinations being an obvious example. At A&O Shearman, we believe that AI will augment, rather than replace, attorneys and staff more broadly. Experts are always involved to verify and fine-tune the AI output. Even as AI systems become increasingly sophisticated, AI cannot completely replace human work. There are many human skills and characteristics that cannot be automated, such as the ability to think critically and strategically or to build effective relationships. I believe concerns can be alleviated by raising awareness of AI’s limitations, demystifying AI, and improving AI literacy more broadly.
TK: Why do you think multi-stakeholder collaboration is important when shaping AI policy and how can we ensure all voices are heard?
DW: Multi-stakeholder collaboration is essential when developing AI policy. That’s why we wanted to be the sole legal advisor to the Partnership on AI (PAI). I strongly believe in the value of interdisciplinary groups. I lead A&O Shearman’s Market Innovation Group (MIG), a group made up of lawyers, technologists, and developers working together to transform the legal industry. Because we build and deploy AI systems ourselves, such as ContractMatrix (which we co-developed and launched with Microsoft and Harvey, making us the first law firm in the world to partner with Microsoft to develop an AI tool), our legal advice on AI is grounded in deep technical expertise and an understanding of what actually works. A similar interdisciplinary approach is needed when developing AI policy. Policymakers cannot do it alone. It is important to consider the views, needs, and concerns of all stakeholders affected by AI deployment, and coalitions such as PAI, which convene diverse perspectives from academia, civil society, and industry, give us the opportunity to do this. Meaningful public engagement on AI is also essential if we want to ensure all voices are heard, including those of underrepresented groups.
TK: What do you think are the most pressing challenges in AI today, and how can these challenges be addressed?
DW: The most pressing issue in AI today is ensuring that AI systems are used in a responsible, compliant, and trustworthy manner. As AI systems become increasingly sophisticated and integrated into society, the risk grows that they will perpetuate or exacerbate inequalities, make decisions that are difficult to explain or understand, or behave in ways that are inconsistent with human values and intentions. To prevent this, organizations must develop and implement responsible AI governance and have risk mitigation tools at their disposal. At A&O Shearman, I founded and lead the firm’s AI advisory practice, as well as a client-facing AI Working Group, to help organizations leverage AI responsibly. Through our AI Working Group, we currently advise more than 80 organizations on the development and deployment of safe AI. To date, we have helped clients achieve transparency and fairness in their use of AI, reduce the risk of intellectual property infringement and litigation, and navigate legal issues related to licensing AI systems.
TK: As more non-tech industries adopt AI, what are some of the ethical challenges that companies need to be prepared to address, and how can they effectively navigate these challenges?
DW: The introduction of AI raises many ethical challenges related to bias and fairness, privacy, transparency, and accountability. These challenges often intersect and overlap with the legal risks associated with AI deployment. For example, bias is both a legal risk and an ethical challenge when it leads to discrimination. And beyond what the law requires, how can organizations achieve greater fairness in their use of AI systems? When we deployed generative AI at the enterprise level in December 2022, becoming the first firm in the world to do so, we had to address both the ethical challenges and the legal risks involved. The core of our approach was to start with a sandbox, and I highly recommend other organizations do the same. We granted a select group access to Harvey, a legally tailored LLM, in a controlled environment. During this time, we identified use cases and risks, put in place governance mechanisms and robust feedback loops, and implemented a rigorous InfoSec and technology architecture alignment program to ensure that our AI systems were used in a way that was both compliant and ethical.
TK: What role do you see the legal profession playing in shaping AI governance, especially in industries that are just beginning to integrate these technologies?
DW: Lawyers have an important role to play in shaping AI governance. We can develop governance mechanisms that enable organizations to comply with existing legal requirements and regulatory standards, which vary widely from jurisdiction to jurisdiction. Our market-leading, multi-jurisdictional AI advisory practice is comprised of experts across the full spectrum of risk management. We can help organizations manage the legal, ethical, and operational risks associated with developing and deploying AI, and unlock value from their use cases, even in the absence of a comprehensive legal framework for AI. This is especially useful and necessary for those in the early stages of adoption. We have helped clients develop ethical principles for AI, draft clear rules for AI use, and create risk assessments and broader governance frameworks. We take a holistic approach when developing AI governance mechanisms, drawing on the expertise of our in-house data scientists and developers where necessary to ensure that the advice we provide is technically, legally, and ethically sound.