Amid the runaway pace of AI development, those responsible for risk management are often racing just to keep up.
As stories of rogue chatbots make headlines and consumer AI tools flood the market, public trust in conversational AI has taken a hit. A 2024 Gallup/Bentley University study found that only 23% of U.S. consumers trust companies to handle AI responsibly.
For AI governance and compliance professionals, this is a reality they face every day. With new challenges ahead in 2025, from AI agents to the development of new regulations, we spoke to industry leaders to get their views on the future of AI governance.
The regulatory maze becomes even more complex
Michael Brent, Director of Responsible AI at Boston Consulting Group (BCG), predicts that in 2025, AI governance will revolve around compliance with new regulations.
The EU AI Act, which carries fines of up to €35 million, will become a defining factor in global AI governance.
“The EU’s regulatory approach will serve as a closely watched test case, with organizations and nations monitoring its impact on competitive advantage and business operations,” explains Alyssa Lefebvre Škopak, Director of AI Trust and Safety at the Alberta Machine Intelligence Institute (Amii).
Lefebvre Škopak predicts that “soft law” mechanisms, such as standards, certification, cooperation between national AI safety institutes, and sector-specific guidance, will play an increasingly important role in closing regulatory gaps. “There will still be fragmentation, and complete harmonization will not happen in the foreseeable future,” she admits.
Meanwhile, the situation in the United States is expected to remain fragmented.
Alexandra Robinson, who leads the AI governance and cybersecurity policy team supporting US federal partners at Steampunk Inc., expects federal policy to prioritize reducing barriers to AI innovation, while state-level AI regulation is likely to mirror the patchwork state of consumer privacy regulation in the United States.
Experts predict that the compliance landscape will take many forms. Fionn Lee Madan, co-founder of AI governance software company Fairly AI, makes a bold prediction: “ISO/IEC 42001 certification will be the hottest ticket in 2025 as organizations move from talking about responsible AI to addressing actual security and compliance requirements.”
Standards and certifications, while voluntary, are becoming essential tools for navigating a complex regulatory environment, Lee Madan argues, as procurement teams increasingly demand them from AI vendors to ensure trust and compliance.
Agentic AI redefines governance priorities
Generative AI dominated the headlines in 2024, but experts believe 2025 will belong to “agentic AI.” These systems can autonomously plan and execute tasks based on user-defined goals, creating unprecedented governance challenges.
“With the proliferation of research on agentic workflows, we expect a proliferation of AI governance around AI agents,” predicts Apoorva Kumar, CEO and co-founder of Inspeq AI, a responsible AI operations platform.
Building on this, José Bello, Co-Chair of the London Chapter of the International Association of Privacy Professionals (IAPP), warns that the decision-making capabilities of these systems raise difficult questions about autonomy and the safeguards needed to prevent harm. Similarly, Amii’s Lefebvre Škopak anticipates important research on balancing the autonomy of these systems against accountability for their actions.
The impact on the workforce also looms large. “This will naturally intensify the discussion and research around AI’s impact on the workforce, and how, and at what scale, employees will be replaced by AI agents,” she warns.
AI governance moves from ethics to operational reality
“AI governance is no longer just an ethical afterthought, but is becoming standard business practice,” Lefebvre Škopak points out.
Giovanni Leoni, responsible AI manager and associate director at Accenture, says companies are incorporating responsible AI principles into their strategies and recognizing that governance involves people and processes, not just the technology itself.
Looking at governance as part of a larger transformation, Leoni says: “AI governance is a journey to manage change.” This shift reflects a growing recognition that AI governance is a critical component of strategic planning, rather than a separate initiative.
This evolution is further highlighted by Alice Thwaites, head of ethics at Omnicom Media Group UK, who notes that companies are starting to separate the concepts of AI governance, ethics and compliance. “Each of these areas requires its own framework and expertise,” she said, reflecting a maturing understanding of AI challenges.
Meanwhile, Kumar is focused on the operational side of this transformation. With the rise of Responsible AI Operations (RAIOps) platforms such as Inspeq AI, enterprises now have tools to measure, monitor, and audit AI applications, allowing them to integrate governance directly into their workflows.
Environmental considerations will play a larger role in AI governance
Experts predict that environmental considerations will become a core governance concern. IAPP’s Bello emphasizes that mitigating the environmental impact of AI is a shared responsibility between providers and adopters.
Providers must take the lead by designing energy-efficient systems and adopting transparent carbon reporting practices. Adopters should then adopt sustainable practices in cloud use, prioritizing greener data centers and minimizing redundancy. Ethical decommissioning of AI systems is also important to prevent unnecessary environmental degradation.
Key drivers of progress in AI governance
What’s driving progress in AI governance? Industry leaders provide key insights, each highlighting distinct but interconnected factors.
BCG’s Michael Brent emphasized the role of active business engagement, saying, “The single biggest factor accelerating progress in AI governance is active investment by companies, including establishing responsible AI teams.”
From a practical perspective, Inspeq AI’s Apoorva Kumar pointed to real-world consequences: “The loss of trust and reputation has already taken a huge toll on companies like DPD, Snapchat, and Google Gemini. These failures will drive further advances in AI governance.”
On the corporate side, Lefebvre Škopak emphasized the importance of leveraging purchasing power: “Organizations need to use their purchasing power to demand high standards from AI providers, requiring transparency, documentation, and test results.”
Finally, as AI becomes more pervasive, Bello emphasized the need for education, saying, “AI literacy is becoming recognized as a critical requirement across the industry.”
Each perspective reinforces the notion that progress in AI governance requires action across multiple dimensions, including corporate commitment, transparency, and an increased focus on literacy and accountability.
The way forward: Clear challenges and complex solutions
In summary, the path to improved AI governance will not be easy. Some of the more optimistic predictions, such as increased investment in AI compliance, are tempered by the ongoing complexity of both the theoretical framework and operational challenges of AI governance.
Global harmonization remains an elusive goal, especially given recent developments in the United States. Organizations continue to grapple with a patchwork of “soft law” mechanisms such as frameworks, standards, and protocols, without clear regulatory guidelines for specific use cases.
At the same time, emerging trends such as agentic AI are poised to introduce a new wave of complex risks that will test the adaptability of responsible AI practitioners. The key tension remains between a holistic, human-centered approach to responsible AI development and a narrow focus on risk management at the highest level.
What is clear is that no single team can tackle these challenges alone. Steampunk’s Robinson sums it up aptly: “My motto for 2025 is moving from extractive AI compliance to effective engagement. For those of us committed to AI governance, creating and deploying safe, reliable, and responsible AI means meeting people where they are. You cannot hand product owners a 500-question AI risk assessment and expect anything more than complaints.”
The AI governance landscape in 2025 is expected to be as complex as ever, but the contours of a more structured and workable framework for AI governance are emerging.