ZDNET’s key takeaways
The regulatory landscape is evolving, creating new demands. Business leaders can use compliance to guide AI innovation. Internal and external partners can help your organization deliver results.
The AI gold rush is putting new pressure on governments and other public institutions. As businesses seek to gain competitive advantage from emerging technologies, governing bodies are working diligently to implement rules and regulations that protect individuals and their data.
The most prominent piece of AI legislation is the EU’s AI Act, but it is far from the only one: global law firm Bird & Bird has developed an AI Horizon Tracker that analyzes 22 jurisdictions and offers a broad view of regional approaches.
Related article: 5 ways Lenovo’s AI strategy delivers real results
Digital and business leaders must find ways to comply with these rules. Yet while compliance can be a burden, it is not necessarily an obstacle. Here, five business leaders offer ways to use governance to guide your AI exploration.
1. Explore within constraints
Art Hu, global CIO at Lenovo, says there is no single answer to the question of how to effectively balance AI innovation and governance.
“The response will vary by industry, sector and government, and in some cases by where the responsibility lies,” he said.
Hu told ZDNET that, as a general rule, business leaders should pay attention to the upcoming rules and regulations that need to be followed in the AI era.
Related article: 5 ways to prevent your AI strategy from breaking down
“The penalties for getting things wrong are now very steep. We have significant tail risks in a way we haven’t seen before,” he said, suggesting that executives should focus on carefully guided exploration of AI.
“I think it goes back to the toolbox that you can build and how you foster innovation. Generally speaking, you explore within your constraints, using a whitelist or some type of sandbox, because you don’t want exploration to get stuck with these long-tail negative outcomes.”
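As an illustration of the whitelist-and-sandbox idea Hu describes, the sketch below shows how an approved-tools gate might work in practice. It is a minimal example: the tool names, the personal-data rule and the review message are hypothetical assumptions, not Lenovo’s actual controls.

```python
# Minimal, hypothetical sketch of a whitelist gate for AI experimentation.
# Tool names, the personal-data rule, and the review message are illustrative only.

APPROVED_AI_TOOLS = {
    "internal-llm-sandbox",   # isolated environment, no production data
    "vendor-copilot-trial",   # covered by an existing data-processing agreement
}

def request_ai_experiment(tool_name: str, uses_personal_data: bool) -> str:
    """Gate an AI experiment: whitelisted tools only, and no personal data in the sandbox."""
    if tool_name not in APPROVED_AI_TOOLS:
        return f"'{tool_name}' is not on the approved list; submit it for governance review."
    if uses_personal_data:
        return "Approved tool, but personal data is out of scope for sandbox exploration."
    return f"'{tool_name}' approved for sandboxed exploration."

if __name__ == "__main__":
    print(request_ai_experiment("internal-llm-sandbox", uses_personal_data=False))
    print(request_ai_experiment("shiny-new-chatbot", uses_personal_data=False))
```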
2. Collaborate with partners
Paul Neville, director of digital, data and technology at the UK’s Pensions Regulator (TPR), suggested that business leaders must recognize that AI should do more than speed up the way organizations work today; it should be used to drive breakthrough change.
“I’ve said this at several conferences, but I’ll say it again: too many people think the future is just automating what we’re doing now, only a little bit faster,” he said.
“First, I don’t think that approach is particularly visionary. Second, it doesn’t get past today’s problems. Visionary leaders need to envision how things could change.”
Neville told ZDNET that pioneering executives are helping other professionals imagine a better future. “If we just think AI is about doing what we do now a little faster, we’re not going to get what we need from it. I think there could be fundamentally different work patterns and opportunities.”
Related article: This company’s AI success was built on 5 key steps – see how they work
At TPR, Neville’s team is working with the UK government to understand how new rules and regulations can guide effective AI exploration.
“There’s new legislation coming, including new pensions legislation, and quite a lot of new technology and new customer experiences will be required,” he said.
“We are working closely with the government to ensure we deliver modern digital services and that the legislation supports that. AI can help us create things that are more interactive, interesting and visual at the same time. That’s the opportunity.”
3. Manage bespoke cases
Martin Hardy, director of cyber portfolio and architecture at Royal Mail, said he believed companies could use compliance as a route to explore AI and manage risk.
“In cyber, we do a lot of threat modeling, but a lot of it is very generic and low-level, and it’s in these bespoke, niche cases that security architects add value,” he said.
“By letting AI do 80% of the work, we no longer have to work from a blank sheet of paper and can say, ‘Oh yeah, we need to put this security control in place.’ It means we can give our security professionals more time to focus on what could potentially happen, such as specific attackers of concern in our space, and this approach really adds value.”
Related article: Scared of AI layoffs? 5 ways to future-proof your career before it’s too late
Hardy told ZDNET that business leaders also need to be aware of the risks of relying on AI and data-intensive technologies. The message is clear. Use AI, but proceed with caution.
“If you put all that data into the system and the AI model is compromised, there’s a blueprint of exactly where the weak points of attack are,” he said.
“So it’s a Catch-22 situation. If you don’t use AI, others will and you’ll be left behind. If you use AI and aren’t careful, you could be part of the crowd that gets attacked.”
4. Foster important relationships
Ian Raffle, head of data and insights at UK car breakdown specialist RAC, said managing the balance between governance and innovation is all about internal culture.
“Everything comes back to the people,” he says. “Success is about applying the right technology, but I think it’s also about using that technology properly. And it’s all about having the right people.”
Raffle told ZDNET that establishing a strong culture is paramount, especially when working with trusted in-house experts, as senior leaders cannot be expected to be aware of every potential threat or risk at a granular level.
Related article: Gemini vs. Copilot: A comparison of AI tools for 7 everyday tasks reveals a clear winner
“We have to empower people to care about the individuals that this data represents,” he says.
“It’s a cultural thing for me. Nurturing relationships with data protection officers and information security teams is more important in the long run than moving forward with cutting-edge technology.”
In short, balancing governance and innovation is difficult, and keeping people in the loop is critical to success.
“It’s definitely a tightrope we have to walk,” Raffle said. “That’s why I think organizations need humans to think through these issues effectively.”
5. Ask important questions
Eric Mayer, chief clinical information officer for transformation at Imperial College London and Imperial College Healthcare NHS Trust, said professionals using data for AI projects need to be careful that the work they do to comply with governance doesn’t create new problems. “If you clean up the data too much, you’re probably biasing the AI, and that’s the problem.”
To overcome this challenge, Mayer told ZDNET that the team continues to have regular dialogue with regulators and is focused on coming up with answers to key questions. “What KPIs do we need in terms of datasets to support regulatory approval of AI to ensure that it works as intended when we bring it into the real world? What was the quality of the data? How many duplicates, how many missing values? What are the actual data definitions?”
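The dataset KPIs Mayer lists, such as duplicate counts and missing values, can be computed with a few lines of code. The sketch below, written in Python with pandas, is illustrative only; the column names and example records are invented, not drawn from Imperial’s data.

```python
# Illustrative only: basic dataset-quality KPIs of the kind Mayer describes
# (duplicates, missing values). Column names and example records are invented.
import pandas as pd

def dataset_quality_report(df: pd.DataFrame) -> dict:
    """Summarize basic data-quality KPIs for a candidate AI dataset."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "overall_missing_rate": float(df.isna().mean().mean()),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "record_id": [1, 2, 2, 4],
        "age": [34, None, None, 51],
        "diagnosis_code": ["A01", "B20", "B20", None],
    })
    print(dataset_quality_report(sample))
```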
Related article: Cloud-native computing is poised to explode thanks to AI inference work
The lesson for other digital leaders is that when you try to clean up data for a new project, you may inadvertently remove variables that could be useful in the future. Mayer advised other professionals to take proactive steps.
“Ultimately, you want data in its rawest form. But if you need to clean or transform data, you need to know exactly how you transformed it, and you need to document that,” he said.
“That’s the fundamental element. That’s the part we absolutely have to get right. We have to think about how we can say, ‘Yes, this is safe to implement.’ And long-term success requires continuous validation.”
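In that spirit, one way to keep an exact record of how data was transformed is to log every cleaning step alongside its effect. The sketch below is a hypothetical illustration; the step names and cleaning rules are assumptions, not a description of the trust’s actual pipeline.

```python
# Hypothetical sketch: logging each data transformation so the cleaning history
# can be audited later. Step names and cleaning rules are illustrative only.
import pandas as pd

class DocumentedPipeline:
    """Apply cleaning steps to a DataFrame and record what each step changed."""

    def __init__(self, df: pd.DataFrame):
        self.df = df
        self.log: list[dict] = []

    def apply(self, step_name: str, func):
        rows_before = len(self.df)
        self.df = func(self.df)
        self.log.append({
            "step": step_name,
            "rows_before": rows_before,
            "rows_after": len(self.df),
        })
        return self

if __name__ == "__main__":
    raw = pd.DataFrame({"record_id": [1, 2, 2], "age": [34, None, None]})
    pipeline = (
        DocumentedPipeline(raw)
        .apply("drop_duplicates", lambda d: d.drop_duplicates())
        .apply("drop_missing_age", lambda d: d.dropna(subset=["age"]))
    )
    print(pipeline.log)  # the audit trail: what was done and how many rows it affected
```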

