Neil Sands
New Zealand businesses must embrace the responsible use of generative artificial intelligence (AI) to overcome New Zealanders’ deep skepticism about the technology, which, if left unchecked, could stifle innovation, a Law Society webinar heard on Thursday.
Speaking at the webinar, “Generative AI: Overcoming the Law”, presenter Hannah King cited a KPMG report showing New Zealanders are more distrustful of AI than people in any other country in the world.
The report found that just 44% of New Zealanders think the benefits of AI outweigh the risks, only 23% think current guardrails are strong enough to make AI safe to use, and 81% think regulation is needed.
“We have the lowest rates of acceptance, excitement and optimism around AI in the world,” said King, a partner at Keeley Thompson Caisley.
“So while New Zealanders seem to want regulation, they remain skeptical that the benefits of AI outweigh the risks. Certainly, it seems we as a nation want a comprehensive regulatory approach.”
King said responsible and widespread adoption of AI is key to realizing the technology’s potential.
“Far from being a constraint, responsible AI is emerging as a key differentiator that enables innovation to scale safely, sustainably and inclusively,” she said.
Responsible AI is defined by the World Economic Forum (WEF) as “building and managing AI systems that maximize benefits while minimizing risks to humans, society, and the environment.”
“Gaps and overlaps”
King said approaches to AI regulation vary globally, creating problems for companies operating internationally.
“We are seeing a fragmented regulatory approach that is creating, or starting to create, hurdles for companies operating across borders,” she said.
“There are gaps and overlaps. So there are issues of compliance, social trust and what that means for clients looking to do business around the world.”
King said many countries are taking a risk-based approach to AI regulation, with a focus on protecting core values such as privacy, non-discrimination and security.
“Of course, governments don’t have a great track record of keeping up with emerging technologies. The complex and evolving field of AI certainly raises a number of legal, national security and human rights concerns,” she said.
“The speed at which this technology has developed in such a short period of time is astonishing, and various jurisdictions around the world are racing to catch up and passing laws to govern its use.”
King said Australia regulates AI through existing legislation, complemented by sector-specific policies and voluntary frameworks, rather than standalone AI legislation.
Australia is also establishing an AI Safety Institute aimed at developing effective protections and identifying future risks.
The U.S. federal government focuses primarily on innovation and deregulation, while regulatory efforts in state jurisdictions center on privacy and copyright protection.
In contrast, King said the European Union (EU) has taken a “slightly more prescriptive” approach, passing an AI law in May 2024 that establishes four risk levels, from unacceptable to minimal.
She said the EU law has extraterritorial application and could affect New Zealand companies providing AI applications within the EU, much as the General Data Protection Regulation did when it was introduced in 2018.
King said multinational companies seeking a consistent governance approach are likely to treat the EU law as a “global high-water mark” for AI regulation.
“Light touch”
New Zealand has no independent AI legislation, and the government has adopted what Government Digitalization Minister Judith Collins KC describes as a “light-touch” approach, using existing legislation in conjunction with government and regulator guidance and industry self-regulation.
But King said the position could change in such a rapidly evolving field.
“In 2024, a cabinet paper was published stating that regulatory intervention should only be considered for the purposes of enabling innovation and addressing serious risks,” she said.
“It noted the need to leverage existing frameworks and international action rather than developing standalone legislation, and to prioritize agile options.
“So that’s where we are now. Maybe in two years we’ll be sitting in front of you with a completely different framework and development in place.”
King said it is up to companies to proactively develop AI policies that embed the responsible use of the technology, which builds trust and drives innovation.
“I think the way to genuinely support the use of AI, support its development and support AI-backed innovation is to ensure that we have a foundation for responsible AI… So I think it’s really important for New Zealand businesses to strengthen their approach to responsible AI.

“Because if we don’t do that, doubts, concerns and hesitations about using AI will only grow, which will hold us back from using AI in the future.”

