New Zealand developments
1. Financial Markets Authority launches “regulatory sandbox”
On December 10, the FMA announced its intention to run a “regulatory sandbox” pilot from January to July 2025. The sandbox aims to provide a controlled environment, under the FMA’s supervision, for companies in the financial services sector to test innovative technologies, including AI products. The purpose of the pilot is to help companies test upcoming AI products and other technologies against regulatory and supervisory expectations before launch, fostering innovation and reducing compliance costs. Applicants must have an initial concept, a minimum viable product, or a fully functional product ready for testing.
The sandbox also provides an opportunity for the FMA to test whether its regulatory settings are effective and fit for purpose. If the pilot is successful, the FMA will decide in late 2025 whether to make the sandbox initiative permanent. The initiative is consistent with the approach taken in other jurisdictions such as the UK, Australia and Singapore.
Banks, financiers, fintechs and providers of AI-driven financial services should consider this opportunity to test new services in a controlled environment and potentially contribute to the design of appropriate financial regulation in New Zealand. Applications for the sandbox pilot are expected to open in early 2025.
2. New biometrics privacy code proposed
The Office of the Privacy Commissioner (OPC) has confirmed its intention to issue a biometrics privacy code (Code). The announcement follows a consultation on the draft Code earlier this year (summarised here). Based on that consultation, the Code was amended to improve clarity and refine the proposed obligations, including the new notification requirements.
Although not specific to AI, the Code is relevant to many companies that use AI systems to process “biometric information” (i.e., an individual’s physical or behavioural characteristics, such as their face, fingerprints, or voice). This could include, for example, retailers using facial recognition systems for security purposes. Companies processing biometric information should consider the Code carefully: although partially simplified from the original draft, it is still likely to impose significant new compliance obligations.
In particular, the Code introduces a new “proportionality test” for the collection of biometric information (requiring consideration of whether less privacy-intrusive alternatives are available), notification obligations to individuals, and restrictions on the use of biometric information. In connection with the notification obligations, agencies must provide a separate notice governing their biometric processing practices, distinct from their standard privacy policy.
The OPC has also published draft guidance providing examples of how the Code may apply. Submissions on the revised draft Code and guidance are due by March 14, 2025. The final Code is expected to come into force in mid-2025.
International developments
3. Australian task force recommends comprehensive AI legislation
In Australia, the recently established task force on the adoption of artificial intelligence has released a report (available here) on the impact of AI technologies. The report makes 13 recommendations to the Australian Government, including the introduction of comprehensive, “economy-wide” AI legislation to regulate the use of AI, rather than relying on existing frameworks and piecemeal regulation. The task force also recommended classifying AI according to risk, using a principles-based approach to defining high-risk uses of AI (supported by non-exhaustive examples), and developing stricter requirements for those high-risk uses. The report states that any AI affecting people’s rights in the workplace should be classified as high-risk, including automated AI tools for resume screening, shift rostering, and performance evaluation.
If adopted, these recommendations would bring Australia more closely into line with jurisdictions such as the EU that have adopted comprehensive AI legislation (outlined here). This contrasts with the New Zealand government’s stated intention to avoid a standalone AI bill, as noted in a recent cabinet paper stating that “further regulatory intervention should only be considered to unleash innovation or address serious risks.”
4. European Data Protection Board (EDPB) issues guidance on AI and personal data
The EDPB issued a formal opinion on December 18, providing critical analysis of data protection issues related to AI. Although not directly applicable in New Zealand, this guidance provides valuable insight for businesses looking to establish strong data protection controls when training and using AI models.
The opinion considers, among other things, whether AI models can be regarded as “anonymous”. This is relevant in a privacy context because anonymised information does not typically constitute “personal information” (defined in the New Zealand Privacy Act 2020 to include “information about an identifiable individual”). The EDPB opinion outlines two criteria that an AI model must meet to be considered anonymous: (i) there must be only a “slim” chance of directly extracting personal data from the model; and (ii) queries to the model must not produce identifiable personal information, even unintentionally. If these conditions are not met, the EDPB will consider the AI model not to be anonymous, and privacy laws will apply.
The opinion highlights the need for companies to scrutinise any anonymity claims made about their AI systems and to ensure those claims are supported by appropriate technical safeguards and detailed privacy impact assessments (PIAs).
Looking to the future
From domestic initiatives such as the FMA’s regulatory sandbox and the proposed biometrics code to international developments such as the EDPB opinion, AI regulation is evolving rapidly. Staying informed and proactive is essential, not only to ensure compliance but also to take advantage of new opportunities in this fast-changing technology environment. We can keep New Zealand businesses developing or deploying AI technologies up to date on these important changes, helping them remain well positioned to adapt to a future increasingly shaped by AI.
If you have any questions about the matters covered in this article, or if you require assistance with making a submission on the biometrics code or applying to join the FMA’s regulatory sandbox initiative, please contact us using the contact details listed or get in touch with your usual Bell Gully adviser.
This article was written with the assistance of Hannah Giles, a summer intern in Bell Gully’s summer internship programme.