Artificial intelligence (AI) is transforming the world, from diagnosing diseases in hospitals to catching fraud in banking systems. But it also raises urgent questions.
One problem looms large as G7 leaders prepare to meet in Alberta: how can we build powerful AI systems without sacrificing privacy?
The G7 Summit is an opportunity to set the tone for how democracies manage emerging technologies. Regulation is progressing, but it cannot succeed without strong technical solutions.
In my view, federated learning (FL) is one of the most promising yet often overlooked tools, and it deserves to be at the heart of the conversation.
Read more: Inspired by media theorist Marshall McLuhan, six ways AI can partner with us in creative research
As a researcher in AI, cybersecurity and public health, I have seen this data dilemma firsthand. AI thrives on data, much of which is deeply personal: medical histories, financial transactions, critical infrastructure logs. The more centralized the data, the greater the risk of leaks, misuse or cyberattacks.
The United Kingdom's National Health Service has suspended promising AI initiatives over fears about how data would be handled. In Canada, concerns have emerged about personal information, including immigration and health records, being stored on foreign cloud services. Trust in AI systems is fragile; once it breaks, innovation stalls.
Why centralized AI is a growing liability
The dominant approach to training AI is to pool all the data in one central place. On paper, this is efficient. In reality, it creates a security nightmare.
Centralized systems are attractive targets for hackers. They are particularly difficult to regulate when data flows across national or sectoral boundaries. And they concentrate too much power in the hands of a small number of data holders and tech giants.
Federated learning takes the opposite approach: instead of bringing the data to the algorithm, it brings the algorithm to the data. Local institutions, whether hospitals, government agencies or banks, train AI models on their own data. Only model updates, not the raw data, are shared with a central server. It's like a student doing homework at home and submitting only the final answers, not the notebook.
This approach dramatically reduces the risk of data breaches while preserving the ability to learn from patterns across many institutions.
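To make the mechanics concrete, here is a minimal sketch of one round-based federated averaging loop, the canonical FL training scheme. The toy linear model, the simulated clients and the learning rate are illustrative assumptions, not a description of any particular deployment:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data, lr=0.1, epochs=5):
    """Train a toy linear model on data that never leaves the client."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w  # only these weights are shared, never X or y

# Three simulated clients (think hospitals or banks), each holding private data.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

global_weights = np.zeros(3)
for _ in range(10):  # ten rounds of federated training
    updates = [local_update(global_weights, data) for data in clients]
    # The central server sees only the updates, and simply averages them.
    global_weights = np.mean(updates, axis=0)

print("Aggregated model weights:", global_weights)
```

The key point sits in the loop: each round, the server receives a handful of numbers from every client, while the records themselves stay on-site.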
Where is it already working?
FL could be a game changer. When combined with techniques such as differential privacy, secure multi-party computation or homomorphic encryption, it can reduce the risk of data leaks even further.
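As a rough illustration of how one of these techniques layers onto FL, a client can clip its model update and add calibrated noise before sharing it; this is the core mechanism of differentially private federated learning. The clipping threshold and noise scale below are assumed values for the sketch, not a tuned configuration:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip an update and add Gaussian noise before it leaves the client.

    Bounding each client's influence (clipping) and adding noise is the
    basic recipe of differentially private FL: even the shared update
    reveals very little about any single underlying record.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

# Example: privatize a raw model update before sending it to the server.
raw_update = np.array([0.8, -2.3, 1.1])
print(privatize_update(raw_update, rng=np.random.default_rng(1)))
```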
In Canada, researchers are already using FL to train cancer-detection models across provinces without moving sensitive health records.

One project, involving the Canadian Primary Care Sentinel Surveillance Network, demonstrates how FL can be used to predict chronic diseases such as diabetes while keeping all patient data firmly within local institutions.
Banks are using it to detect fraud without sharing customer identities. Cybersecurity agencies are exploring ways to coordinate across jurisdictions without exposing sensitive logs.
Read more: Healthcare AI: Potential and pitfalls in app-based diagnostics
Why the G7 needs to act now
Governments around the world are racing to regulate AI. Canada's proposed Artificial Intelligence and Data Act, the European Union's AI Act and the United States' executive order on safe, secure and trustworthy AI are all major steps. But without a safe way to collaborate on data-intensive problems, such as pandemics, climate change and cyber threats, these efforts may fall short.
FL allows different jurisdictions to co-operate on shared agendas without compromising local control or sovereignty. It turns policy into practice by enabling technical collaboration without the usual legal and privacy entanglements.
Just as importantly, adopting FL sends a political signal: democracies can lead not only on innovation, but on ethics and governance too.
Alberta is more than just the G7's host. The province has a thriving AI ecosystem, anchored by institutions such as the Alberta Machine Intelligence Institute, and industries from agriculture to energy that generate enormous amounts of valuable data.
Imagine an interdisciplinary task force: agricultural producers using local data to monitor soil health, energy companies analyzing emissions patterns and public agencies modelling wildfire risks, all working together without ever pooling their raw data. That is not a futuristic fantasy; it is a pilot program waiting to happen.
Building a foundation of trust
AI is only as trustworthy as the systems behind it, and many of today's systems are built on outdated ideas about centralization and control.
FL offers a new foundation, one where privacy, transparency and innovation can work together. We don't need to wait for a crisis to act: the tools already exist. What is missing is the political will to lift them from promising prototypes to standard practice.
If the G7 is serious about building a safer and fairer AI future, FL should be a central part of that plan, not a footnote.