
Why G7 should accept “federated learning”

By versatileai | June 12, 2025

Artificial intelligence (AI) is transforming the world, from diagnosing diseases in hospitals to catching fraud in banking systems. But it also raises urgent questions.

One question looms large as G7 leaders prepare to meet in Alberta: how can we build powerful AI systems without sacrificing privacy?

The G7 summit is an opportunity to set the tone for how democracies manage emerging technologies. Regulation is progressing, but it cannot succeed without strong technical solutions.

In our view, a technique known as federated learning (FL) is one of the most promising yet often overlooked tools, and it deserves to be at the heart of the conversation.

Read more: Inspired by media theorist Marshall McLuhan, six ways AI can partner with us in creative research

As a researcher in AI, cybersecurity and public health, I have seen these data dilemmas firsthand. AI thrives on data, much of which is deeply personal: medical histories, financial transactions, critical infrastructure logs. The more centralized the data, the greater the risk of leaks, misuse or cyberattacks.

The UK's National Health Service has suspended a promising AI initiative over fears about how data would be handled. In Canada, concerns have emerged about the storage of personal information, including immigration and health records, on foreign cloud services. Trust in AI systems is fragile; once it breaks, innovation stalls.

French President Emmanuel Macron speaks at the AI Action Summit held in Paris in February 2025. (The Canadian Press/Sean Kilpatrick)

Why centralized AI is a growing liability

The dominant approach to training AI is to pool all the data in one centralized place. On paper, that is efficient. In reality, it creates a security nightmare.

Centralized systems are attractive targets for hackers. They are particularly difficult to regulate when data flows across national or sectoral boundaries. And they concentrate too much power in the hands of a small number of data holders and tech giants.

FL flips that model: instead of bringing the data to the algorithm, it brings the algorithm to the data. Local institutions, whether hospitals, government agencies or banks, train AI models on their own data. Only model updates, not raw data, are shared with a central system. It's like a student doing homework at home and submitting only the final answer, not the notebook.

This approach dramatically reduces the risk of data breaches while preserving the ability to learn from large-scale trends.
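To make the mechanics concrete, here is a minimal sketch in Python of federated averaging over a toy linear model. It is an illustration of the general idea rather than the method used in any of the projects described in this article: each simulated site computes a weight update on its own data, and only those updates, never the records themselves, reach the server.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """Train a linear model locally; only the weight delta leaves the site."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w - global_w                    # the update that gets shared

def federated_averaging(clients, n_features, rounds=20):
    """Server loop: combine size-weighted client updates into a global model."""
    global_w = np.zeros(n_features)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        weights = sizes / sizes.sum()
        global_w += sum(wt * upd for wt, upd in zip(weights, updates))
    return global_w

# Toy demo: three "hospitals" hold private samples of the same underlying trend.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

print(federated_averaging(clients, n_features=3))  # converges toward true_w
```

In a real deployment the local models would be far richer and the communication secured, but the flow of information is the same: parameters move, data does not.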

Where is it already working?

FL could be a game changer. When combined with techniques such as differential privacy, secure multi-party computation or homomorphic encryption, it can dramatically reduce the risk of data leaks.
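As a rough illustration of how one such add-on fits in, the sketch below clips a client's update and adds Gaussian noise before it is shared, which is the basic recipe behind differentially private FL. It is a simplified, assumed example; a production system would also track a formal privacy budget.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Bound and blur a client's model update before it leaves the site."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # limit any one site's influence
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# A raw update computed locally is privatized before being sent to the server;
# the server only ever sees the clipped, noised version.
raw_update = np.array([0.8, -2.3, 1.1])
print(privatize_update(raw_update))
```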

In Canada, researchers are already using FL to train cancer detection models across the province without moving sensitive health records.

Artificial intelligence is being used to train cancer detection models. (Shutterstock)

The project, which involves the Canadian Primary Care Sentinel Surveillance Network, demonstrates how FL can be used to predict chronic diseases such as diabetes while keeping all patient data firmly within local boundaries.

Banks are using it to detect fraud without sharing customer identities. Cybersecurity agencies are exploring ways to coordinate across jurisdictions without exposing their logs.

Read more: Healthcare AI: Potential and pitfalls in app-based diagnostics

Why the G7 needs to act now

Governments around the world are racing to regulate AI. Canada's proposed Artificial Intelligence and Data Act, the European Union's AI Act and the US executive order on safe, secure and trustworthy AI are all major steps. However, these efforts may fall short without a safe way to collaborate on data-intensive problems such as pandemics, climate change and cyber threats.

FL allows different jurisdictions to co-operate on shared agendas without compromising local control or sovereignty. It turns policy into practice by enabling technical collaboration without the usual legal and privacy complications.

Equally important, adopting FL sends a political signal: democracies can lead not only on innovation, but also on ethics and governance.

Alberta is not just the host of a G7 summit. The province has a thriving AI ecosystem, with institutions such as the Alberta Machine Intelligence Institute and industries from agriculture to energy that generate enormous amounts of valuable data.

Imagine an interdisciplinary task force: agricultural producers using local data to monitor soil health, energy companies analyzing emissions patterns, public agencies modelling wildfire risks, all working together while their data stays protected. It's not a futuristic fantasy; it's a pilot program waiting to happen.

A devastated neighbourhood in Jasper, Alta., on August 19, 2024. The wildfires forced evacuations and caused extensive damage in the national park and the Jasper townsite. (The Canadian Press/Amber Bracken)

A foundation for trust

AI is only as trustworthy as the systems behind it. And many of today's systems are built on outdated assumptions about centralization and control.

FL offers a new foundation on which privacy, transparency and innovation can work together. There is no need to wait for a crisis to act. The tools already exist. What is missing is the political will to lift them from promising prototypes to standard practice.

If the G7 is serious about building a safer and fairer AI future, FL should be a central part of that plan, not a footnote.
