Harden the security of models in the ML community

January 2, 2025


We are pleased to announce our partnership with Protect AI as part of our long-standing commitment to providing a secure and reliable platform for the ML community.

Protect AI is a company founded with a mission to create a safer AI-powered world. They are developing Guardian, a tool that lets the community keep up the rapid pace of AI innovation without compromising security.

Our decision to partner with Protect AI is based on their community-driven approach to security, their active support of open source, and their expertise at the intersection of security and AI.

Interested in joining our security partnership, or in surfacing your scanner's results on the Hub? Please contact us at security@huggingface.co.

Model security review

To share a model, you serialize its weights, configuration, and the other data structures needed to interact with it, which makes the model easier to store and transfer. Some serialization formats are vulnerable to nasty exploits such as arbitrary code execution (looking at you, pickle), so models shared in these formats can be dangerous.
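To make the risk concrete, here is a minimal, deliberately harmless sketch of how unpickling can run attacker-chosen code; the `Payload` class and the `print` call are illustrative stand-ins, not taken from any real exploit.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # it can return any callable plus its arguments.
    def __reduce__(self):
        # Harmless stand-in: a real exploit could return os.system
        # with an arbitrary shell command here instead of print.
        return (print, ("this ran during unpickling",))

data = pickle.dumps(Payload())

# Merely loading the bytes invokes the callable above --
# no attribute access or method call on the result is needed.
pickle.loads(data)
```

In other words, loading an untrusted pickle-based checkpoint amounts to running untrusted code.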

Hugging Face has become a popular platform for model sharing, and we want to keep protecting the community that relies on it. That's why we build tools like picklescan and are now integrating Guardian into our suite of scanners.
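As a rough sketch of the kind of static check such scanners perform (this is not picklescan's or Guardian's actual implementation, just an illustration built on the standard library's `pickletools`), the opcodes that import modules or invoke callables can be enumerated without ever executing the payload:

```python
import pickle
import pickletools

# Opcodes that import names or invoke callables are the usual red flags:
# GLOBAL / STACK_GLOBAL pull in module attributes, REDUCE calls them.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """List suspicious opcodes in a pickle stream without executing it."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} {arg or ''}".strip())
    return findings

# A benign pickle of plain data produces no findings...
print(scan_pickle_bytes(pickle.dumps({"weights": [0.1, 0.2]})))  # -> []
# ...whereas a pickle that rebuilds objects through callables surfaces
# STACK_GLOBAL / REDUCE entries for a scanner (or a human) to review.
```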

Pickle is not the only exploitable format: Keras Lambda layers, for example, can also be abused to execute arbitrary code. The good news is that Guardian detects both of these exploits, as well as exploits in other file formats. For the latest scanner coverage, please visit the Guardian Knowledge Base.
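On the Keras side, newer Keras releases expose a `safe_mode` flag on `load_model` that refuses to deserialize Lambda-layer code embedded in a saved model; a minimal sketch, assuming Keras 3 and a hypothetical local file named `model.keras`:

```python
import keras

# safe_mode=True (the default in Keras 3) disallows unsafe Lambda
# deserialization: loading raises an error instead of silently
# executing Python code that was serialized into the model file.
model = keras.saving.load_model("model.keras", safe_mode=True)
```

Server-side scanning like Guardian's complements this kind of load-time guard, since malicious files can be flagged before anyone downloads them.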

Read all our security documentation here: https://huggingface.co/docs/hub/security 🔥

Integration

While integrating Guardian as a third-party scanner, we took the opportunity to improve the front end that displays scan results.

When a scan finds pickle imports in a file, an additional Pickle button now appears next to it, listing what the file imports.

You don't need to do anything to benefit from this: Guardian automatically scans every public model repository as soon as you push your files to the Hub. To see the feature in action, check out this example repository: mcpotato/42-eicar-street.
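For reference, pushing model files with the `huggingface_hub` client looks like the sketch below; the repository id and local folder are hypothetical placeholders, and no extra flag or configuration is needed for the scans to run.

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up your token from `huggingface-cli login`

# Hypothetical repo id and folder; replace with your own.
api.upload_folder(
    repo_id="your-username/your-model",
    folder_path="./my_model",
    repo_type="model",
)
# Once the files are on the Hub, the scanners (including Guardian)
# process the public repository automatically.
```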

Please note that we host over 1 million model repositories, so you may not see scan results for your models yet; it may take some time for the scanners to catch up 😅.

We have already scanned hundreds of millions of files in total, because we believe that empowering our community to share models in a secure and frictionless way will help grow the field as a whole.
