Versa AI hub
Cybersecurity

Picklescan vulnerability allows hackers to bypass AI security checks

March 12, 2025

Sonatype researchers reveal critical vulnerabilities in Picklescan. Learn how these flaws affect Hugging Face, the security of AI models, and best practices for developers.

Sonatype cybersecurity researchers have identified several vulnerabilities in Picklescan, a tool used to scan Python pickle files for malicious code. These files, commonly used to store and retrieve machine learning models, pose security risks because they can execute arbitrary code while saved data is being deserialized.
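That danger is inherent to the pickle format itself: deserializing a pickle stream can run attacker-chosen code before any data is returned. A minimal, benign sketch (the `Payload` class is illustrative, not taken from the research):

```python
import pickle

# Any class can define __reduce__, which tells the unpickler to call an
# arbitrary callable with arbitrary arguments during loading. A real
# attacker would invoke os.system or similar; here we call eval harmlessly.
class Payload:
    def __reduce__(self):
        # (callable, args) is invoked by pickle.loads()
        return (eval, ("6 * 7",))

malicious = pickle.dumps(Payload())

# Loading the bytes executes eval("6 * 7") before anything is returned.
result = pickle.loads(malicious)
```

Note that the loading side does not even need the `Payload` class: the pickle stream references only `eval`, so merely calling `pickle.loads` triggers the execution.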

A total of four vulnerabilities were found, according to Sonatype's analysis shared with HackRead.com.

  • CVE-2025-1716 – Attackers can bypass the tool's checks and execute harmful code.
  • CVE-2025-1889 – Hidden malicious files go undetected because the tool relies on file extensions.
  • CVE-2025-1944 – Manipulating a ZIP archive's file name can break the tool.
  • CVE-2025-1945 – Malicious files are not detected if certain bits in the ZIP archive have been changed.

Platforms such as Hugging Face use Picklescan as part of their security measures to identify malicious AI models. The discovered vulnerabilities pose a threat because malicious actors can bypass these security checks, which could lead to arbitrary code execution on the machines of developers who rely on open-source AI models. In other words, an attacker could gain full control over the affected system.

“Given the role of pickle files within the broader AI/ML hygiene posture (when used with PyTorch), the vulnerabilities discovered by Sonatype can be bypassed (at least partially) by threat actors and leveraged to target developers who use open-source AI,” the researchers explained.

The good news is that the Picklescan maintainers quickly addressed the vulnerabilities, releasing version 0.0.23, which patches the flaws and minimizes the chances of malicious actors exploiting them.

Sonatype’s Chief Product Officer Mitchell Johnson encourages developers to avoid pickle files from untrusted sources whenever possible and to use safer file formats instead. If pickle files must be used, they should only be loaded in a safe, controlled environment. It is also important to verify the integrity of AI models through cryptographic signatures and checksums, and to implement multi-layered security scanning.
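The checksum step can be sketched with the standard library alone; the helper names and the throwaway file below are illustrative:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Hash a file in chunks so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Refuse a model file whose digest does not match the published one."""
    return sha256_of(path) == expected_digest

# Demo with a temporary file standing in for a downloaded model:
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"model weights")
    model_path = f.name

published = hashlib.sha256(b"model weights").hexdigest()
ok = verify_model(model_path, published)   # digest matches the published one
bad = verify_model(model_path, "0" * 64)   # tampered or wrong file
os.remove(model_path)
```

The comparison only helps if the expected digest comes from a trusted channel (for example, the model author's signed release notes) rather than alongside the file itself.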

The findings highlight the growing need for sophisticated and reliable security measures in AI/ML pipelines. To mitigate risk, organizations should adopt safer file formats, use multiple security scanning tools, and monitor for suspicious behavior when loading pickle files.
