Sonatype researchers reveal critical vulnerabilities in Picklescan. Learn how these flaws affect Hugging Face, the security of open source AI models, and best practices for developers.
Sonatype cybersecurity researchers have identified several vulnerabilities in Picklescan, a tool used to scan Python pickle files for malicious code. Pickle files, commonly used to store and load machine learning models, pose security risks because they can execute arbitrary code during deserialization.
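To illustrate the underlying risk, the short, benign sketch below shows how simply loading a pickle can trigger a function call. The Payload class is purely illustrative and is not taken from Sonatype's findings.

```python
import pickle

# Any object can define __reduce__ so that pickle.loads() calls an
# arbitrary callable when the data is deserialized.
class Payload:
    def __reduce__(self):
        # The callable here is harmless (print), but it could just as
        # easily be os.system or any other function available at load time.
        return (print, ("code executed during unpickling",))

data = pickle.dumps(Payload())
pickle.loads(data)  # prints the message: code ran simply by loading the data
```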
A total of four vulnerabilities were found, according to Sonatype's analysis shared with HackRead.com.
CVE-2025-1716 – Attackers can bypass the tool's checks and execute harmful code.
CVE-2025-1889 – Hidden malicious files go undetected because the tool relies on file extensions (see the sketch after this list).
CVE-2025-1944 – Manipulating ZIP archive file names can cause the tool to malfunction.
CVE-2025-1945 – Malicious files go undetected when certain flag bits in the ZIP archive have been modified.
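As a minimal sketch of why extension-based detection (the weakness behind CVE-2025-1889) is insufficient, the example below inspects pickle opcodes in the raw bytes regardless of what the file is named. This is an illustrative approach using the standard pickletools module, not Picklescan's actual implementation.

```python
import pickletools

# Opcodes that can import or call objects during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def flags_pickle_bytes(data: bytes) -> bool:
    """Return True if the byte stream contains opcodes that can import or call code."""
    try:
        return any(op.name in SUSPICIOUS_OPCODES
                   for op, _arg, _pos in pickletools.genops(data))
    except Exception:
        # Not a valid (or truncated) pickle stream: flag it for manual review.
        return True

# Usage: scan the raw bytes, whatever the extension claims the file to be.
# with open("model.txt", "rb") as f:   # hypothetical file name
#     print(flags_pickle_bytes(f.read()))
```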
Note that platforms such as Hugging Face use Picklescan as part of their security measures to identify malicious AI models. The discovered vulnerabilities therefore pose a real threat: malicious actors could bypass these security checks, leading to arbitrary code execution on the machines of developers who rely on open source AI models. In practice, this means an attacker could gain full control over an affected system.
“Given the role pickle files play in the broader AI/ML hygiene posture (when used with PyTorch), the vulnerabilities discovered by Sonatype can be exploited by threat actors to bypass scanning (at least partially) and can be leveraged to target developers who leverage open source AI,” the researchers noted.
The good news is that the Picklescan maintainer responded quickly, releasing version 0.0.23, which patches the flaws and minimizes the chances of malicious actors exploiting them.
Sonatype’s Chief Product Officer Mitchell Johnson encourages developers to avoid using pickle files from untrusted sources whenever possible and to prefer safer file formats instead. If pickle files must be used, they should only be loaded in a safe, controlled environment. He also stresses verifying the integrity of AI models through cryptographic signatures and checksums, and implementing multi-layered security scanning.
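One simple layer of that defense is verifying a published checksum before any deserialization happens. The minimal sketch below assumes a provider-supplied SHA-256 digest; the file name and expected digest are placeholders, not values from the article.

```python
import hashlib

EXPECTED_SHA256 = "0123...abcd"  # hypothetical checksum published by the model provider
MODEL_PATH = "model.pkl"         # hypothetical model file

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Checksum mismatch: refusing to load the model")
# Only after the checksum matches should the file be deserialized,
# ideally inside a sandboxed or otherwise isolated environment.
```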
The findings highlight the growing need for robust and reliable security measures in AI/ML pipelines. To mitigate the risk, organizations should adopt safer file formats, use multiple security scanning tools, and monitor for suspicious behavior when pickle files are loaded.