We are pleased to announce that we are partnering with Wiz with the goal of improving security across the platform and the AI/ML ecosystem.
Wiz researchers collaborated with Hugging Face on the security of our platform and shared their findings. Wiz is a cloud security company that helps its customers build and maintain software in a secure manner. Alongside the publication of their research, we are taking the opportunity to highlight several related security improvements at Hugging Face.
Hugging Face recently integrated Wiz for vulnerability management, a continuous and proactive process to keep our platform free of security vulnerabilities. In addition, we use Wiz for Cloud Security Posture Management (CSPM), which ensures that our cloud environment is configured securely and stays secure.
One of our favorite Wiz features is its holistic view of vulnerabilities, from storage to compute to networking. We run multiple Kubernetes (K8s) clusters and have resources across multiple regions and cloud providers, so it is extremely helpful to have a central report in a single location with a full context graph for each vulnerability. We have also built automatic remediation of detected issues on top of their tooling, particularly in Spaces.
As part of the collaboration, Wiz’s security research team identified shortcomings in our sandboxed compute environments by running arbitrary code within the system, made possible by pickle files. It is important to note that all issues related to the exploit had been fully resolved before this blog and the Wiz security research paper were published, and we remain vigilant in our threat detection and incident response processes.
Hugging Face Security
At Hugging Face, we take security seriously; as AI evolves rapidly, new threat vectors emerge all the time. Even as Hugging Face announces partnerships and business relationships with the largest names in tech, we remain committed to allowing our users and the AI community to responsibly experiment with and operationalize AI/ML systems and technologies. We are dedicated to securing our platform as well as democratizing AI/ML, so that the community can contribute to and be part of this paradigm-shifting event that will impact us all. We are writing this blog to reaffirm our commitment to protecting our users and customers from security threats. Below we will also discuss Hugging Face’s philosophy regarding our support of the controversial pickle files, as well as the shared responsibility of moving away from the pickle format.
There are many other exciting security improvements and announcements coming in the near future. These publications will not only discuss security risks to the Hugging Face platform and community, but also cover systemic security risks of AI and best practices for mitigating them. We remain committed to making our products, our infrastructure, and the AI community secure; stay tuned for follow-up security blog posts and whitepapers.
Open Source Security Collaboration and Tools for the Community
We highly value transparency and collaboration with the community, and this includes participation in the identification and disclosure of vulnerabilities, cooperating to resolve security issues, and security tooling. Below are examples of security wins born from collaboration that help the entire AI community lower security risk:
- Picklescan was built in partnership with Microsoft. Matthieu Maitre started the project, and since we had a version of the same tool internally, we joined forces and contributed to Picklescan. Refer to the documentation page if you would like to know more: https://huggingface.co/docs/hub/en/security-pickle
- Safetensors, developed by Nicolas Patry, is a secure alternative to pickle files. Safetensors was developed in a collaborative initiative with EleutherAI & Stability AI and has been audited by Trail of Bits: https://huggingface.co/docs/safetensors/en/index
- We have a robust bug bounty program that draws many great researchers from around the world. Researchers who have identified a security vulnerability may inquire about joining our program through security@huggingface.co
- Malware scanning: https://huggingface.co/docs/hub/en/security-malware
- Secrets scanning: https://huggingface.co/docs/hub/security-secrets
- As previously mentioned, we are also collaborating with Wiz to lower platform security risks
- We are starting a series of security publications that address security issues facing the AI/ML community
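The idea behind a scanner like Picklescan can be illustrated with Python's standard library alone: a pickle stream that stores only plain data never needs the opcodes that import or call objects, so flagging those opcodes catches the dangerous cases. This is a simplified sketch of the approach, not Picklescan's actual implementation; the `flag_suspicious` helper and its opcode list are illustrative.

```python
import pickle
import pickletools

# Opcodes that import objects or call them -- the primitives every
# pickle-based exploit needs (an illustrative list, not Picklescan's exact one).
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def flag_suspicious(data: bytes) -> list:
    """Return the names of potentially dangerous opcodes in a pickle stream."""
    return [op.name for op, arg, pos in pickletools.genops(data)
            if op.name in SUSPICIOUS_OPCODES]

# Plain containers of plain data need none of these opcodes...
print(flag_suspicious(pickle.dumps({"weights": [1.0, 2.0]})))  # []

# ...but any object that smuggles in a callable does.
class Evil:
    def __reduce__(self):
        return (print, ("pwned",))

print(flag_suspicious(pickle.dumps(Evil())))  # includes REDUCE
```

Static scanning like this is a heuristic, not a guarantee, which is why it is paired with the other mitigations discussed below.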
Security Best Practices for Open Source AI/ML Users
AI/ML has introduced new attack vectors, but for many of these attacks the mitigants are long-standing and well known. Security professionals should ensure that the relevant security controls are applied to AI resources and models. In addition, below are some resources and best practices when working with open source software and models.
Pickle Files – The Insecure Elephant in the Room
Pickle files have been at the core of most of the research conducted by Wiz, as well as other recent publications by security researchers about Hugging Face. Pickle files have long been considered to carry security risks; for more information, see our documentation: https://huggingface.co/docs/hub/en/security-pickle
Despite these known security flaws, the AI/ML community still frequently uses pickle files (or similarly trivially exploitable formats). Many of these use cases are low risk or for testing purposes, which makes the familiarity and ease of use of pickle files more attractive than the safer alternatives.
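To make the risk concrete, here is a minimal, deliberately harmless demonstration of why loading an untrusted pickle file is equivalent to running its author's code. The `Malicious` class name is illustrative, and the payload is a benign call:

```python
import os
import pickle

class Malicious:
    def __reduce__(self):
        # Whatever callable is returned here runs during unpickling.
        # A real payload could execute any shell command; this one is benign.
        return (os.getcwd, ())

payload = pickle.dumps(Malicious())

# The loader does not need the Malicious class at all: the pickle stream
# itself instructs the interpreter to call os.getcwd().
result = pickle.loads(payload)
print(result)  # the current working directory -- code ran at load time
```

Because the code execution happens inside `pickle.loads`, no amount of caution after loading helps; the only safe options are to scan before loading or to avoid the format entirely.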
As an open source AI platform, we are left with the following options:
1. Ban pickle files entirely
2. Do nothing about pickle files
3. Find a middle ground that both allows pickle use and reasonably and practicably mitigates the risks associated with pickle files
We have chosen option 3 for the time being. This option is a burden on our engineering and security teams, but we have put significant effort into mitigating the risks while still allowing the AI community to use the tools of their choice. Some of the key mitigations we have implemented for the risks related to pickle include:
- Creating clear documentation outlining the risks
- Developing automated scanning tools
- Using scanning tools and labeling models with clear warnings when security vulnerabilities are identified within them
- Providing a secure alternative to use in lieu of pickle (safetensors)
We intend to continue to lead in protecting and securing the AI community. Part of this is monitoring and addressing the risks associated with pickle files. Sunsetting support for pickle is not out of the question either, but we are doing our best to balance any such decision against its impact on our community.
An important note is that the upstream open source communities, as well as large tech and security firms, have been largely silent on contributing to solutions here, leaving Hugging Face to both define the philosophy and invest heavily in developing and implementing mitigations.
Closing remarks
I spoke extensively with Nicolas Patry, the creator of safetensors, in writing this blog post.
Start proactively replacing your pickle files with safetensors. As mentioned earlier, pickle contains inherent security flaws and may become unsupported in the near future. Open security issues/PRs upstream in your favorite libraries, and push for secure defaults upstream wherever possible.
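The reason safetensors is safe by construction is that its file layout contains only data, never code: an 8-byte little-endian header length, a JSON header describing each tensor, then the raw tensor bytes. Below is a pure-stdlib sketch of that layout to illustrate the point; use the official `safetensors` library in practice, as `save_st`/`load_st` are illustrative names and skip the library's validation.

```python
import json
import struct

def save_st(path, tensors):
    """Write a minimal safetensors-style file.

    `tensors` maps a name to (dtype, shape, raw_bytes). The file holds only
    JSON metadata plus raw bytes, so loading it can never execute code.
    """
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        blobs.append(raw)
        offset += len(raw)
    head = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(head)))  # 8-byte LE header length
        f.write(head)
        for blob in blobs:
            f.write(blob)

def load_st(path):
    """Read the file back into {name: raw_bytes} -- pure parsing, no exec."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n))
        body = f.read()
    return {name: body[meta["data_offsets"][0]:meta["data_offsets"][1]]
            for name, meta in header.items()}

save_st("demo.safetensors", {"w": ("F32", [2], struct.pack("<2f", 1.0, 2.0))})
print(load_st("demo.safetensors")["w"] == struct.pack("<2f", 1.0, 2.0))  # True
```

Contrast this with pickle: there is simply no place in this format where a callable could be smuggled in, so deserialization is plain parsing.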
The AI industry is changing rapidly, and new attack vectors and exploits are being identified all the time. Hugging Face has a one-of-a-kind community, and we partner heavily with you to help us maintain a secure platform.
Please remember to responsibly disclose security vulnerabilities/bugs through the appropriate channels to avoid potential legal liability and violation of laws.
Want to join the discussion? Reach out to us at security@huggingface.co or follow us on LinkedIn/Twitter.