AI-based attacks, artificial intelligence, machine learning, data governance
The panel also addresses AI vulnerabilities and governance challenges
Anna Delaney (Anna Madeline) • February 7, 2025
DeepSeek, an open source AI model developed by a Chinese company, is under scrutiny following multiple failed security tests and data leaks that exposed user information and API keys. Zscaler CISO Sam Curry joined this week’s ISMG Editors’ Panel to discuss AI security, risk management and upcoming changes to U.S. policy.
See also: From Silos to Synergy: How GenAI Aligns IT and Security Teams
“It didn’t take long for an independent researcher to find the error,” Curry said. “It’s the old trade-off between quality, security and time to market. Engineering is about quality, time and resources. No matter how good your innovation is, it can still have a big impact, but it’s better to get the quality right before you get out there.”
Curry emphasized that AI security requires a new approach. “It’s not enough to say, ‘Well, I don’t put anything sensitive in it.’ It can interpolate, guess and manufacture the very things you care about,” he said. “You have to treat it like you’re dealing with something very intelligent, in a similar position of trust, and I don’t think we’re doing that. It’s not like any other body of algorithms or code.”
Curry joined ISMG’s Anna Delaney, director of productions; Tom Field, vice president of editorial; and Michael Novinson, managing editor, ISMG business, to discuss:

- The significance of the security vulnerabilities and data breaches that have emerged in the weeks since DeepSeek-R1’s release on January 20;
- Best practices for securing AI models against adversarial attacks and supply chain risks;
- The potential impact of a new U.S. executive order on business and policy frameworks.
The ISMG Editors’ Panel runs weekly. Don’t miss our previous installments, including the January 24 edition on the future challenges of U.S. cybersecurity programs and the January 31 edition on DeepSeek’s AI disruption and security risks.