Chloé Messdaghi and Ben Lorica discuss AI security, a topic that is becoming increasingly important as AI-driven applications are deployed in the real world. There is a knowledge gap: security professionals don’t understand AI, and AI developers don’t understand security. Be aware of all the available resources. The coming year will see AI security certifications and training. Also, bring together everyone in your organization, including AI developers and security experts, to develop AI security policies and playbooks.
Check out other episodes of this podcast or the full-length version of this episode on the O’Reilly Learning Platform.
Learn faster. Dig deeper. See farther.
About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge is to turn those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experiences to help make AI work in your company.
Timestamps

0:00: Introduction
0:24: How is AI security different from traditional cybersecurity?
0:44: AI is a black box. There is no transparency or explainability. Transparency shows how the AI works; explainability shows how it makes decisions. Black boxes are difficult to secure.
2:12: There is a huge knowledge gap. Companies aren’t doing what they need to do.
2:24: When you talk to executives, do you differentiate between traditional AI and ML and newer generative AI models?
2:43: We also talk about older models, though that’s something we should do more. We’ve had AI for a while, but security hasn’t been part of that conversation for much of that time.
3:26: Where do security people learn how to secure AI? There is no certification yet. We are playing a massive catch-up game.
3:53: What is the state of awareness about AI incident response strategies?
4:15: Even in traditional cybersecurity, we’ve always struggled to keep incident response plans from becoming ad hoc or out of date. Much of this involves knowing all the technologies and products your company uses; it’s hard to protect your environment if you don’t know everything that’s in it. On adopting AI-related cybersecurity measures: in North America, 70% of organizations said they had implemented one or two of the five security measures, and 24% had adopted two to four.
6:35: What’s the first thing you think about when updating your incident response playbook?
6:51: Make sure you have all the right people in the room. Departmental silos are still a problem. CISOs can be fired or overruled, even when they’re in the room, when it comes to decisions. There are concerns about limiting innovation and delaying product launch dates. CTOs, data scientists, ML developers, and all the other appropriate people need to ensure that the AI is safe and that everyone is taking precautions.
7:48: For an enterprise with a mature cybersecurity incident playbook that you want to update for AI: what AI brings is that you need to include more people.
8:17: We must recognize that there is an AI knowledge gap: data scientists lack security training, and security people don’t know where to turn for education. There aren’t many courses or programs out there, but this year we will see many developments.
9:16: It’s important to be aware of everything related to AI. We need to have more conversations about AI ethics. It’s worth looking at the bipartisan US House AI Task Force report.
10:25: Globally, we recommend checking out the OECD AI Policy Hub. There is also the World Economic Forum’s Presidio AI Framework. Check out OWASP, MITRE ATLAS, DASF, and the NIST AI frameworks.