Despite growing concerns about AI security, risk, and compliance, practical solutions remain elusive. NIST released NIST-AI-600-1, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” on July 26, 2024, but most organizations have yet to understand and begin to implement its guidance. The first step is establishing an internal AI council. In AI governance. As AI adoption and risks increase, it’s time to understand why it’s important to sweat the small and not-so-small things, and where we’re going from here.
Data protection in the age of AI
Recently, I attended the annual membership conference of ACSC, a nonprofit organization focused on improving the cybersecurity defenses of businesses, universities, government agencies, and other organizations. From the discussion, the key focus for CISOs, CIOs, CDOs, and CTOs today is centered around protecting proprietary AI models from attacks and protecting proprietary data from being incorporated into public AI models. It is clear that
While a minority of organizations are concerned about the former problem, organizations in this category recognize the need to protect against prompted injection attacks that cause model drift, hallucinations, or complete malfunction. Masu. In the early stages of AI adoption, there were no well-known incidents comparable to the 2013 Target breach to illustrate how an attack might unfold. Most of the evidence at this point is academic. However, executives deploying their own models are considering that it is only a matter of time before a large-scale attack becomes public, which could result in brand damage or even greater damage. They’re starting to focus on how to protect the integrity of their companies.