In this Help Net Security interview, Appfire CISO Doug Kersten explains how treating AI like humans can change the way cybersecurity professionals use AI tools, fostering a more collaborative approach while acknowledging AI’s limitations.
Kersten also discusses the need for strong oversight and accountability to ensure AI is aligned with business objectives and secure.
Treating AI like humans can accelerate its development. Can you elaborate on how this approach changes the way cybersecurity professionals interact with AI tools?
Treating AI like humans is a perspective shift that fundamentally changes how cybersecurity leaders work. It encourages security teams to think of AI as a collaborative partner that is still prone to human-like failure. For example, as AI becomes more autonomous, organizations should focus on aligning its use with business goals while maintaining reasonable control over that autonomy. When designing policies and controls, organizations must also account for AI’s potential to distort the truth or produce inappropriate outcomes, much as humans can.
AI undoubtedly brings innovative capabilities and real added value, but it can also be fooled and can, in turn, deceive its users. This human-like characteristic means AI security controls must be evaluated alongside human-centered controls. Prompt-writing training is a practical example: the goal is to get accurate responses from the AI by ensuring the language used is interpreted the same way on both sides. This is a very human concern, and few, if any, earlier technology advances have raised it.
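As a rough illustration of that point (not something Kersten describes), the sketch below shows one way a team might standardize prompt language so that key terms are interpreted the same way by the analyst and the model. The glossary entries, function name, and output format are hypothetical.

```python
# Minimal sketch, assuming a hypothetical alert-triage use case: pin down
# terminology in the prompt so the model and the analyst share definitions.
GLOSSARY = {
    "incident": "a confirmed security event requiring response, not just an alert",
    "critical asset": "any system listed in the internal crown-jewels inventory",
}

def build_triage_prompt(alert_summary: str) -> str:
    """Build a prompt with explicit definitions and a constrained output format."""
    definitions = "\n".join(f"- {term}: {meaning}" for term, meaning in GLOSSARY.items())
    return (
        "You are assisting a security analyst with alert triage.\n"
        f"Use these definitions exactly as written:\n{definitions}\n\n"
        f"Alert summary:\n{alert_summary}\n\n"
        "Respond with: severity (low/medium/high), two sentences of reasoning, "
        "and any assumptions you made."
    )

print(build_triage_prompt("Multiple failed logins followed by a successful login from a new country."))
```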
Meanwhile, rapid advances in AI, from evolving capabilities to emerging vendors, are creating the most dynamic environment we have ever experienced. Cybersecurity leaders must adapt quickly and work closely with legal, privacy, operations, and procurement teams to ensure strategic alignment and comprehensive oversight when working with AI. Traditional best practices such as controlling access and minimizing data loss still apply, but they must evolve to accommodate AI’s flexible, user-driven, and human-like nature.
“Trust but verify” is at the heart of AI interactions. What are some common mistakes cybersecurity teams can make when blindly trusting AI-generated output?
The “trust but verify” principle is at the heart of cybersecurity best practice. The same principle applies to AI: teams can leverage AI’s speed and efficiency while applying human expertise to confirm that its output is accurate, reliable, and aligned with organizational priorities. Blindly trusting AI output can lead to security breaches and poor decisions by cybersecurity teams. Like humans, AI is powerful but not foolproof. It can make mistakes, propagate bias, and produce output that is inconsistent with organizational goals.
One common mistake is relying too heavily on the accuracy of AI output without questioning the data it was trained on. An AI model is only as good as the data it consumes; if that data is incomplete, biased, or outdated, the output may be flawed. Cybersecurity teams need to validate AI-generated recommendations against established knowledge and real-world conditions. On the security side, this helps weed out false positives.
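As a hedged example of that kind of validation (not taken from the interview), the sketch below cross-checks AI-flagged findings against an internal allowlist and asset inventory before anyone acts on them. The data, field names, and checks are hypothetical stand-ins for whatever ground truth a team actually maintains.

```python
# Minimal sketch: filter AI-generated findings against established knowledge
# before treating them as real. All data and field names are hypothetical.
KNOWN_GOOD_DOMAINS = {"updates.example-vendor.com", "telemetry.corp.internal"}
ASSET_INVENTORY = {"10.0.4.12": "build-server", "10.0.9.3": "decommissioned"}

def verify_ai_findings(findings: list[dict]) -> list[dict]:
    """Keep only findings that survive checks against known-good context."""
    verified = []
    for finding in findings:
        if finding.get("domain") in KNOWN_GOOD_DOMAINS:
            continue  # likely false positive: the domain is already vetted
        host = ASSET_INVENTORY.get(finding.get("source_ip", ""), "unknown")
        if host == "decommissioned":
            continue  # stale context: the asset no longer exists
        verified.append({**finding, "host": host, "needs_human_review": True})
    return verified

ai_findings = [
    {"domain": "updates.example-vendor.com", "source_ip": "10.0.4.12", "severity": "high"},
    {"domain": "exfil.badsite.example", "source_ip": "10.0.4.12", "severity": "high"},
]
print(verify_ai_findings(ai_findings))  # only the second finding survives, flagged for review
```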
Another risk is failing to monitor for adversarial manipulation. Attackers can target AI systems, exploiting their algorithms to generate false information while hiding real threats. Without proper oversight, teams can unknowingly rely on compromised output, leaving systems vulnerable.
What does effective human oversight of AI look like in the context of cybersecurity? What frameworks and processes need to be in place to ensure ethical and accurate AI decision-making?
Effective human oversight must include policies and processes for mapping, managing, and measuring AI risks. It should also include accountability structures so that teams and individuals are empowered, accountable, and trained.
Organizations must also establish a context for framing the risks associated with AI systems. AI actors responsible for one part of a process rarely have complete visibility or control over other parts. Interdependencies among relevant AI stakeholders can make it difficult to predict the impact of AI systems, including ongoing changes to them.
Measurement covers the analysis, assessment, benchmarking, and ultimately the monitoring of AI risks and their associated impacts. Measuring AI risk includes tracking metrics around trustworthiness characteristics, social impact, and human-AI dependencies. Trade-offs are sometimes necessary, and this is where human judgment comes in. To ensure reliability, all metrics and measurement methods should adhere to scientific, legal, and ethical norms and be applied through a transparent process.
Effectively managing AI requires risk prioritization and a plan for regular monitoring and improvement. This includes ongoing risk assessment and treatment to ensure organizations are able to adapt to the rapid changes in AI.
A very interesting framework that was recently released is the National Institute of Standards and Technology (NIST) AI Risk Management Framework. This framework was designed to better manage the risks to individuals, organizations, and society associated with AI and to measure trustworthiness.
Since AI makes decisions that affect security outcomes, where does the responsibility lie when an AI system makes a mistake or a wrong decision?
Responsibility begins with AI creators, the teams responsible for training and integrating AI systems. These teams must ensure that their AI tools are built on robust, diverse, and ethical datasets, with clearly defined parameters for how decisions are made. When an AI system fails, it is important to understand where the failure occurred, whether it stems from biased data, flawed algorithms, or unexpected vulnerabilities in the system design.
But accountability doesn’t end there. Security leaders, legal teams, and compliance personnel must work together to create governance structures that ensure proper accountability for AI-driven decisions, especially in sensitive areas such as cybersecurity. These structures should include clear escalation processes that allow for rapid intervention to mitigate negative consequences when AI systems make incorrect decisions.
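One way to make such an escalation process concrete, purely as a sketch and not a description of Appfire’s setup, is a gate that routes AI-proposed actions to a human reviewer whenever confidence is low or the potential impact is high. The thresholds and action names below are hypothetical.

```python
# Minimal sketch: escalate AI-driven decisions to a human when confidence is
# low or the proposed action is high impact. Thresholds and names are hypothetical.
HIGH_IMPACT_ACTIONS = {"block_account", "isolate_host", "revoke_certificates"}
CONFIDENCE_THRESHOLD = 0.85

def route_decision(action: str, confidence: float) -> str:
    """Return 'auto_execute' or 'escalate_to_human' for an AI-proposed action."""
    if action in HIGH_IMPACT_ACTIONS or confidence < CONFIDENCE_THRESHOLD:
        # In practice this branch would open a ticket or page the on-call analyst.
        return "escalate_to_human"
    return "auto_execute"

print(route_decision("isolate_host", 0.97))   # escalated: high-impact action
print(route_decision("add_watchlist", 0.70))  # escalated: low confidence
print(route_decision("add_watchlist", 0.95))  # auto-executed
```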
Human oversight will always be a key element in ensuring AI accountability. AI tools should never operate in isolation. Decision makers must remain actively engaged with the output of AI, continually evaluating the effectiveness of the system, and ensuring that the system meets ethical standards. This oversight allows organizations to hold both the technology and the people managing it accountable for mistakes and bad decisions.
While AI can provide valuable insights and automate critical functions, humans across technical, security, legal, and leadership teams must ensure accountability when mistakes occur.
Do you foresee a time when AI will require less human oversight, or will human involvement always be an essential part of the process?
Today’s AI is designed to assist, not replace, human judgment. From a security perspective, it is highly unlikely that AI will operate independently of human cooperation. Allowing AI to operate autonomously without human oversight can create unintended gaps in your security posture. With this in mind, cybersecurity teams must stay continuously involved and provide oversight. This ongoing collaboration ensures that AI acts as a trusted partner rather than a potential liability. But no one can predict the future, and today’s AI may well evolve into a more independent, human-like technology capable of operating on its own.