Artificial intelligence has become the cybersecurity industry's double-edged sword: it promises to help researchers and experts detect threats more quickly, but it has also lowered the barrier to entry for threat actors by democratizing access to malicious code.
At least that’s what I thought before talking to Danny Jenkins, CEO of ThreatLocker, who advocates a zero-trust approach to protecting hardware, infrastructure and networks.
Speaking to me at the company's annual Zero Trust World event, Jenkins said that AI "is really bad at preventing" attacks. Over the course of our conversation, he made the case for the valuable skills that human workers continue to offer in an age of AI-assisted attacks, arguing that generative AI has a role to play in some areas of the business, but not all.
An age-old battle
“How do you know whether it’s an IT management tool or a hacker tool? How do you know if it’s a backup tool or a data removal tool?” Jenkins asked. “Both perform exactly the same function. AI is really bad at determining intent.”
Ultimately, distinguishing good from bad in cybersecurity is highly context-dependent, and ThreatLocker knows this. That is why the company focuses on humans knowing what is running in their environment, which makes anomalies easier to spot.
While artificial intelligence has been shown to flag some malicious code, attackers can make minor changes to a malware file, without altering its functionality, to trick AI into misclassifying the threat as benign.
Moreover, well-funded threat actors, including nation-state and advanced persistent threat (APT) groups, test their attacks against the latest AI-driven tools, in what is often described as a game of cat and mouse.
How can AI help your cybersecurity strategy?
Every day brings slightly different threats, as rapid AI development far outpaces laws and guidance. With no certainty about where things will stand from one day to the next, a zero trust approach to cybersecurity tackles AI-driven threats from a slightly different perspective.
At this point, I began chatting with Jenkins' colleague, Chief Product Officer Rob Allen. "The only skill you need is to ask the right questions in the right way, and you'll get the code or answers you need," he said of AI tools.
Beyond the technical elements of malicious code, generative AI helps threat actors create content for their attacks – whether that's dozens of variations of phishing email copy designed to con people out of money and other sensitive data (variation that helps the fake content evade some detection tools) or fraudulent websites.
Jenkins, who described AI as a "buzzword" thrown around primarily for marketing purposes, summed it up: "it makes our work even more difficult, not easier."
The consensus is that AI serves best as an assistant to highly skilled IT and cybersecurity teams, with the ability to enhance threat detection and response. It can help plug talent shortages, but it cannot replace the human judgment that underpins effective security.
Looking ahead, there is no magic pill – and even if there were, it doesn't sound like it would be AI alone. What AI has done, though, is add another string to the bow of companies willing to embrace it. Combining artificial intelligence, human talent and a default-deny zero trust approach offers the most rounded solution.