The Defense Counterintelligence and Security Agency, which grants security clearances to millions of American workers, is using AI to speed up its operations. But a "black box" is not allowed, its director says.
Emily Baker-White and Rashi Shrivastava, Forbes Staff
Defense Counterintelligence and Security Agency (DCSA) Director David Cattler asks his more than 13,000 Pentagon employees a question before they are allowed to examine information about American citizens: "Does my mother know that the government can do this?"
Cattler calls it the "mom test": a common-sense check on how DCSA, the vast agency that grants and denies U.S. security clearances for millions of workers, does its job. It is also how he thinks about his agency's use of AI.
DCSA is the agency responsible for investigating and approving 95% of federal employees' security clearances, which means it must complete millions of investigations each year. That work gives it access to vast amounts of personal information, and in 2024 DCSA turned to AI tools to help organize and interpret that data.
These tools are not ChatGPT, Bard, Claude, or other flashy generative AI models. Instead, they mine and organize data much the way Silicon Valley tech companies have long done, using systems better suited to the agency's work than most large language models. The most promising use case for these tools, Cattler said, is prioritizing existing threats.
If not used carefully, these tools can compromise data security and introduce bias into government systems. But Cattler remains optimistic that some of AI's less glamorous capabilities could be a game changer for government agencies, as long as the tools are not "black boxes."
"You need to understand why it's reliable and how it works," Cattler told Forbes. "When we use these tools for the purposes we've described, I need to be able to demonstrate that they do what they say they do, that they do it objectively, and that they do it in a highly compliant and consistent manner."
Many people might not even think of the tools Cattler describes as AI. He is excited, for example, about the idea of DCSA building a heat map of the facilities it secures, plotting risk across them in real time and updating it as other government agencies receive new information about potential threats. Such a tool could help DCSA "determine where to put the (metaphorical) fire truck," he said. It would not reveal new information; it would simply present existing information in a more useful way.
Matthew Scherer, senior policy advisor at the Center for Democracy and Technology, told Forbes that AI can help collate and organize information that has already been collected and verified, but that risks emerge at the next step: using it to make consequential judgments, such as flagging red flags during a background check or collecting data from social media profiles. AI systems still struggle to distinguish between different people who share the same name, for example, which can lead to misidentification.
“I would be concerned if an AI system was making any recommendations or ratings for specific applicants,” Scherer said. “Then we move into the realm of automated decision-making systems.”
Cattler said the agency is steering clear of using AI to identify new risks. But even with prioritization, privacy and bias problems can arise. When it contracts with AI companies (Cattler declined to name its partners), DCSA must consider what kind of private data it feeds into their algorithms and what those algorithms could do with that data once they have it. A company that offers AI products to the public could accidentally leak personal data entrusted to it by its customers, a breach of trust that would be catastrophic if it happened with data held by the Defense Department itself.
AI also has the potential to introduce bias into the Defense Department's systems. Algorithms reflect the blind spots of the people who create them and the data on which they are trained, and DCSA relies on oversight from the White House, Congress, and other executive branch agencies to keep bias out of its systems. A 2022 RAND Corporation report explicitly warned that AI could introduce bias into security clearance screening systems "as a result of programmer bias or historical racial disparities."
Cattler acknowledged that the societal values that shape algorithms, including the Defense Department's, change over time. The department is now far less tolerant of extremist views than it once was, he said, but somewhat more tolerant of people who previously struggled with alcohol or drugs and are now in recovery. "In many places in America, it was literally illegal to be gay until recently," he said. "That may have been a bias the system needed to correct."