Officials at the Defense IT Summit warned of hallucinations and uncertainty in AI output, and emphasized the need for rigorous verification.
Implementing artificial intelligence for national security requires agility, data integrity and strategic adaptation, defense officials said Thursday at the Defense IT Summit in Arlington, Virginia.
“There’s constant innovation there. How can you take what’s there and apply it really quickly and make sure it’s worth it?” said Deepak Seth, chief engineer of the Emerging Technology Directorate at the Defense Information Systems Agency (DISA). “If it’s not worth it, fail fast and pivot to the next thing.”
Agility in collaborating with industry and other government agencies can give Pentagon offices and units more innovative AI solutions for national security, Seth added.
“There’s a lot of innovation happening in the commercial space that can really be applied to government,” Seth said. “It’s a combination, and we’re working with government partners and academia as well. I think we’re looking to these partnerships to solve some of these challenges.”
Implementing AI requires high-quality data to train and operate models, officials said. That makes uncertainty a data challenge, Raj Dasgupta, a research scientist at the U.S. Naval Research Laboratory, said at the event.
“The main problem arises from being able to assess the quality of the data, not the data itself,” Dasgupta said. “If you know for sure that your data is bad, that’s good, because you can discard the bad data. If you know that a particular piece of data is good, you can use it. But if you don’t know whether the data is good or bad, that’s where the main problem arises.”
“Essentially, when you look at the data, you just see the data. Especially now that GenAI technology is out, you don’t know if the data is real data or an AI hallucination,” Dasgupta said.
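As a concrete picture of the triage Dasgupta describes, consider a minimal sketch in which records of known quality are kept or discarded outright, while records of unknown quality, the hard case, are routed to human review. The quality field, thresholds and records below are hypothetical, invented purely for illustration.

```python
# Minimal sketch of data triage: known-good data is used, known-bad data
# is discarded, and unknown-quality data (the hard case) goes to review.
# The `quality` scores and thresholds are assumptions, not real values.

RECORDS = [
    {"id": 1, "quality": 0.95},  # known-good: usable as-is
    {"id": 2, "quality": 0.10},  # known-bad: safe to discard
    {"id": 3, "quality": None},  # unknown: the real problem
]

GOOD_THRESHOLD = 0.8
BAD_THRESHOLD = 0.3

def triage(record):
    """Route a record to 'use', 'discard', or 'review'."""
    q = record["quality"]
    if q is None:
        return "review"   # unknown quality goes to a human review queue
    if q >= GOOD_THRESHOLD:
        return "use"
    if q <= BAD_THRESHOLD:
        return "discard"
    return "review"       # ambiguous scores also go to review

for r in RECORDS:
    print(r["id"], triage(r))
```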
Organizations need to validate the data in their AI model implementations to ensure transparency and effectiveness, Joel Krooswyk, federal CTO of GitLab, said during the panel.
“We’ve done a lot of validation of every model that comes out. What answers are we getting? Are we seeing the hallucination points we want to avoid?” Krooswyk said. “You can’t check all the answers; that’s why humans are in the loop.”
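Since no team can check every answer, one common human-in-the-loop pattern, consistent with Krooswyk’s point, is to sample a fixed fraction of model outputs for manual review. This is a hedged sketch: the 5% rate and function names are assumptions, not anything GitLab or the panel specified.

```python
import random

# Sample a fixed fraction of model answers for human review, since
# reviewing every answer is infeasible. The 5% rate is an assumption.

REVIEW_RATE = 0.05  # fraction of answers routed to a human

def needs_human_review(rng: random.Random) -> bool:
    """Randomly flag roughly REVIEW_RATE of answers for human checks."""
    return rng.random() < REVIEW_RATE

rng = random.Random(42)  # seeded for reproducibility
flagged = [i for i in range(1000) if needs_human_review(rng)]
print(f"{len(flagged)} of 1000 answers queued for human review")
```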
Focusing on game theory and adversarial attacks, Dasgupta said that bringing AI to warfighters is important for strengthening national security, but data integrity is critical to the mission.
“What we need to do is basically develop AI technology to respond to the adversary and stay one step ahead,” Dasgupta said. “The size of the data is something we’ve never seen before. … We have a huge amount of data right now, but we’re not that sure about the source of the data.”
Krooswyk said agencies need to adopt AI strategically, not just quickly, for it to be effective. Simply handing it to staff could produce scattershot results, he said.
“It’s really important that we understand, if we’re deploying it in our organization, where the best benefits are and why,” Krooswyk said. “I think it’s important that we can find those pockets within our organization and say, ‘This is the organizational standard for using it here.’”
The scale and scope of AI present challenges that agencies need to plan for. Seth pointed out that the rise of AI forces complex decisions.
“The number of use cases is increasing,” Seth said. “These models are essentially trained on a huge amount of data across the internet corpus, which makes them extremely difficult to develop, build or train, given the enormous amount of computation required.”
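For a rough sense of the computation Seth alludes to, a widely used rule of thumb estimates training compute as roughly 6 × parameters × training tokens in floating-point operations. The model sizes below are illustrative assumptions, not figures cited at the summit.

```python
# Back-of-envelope training-compute estimate using the common
# FLOPs ≈ 6 * parameters * tokens rule of thumb. The sizes below
# are illustrative assumptions, not figures from the panel.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

examples = {
    "7B params, 1T tokens": training_flops(7e9, 1e12),
    "70B params, 2T tokens": training_flops(70e9, 2e12),
}

for label, flops in examples.items():
    print(f"{label}: ~{flops:.1e} FLOPs")
# 7B/1T -> ~4.2e22 FLOPs; 70B/2T -> ~8.4e23 FLOPs
```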
Dasgupta added that as AI evolves, defenders must account for the adversarial ways AI can be used in order to strengthen national security.
“AI has changed at an exceptional speed,” Dasgupta said. “… What we have to make sure is that we are proactively modeling what the adversary can do, and essentially trying to build defenses against it.”
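One minimal way to picture the game-theoretic modeling Dasgupta describes is a toy zero-sum game: enumerate the adversary’s options, then choose the defense that minimizes the worst-case damage. The strategy names and payoff numbers below are invented purely for illustration.

```python
# Toy zero-sum game: rows are defender strategies, columns are attacker
# strategies, entries are damage to the defender (lower is better).
# All strategy names and payoffs are hypothetical, for illustration.

DAMAGE = {
    "harden_data_pipeline":   {"poison_data": 2, "spoof_sensors": 5},
    "validate_model_outputs": {"poison_data": 4, "spoof_sensors": 3},
}

def minimax_defense(damage):
    """Pick the defense whose worst-case (max) damage is smallest."""
    return min(damage, key=lambda d: max(damage[d].values()))

best = minimax_defense(DAMAGE)
worst_case = max(DAMAGE[best].values())
print(f"minimax defense: {best} (worst-case damage {worst_case})")
```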