As Congress continues to investigate the role AI should play in government, House Oversight Committee members are questioning the use and procurement of AI tools for government work, and the privacy concerns the technology brings when it is not regulated.
Expert witnesses testified on June 5 that while the federal government is currently using AI in several capacities, factors such as the government's outdated technical systems and data practices could prevent federal workers from making the most of the technology. They also said the government's current procurement practices for AI technology will delay adoption.
"The question before us is not whether the federal government should adopt AI, but whether we're going to lead or follow," said Bhavin Shah, founder and CEO of Moveworks, an AI company that contracts with both private companies and local governments.
Many federal departments use AI approved under current procurement standards, such as the Department of Health and Human Services, which uses AI for medical research and for tracking disease outbreaks.
Shah said federal workers are "worthy" of quick access to the AI-powered tools the private sector uses for efficiency and cost savings. He said Moveworks spent three and a half years and $8.5 million to achieve FedRAMP status, a standardized approach the federal government uses to assess the security of cloud-based tools.
“This is an exorbitant barrier for small AI innovators, the companies that develop the most cutting-edge solutions,” Shah said.
Ylli Bajraktari, president and CEO of the tech-centric think tank Special Competitive Studies Project, said that while the U.S. still leads in AI research, the government's slow adoption of AI is a "major drawback" in its competition with countries like China.
"We are hampered by bureaucratic inertia, outdated IT infrastructure and a lack of workforce AI literacy," Bajraktari said. "Overcoming these barriers is key to winning the global technological competition."
Bajraktari's proposals to the committee included establishing a "space race"-style AI council in the White House, increasing research and development spending for non-defense AI to $32 billion, and launching a targeted AI talent strategy that promotes AI literacy and attracts top STEM talent from overseas. He also proposed overhauling the procurement process to rapidly integrate AI into government systems and strengthening global partnerships on AI and cybersecurity.
However, streamlined and rapid adoption of AI in government poses major security and data privacy risks, testified Bruce Schneier, a fellow and lecturer at the Harvard Kennedy School. Schneier warned lawmakers that recent actions by Elon Musk, who just resigned as leader of the government's Department of Government Efficiency, or DOGE, highlighted the dangers of unchecked use of AI.
In February, Musk said the DOGE team was using AI to help make decisions about federal workers' jobs. DOGE also reportedly gained access to sensitive data from various federal agencies, including the Treasury payment system, Social Security records and other demographic data. In April, 48 lawmakers raised concerns about Musk's use of unauthorized AI systems on these and other datasets.
Musk's actions run contrary to the committee's stated aims of deploying AI responsibly and protecting the interests and rights of Americans, Schneier said. He said Americans' data is being consolidated and fed into unvetted AI models.
"We need to assume that our enemies have a copy of all our data," he said. "And your data can be used against you."
Musk's behavior and access to Americans' data, as well as his reported drug use, were central to a 40-minute discussion before the June 5 hearing, where Republicans, by a 21-20 vote, once again blocked Democrats' attempt to force Musk to testify before Congress.
"He weaponized public services in our government, put Americans at risk and served his own financial interests," said Rep. Stephen Lynch, a Massachusetts Democrat and acting ranking member of the House Oversight Committee.
Linda Miller, founder and chief growth officer of Tracklight, a fraud-detection platform for government programs, said in her testimony that DOGE's actions are an example of why it's difficult to impose private-sector innovation on government.
"My fellow panelists have made very wise suggestions about turbocharging RFPs, replacing legacy IT systems and removing procurement barriers, but we must be realistic about how capable the government is of absorbing these rapid changes, however necessary they are," she said.
Miller said the best use of AI in government work today is to automate routine processes and repetitive tasks, freeing federal workers for higher-level work. She recommended that Congress consider supporting "regulatory sandboxes" to speed up the adoption of AI: controlled environments where agencies can pilot and test AI systems before releasing them at scale.
"While wholesale changes to legacy IT and federal acquisition systems can take years, AI projects piloted in regulatory sandboxes, carefully controlled environments, can create innovation labs that begin to showcase the art of the possible," Miller said.
https://missouriindopendent.com/2025/06/06/repub/congress-hears-of-rewards-security-of-government-use-of-ai/