The U.S. House Task Force on Artificial Intelligence has released its report following nearly a year of meetings and discussions with over 100 experts in the field.
The task force, which included 12 Democrats and 12 Republicans, was charged with compiling a comprehensive “road map” for Congress to implement safeguards against artificial intelligence misuse and boost the development of artificial intelligence technology in the United States.
“Artificial intelligence has the potential to enhance the lives of Americans, but it also poses serious threats – from fraud and identity theft to election integrity and more,” said Colorado’s Democratic U.S. Rep. Brittany Pettersen, who sat on the task force.
“After months of bipartisan collaboration, I’m proud to help release this report which will serve as a blueprint for Congress to enact policies that help harness the potential of this emerging technology while ensuring strong guardrails and consumer protections,” Pettersen said. “This report is an important step toward ensuring Congress meets the moment and the United States remains a global leader in AI.”
The 253-page report includes 66 key findings and 85 recommendations but does not propose any specific legislative measures. The task force adopted seven guiding principles when compiling the report that include:
Ensuring there are no precedents or existing laws already addressing the specific issues outlined
Promoting AI innovation
Protecting Americans against AI risks and harms
Empowering the U.S. government with AI
Affirming the use of a sectoral regulatory structure
Taking an incremental approach to enacting AI-related legislation
Keeping humans at the center of AI policy.
The task force investigated the use of AI in 15 areas, including government use, data privacy, national security, and intellectual property.
Government Use
The report encourages the federal government and Congress to be “wary” of algorithm-informed decision-making in government affairs.
In addition, task force members recommended adopting AI standards for federal government use and improving cybersecurity of federal systems to protect against AI’s negative impacts.
“Irresponsible or improper use (of AI) fosters risks to individual privacy, security, and the fair and equal treatment of all citizens by their government,” the report stated.
Federal Preemption of State Law
Several states, including Colorado, have passed legislation related to AI. According to the task force, the federal government can use these state-level statutes as a tool to “accomplish various objectives.”
However, “federal preemption presents complex legal and policy issues that should be considered.”
The report found that federal preemption has both benefits and drawbacks, and that it can subject states to limits on their ability to regulate.
The task force recommended the federal government study the laws, rules, and regulations in each state when it comes to AI in different industries.
Data Privacy
“As AI systems amass and analyze vast amounts of data, there are increasing risks of private information being accessed without authorization,” the report stated. “Thoughtful and effective data privacy policies and protections will support consumer confidence in the responsible development and deployment of AI systems.”
Currently, Americans have very few avenues for recourse if their privacy is negatively impacted by AI, but federal privacy laws have the potential to increase the effectiveness of state laws related to AI and data privacy. The task force recommended exploring ways to promote access to data in “privacy-enhanced” ways while ensuring any privacy laws that come out of Congress are “generally applicable and technology-neutral” to cover all forms of AI, current and future.
National Security
With many countries, including U.S. adversaries, incorporating AI technology into their military programs, it’s crucial for the American military to have a thorough understanding of different AI systems and implement them in its defense strategy, the report stated.
The report called for expanded AI training within the Department of Defense, continued oversight of autonomous weapons policies, and international collaboration with American allies on developing AI for military use.
Research, Development, and Standards
The task force recommends that Congress implement an open research environment in which research processes and data are accessible to all entities to maintain the U.S.’s status as a leader in AI research and development.
The report found that further investments in AI research and development will increase competitiveness with American adversaries such as China and expand access to and adoption of AI technology among Americans.
The report also called for promoting public-private partnerships for AI research and development and implementing standards for the evaluation and testing of AI technology.
Civil Rights & Civil Liberties
“AI models, and software systems more generally, can produce misleading or inaccurate outputs” that can deprive Americans of their basic rights, the report stated.
The task force found that the federal government must understand the harm that misleading or inaccurate AI systems can cause in order to mitigate potential rights violations.
The report recommended keeping a human available to identify and remedy potential flaws whenever AI is used in “highly consequential” decisions, and, to protect against discrimination, informing users when AI is involved in decisions being made about them.
Education and Workforce
According to the report, the U.S. has a “significant gap” in its workforce of AI-literate professionals, which is only growing.
“Educating and training American learners in AI topics will be critical to continued U.S. leadership in AI technology and for America’s economic and national security,” the report stated.
As AI becomes increasingly common in the workplace, the task force recommended that the government invest in K-12 STEM and AI education to promote AI literacy and broaden pathways to the AI workforce. At the same time, the government should monitor labor laws and worker protections to ensure workers are not being taken advantage of when it comes to AI adoption in the workplace.
Intellectual Property
Generative AI has sparked widespread debates about intellectual property rights for creatives such as artists, musicians, and designers.
“Generative AI poses a unique challenge to the creative community,” the report stated, adding that creators are often unaware AI developers are using their work.
The report recommends clarifying IP laws, regulations, and agency activity to better inform the legal community about what is and isn’t legal, and to counter the increasing use of deepfakes to harm others.
Content Authenticity
When tackling inauthentic content such as deepfakes, the task force recommended a “risk-based, multipronged approach” in which the responsibilities of AI developers, content producers, and content distributors are clearly outlined.
While the report found that synthetic content “has many beneficial uses,” it can also harm individuals and create a sense of distrust among users.
The report recommended ensuring victims of harmful synthetic content have access to tools and resources they may need for support.
Open & Closed Systems
The report also discussed open and closed AI systems. In an open system, an AI model’s underlying code and data are publicly accessible and can be built upon, while a closed system is available only to its developers.
The report found that open models encourage innovation and competition among AI developers and that “limited evidence” exists that open models should be restricted.
Despite this, the task force recommended the federal government continue monitoring open-source models for potential risks.
Energy Usage & Data Centers
The electrical grid has been significantly impacted by the advancement of AI technology, particularly due to large data centers with high energy demands.
While the report found AI to be “critical” to U.S. economic interests and national security, it poses a multitude of challenges to the country’s energy sector.
“Planning properly now for new power generation and transmission is critical for AI innovation and adoption,” the report stated, adding that AI itself can play a role in modernizing America’s energy sector.
Small Business
Many small businesses lack the understanding and financial resources to implement AI, the report found. Providing them with education and resources to improve AI literacy is essential to helping small businesses thrive, the task force stated. It also advocated for the federal government to reduce “compliance burdens” for small businesses that operate with the assistance of AI.
Agriculture
According to the report, AI technology has the potential to change the agriculture industry, increasing food availability, lowering food prices, and encouraging economic growth.
Because many agricultural communities lack reliable internet connection, AI adoption in the agriculture industry has been slow, the report stated. However, increased AI use by the USDA could help provide more agriculture programs to American communities and reduce costs for farmers and ranchers.
The report recommended the federal government direct the USDA to “better utilize” AI in program delivery and continue to explore how AI technology could help land managers improve forest health.
Health Care
AI has the potential to make significant improvements to the American healthcare system by improving diagnostic accuracy, streamlining operations, speeding up drug development, and automating routine tasks, the report found.
However, there currently aren’t any uniform standards for medical data when it comes to AI, which makes it difficult for advancements to be made.
The task force recommended the government “maintain robust support” for medical research related to AI, create incentives and guidance to encourage risk management of AI technologies in the healthcare sector, and develop uniform standards for liability related to AI issues.
Financial Services
According to the report, the financial services industry has been using AI technology for decades.
“The ideal environment for continued growth would allow AI innovation to thrive while protecting consumers and maintaining market integrity,” the task force wrote. “By focusing on fostering innovation, enhancing customer experiences, and ensuring financial inclusion, AI can significantly improve the financial sector’s efficiency and accessibility.”
While AI has the potential to expand access to financial products and services, smaller firms may be at a disadvantage due to financial barriers to adopting AI.
The report recommends fostering an environment in which financial services firms can “responsibly” adopt AI technology and encouraging industry regulators to gain a better understanding of AI, while suggesting a “principles-based” regulatory approach.
‘I don’t want a future where China’s leading on AI’: Pettersen on the role AI plays in US government
In an interview with Colorado Politics, Pettersen said the United States is in a race against China when it comes to crafting AI policy, and she’s afraid China could win.
“I really worry about areas like this where we need to be leading the way globally and making sure that China is not the one doing that,” Pettersen said. “I don’t want a future where China’s leading on AI. It needs to be the United States, and we have to come together in Congress to bring comprehensive, pragmatic, bipartisan solutions. It cannot matter (which party) has the majority. This needs to continue to be a bipartisan effort.”
Pettersen said she is confident artificial intelligence will remain a priority in the Trump administration but admitted that the Task Force on Artificial Intelligence faced some obstacles due to a “dysfunctional congress” and election season. However, she and other Colorado members of Congress agree that federal legislation on artificial intelligence is far more effective than “patchwork” measures passed at the state level.
AI legislation in Colorado
During the 2024 legislative session, Colorado passed a first-of-its-kind law that aims to address “algorithmic discrimination,” defined in statute as any condition in which AI increases the risk of “unlawful differential treatment” that then “disfavors” an individual or group of people on the basis of age, color, disability, ethnicity, genetic information, race, religion, veteran status, English proficiency, and other classes protected by state laws.
Gov. Jared Polis has tasked Attorney General Phil Weiser with creating audit policies and identifying high-risk artificial intelligence practices to ensure the law is effectively implemented. The measure’s prime sponsor, Sen. Robert Rodriguez, D-Denver, has promised to amend it during the 2025 legislative session to minimize any unintended consequences.
Thelma Grimes contributed to this story.