Congressional Research Service Report: Regulation of Artificial Intelligence—U.S. and International Approaches and Congressional Considerations

This report provides an in-depth analysis of how the current U.S. artificial intelligence regulatory landscape has evolved, the policy dynamics between the federal and state levels, and key executive actions and their shifts, and it compares the distinct governance models of the European Union, the United Kingdom, and China. It aims to give Congress a comprehensive analytical framework for balancing technological innovation against risk regulation.

Published: 10/01/2026

Key Chapter Title List

  1. Introduction: Defining Artificial Intelligence and Regulatory Considerations
  2. AI Governance and Regulation in the United States
  3. Federal Laws Involving AI
  4. State-Level AI Laws
  5. Congressional AI-Related Activities
  6. Executive Branch AI Actions
  7. The U.S. Path to Regulating AI
  8. AI Governance and Regulatory Paths of Selected Countries and International Organizations (United Kingdom, European Union, China, Multilateral and Bilateral Governance Activities)
  9. Congressional Policy Considerations and Options (Including Utilizing Existing Frameworks, Creating New Regulations or Authorities, Supporting U.S. AI Development and Deployment, Engaging in International Regulatory Efforts)

Document Introduction

This report, published by the Congressional Research Service (CRS) in June 2025, systematically outlines regulatory paths, policy debates, and legislative considerations at both the domestic and international levels against the backdrop of rapidly developing artificial intelligence technology. Its central concern is how to harness the potential benefits of AI, such as improved government operational efficiency and worker productivity, while mitigating its potential harms, including bias, inaccurate outputs, privacy violations, and security risks. The report notes that although Congress has introduced hundreds of AI-related bills, fewer than 30 had been enacted as of May 2025, and no broad federal regulatory law or ban on AI development or use has been established, producing a "patchwork" regulatory landscape of cautious federal legislation and active state-level lawmaking.

The report first defines the complex and rapidly evolving technological concept of AI and reviews existing domestic legal frameworks in the United States, including the National Artificial Intelligence Initiative Act of 2020, the CHIPS and Science Act, and the Advancing American AI Act. These laws primarily focus on promoting federal AI research and development, coordinating AI applications within the government, and establishing advisory mechanisms. In the absence of unified federal regulations, states such as California, Colorado, and Washington have pioneered a series of state-level laws concerning generative AI output labeling, training data disclosure, consumer protection, and the establishment of AI task forces, sparking debates over regulatory consistency and corporate compliance burdens.

The report provides a detailed analysis of policy evolution within the U.S. executive branch, highlighting the differing emphases of successive administrations. It points out that the Biden administration's Executive Order on AI safety (E.O. 14110) was revoked by the Trump administration in early 2025 and replaced by a new Executive Order (E.O. 14179) whose core policy goal is sustaining and enhancing U.S. global AI leadership, marking a shift in focus from safety and trustworthiness toward innovation, economic growth, and national security. Current federal regulatory efforts concentrate on using existing agency authorities for assessment and enforcement, exploring the need for additional authorities, and securing voluntary commitments from industry.

At the international level, the report analyzes three representative governance models: the UK's principles-based, non-legislative, cross-sectoral agile regulatory approach, which relies on existing regulators; the EU's comprehensive, risk-based horizontal regulatory system established by the Artificial Intelligence Act, which imposes graduated requirements and prohibitions on AI systems according to their risk level; and China's vertical, technology-specific regulatory framework, exemplified by the Interim Measures for the Management of Generative Artificial Intelligence Services, in which the government plays a leading role in private-sector development and policy is heavily shaped by national security and economic development goals. The report also outlines multilateral governance initiatives by the Organisation for Economic Co-operation and Development (OECD), the Group of Seven (G7), and the United Nations.

Finally, the report presents multi-dimensional policy considerations and options for Members of Congress. Congress could maintain the status quo, relying on existing agencies such as the Federal Trade Commission (FTC) to regulate AI under their current authorities. Alternatively, it could create new cross-sectoral authorities or broad regulations, introducing requirements for transparency, impact assessments, and third-party audits. In parallel, Congress could actively support domestic AI research, development, and deployment by sustaining the National Artificial Intelligence Research Resource (NAIRR) pilot, establishing regulatory sandboxes, and strengthening the National Institute of Standards and Technology's (NIST) work on the AI Risk Management Framework (RMF). The report also emphasizes engaging in international regulatory cooperation to promote trade and regulatory interoperability, while balancing international competition against the risk of a regulatory "race to the bottom."