California Frontier Artificial Intelligence Policy Report
An interdisciplinary policy framework for governing generative AI and foundation models, grounded in historical case comparisons and multi-source evidence analysis, offering California forward-looking guidance on balancing innovation incentives with risk prevention and mitigation.
Published: 22 December 2025
Key Chapters
- Introduction
- Building AI Policy on Evidence and Experience: Understanding the Broader Context
- Transparency
- Adverse Event Reporting
- Scoping
- Summary of Feedback and Revisions
Overview
In September 2024, California Governor Gavin Newsom commissioned Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, to lead the formation of a Joint California Policy Task Force. The aim was to develop effective approaches for California to support the deployment, use, and governance of generative AI, and to establish appropriate guardrails to minimize substantial risks. This report is the final product released by the task force on June 17, 2025. It represents the independent academic work of the co-chairs and participating scholars, not the official positions of their affiliated institutions.
The report focuses on frontier AI, an emerging technological paradigm driven by foundation models, aiming to provide California policymakers with an evidence-based policy-making framework. The report does not advocate for or against any specific legislative or regulatory proposals. Instead, it examines the best available research on foundation models and distills a set of policy principles to guide California on how to approach, evaluate, and govern frontier AI, with a core ethos of "trust but verify." The report recognizes that as a global hub for AI innovation, California has a unique opportunity and responsibility to address significant risks that could have profound impacts on the state and the world, while continuing to support the development of frontier AI.
The report employs a multidisciplinary, integrated approach, drawing extensively on diverse evidence such as empirical research, historical analysis, and modeling simulations. Its main content revolves around four core governance issues:

- Transparency. The report identifies systemic opacity in key areas of the current AI industry and proposes specific pathways to enhance transparency through disclosure requirements, third-party assessments, and whistleblower protections.
- Adverse event reporting. The report advocates a government-led reporting system that collects information on harms occurring after deployment, continuously monitoring the real-world impacts of AI, identifying unforeseen risks, and providing data to support regulatory decisions.
- Scoping. The report examines how to set thresholds that reasonably determine which entities a policy covers, analyzing the trade-offs of thresholds based on dimensions such as developer attributes, cost, model performance, and downstream impact.
- Feedback and revisions. Finally, the report summarizes the public feedback received since the draft's release in March 2025 and the revisions made in response.
To ground its policy recommendations, Chapter 2 of the report offers a comparative analysis of historical cases, drawing lessons from the early design and governance of the internet, the regulation of consumer products (such as tobacco), and policy responses in the energy sector (addressing climate change). These cases reveal the path-dependent effects of early design choices, the crucial role of transparency in generating comprehensive evidence, and the importance of trusting expertise while verifying it through third-party assessment. The report also briefly cites governance successes in areas such as pesticide regulation, building codes, and automobile seat-belt requirements, demonstrating that well-designed governance frameworks can balance promoting innovation with safeguarding public safety.
Overall, this report provides a roadmap reference for California's policy innovation in the rapidly evolving and uncertain field of frontier AI. It is based on extensive evidence, considers multiple stakeholder interests, and emphasizes dynamic adaptability. The principles and mechanisms it proposes aim not only to address currently known risks but also to build a governance ecosystem capable of continuously generating evidence and iteratively learning.