
"Draft Report of the California AI Frontier Joint Policy Working Group"

An evidence-based, interdisciplinary policy framework for frontier artificial intelligence models, focused on transparency, adverse event reporting, and the definition of regulatory scope, and intended to balance innovation incentives with risk management.


Published: 22/12/2025

Chapters

  1. Introduction
  2. Encouraging Innovation and Implementing Safety Guardrails
  3. Broader Policy Landscape and California's Potential Impact
  4. Building AI Policy on Evidence and Experience: Understanding the Broader Context
  5. Transparency
  6. Adverse Event Reporting
  7. Defining Scope
  8. Next Steps

Overview

This report was commissioned by California Governor Gavin Newsom in September 2024 and drafted under the joint leadership of Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley. As a draft for public comment, the report aims to develop an effective approach for California to support the deployment, use, and governance of generative AI, including establishing appropriate safeguards to minimize significant risks. The report emphasizes that its content represents the academic work of the three co-leads and does not represent the positions of their affiliated institutions.

The core objective of the report is to provide an evidence-based policy-making framework to guide California's governance of frontier AI. It systematically reviews research from multiple disciplines, including computer science, economics, engineering, informatics, law, and public policy, distilling eight key policy principles. These principles emphasize that targeted interventions should balance technological benefits against significant risks; that policy-making should rest on empirical research and sound analytical techniques; and that early-stage choices in technology design and governance are path-dependent and therefore crucial. The report further advocates measures such as increasing transparency, strengthening third-party risk assessments, protecting whistleblowers, and establishing an adverse event reporting system to address the systemic opacity of information that currently prevails in critical areas, thereby enhancing accountability, competitiveness, and public trust.

To construct this framework, the report conducts an in-depth situational analysis. It draws on three historical case studies (the development and governance of the internet, consumer product regulation, and energy policy) to illustrate the importance of early policy windows, the value of public transparency, and the need to rely on industry expertise while validating it through independent evaluation. The report notes significant gaps in the current evidence on the capabilities and risks of frontier AI models, particularly concerning malicious use, system failures, and systemic risks, areas where expert opinion diverges. Creating mechanisms that proactively and frequently generate evidence is therefore crucial to improving governance effectiveness.

Building on this analysis, the report examines three core areas of governance tools. On transparency, it observes that foundation model developers currently disclose little in key areas such as training data, safety practices, and downstream impacts, and it offers specific recommendations for improving information disclosure, strengthening third-party risk assessments, and providing legal protections for whistleblowers. On adverse event reporting, it weighs the benefits and challenges of a government-managed reporting system that would collect information on post-deployment incidents in order to identify risks, promote coordination, and prevent costly accidents. On the definition of regulatory scope, it analyzes the complexity of using thresholds to determine which entities should be subject to policy obligations, evaluates threshold designs based on developer attributes, cost, model performance, or downstream impact, and concludes that training compute is currently the most attractive cost-based threshold, though it is best used in combination with other metrics.

Overall, this report provides California legislators and regulators with a rigorous, multi-dimensional academic foundation to guide future laws and regulations concerning frontier AI governance. It advocates for a "trust but verify" governance philosophy, seeking a balance between encouraging continuous innovation and ensuring public safety, aiming to position California as a leader in the global AI governance landscape.