California at the Crossroads: Leading AI Governance Through Collaboration
Written By Daniel Diamond
Edited By Sami Babayandarjazi
Image Sources: Modern Healthcare & Creative Commons
Public-private cooperation is essential to robust Artificial Intelligence (AI) governance. As a global tech hub, California has become the focal point for North America's tech policy development and implementation. The state's position as an incubator for innovation also makes it a testing ground for collaborative regulatory efforts.
When Senator Scott Wiener introduced SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as the “AI Safety Bill,” many saw it as a groundbreaking step toward accountability in the AI sector. However, Governor Gavin Newsom’s subsequent veto of the bill underscored the complexities of crafting effective AI legislation. Citing concerns over the bill’s “scope and protections,” Newsom highlighted the need for more inclusive input from AI developers and stakeholders across various sectors.
California, home to the world’s fifth-largest economy and to leading generative AI firms, has a unique opportunity to lead in both AI innovation and governance. During the 2023–2024 legislative session, state lawmakers considered over 30 bills addressing AI and signed 17 into law, putting California en route to joining the international conversation on regulating large-scale AI models alongside other policy leaders such as the European Union and China. Newsom’s veto of SB 1047 nevertheless sparked debate over the state’s readiness to regulate AI effectively at a larger scale. In his message to the California State Senate, the Governor warned that the bill could create a “false sense of security” by focusing solely on large-scale AI models while overlooking the equally significant threats posed by smaller-scale systems. This raised broader questions about whether any legislation could truly balance the perspectives of policymakers, industry experts, and affected communities.
One answer comes from the 2023 Writers Guild of America (WGA) strike, which serves as a compelling case study of how public-private cooperation can produce meaningful regulatory outcomes. Following the strike, studios agreed to ban the use of AI to generate scripts from datasets of pre-written material. Performers also secured protections requiring consent for the use of their likenesses, or “digital replicas.” The resulting laws now mandate clear contractual terms and professional representation for performers during negotiations, marking a milestone in safeguarding creative professionals’ rights while allowing for the responsible integration of generative AI (GenAI) technologies. By enshrining protections for writers and performers through dialogue with major studios and unionized creatives, California demonstrated its ability to balance innovation with ethical considerations.
California’s leadership in addressing AI challenges positions it uniquely to shape global governance frameworks. By fostering partnerships between policymakers and private sector stakeholders, the state can establish a model for equitable and forward-thinking AI regulation. Building on examples like the WGA strike and targeted legislative successes, Governor Newsom and California lawmakers have the opportunity to solidify the state’s role as a global leader in responsible AI governance.