California’s Draft AI Law Would Protect More Than Just People
Few places in the world have more to gain from a flourishing AI industry than California. Few also have more to lose if the public’s trust in the industry were suddenly shattered.
In May, the California Senate passed SB 1047, an AI safety bill designed to ensure the safe development of large-scale AI systems through clear, predictable, common-sense safety standards, by a vote of 32 to one. The bill is now slated for a state assembly vote this week and, if signed into law by Governor Gavin Newsom, would represent a significant step in protecting California citizens and the state’s burgeoning AI industry from malicious use.
Late Monday, Elon Musk shocked many by announcing his support for the bill in a post on X. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”
The post came days after I spoke with Musk about SB 1047. Unlike many corporate leaders, who waver and consult their PR teams and lawyers before taking a stance on safety legislation, Musk acted decisively. After I outlined the importance of the bill, he requested to review its text to ensure its fairness and lack of potential for abuse. The next day he came out in support. This quick decision is a testament to Musk’s long-standing advocacy for responsible AI regulation.
Last winter, Senator Scott Wiener, the bill’s creator, reached out to the Center for AI Safety (CAIS) Action Fund for technical suggestions and cosponsorship. At CAIS, which I founded, ensuring that transformative technologies are developed safely is the cornerstone of our mission. To preserve innovation, we must anticipate potential pitfalls, because an ounce of prevention is worth a pound of cure. Recognizing SB 1047’s groundbreaking nature, we were thrilled to help and have advocated for its adoption ever since.
Read More: Exclusive: California Bill Proposes Regulating AI at State Level
Targeted at the most advanced AI models, the bill would require large companies to test for hazards, implement safeguards, ensure shutdown capabilities, protect whistleblowers, and manage risks. These measures aim to prevent cyberattacks on critical infrastructure, bioengineering of viruses, and other malicious activities with the potential to cause widespread destruction and mass casualties.
Anthropic recently warned that AI risks could emerge in “as little as 1-3 years,” disputing critics who view safety concerns as imaginary. Of course, if these risks are indeed fictitious, developers shouldn’t fear liability. Moreover, developers have pledged to tackle these issues, aligning with President Joe Biden’s recent executive order, reaffirmed at the 2024 AI Seoul Summit.
Enforcement is lean by design, allowing California’s Attorney General to act only in extreme cases. There are no licensing requirements for new models, nor does it punish honest mistakes or criminalize open sourcing—the practice of making software source code freely available. It wasn’t drafted by Big Tech or those focused on distant future scenarios. The bill aims to prevent frontier labs from neglecting caution and critical safeguards in their rush to release the most capable models.
Like most AI safety researchers, I am in large part driven by a belief in AI’s immense potential to benefit society, and deeply concerned about preserving that potential. As a global leader in AI, California shares that concern. It is why state politicians and AI safety researchers alike are enthusiastic about SB 1047: history tells us that a major disaster, like the nuclear one at Three Mile Island on March 28, 1979, could set a burgeoning industry back decades.
Regulatory bodies responded to the partial nuclear meltdown by overhauling nuclear safety standards and protocols. These changes increased the operational costs and complexity of running nuclear plants, as operators invested in new safety systems and complied with rigorous oversight. The regulatory challenges made nuclear energy less appealing, halting its expansion over the next 30 years.
Three Mile Island led to a greater dependence on coal, oil, and natural gas. It is often argued that this was a significant lost opportunity to advance toward a more sustainable and efficient global energy infrastructure. While it remains uncertain whether stricter regulations could have averted the incident, it is clear that a single event can profoundly impact public perception, stifling the long-term potential of an entire industry.
Some people will view any government action on industry with suspicion, considering it inherently detrimental to business, innovation, and a state or country’s competitive edge. Three Mile Island demonstrates this perspective is short-sighted, as measures to reduce the chances of a disaster are often in the long-term interest of emerging industries. It is also not the only cautionary tale for the AI industry.
When social media platforms first emerged, they were largely met with enthusiasm and optimism. A 2010 Pew Research Center survey found that 67% of American adults who used social media believed it had a mostly positive impact. Futurist Brian Solis captured this ethos when he proclaimed, “Social media is the new way to communicate, the new way to build relationships, the new way to build businesses, and the new way to build a better world.”
He was three-fourths correct.
Driven by concerns over privacy breaches, misinformation, and mental health impacts, public perception of social media has flipped, with 64% of Americans viewing it negatively. Scandals like Cambridge Analytica eroded trust, while fake news and polarizing content highlighted social media’s role in societal division. A Royal Society for Public Health study showed 70% of young people experienced cyberbullying, with 91% of 16-24-year-olds stating social media harms their mental wellbeing. Users and policymakers around the globe are increasingly vocal about needing stricter regulations and greater accountability from social media companies.
This did not happen because social media companies are uniquely evil. Like other emerging industries, the early days were a “wild west” in which companies rushed to dominate a burgeoning market and government regulation was lacking. Platforms with addictive, often harmful content thrived, and we are now all paying the price. That includes the companies themselves, which are increasingly mistrusted by consumers and in the crosshairs of regulators, legislators, and courts.
The optimism surrounding social media wasn’t misplaced. The technology did have the potential to break down geographical barriers and foster a sense of global community, democratize information, and facilitate positive social movements. As the author Erik Qualman warned, “We don’t have a choice on whether we do social media, the question is how well we do it.”
The lost potential of social media and nuclear energy was tragic, but it’s nothing compared to squandering AI’s potential. Smart legislation like SB 1047 is our best tool for preventing this while protecting innovation and competition.
The history of technological regulation showcases our capacity for foresight and adaptability. When railroads transformed 19th-century transportation, governments standardized track gauges, signaling, and safety protocols. The advent of electricity led to codes and standards preventing fires and electrocutions. The automobile revolution necessitated traffic laws and safety measures like seat belts and airbags. In aviation, bodies like the FAA established rigorous safety standards, making flying the safest form of transportation.
History can only provide us with lessons. Whether to heed them is up to us.