The whole story of SB 1047 is long and complicated, but the gist of it is actually quite simple. By and large, the artificial intelligence industry does not want to be regulated. It especially doesn’t want to be liable for harms caused by its AI models. Since SB 1047 regulates the industry and uses liability to enforce those regulations, much of the industry doesn’t want the bill.
Industry insiders can’t say this explicitly, so they make other arguments instead (often arguing against versions of the bill that don’t exist). It’s not super surprising that these arguments don’t really hold up to scrutiny.
SB 1047 mainly mandated that the largest AI developers implement safeguards to mitigate catastrophic risks. If a covered company’s AI model caused a disaster (defined as “mass casualties” or at least $500 million in damage) and the company’s safeguards were not in line with industry best practices or relevant government standards, the company could be liable for damages and additional financial penalties. The bill also included protections for AI whistleblowers.
It would have been the first law in the United States to mandate such safeguards, breaking with the tradition of voluntary AI safety commitments preferred by the industry and national lawmakers.
Opposing the California Senate bill, OpenAI, Meta, Microsoft, and Google argued that AI regulations should happen at the federal level instead.
However, Republicans have committed to obstructing significant national legislation and to undoing Joe Biden’s executive order on AI, the closest thing to a national AI regulation. These companies also have far less to fear from federal lawmakers, who are more firmly in the industry’s pocket. In August, eight House Democrats from California published a letter against the bill that was full of industry talking points.
Former House Speaker Nancy Pelosi followed the congressional letter shortly afterward with her own statement against the bill, in what appears to be the first time she has opposed a piece of state-level legislation authored by a member of her own party.
Pelosi’s 2023 financial disclosure reports that her husband owns between $16 million and $80 million in stocks and options in Google, Amazon, Microsoft, and Nvidia. And Pelosi’s daughter Christine is expected to run against the bill’s author, state senator Scott Wiener, for the former Speaker’s congressional seat upon her retirement. That expectation has prompted speculation that Pelosi is trying to damage Wiener and elevate Christine.
Pelosi’s odd pattern of San Francisco endorsements also makes more sense once you realize that her endorsees have all clashed with Wiener. Pelosi has conspicuously declined to endorse Wiener in his reelection campaign.
As I previously discussed in The Nation, opponents of SB 1047 assert that there is a “massive public outcry” against the bill and highlight imagined, unsubstantiated harms that they claim will befall sympathetic victims like academics and open-source developers. In fact, the bill aimed squarely at the largest AI developers in the world and had statewide popular support, with even stronger support among tech workers.
This fundamental dynamic remained mostly the same, but it ultimately produced some strange bedfellows: billionaire Elon Musk lined up with social justice groups and labor unions in supporting the bill, while Pelosi, progressive California congressman Ro Khanna, Trump-supporting venture capitalist Marc Andreessen, and AI “godmother” Fei-Fei Li all opposed it.
Yet since the congressional letter and Pelosi’s opposition statement in August, the momentum had been almost entirely with the bill’s supporters.
Anthropic differentiated itself from other leading AI companies by writing in an open letter that the revised bill’s “benefits likely outweigh its costs.” Its letter followed a round of amendments made directly in response to the company’s earlier concerns.
SB 1047 passed both chambers of California’s legislature with strong majorities, and it continued to poll well. These polls were all conducted by a pro-regulation nonprofit, but the top-line result didn’t change when the polling shop let an opponent write the con arguments.
Over 110 current and former employees of the top five AI companies (all of which are based in California) published a letter in favor of the bill, arguing that “the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.”
Letters urging Newsom’s signature were signed by SAG-AFTRA and Hollywood stars like Ava DuVernay, Jane Fonda, J. J. Abrams, Shonda Rhimes, Alec Baldwin, Pedro Pascal, Jessica Chastain, Adam McKay, and Ron Perlman.
Late-stage endorsements came in from powerful groups like ParentsTogether and the California branch of the AFL-CIO, joining earlier support from groups like the SEIU, the Latino Community Foundation, and the National Organization for Women.
This expansive coalition was ultimately not enough to overcome the bitter opposition of the tech industry and powerful national Democrats like Pelosi.
OpenAI, Meta, and Google all argued that the bill could spook AI companies into leaving California. However, SB 1047 would have applied to any covered AI company doing business in the state, which is the world’s fifth-largest economy and home to the top five generative AI companies. Anthropic CEO Dario Amodei dismissed this notion as “just theater.”
The most powerful allies of industry were national Democrats who came out against the bill. But they had a big assist from “godmother of AI” and Stanford professor Fei-Fei Li, who published an op-ed in Fortune falsely claiming that SB 1047’s “kill switch” would effectively destroy the open-source AI community. Li’s op-ed was prominently cited in the congressional letter and Pelosi’s statement, where the former Speaker said that Li is “viewed as California’s top AI academic and researcher and one of the top AI thinkers globally.”
Nowhere in Fortune or these congressional statements was it mentioned that Li founded a billion-dollar AI startup backed by Andreessen Horowitz, the venture fund behind a scorched-earth smear campaign against the bill.
The same day he vetoed SB 1047, Newsom announced a new board of advisers on AI governance for the state. Li is the first name mentioned.
Newsom published a three-page letter explaining his veto, in which he takes a generally pro-regulation tack while making arguments that also don’t withstand scrutiny.
For example, he writes:
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.
In other words, AI models large and small can both be risky — so we shouldn’t regulate either?
Also, the discovery that has underpinned the entire explosion of generative AI is that model capabilities scale with model size. There are high-risk applications of AI, like its use in welfare, hiring, and parole decisions, that should be scrutinized, strictly regulated, or banned entirely. But the risk of a model causing the kinds of catastrophes SB 1047 was targeting — like enabling a large-scale cyberattack or aiding in the creation of novel bioweapons — is far greater in systems that are larger and more powerful than the current state of the art.
And if you think Newsom would have signed a more expansive version of the bill that targets smaller models as well, I have a bridge to sell you.
Later, he writes:
Let me be clear — I agree with the author — we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable.
Let me be clear: Newsom’s decision means that there will be no mandatory safety protocols for the development of the largest and most powerful AI models. The dozen-plus AI bills Newsom signed don’t address this, and none of them faced anywhere near the industry resistance that SB 1047 received.
Unlike Newsom, the smartest critics of the legislation, like AI journalist Timothy B. Lee, merely argue that it may be premature, saying, “We should wait until more advanced models exist so we have a better idea of how to regulate them.”
Dean W. Ball, a research fellow at George Mason University’s libertarian-leaning Mercatus Center, told me in a phone interview, “There’s a legitimate disagreement about whether regulation at this time is necessary.” The whole debate, in his view, is about whether near-future AI systems will be capable of causing the kind of disasters the bill aims to prevent. Ball thinks that if “model capabilities develop in the way that certainly SB 1047 supporters seem to think they will, the need for such regulation will become more apparent.”
Supporters tend to be more worried about being too late than too early. Bill cosponsor Teri Olle, director of Economic Security California, said in a phone interview, “The last time we had this kind of a moment was with social media,” but “we blinked and as a result we are now still trying to pick up the pieces.”
The congressional letter claimed:
Unfortunately, SB 1047 is skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.
Newsom took a similar tack in his veto letter, writing:
A California-only approach may well be warranted — especially absent federal action by Congress — but it must be based on empirical evidence and science. The U.S. AI Safety Institute, under the National Institute of Science and Technology, is developing guidance on national security risks, informed by evidence-based approaches, to guard against demonstrable risks to public safety.
He touts other efforts to manage AI risks “that are rooted in science and fact,” and the dozen-plus bills “regulating specific, known risks posed by AI” that he’s signed in the last thirty days.
Opponents of AI safety regulations want you to believe that they come at the expense of regulating existing harms of the technology.
But as I wrote in Jacobin in January:
The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.
But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it’s capitalism versus humanity.
Even though SB 1047 failed to become law, it succeeded in bringing this picture into focus. The bill was the product of a long, deliberative, democratic process.
In response to attacks by AI investors, Wiener wrote in July, “SB 1047 is the product of hundreds of conversations my team and I have had with a broad range of experts, including both supporters and critics and including startup founders, large tech companies, academics, open source advocates, and others.”
One Big Tech lobbyist told me that Wiener is “extremely earnest in the way he approaches this, like a lot of politicians I deal with are not.”
Wiener hosted multiple town halls with tech founders, amending the bill significantly in response to their feedback. The bill had consistent, strong support in the legislature and polled well, with support rising over time as its core components were clarified.
In one sense, all of that work was nullified by the decision of a single man, whose incoherent stated objections mask his real motivations.
But in another sense, this was just another mask-off moment for Gavin Newsom, who hand-delivered a massive victory to Big Tech, at the expense of democracy and every person who might someday be harmed by AI models built by companies locked in a fierce race for primacy and profit.