More on SB 53
Third party obligations, whistleblower problems, and thresholds
As promised in my previous post, I want to run through some of the other thoughts I have on SB 53, one of California’s spate of AI bills advancing in the current session.
<!!! WE INTERRUPT THIS NEWSLETTER TO SUGGEST THAT YOU AT SOME POINT CHECK OUT THE HILARIOUS NEW CENTER FOR THE ALIGNMENT OF AI ALIGNMENT CENTERS. Ok sorry about that, back to work. !!!>
I’ll quickly describe the outlines of SB 53 and then talk about three categories of issues:
unclear third party obligations,
easily abused whistleblower protections, and
politically understandable but logically inconsistent thresholds for what companies are covered.
What SB 53 does
SB 53’s primary goal is to require covered companies to publicly explain how they approach certain issues in AI model development. Similar transparency requirements were in Sen. Wiener’s SB 1047. But that bill also included vague substantive behavioral regulations on how companies develop AI models that were impossible to comply with. SB 53 doesn’t include such regulations, and is thus certainly less onerous and problematic than SB 1047.
In broad strokes, SB 53 requires covered companies to:
Publish and maintain a Frontier AI Framework. The framework must cover processes for assessing/mitigating catastrophic risk; governance; cybersecurity; third‑party evaluations; and internal‑use risk management.
Publish a transparency report at deployment. That report must include core product facts plus summaries of catastrophic‑risk assessments and involvement of third‑party evaluators. It can be part of a system/model card.
Report critical safety incidents to California’s Office of Emergency Services (OES) within 15 days (24 hours if imminent risk) and submit periodic internal‑use risk summaries to OES.
SB 53 also:
Establishes AI-specific whistleblower protections on top of California’s pretty robust general whistleblower protections.
Creates a “CalCompute” consortium to develop a framework and report for a public cloud computing cluster dedicated to AI development. (I’m not going to address this because I don’t think it matters.)
I’ve already discussed in my previous post how SB 53’s definition of “catastrophic risk” falls short and could be fixed. Let’s look at three other issues.
SB 53 needs clarity about third party modifications
SB 53 (c)(1) requires “large frontier developers” to publish a “transparency report” for each new frontier model. These reports must include model-specific information such as intended uses, supported languages, and applicable use restrictions. Section (c)(2) further requires a summary of the results of any catastrophic risk assessments.
Importantly, the requirements are not limited to new models. They are also triggered by the release of “a substantially modified version of an existing frontier model.”
But modified by whom? The rule clearly applies when the original developer themselves modifies the model. But what if an unaffiliated third party substantially alters the model? Must the original developer update the transparency report?
In many cases, the original developer may not even know about the modification. Even if notified, obtaining the necessary information could be impractical or impossible, particularly if the modifier is uncooperative — or, in the case of open-weights models, anonymous. Yet SB 53 appears to hold the original developer responsible, subjecting them to fines for circumstances beyond their control.
Solution: This problem could be resolved by adding a single sentence clarifying that model modifications made by unaffiliated third parties do not trigger new obligations for the original developers under (c)(1) or (c)(2).
SB 53’s whistleblower provisions have serious side-effects
One part of SB 53 hasn’t gotten much scrutiny: its whistleblower protections for employees who report certain AI-related risks.
(Why do so many AI laws, including SB 53, have whistleblower provisions? I think this is downstream from the founding myths of major labs, who have always considered themselves to be doing something transformational and potentially catastrophic. This ethos has seeped into employee culture and activism. Think Anthropic’s split from OpenAI. But set that general question aside for now.)
California law already has significant general protections for whistleblowers. SB 53 adds to those protections, including provisions that:
protect covered employees who report certain risks or violations to authorized recipients;
require employers to notify employees about these protections; and
require large frontier developers to create an internal channel through which covered employees can anonymously report certain risks or violations and receive updates about those reports.
Most of these provisions apply to frontier model developers of all sizes, not just the “large” entities subject to the transparency obligations described above.
I am not a labor lawyer, so I do not understand all the ins and outs of these provisions, and certainly not in the context of California law. But I do want to point out two practical concerns.
SB 53 as a “layoff shield”
I’ve heard some folks refer to SB 53 as the “Can’t Fire AI Doomers Act.” But the legislation makes it harder to lay off any covered employee. It enables a covered employee to obtain fast injunctive relief on a low “reasonable cause” showing. That standard is easy to meet given the bill’s vague terms like “loss of control,” “deceptive techniques,” or “no meaningful human oversight.” The bill also bars courts from staying this injunctive relief while a company appeals.
In practice, an employee facing termination could file a whistleblower claim, seek an injunction, and freeze the firing decision for months with hardly any evidence. That dynamic creates a de facto layoff shield and heavy settlement pressure even when the underlying claim is weak.
The bill could address this by:
Requiring a prima facie link between the protected report and the adverse action;
Limiting emergency orders to maintaining status‑quo employment remedies (reinstatement/pay, no operational directives);
Adding fee‑shifting or a bond to reduce bad‑faith or clearly meritless filings; and
Keeping injunctive relief but allowing expedited interlocutory review.
Anonymous reporting as a lawsuit goldmine
Another problem with the whistleblower provisions is that the anonymous reporting process required by SB 53 creates a discovery goldmine attractive to any plaintiff’s lawyer. In particular, the bill mandates quarterly board‑level briefings on every whistleblower disclosure. This manufactures a paper trail that plaintiffs can use to allege board knowledge and failed oversight in all kinds of corporate lawsuits, unlocking punitive damages and fee awards. The result will be careful CYA and box‑checking governance, not better safety practices.
The bill could address this by:
Preserving attorney-client and work‑product protections for anonymous reports and all summaries.
Clarifying that summaries do not waive privilege and are shielded from third‑party discovery (except to regulators under protective order).
Routing disclosures through the audit/ethics committee, not the full board, with escalation only for material items.
SB 53’s small developer carveout
Above I’ve repeatedly mentioned “covered” companies and “large frontier developers.” That’s because the latest version of SB 53 doesn’t apply to all companies that develop frontier models — only those that cross a certain revenue threshold.
Specifically, SB 53 limits the definition of a “large frontier developer” to companies that had annual gross revenues over $500 million in the prior year. Those revenues are not AI‑specific; they are company‑wide revenues.
Economists generally disfavor these kinds of thresholds because imposing different costs on parties that are otherwise engaged in the same regulated activity distorts markets. Thresholds also introduce cliff effects that distort firm growth (see France’s fifty-employee threshold) and can thin out the pipeline of challengers that would otherwise scale to compete head‑to‑head with larger incumbents. And they can advantage large buyers who outsource the regulated activity to smaller, exempted parties. Many of those general concerns also apply here, but I want to highlight two key problems.
Effect on large, non-AI companies
Because SB 53’s $500 million threshold is untethered to AI activity, the bill’s provisions potentially apply to thousands of companies, including many that do not currently focus on AI model development. According to the NAICS Association, in 2024 there were around 9,100 U.S. companies with sales greater than $500 million. These companies include tech companies, but also banks, hospital systems, retailers, and manufacturers.
If any one of these companies wants to use a frontier model, it faces a choice: buy or license it from someone else or build it themselves. On the margin, SB 53 discourages non-AI companies from building their own frontier models.
Practically speaking, given today’s industry landscape, non-AI companies will almost certainly license foundation models from others regardless of whether SB 53 becomes law. Developing the types of generalized frontier models that SB 53 targets is very expensive and technically challenging; it will be much more efficient for most companies to license a foundation model from a developer and then customize it.
Importantly, SB 53 doesn’t apply to large companies that use someone else’s model. The bill only regulates the developer that “has trained, or initiated the training of, a foundation model” where that person (not someone else) “has used, or intends to use,” at least 10^26 integer or floating-point operations. (22757.11(h)) There are no obligations on downstream entities that fine‑tune or otherwise create derivative models of someone else’s foundation model. The upshot: a $500 million‑plus company can avoid being a “large frontier developer” under SB 53 by licensing someone else’s base model and customizing it.
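To make the coverage test concrete, here’s a minimal sketch of how I read the combined compute-and-revenue test. (The function name and the example numbers below are my own illustration, not the bill’s text.)

```python
# Illustrative sketch of SB 53's coverage test, under the reading described
# above: only training compute that the developer itself "has used, or
# intends to use" counts, paired with a company-wide revenue test. The
# function name and all numbers here are hypothetical, not the bill's.
THRESHOLD_OPS = 1e26           # integer or floating-point operations
REVENUE_THRESHOLD_USD = 500e6  # $500 million in prior-year gross revenue

def is_large_frontier_developer(own_training_ops: float,
                                annual_revenue_usd: float) -> bool:
    """Covered only if BOTH the compute and revenue tests are met."""
    return (own_training_ops >= THRESHOLD_OPS
            and annual_revenue_usd > REVENUE_THRESHOLD_USD)

# A lab that trains a base model from scratch at 2e26 ops is covered:
print(is_large_frontier_developer(2e26, 1e9))    # True

# A $10B retailer that only fine-tunes a licensed base model, using 5e24
# ops of its own compute, is not covered under this reading:
print(is_large_frontier_developer(5e24, 10e9))   # False
```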
I think this practice will become the common method of frontier model application by large companies. That’s why clarifying liability for third party modifications is important, as I discussed above.
And mark my words: if this practice does become common, California will push legislation to regulate those large companies as well.
Pragmatic (but logically inconsistent) carveout for small frontier model developers
So, SB 53 doesn’t apply to large companies that use someone else’s frontier model. But it also carves out small companies (as measured by revenue) that build their own frontier models from its most prescriptive obligations.
There’s a practical case for this design. Compliance programs at the scale SB 53 contemplates are expensive; exempting smaller firms arguably prevents the law from becoming a moat for incumbents and could encourage new entrants to attempt truly large‑scale training.
But the carveout also cuts against the bill’s rationale. If the Legislature is worried about risks from capabilities (the size and power of the model), that risk is unrelated to the developer’s revenue. A 10^26‑FLOP model doesn’t become safer because a lean startup trained it. I personally think the risks are small and SB 53’s requirements won’t reduce them much in any case; its costs outweigh its benefits for companies of all sizes. But under the bill’s own risk‑reduction rationale, a revenue‑based exemption makes little sense.
I’m not naïve about legislating — it takes compromise. This carveout reflects coalition‑building by Senator Wiener rather than a principled, risk‑based approach. And the Legislature all but admits the exemption is provisional, noting in its findings that, “In the future, foundation models developed by smaller companies or that are behind the frontier may pose significant catastrophic risk, and additional legislation may be needed at that time.”
Firms relying on this carveout should plan as though it will narrow or vanish as soon as the political will exists.
Conclusion
As the Legislature moves into final consideration of SB 53, there’s still time for narrow fixes that preserve its transparency goals without creating easily avoidable collateral damage. Fix the fundamental mischaracterization of risk. Clarify that unaffiliated third‑party modifications do not trigger new obligations for original developers. Keep whistleblower protections, but calibrate emergency relief and privilege so they don’t function as a layoff shield or a discovery goldmine. And align coverage with risk rather than with company‑wide revenue thresholds. (Well, there may not be an easy fix to that last one.)
If those changes can’t be made now, they should be first on the agenda for any follow‑on measure. And whether SB 53 passes in its current form or not, I hope state legislatures learn from these shortcomings and avoid them in the future.