Trustible AI Newsletter 45: Why SB 53 Won’t Have a Big Impact
Plus Armilla AI and Trustible’s new integrated risk offering, AI agents aren’t always leaving behind a paper trail, and model specs 101
Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter!
At Trustible, we’re on a mission to give AI Governance professionals the information, insights, and tools they need. Our legal and technical analysts help filter through the noise and identify what’s meaningful for enterprises and what’s just hype. To ensure we’re advancing that mission, we’ve been listening to your feedback over the past few months, and we’re revamping our newsletter.
Here’s what we’re changing and what you can expect going forward:
Technical Insights
AI technology is evolving at a rapid pace, and it’s hard for anyone to keep up, let alone contextualize what new developments will mean for enterprise use of AI. Our machine learning experts will use this section to translate technical developments into plain English and connect them to the challenges organizations face.
Policy Round-Up
Our regular policy roundup will continue as an overview of the major AI policy headlines from the past two weeks. We won’t be able to cover everything around the world, but we’ll focus on the developments that most impact practitioners.
AI Incident Spotlight
This new section will be a deep-dive explainer on a recent incident captured in the AI Incident Database. Our goal is to give enterprises actionable recommendations on how to prevent similar incidents.
Trustible’s Take
This will be our editorial team’s take on the most notable news relevant to AI Governance professionals. This section aims to be the voice of pragmatic AI and to cut through the hype found on traditional and social media platforms.
News & Updates
We’ll regularly publish summaries of our more in-depth whitepapers, research, and blog posts, along with new announcements from Trustible. We want our newsletter to be insightful and actionable for anyone working in AI Governance, but we also want to let you know how we’re building solutions to many of the issues discussed!
With that, in today’s edition (5-6 minute read):
Trustible’s Take: Why SB 53 Won’t Have a Big Impact
AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management
AI Incident Spotlight - AI Agents Aren’t Always Leaving a Paper Trail
Technical Insight - What To Know about ‘Model Specs’
Global & U.S. Policy Roundup
1. Trustible’s Take: Why SB 53 Won’t Have a Big Impact
There has been a lot of fanfare about Governor Newsom signing SB 53, a frontier AI safety bill. Many proponents argue it will have a major impact in regulating AI. We’re not so sure. Especially after its various amendments, we think it’s very unlikely to have a large impact on the AI space. Here’s why:
SB 53 is almost entirely a ‘subset’ of the EU AI Act’s requirements
While SB 53 includes a few differences, such as clear whistleblower protections for frontier model provider employees, many of the ‘core’ safety framework requirements are extremely similar to the ‘safety and security’ requirements in Chapter 3 of the EU’s Code of Practice for GPAI providers. A major sign that this alignment was intentional is the use of the same compute thresholds to establish ‘frontier models’ (SB 53) as ‘GPAI Models with Systemic Risk’ (EU AI Act). OpenAI actively lobbied Newsom to align these requirements, and notably neither endorsed nor denounced the bill.
Most frontier labs are already compliant, and proposed enforcement is weak
The requirements of the ‘frontier AI framework’ described in SB 53 read exactly like Anthropic’s proposal for it, align with OpenAI’s and Google’s frameworks, and there’s even a clear path for xAI to update theirs to comply. Given the law’s very high threshold for ‘frontier’ models (10^26 FLOPS), that’s likely the whole list. In addition, only the California AG can enforce the law, and only through civil penalties for non-compliance. It’s unclear whether the frontier labs will need to do anything at all in order to comply, and the risks of non-compliance are relatively low in the short term.
It doesn’t address any meaningful AI issues, nor clarify the legal environment
SB 53 doesn’t address many of the immediate AI policy issues that downstream deployers and users are struggling with. For example, despite its frontier model requirements, the bill does not address copyright issues, liability transfers, or content watermarking. Its focus solely on ‘catastrophic risks’ is unlikely to help high-risk sectors trying to understand how to avoid breaking existing customer relationships and laws when deploying AI systems.
Key Takeaway: It’s unclear whether SB 53 is ‘regulation’ or ‘regulatory capture’ by frontier model providers. For most downstream AI system builders, the biggest impact will likely be receiving 300 pages of documentation, instead of the current 150 pages.
2. AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management
AI adoption is accelerating, but readiness still lags behind. Nearly 59% of large enterprises are already working with AI and plan to expand investment, yet only 42% have deployed AI at scale. At the same time, incidents of AI failure are rising sharply; the Stanford AI Index recorded a 26× increase in AI incidents since 2012, and more than 140 AI-related lawsuits are currently pending in U.S. courts.
The message is clear: as organizations race to integrate AI into products, operations, and decisions, risk management has to evolve just as quickly. That’s why last week, Trustible and Armilla AI announced a new partnership to tackle these challenges.
Together, we’re connecting the dots between AI governance and AI insurance, helping enterprises both prevent and protect against emerging AI risks. Trustible helps organizations operationalize responsible AI governance, while Armilla provides affirmative AI insurance that explicitly covers risks traditional cyber or E&O policies often exclude, such as model errors, generative AI copyright and libel issues, and regulatory penalties.
By working together, Trustible and Armilla create a feedback loop between good governance and improved insurability, enabling organizations to innovate confidently while minimizing and transferring residual risk.
You can learn more about the partnership here.
3. AI Incident Spotlight - AI Agents Aren’t Always Leaving a Paper Trail (AI Incident 1218)
What Happened: Cybersecurity researchers have identified that, in some instances, searches from Microsoft 365 Copilot don’t get properly registered in a document’s audit log. Any human ‘look-up’ or access to a file in Microsoft 365 is logged in a dedicated ‘audit trail’, which is an essential part of appropriate access controls. However, even when Copilot cites answers from a certain document, there is not always a permanent record that Copilot read information from that file.
Why it Matters: Generative AI ‘answering’ systems can accidentally break normal access control rules and share information from documents a user may not otherwise have access to (data leakage). Not logging this access appropriately can exacerbate the issue, or even encourage this attack vector, because the unauthorized access leaves no trace to detect. Most enterprise IT/security policies require strict access controls and audit logs to help detect unauthorized access or use, and at least in some instances, Copilot may not obey the normal control expectations.
How to Mitigate: Without additional information, our theory is that documents get stored in their ‘embedding representation’ inside of 365 so that they can be searched over by an LLM. This means the information being accessed by 365 Copilot is not the ‘original’ document that stores the audit log. In addition, registering every ‘system’ access may bloat the audit log too heavily, and there are not yet standards for logging AI system vs. human accesses. For now, we recommend keeping highly sensitive documents out of files/folders indexed by 365 until this issue is fixed.
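Deployers can’t change how Copilot itself logs access, but the same gap can appear in custom retrieval (RAG) systems built over internal documents. Below is a minimal sketch, assuming a hypothetical vector index interface (search(), source_document_id, text) and a simple JSON-lines audit file, of how AI retrievals could be written to the same audit trail as human accesses:

```python
import datetime
import json

# Hypothetical in-house RAG audit layer: every chunk retrieved for the LLM is
# traced back to its source document, and an event is appended to the same
# audit trail used for human access events. The vector index interface is an
# assumption for illustration, not any vendor's actual API.

def log_access(audit_log_path: str, document_id: str, actor: str, purpose: str) -> None:
    """Append one structured audit event; 'actor' distinguishes AI vs. human access."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "document_id": document_id,
        "actor": actor,      # e.g., "ai_assistant" vs. a human user principal
        "purpose": purpose,  # e.g., "rag_retrieval"
    }
    with open(audit_log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def retrieve_with_audit(query: str, vector_index, audit_log_path: str) -> list[str]:
    """Search the embedding index, then log one event per distinct source document."""
    chunks = vector_index.search(query, top_k=5)
    for doc_id in {chunk.source_document_id for chunk in chunks}:
        log_access(audit_log_path, doc_id, actor="ai_assistant", purpose="rag_retrieval")
    return [chunk.text for chunk in chunks]
```

Logging at the level of source documents rather than individual chunks keeps the audit log manageable while still recording which files the AI system drew from.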
4. Technical Insight - What To Know about ‘Model Specs’
OpenAI recently made some significant changes to their ‘Model Spec’. The latest changes focus on better behavior for agents, reducing ‘sycophancy’, and incorporating insights from a recent ‘public alignment’ project OpenAI has been running. Their model spec outlines how OpenAI has fine-tuned their models to behave in certain nuanced situations. It’s the best tactical representation of both their top-level AI principles and the specific guardrails built into their systems. While OpenAI is the only provider to use this exact format, other frontier model providers publish their versioned ‘System Prompts’ (Anthropic, Grok), which serve a similar purpose, although they are not as structured.
Why is it relevant: OpenAI’s model spec is one of the most detailed documents about how frontier model providers are trying to align their models. While system cards give in-depth insights into technical details, the model spec is consumable by non-technical experts and contains more actionable information. Knowing what a model is supposed to do is essential for establishing whether the model is malfunctioning (acting outside of the spec), or whether it allows certain behaviors that the deployer may want to block on their own. While recent legislation like SB 53 focuses only on ‘catastrophic risks’ and will require reports on mitigation efforts toward those, the model spec contains relevant information for understanding whether the system has specific guardrails against things like data leakage or generating images of a real person, and how it handles chats about sensitive topics like sexuality. Publishing model specs, or similar documents, could be the next type of ‘transparency’ document frontier model providers may be required to publish in the future. The author of the Trump Administration’s ‘AI Action Plan’, Dean Ball, recently proposed such an idea in his newsletter, even while arguing for federal pre-emption of other AI regulations.
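For teams reviewing a provider’s spec, one lightweight way to operationalize it is a coverage checklist that maps the behaviors you care about (like the examples above) to what the spec says and what you add on top. A purely illustrative sketch, where the coverage flags are placeholders rather than statements about any provider’s actual spec:

```python
from dataclasses import dataclass

# Illustrative deployer-side checklist: map governance concerns to whether the
# provider's published spec addresses them and what guardrail you layer on top.
# Coverage flags below are placeholders, not claims about any actual spec.

@dataclass
class SpecCoverage:
    concern: str             # behavior the deployer cares about
    covered_by_spec: bool    # does the provider's spec address it?
    deployer_guardrail: str  # additional control the deployer applies

checklist = [
    SpecCoverage("data leakage", True, "output DLP filter"),
    SpecCoverage("images of real people", True, "disable image generation for this use case"),
    SpecCoverage("sensitive topics (e.g., sexuality)", False, "route to human review"),
]

# Flag concerns the spec leaves open so they can be handled in-house.
gaps = [item.concern for item in checklist if not item.covered_by_spec]
print("Concerns needing deployer-side controls:", gaps or "none")
```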
Key Takeaway: OpenAI’s model spec is better documentation to examine than their system cards for non-technical AI governance professionals trying to understand the risks of deploying or using OpenAI models.
5. Policy Round-Up
Trustible’s Top AI Policy Stories
The Problems with Sora 2. OpenAI launched its new video generation model, Sora 2, a couple of weeks ago. Since then, Sora 2 has raised fresh concerns over its environmental impact, ability to spread misinformation, and IP infringement.
Our Take: When using new models, AI governance professionals should consider metrics like the model’s impact on resources (e.g., energy and environment), as well as understand what types of outputs are being generated and the appropriate ways to use them.
The EU’s AI Breakup with the US. The European Commission released the Apply AI Strategy, which will invest approximately €1 billion in the EU’s AI industry to reduce its reliance on the US and China.
Our Take: A new EU ecosystem will provide companies with new choices for AI models and tools that take a different approach to AI safety and security than the US.
Fears Over the AI Bubble. There have been growing concerns over an “AI bubble” in the economy that is reminiscent of the dot-com bubble from the late-1990s.
Our Take: AI is here to stay (whether or not a bubble exists or bursts) and that means AI governance is not going anywhere.
California Regulates Companion Chatbots. Governor Newsom signed SB 243 into law, which protects minors and other vulnerable groups from AI companions.
Our Take: Chatbots are generally thought to be low-risk use cases, but the new law underscores how companies need to have insight into the safeguards around their chatbots.
In case you missed it, here are additional AI policy developments:
United States Congress. Two bipartisan AI bills were recently introduced in the Senate. The AI LEAD Act would impose a “duty of care” standard on AI system developers and would classify AI systems as products, as opposed to platforms. The AI Risk Evaluation Act would establish an advanced AI evaluation program through the Department of Energy.
Asia. AI-related policy developments in Asia include:
China. The Chinese government is attempting to crack down on Nvidia chip imports as it seeks to promote its homegrown chip industry. It has also been reported that Chinese government officials were caught using ChatGPT to create tools for mass surveillance and social media monitoring.
Vietnam. The Ministry of Science and Technology is seeking public input on a comprehensive AI law.
Europe. ASML’s Chief Financial Officer criticized the EU for overregulating AI, claiming that the difficulty with AI in Europe is “because [the EU] started with regulating, to keep AI under the thumb.”
North America. AI-related policy developments in North America outside of the U.S. include:
Canada. OpenAI is looking to Canada for cheaper energy and, as part of the deal, would help build new data centers in Canada as the country pushes to expand its sovereign AI industry.
Mexico. Salesforce announced that it would invest approximately $1 billion in Mexico over the next five years in an effort to expand AI adoption.
South America. OpenAI signed a letter of intent to invest up to $25 billion in a large-scale data center, which is expected to be built in Argentine Patagonia.
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team