<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Trustible Newsletter]]></title><description><![CDATA[Our bi-weekly newsletter covers top news & analysis in AI policy, AI governance best practices, and product updates. ]]></description><link>https://insight.trustible.ai</link><generator>Substack</generator><lastBuildDate>Fri, 24 Apr 2026 07:54:01 GMT</lastBuildDate><atom:link href="https://insight.trustible.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Trustible]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[trustible@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[trustible@substack.com]]></itunes:email><itunes:name><![CDATA[Trustible]]></itunes:name></itunes:owner><itunes:author><![CDATA[Trustible]]></itunes:author><googleplay:owner><![CDATA[trustible@substack.com]]></googleplay:owner><googleplay:email><![CDATA[trustible@substack.com]]></googleplay:email><googleplay:author><![CDATA[Trustible]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What the AI Audit Ecosystem Can Learn From the Delve Scandal]]></title><description><![CDATA[Lessons from a SOC 2 scandal, agent memory risks, and what a mistranslated space mission reveals about AI deployment]]></description><link>https://insight.trustible.ai/p/what-the-ai-audit-ecosystem-can-learn-from-the-delve-scandal</link><guid isPermaLink="false">https://insight.trustible.ai/p/what-the-ai-audit-ecosystem-can-learn-from-the-delve-scandal</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Thu, 16 Apr 2026 12:03:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!K2ZY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!K2ZY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!K2ZY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 424w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 848w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 1272w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!K2ZY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4350415,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/194321409?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!K2ZY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 424w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 848w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 1272w, https://substackcdn.com/image/fetch/$s_!K2ZY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2349a27f-d369-43d6-bdf5-ecaa71520da7_2600x1463.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>1. What the AI Audit Ecosystem Can Learn From the Delve Scandal</h3><p>The cybersecurity world has spent the past few weeks reeling from a major scandal involving Delve, a YC-backed compliance startup that promised to get companies SOC 2 compliant in days using proprietary AI. SOC 2 is a general cybersecurity standard that most startups need before selling software to enterprises. According to a series of<a href="https://deepdelver.substack.com/"> Substack posts</a>, Delve didn&#8217;t have much in the way of AI, allegedly stole another company&#8217;s IP, and was auto-generating fraudulent SOC 2 reports through offshore firms. Delve has since been disowned by its investors, lost its most notable customers, and sparked an ongoing public debate about the &#8220;race to the bottom&#8221; in the SOC 2 world.</p><p>There&#8217;s plenty of AI directly involved in the Delve scandal, but there are also important lessons for the developing AI assurance and audit ecosystem. While many criticize SOC 2 as too light, consisting mostly of check-the-box activities, it can be a useful education for early-stage startups learning which basic security controls to put in place, and it&#8217;s often the first stepping stone towards heavier certifications. The real issue is less about the standard itself and more about the incentives surrounding it. The first problem is that SOC 2 lacks a strong auditor certification and enforcement ecosystem. It was created by the<a href="https://www.aicpa-cima.com/"> AICPA</a>, a trade association of public accountants, originally to set standards for sharing confidential financial data with auditors, and has since been extended to cover SaaS platforms broadly. Unlike ISO, the AICPA does not formally certify and credential its auditors. The second problem sits on the demand side. Many enterprise procurement teams don&#8217;t understand how startups work and demand unqualified SOC 2 reports, even though the intent of the standard is to provide transparency about risks that can then be negotiated. Procurement teams will often use &#8220;findings&#8221; (auditor observations about control gaps) as an excuse to eliminate vendors rather than as a starting point for risk-based discussions. This creates intense market pressure for performative compliance over honest disclosure, and rewards bad actors over those being transparent. Given how quickly AI is evolving, any audit or assessment will have limitations. Businesses that start demanding &#8220;perfect&#8221; AI audits risk creating the same dangerous incentives, reducing the amount of meaningful risk management done for procured AI systems.</p><p><strong>Key Takeaway:</strong> The incentives in an assurance ecosystem matter as much as the standards themselves. Right now, most risk information about AI isn&#8217;t being disclosed because teams worry that disclosures will be treated as admissions of liability or make their systems less attractive. Policymakers should think hard about how to make the opposite true, where transparent disclosure of risks and audit findings is rewarded, not punished, in the market.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading so far? Subscribe for bi-weekly AI updates from Trustible.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>2. Tech Explainer: Understanding Agent Memory</h3><p>Memory is a core component of AI agents, but the term can refer to several different things. As agents increasingly operate across multiple sessions and workflows, how they store and retrieve information has direct implications for transparency, data rights, and security.</p><p>Short-term memory refers to the data passed to a model during a single interaction, typically conversation history, system prompts, and tool outputs. Developers may use summarization to condense previous interactions to fit a model&#8217;s context window, which can result in performance degradation if important details are not preserved in the summary.</p><p>Long-term memory refers to the use of external databases that track information across multiple interactions with an AI Agent. A simple form might include a database of previous conversations (episodic memory); a more complex form might include a knowledge base that summarizes information across sessions (semantic memory). For semantic memory, new records are often created through agentic processes that analyze previous conversations, meaning the agent is deciding what to remember. The agent interacts with the memories using tools, similarly to interactions with other external resources.</p><p>Depending on the nature of the store, it may be difficult to audit and manage the long-term memories.  Deleting a specific chat from an episodic store may be straightforward, but summarized semantic knowledge is harder to disentangle. If two conversations contributed to a stored fact and one is deleted, the system may not be able to determine whether the fact should be forgotten.</p><p>Agent memories also present a vector for adversarial attacks. If a malicious actor gains access to the long-term memory database, they can plant bad data and execute indirect prompt injections that persist across sessions.</p><p><strong>Key Takeaway: </strong>While memory can make agents more effective, it introduces new governance challenges. Users of external AI systems need to understand how their &#8220;memories&#8221; are managed, while developers need to account for the transparency, legal and security risks associated with long-term memory stores.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yP_k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yP_k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 424w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 848w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 1272w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yP_k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1091626,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/194321409?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yP_k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 424w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 848w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 1272w, https://substackcdn.com/image/fetch/$s_!yP_k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8418dba6-fd87-46b2-872a-ec6463c092f5_2600x1463.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A Korean news broadcast of the Artemis II rocket launch on April 1, 2026</figcaption></figure></div><h3>3. Incident Spotlight: When the Model Doesn&#8217;t Know What It Doesn&#8217;t Know (<a href="https://incidentdatabase.ai/cite/1446/">Incident 1446</a>)</h3><p>During KBS&#8217;s live YouTube broadcast of the Artemis II launch on April 1, the broadcaster&#8217;s AI real-time translation system rendered the mission control phrase &#8220;Roger, roll, pitch&#8221; as &#8220;Roger, roll, b*tch&#8221; in Korean subtitles.<a href="https://www.koreaboo.com/news/kbs-airs-btch-in-ai-subtitles-apologizes/"> </a>The error spread quickly on social media and KBS issued an apology the same day. The fix was straightforward: disable rewind, remove the clip, and commit to improving profanity filtering. Case closed, apparently.</p><p>But that resolution actually obscures the more interesting governance failure here.</p><p><strong>Why It Matters:</strong> KBS&#8217;s response framed this as a profanity filtering problem, and their proposed fix, strengthening the profanity filter, treats it as one. The AI system mistranslated &#8220;pitch&#8221; as an English expletive, then rendered its Korean equivalent, because it failed to recognize the aerospace context of the communication.<a href="https://www.thestar.com.my/aseanplus/aseanplus-news/2026/04/04/south-korean-tv-under-fire-over-profanity-glitch-in-ai-subtitles-for-artemis-ii"> </a>The underlying issue isn&#8217;t that a bad word slipped through a filter; it&#8217;s that the system had no representation of the domain it was operating in. Aviation and mission control communication is highly formalized, uses a specific vocabulary, and is nothing like the natural language corpus these models are typically trained on. Adding profanity filtering is a patch on a context problem. The next domain-specific failure, whether in a medical broadcast, a legal proceeding, or a financial earnings call, will produce a different kind of error that the patched filter won&#8217;t catch.</p><p>This incident also sits at the edge of a broader unresolved question: who bears accountability when an AI-generated output causes harm during a live, unedited broadcast? KBS worked with an unnamed external partner for the translation system, and its apology references &#8220;close consultation with relevant departments and external companies&#8221; to prevent recurrence.<a href="https://www.thestar.com.my/aseanplus/aseanplus-news/2026/04/04/south-korean-tv-under-fire-over-profanity-glitch-in-ai-subtitles-for-artemis-ii"> </a>That language is telling. When the vendor relationship is opaque and the system is live, the contractual and editorial accountability structure is rarely established in advance.</p><p><strong>How to Mitigate:</strong> The immediate lesson isn&#8217;t &#8220;add profanity filters&#8221;; it&#8217;s &#8220;don&#8217;t deploy general-purpose translation models in specialized domains without domain adaptation or human review gates.&#8221; For broadcasters and enterprises running real-time AI outputs in public-facing contexts, the risk controls should mirror those applied to any live content: a human in the loop capable of interrupting the stream, domain-specific fine-tuning or prompt configuration for the subject matter, and a vendor contract that clearly allocates responsibility for output errors. Some broadcasters using AI captioning tools have begun requiring vendor indemnification clauses for live content errors specifically. That&#8217;s the right instinct, though the contractual frameworks are still nascent.</p><p><strong>Key Takeaway:</strong> Profanity filtering is not a substitute for domain-appropriate AI deployment. Any organization running AI-generated outputs in a live or real-time context, in any specialized domain, should establish both a human interrupt capability and clear vendor accountability before go-live, not after the first incident.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wTTl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wTTl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 424w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 848w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wTTl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1755243,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/194321409?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wTTl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 424w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 848w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!wTTl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff43e5bfa-2543-41e0-8161-8f56d62fd3f6_1920x1440.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>4. Trustible AI Governance Market Guide</h3><p>The AI governance software market is crowded, confusing, and increasingly hard to navigate. Search &#8220;AI governance platform&#8221; and you&#8217;ll find hundreds of products sharing the same label: AI firewalls, privacy compliance tools, model monitoring services, cybersecurity GRC products with a new AI module. They&#8217;re all technically accurate descriptions, and they&#8217;re all describing fundamentally different products built for different teams solving different problems. That confusion leads to real procurement mistakes, organizations buying the wrong tool, or forcing a point solution into a coordination role it was never designed for.</p><p>To help cut through the noise, we put together a market guide that maps 16 distinct categories of platforms claiming some version of &#8220;AI governance,&#8221; organized by where they sit in the technology stack. For each, we describe what it actually does, who buys it, and where it falls short on the broader governance mandate. Whether you&#8217;re building a program from scratch, writing an RFP, or just trying to make sense of a vendor pitch that landed in your inbox, it&#8217;s designed to give you a clearer frame for evaluation. <a href="https://trustible.ai/post/types-of-ai-governance-platforms/">Read the full guide here.</a></p><h3>5. Policy Round Up</h3><p><strong>Fannie Mae:</strong> Fannie Mae has released their first<a href="https://singlefamily.fanniemae.com/news-events/lender-letter-ll-2026-04-governance-framework-use-artificial-intelligence-and-machine-learning"> AI governance framework</a> for the use of AI in mortgage lending. It includes guidelines for policies and procedures that sellers and servicers must abide by if utilizing AI/ML in the selling/servicing of Fannie Mae loans. It emphasizes transparency, risk management, and requires an owner of the AI use case to assume the responsibility of implementing and maintaining this framework.</p><ul><li><p><strong>Our take:</strong> This follows in line with guidance released by Freddie Mac, one of the other major players in mortgage lending. While there may not be federal guidance, regulated industries are pushing forward with risk management practices.</p></li></ul><p><strong>OMB Compliance:</strong> Deadlines for compliance with high-impact AI risk management practices have<a href="https://fedscoop.com/federal-agencies-ai-inventory-risk-management-deadline/"> recently passed</a>. Several agencies also missed their deadlines for posting updated AI inventories, a crucial step in determining what cases are high-impact. Additionally, a requirement of OMB&#8217;s AI Acquisition guidance directs agencies to contribute to a repository of AI acquisition best practices. Monday&#8217;s<a href="https://fedscoop.com/agency-ai-procurement-gao-report/"> new GAO report</a> found agencies struggled with this due to a lack of centralized documentation and agency-level guidance.</p><ul><li><p><strong>Our take:</strong> While the goal of the OMB guidelines is to streamline AI adoption, the requirements can actually be quite a heavy lift for agencies.</p></li></ul><p><strong>CA EO (3/30):</strong> Governor Newsom signed a<a href="https://www.gov.ca.gov/2026/03/30/as-trump-rolls-back-protections-governor-newsom-signs-first-of-its-kind-executive-order-to-strengthen-ai-protections-and-responsible-use/"> new state Executive Order</a> on the state&#8217;s procurement of AI services. It aims to provide a new process for the state&#8217;s AI procurement in effort to allow CA to separate from federal government&#8217;s processes in light of the Trump Administration&#8217;s recent &#8220;contracting missteps&#8221; (i.e. Anthropic). It gives the state the ability to do its own assessment of the policies and safeguards of AI companies and utilize the tool upon approval, even if this goes against the federal government&#8217;s supply chain risk designations.</p><ul><li><p><strong>Our take:</strong> The Trump Administration has thus far expressed intent to allow states to have rights over their AI procurement processes, but it is unclear whether a state has the ability to override a national security designation. We can expect there to be more legality questions once the specific guidelines come out around August.</p></li></ul><p>In case you missed it:</p><ul><li><p>China: The Chinese government has<a href="https://www.geopolitechs.org/p/china-rolls-out-interim-regulations"> released</a> <em>Interim Measures for the Management of Anthropomorphic AI Interaction Service</em>. It covers protections, especially for children, who are interacting with AI services that both 1) possess anthropomorphic features and 2) provide emotional interaction. It is a novel step forward in the already pretty rich well-developed Chinese AI regulatory landscape.</p></li><li><p>xAI sues Colorado: xAI is<a href="https://www.ft.com/content/55e8cba9-d09c-4f94-b710-4ab447b987f9?syn-25a6b1a6=1"> suing the state</a> of Colorado over their anti-discrimination AI law, set to begin enforcement in June. xAI argues that this would be a first amendment, free speech violation and would force them to &#8220;embed the State&#8217;s preferred views&#8221; into their systems.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Sora's Death Ushers in the Era of Enterprise AI]]></title><description><![CDATA[IAPP Summit Recap, The Benefits and Risks of Model Distillation, and DOGE&#8217;s use of ChatGPT]]></description><link>https://insight.trustible.ai/p/soras-death-ushers-in-the-era-of</link><guid isPermaLink="false">https://insight.trustible.ai/p/soras-death-ushers-in-the-era-of</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Thu, 02 Apr 2026 16:05:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IO9p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IO9p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IO9p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 424w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 848w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 1272w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IO9p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:266854,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/192969427?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IO9p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 424w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 848w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 1272w, https://substackcdn.com/image/fetch/$s_!IO9p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86c0fd96-02d2-4b8e-bd33-02a6de37a56a_2400x1350.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>1. Sora&#8217;s Death Ushers in the Era of Enterprise AI</h3><p>OpenAI recently <a href="https://www.nytimes.com/2026/03/24/technology/openai-shutting-down-sora.html">announced that they were sunsetting Sora</a>, their groundbreaking consumer oriented AI video app. Sora&#8217;s latest models proved capable of generating highly realistic videos with very simple text prompts. While Sora had a variety of guardrails in place, and enforced content watermarking, videos it generated <a href="https://incidentdatabase.ai/apps/discover/?hideDuplicates=1&amp;is_incident_report=true&amp;s=Sora">were implicated in at least 19 incidents</a>, and may have likely been used in many other deep fake related incidents. OpenAI cited a desire to focus more on enterprise applications, especially as competition from Anthropic and their breakout Claude Code system recently overtook OpenAI&#8217;s ChatGPT amongst new users and in the Apple App Store.</p><p>We think this is just the beginning of a strategic pivot of many AI companies away from consumer applications of AI, over to enterprise related uses. The consumer world is rife with legal issues, and regulators are starting to take notice. Even the <a href="https://www.blackburn.senate.gov/2026/3/technology/blackburn-releases-discussion-draft-of-national-policy-framework-for-artificial-intelligence/3b3b6458-b6c7-478b-9859-374949586765">most conservative AI legislative proposals in the US</a> have strong protections against non-consensual sexual content, and age restrictions for many AI systems. In addition to regulatory pressures, businesses are finding Agentic AI to be more useful than simple chat-based interfaces, and Anthropic&#8217;s focus on coding, and on business safe AI has forced <a href="https://www.entrepreneur.com/business-news/openai-issued-a-code-red">OpenAI to declare a &#8216;code red&#8217; to stay competitive.</a> Finally, the costs of AI are still being heavily subsidized, and geopolitical events are likely to exacerbate the situation making consumer oriented apps highly unprofitable in the short term. B2B opportunities will be seen as less risky from a legal and regulatory perspective, and will be easier to prove value and align the &#8216;real&#8217; costs of AI to its enterprise value. Other OpenAI projects may get cancelled, such as their efforts of <a href="https://www.wsj.com/tech/ai/openai-adult-mode-chatgpt-f9e5fc1a?gaa_at=eafs&amp;gaa_n=AWEtsqdD6ElSxD6QYbPvV860pVnw5tiXEz1t5VZI83MLqYQrMIDA28pjjhBrpxdGpAk%3D&amp;gaa_ts=69c83a52&amp;gaa_sig=tFLuaEF6PbEXvPbsfxZDZOnH4hlkqUaRfP9nCBowm_mzsNDnfn57tV3bDfjgdc-BtiSiiFJY2Nzq2J7-SDJFFw%3D%3D">creating an &#8216;adult&#8217; version of ChatGPT.</a></p><p><strong>Key Takeaway:</strong> When it comes to choosing an AI partner in the B2B world, reputation and principles will matter. Ethics aside, choosing a model provider that invests heavily in preventing illicit content, simply reduces risk, and choosing that company isn&#8217;t performative ethics, but rather just risk reduction.</p><h3>2. Tech Explainer: Promises and Pitfalls of Distillation</h3><p>In <a href="https://www.macrumors.com/2026/03/25/apple-google-gemini-distill-models">a new partnership</a>, Apple is using Gemini models to train effective smaller models that can be run directly on phones through a process called <strong>distillation</strong>. In addition to creating effective smaller models, the process can also be used by adversarial parties to reverse engineer models (Anthropic recently <a href="https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks">wrote a report</a> on three chinese labs using this process to attack their models). The process works by having a smaller &#8216;student&#8217; model learn to mimic the outputs of a larger &#8216;teacher&#8217; model, rather than training from scratch on raw data. With LLMs, the smaller model is typically trained on input+reasoning traces from the larger models. This process works, because the reasoning traces give the model the most important pieces of information needed to function properly. The result is a smaller model that can run locally, without sending data to an external API, which can reduce latency and alleviate some privacy-concerns.</p><p>Distillation comes with risks. Because the smaller model has fewer parameters, it can&#8217;t capture as much nuance and may lead to performance degradation - in general, it may be best to use the smaller model for more specialized tasks. For example, Apple may only need Gemini to be good at question-answering inside Siri, so they can skip teaching the student model coding capabilities. Deployers need to run fresh evaluations calibrated to the distilled model&#8217;s narrower scope, not just port over benchmarks from the original. In addition, <a href="https://arxiv.org/html/2601.03868">recent research</a> showed that knowledge distillation may lead to systematic degradation of safety alignment and <a href="https://www.nist.gov/news-events/news/2025/09/caisi-evaluation-deepseek-ai-models-finds-shortcomings-and-risks">increase susceptibility</a> to jailbreaks.</p><p><strong>Key Takeaway: </strong>On the surface, distillation may be appealing because it can create smaller models that encapsulate key abilities of larger models without relying on external APIs. However, in practice, they require additional safety post-training, strong guardrails and new evaluations, which can make the whole process more time and resource intensive.</p><h3>3. Incident Spotlight: DOGE&#8217;s ChatGPT Grant Review (<a href="https://incidentdatabase.ai/cite/1402/">Incident 1402</a>)</h3><p><strong>What Happened:</strong> DOGE <a href="https://www.nytimes.com/2026/03/07/arts/humanities-endowment-doge-trump.html">fed grant descriptions into ChatGPT</a>, asking it to determine whether each was &#8220;DEI,&#8221; then logged the chatbot&#8217;s yes/no responses in a spreadsheet that replaced a list previously compiled by NEH staffers as the operative document for terminating grants. Of 1,163 grant proposals analyzed this way, 1,057 were flagged and just 42 were kept. The process was ad hoc by design: the DOGE staffer behind the methodology had assembled his own &#8220;Detection List&#8221; of identity-based traits before running grant descriptions through the model. Depositions later confirmed that the NEH&#8217;s acting chair hadn&#8217;t known ChatGPT was used in the selection process at all.</p><p><strong>Why It Matters:</strong> Setting aside the political element, there are a number of issues here. Firstly, there&#8217;s no strong evidence that the team deploying ChatGPT had proper training on AI systems. Their prompt seems notably simplistic, and arguably gave enormous authority to the AI system to interpret DEI, and process the grants accordingly. Even small limitations in tools, such as poor text extraction, context window lengths, or hallucinations could have caused massive impacts. In addition, the DOGE team did not seem to have a firm grasp of the documents they were working with, and therefore could not act as qualified &#8216;humans in the loop&#8217;. The automated decision making nature of their request would likely be qualified as &#8216;high impact AI&#8217; under the Trump admin&#8217;s latest guidance for AI use in non-classified settings, although this was published after the DOGE work was supposedly done.</p><p><strong>How to Mitigate:</strong> This is fundamentally a process design failure. Any organization using AI for consequential screening decisions should define and document classification criteria before deployment, not derive them post hoc from a vague policy directive. AI-generated classifications should be treated as inputs to human review, not substitutes for it, with clear audit trails that distinguish model output from final decision.</p><p><strong>Key Takeaway:</strong> If your organization is using AI to screen or classify anything with legal or financial consequence, someone with both domain expertise and working knowledge of the model&#8217;s limitations needs to own the classification logic. Deploying a general-purpose chatbot as a compliance filter without either is a massive liability.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L9uV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L9uV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 424w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 848w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 1272w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L9uV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4446139,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/192969427?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!L9uV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 424w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 848w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 1272w, https://substackcdn.com/image/fetch/$s_!L9uV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a712348-0270-4b9d-a607-2b5fca532d73_2800x1575.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI Governance Year 2 Session | IAPP Global Privacy Summit</figcaption></figure></div><h3>4. IAPP Recap</h3><p>Earlier this week at the IAPP Global Privacy Summit, we <a href="https://trustible.ai/post/what-ai-governance-looks-like-after-year-one/">co-hosted a panel</a> that focused on what AI governance looks like after Year One. The room assumed policies were written, intake processes established, and governance committees defined. The conversation was about what comes next.</p><p>CTO Andrew Gamino-Cheong moderated with Kimberly Zink (Chief Privacy Officer, Korn Ferry) and Derek Han (AI, Cyber and Privacy Partner, Grant Thornton). Four scenarios drove the discussion.</p><p><strong>On Model Changes:</strong> Every use case should have a documented set of evaluations before a deprecation notice arrives, not assembled under pressure. What counts as a &#8220;substantial modification&#8221; needs to be defined in advance.</p><p><strong>On Periodic Reviews:</strong> Governance intensity should scale with risk level, not apply uniformly. Model drift doesn&#8217;t announce itself. Sampling actual outputs against deployment guardrails matters more than a calendar reminder.</p><p><strong>On Regulatory Updates:</strong> Nearly 7 in 10 businesses report difficulty understanding EU AI Act obligations. The root cause is inventory quality. If your AI inventory doesn&#8217;t capture use case category, PII usage, automated decision-making, and deployment geography, you can&#8217;t answer a scope question quickly.</p><p><strong>On Program Iteration:</strong> Track the metrics that tell you whether governance is actually working: volume reviewed, high-risk flags, cycle time, risk mitigated. The harder conversation is agentic AI. Manual governance workflows weren&#8217;t built for systems that act autonomously, chain decisions, and scale faster than any review queue. Organizations need to start building AI-assisted governance now.</p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Why AI Monitoring is Hard]]></title><description><![CDATA[Plus, Evaluating AI Evaluations, AI Personality Theft, and Policy Updates]]></description><link>https://insight.trustible.ai/p/why-ai-monitoring-is-hard</link><guid isPermaLink="false">https://insight.trustible.ai/p/why-ai-monitoring-is-hard</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Thu, 19 Mar 2026 12:15:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NaSx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NaSx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NaSx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 424w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 848w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 1272w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NaSx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png" width="1200" height="628" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/da2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:628,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NaSx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 424w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 848w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 1272w, https://substackcdn.com/image/fetch/$s_!NaSx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda2352c3-0ff4-48b0-a79c-bb7597e891ce_1200x628.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(Source: Claude)</p><h3><strong>1. Why AI Monitoring is Hard</strong></h3><p>When enterprises say they want to monitor their AI systems, they rarely mean the same thing. For a security team, monitoring means watching for adversarial attacks and prompt injection. For a compliance officer, it means tracking regulations and court cases that could impact their AI systems. For a product team, it means making sure the model hasn&#8217;t quietly gotten worse at the thing it was deployed to do. <a href="https://www.nist.gov/news-events/news/2026/03/new-report-challenges-monitoring-deployed-ai-systems">A new report from NIST</a><strong><a href="https://www.nist.gov/news-events/news/2026/03/new-report-challenges-monitoring-deployed-ai-systems"> </a></strong><a href="https://www.nist.gov/news-events/news/2026/03/new-report-challenges-monitoring-deployed-ai-systems">formalizes this fragmentation</a>, organizing post-deployment monitoring into six distinct categories that rarely get discussed together:</p><ul><li><p><strong>Functionality</strong> &#8212; Is the system still working as intended? (Detecting drift, staleness, performance degradation)</p></li><li><p><strong>Operational</strong> &#8212; Is the infrastructure running reliably? (Latency, uptime, logging across distributed systems)</p></li><li><p><strong>Human Factors</strong> &#8212; How are users actually interacting with the system? (Feedback loops, over-reliance, sycophancy)</p></li><li><p><strong>Security</strong> &#8212; Is the system being attacked or misused? (Adversarial inputs, deceptive model behavior, misuse detection)</p></li><li><p><strong>Compliance</strong> &#8212; Is the system adhering to relevant regulations and policies? (Terms of service violations, regulatory adherence)</p></li><li><p><strong>Large-Scale Impacts</strong> &#8212; Is the system promoting or degrading human well-being at a population level?</p></li></ul><p>The report, drawn from three workshops with over 250 practitioners and a review of 87 papers, finds that most organizations are only monitoring one or two of these categories, and the field lacks agreed-upon methods, shared terminology, and basic consensus on who in the AI supply chain is even responsible for each. The incentive problems compound this: monitoring is expensive, publicly reporting incidents carries legal and competitive risk, and AI outputs are non-deterministic enough that establishing a reliable performance baseline is itself an open research problem. For deployers specifically, the report is a formal acknowledgment from NIST that the governance burden is shifting downstream, and the tools and standards needed to manage it don&#8217;t yet exist. The best practices for collecting and analyzing this kind of data are highly immature, and quickly shifting alongside the AI technology stack.</p><p>Trustible&#8217;s own <a href="https://insights.trustible.ai/ai-monitoring">AI Monitoring whitepaper</a>, and <a href="https://trustible.ai/post/what-is-ai-monitoring/">blog post</a> separates the ideas of &#8216;internal&#8217; monitoring which focuses on analysing highly technical data about the relevant AI system, and &#8216;external&#8217; monitoring which aims to collect information from outside the deployers boundaries. Internal monitoring largely maps to NIST&#8217;s Functionality, Operational, and Security factors while, &#8216;external&#8217; monitoring maps to the Human Factors, Compliance, and Large Scale Impacts elements.</p><p><strong>Key Takeaway:</strong> AI Monitoring is all of these things, and there&#8217;s therefore no &#8216;silver bullet&#8217; solution for all forms of AI monitoring. The NIST framework is a useful forcing function for governance teams to audit which monitoring categories they&#8217;ve actually addressed and which they&#8217;ve implicitly assumed someone else owns.</p><h3><strong>2. Tech Explainer: The Trouble with Evaluations</strong></h3><p>As AI systems grow more advanced, evaluating them gets more complicated; at the same time, we are seeing a decreased consistency in how evaluations are reported for general-purpose AI models. Unlike traditional machine learning, where models are built for a specific purpose (e.g. predicting if an email is spam) and can be evaluated against that goal, GPAI models are adapted for downstream tasks and assessed benchmarks (e.g. math problem solving) that are often inconsistently applied. Two providers can report scores on the same benchmark while using different prompting strategies or answer aggregation techniques, making direct comparisons unreliable. A benchmark can also misrepresent what it claims to measure: interview-style coding questions won&#8217;t tell you much about how a model performs in a production system.</p><p>The core problem is a lack of standards for what to report. Trustible&#8217;s work with the EvalEval coalition produced <a href="https://evalevalai.com/infrastructure/2026/02/17/everyevalever-launch/">a universal schema</a> for documenting evaluation results, giving developers and deployers a consistent reporting standard and a way to compare why scores on the &#8220;same benchmark&#8221; diverge. Related efforts like <a href="https://benchrisk.ai/score">BenchRisk</a>, <a href="https://arxiv.org/pdf/2512.04062">EvalFactSheets</a>, and the <a href="https://arxiv.org/html/2511.04703v1">Construct Validity Checklist</a> give developers and deployers structured tools to report on and assess benchmark quality before trusting the results. Adoption is still limited, but these are now available reference points for model developers, deployers, users and policy-makers. The challenge will get harder as evaluations shift from models to agents, where the <a href="https://blog.langchain.com/the-anatomy-of-an-agent-harness/">harness</a> (i.e. tools and integrations surrounding a model) shape results as much as the model itself.</p><p><strong>Key Takeaway:</strong> AI Literacy requires skeptically reviewing performance headlines touted by AI providers. While organizations should construct internal benchmarks when picking the best models for their systems, many of the same challenges persist and the frameworks shared can help. </p><h3>3. AI Incident Spotlight - AI Style, Personality, and Identity Theft (<a href="https://incidentdatabase.ai/cite/1407/">Incident 1407</a>)</h3><p><strong>What Happened:</strong> In late 2025, Grammarly launched an &#8220;Expert Review&#8221; tool where subscribers could <a href="https://www.wired.com/story/grammarly-is-facing-a-class-action-lawsuit-over-its-ai-expert-review-feature/">upload writing and receive real-time editing feedback</a> presented as coming from named journalists, authors, and academics, including novelist Stephen King and tech journalist Kara Swisher. Grammarly never sought or obtained consent from any of the named experts whose identities it used to sell the feature. In response, <a href="https://www.nytimes.com/2026/03/13/opinion/ai-doppelganger-deepfake-grammarly.html">journalist Julia Angwin filed a lawsuit </a>against Grammarly&#8217;s parent company Superhuman, and the feature was pulled shortly after.</p><p><strong>Why It Matters:</strong> While the visual and auditory &#8216;likeness&#8217; of a person has been discussed in depth in the context of AI generated images and video, this incident highlights an additional question of whether a person&#8217;s style and personality should also be included. In her lawsuit, and related NYT Op-Ed, Angwin argues: &#8220;My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page,&#8221;. For content creators, journalists, and subject matter experts whose reputations are themselves a professional asset, the commercial use of an AI simulation of their expertise is an existential threat to their way of life, and capable of causing massive reputational harm if the system is wrong. In this case, Grammarly is exposed from a liability angle, but the broader question about what constitutes &#8216;likeness&#8217;, and what rights someone should have to it are still unresolved. Existing privacy laws that may cover a person&#8217;s visual likeness under the context of &#8216;biometric&#8217; data don&#8217;t clearly apply to a person&#8217;s &#8216;content style&#8217;.</p><p><strong>How to Mitigate:</strong> Before shipping any feature that associates named individuals with AI-generated output, confirm you have explicit written consent or a licensing agreement. Some music and <a href="https://www.cbc.ca/news/entertainment/matthew-mcconaughey-michael-caine-ai-9.6976757">film artists have started to sell these rights to AI platforms</a>. While AI laws targeting deep fakes are still being debated or implemented, existing tort law can still apply if there are reasonable claims of loss of income resulting from the AI. Right-of-publicity review should be a mandatory gate in the product development lifecycle for any feature involving real people&#8217;s names, styles, or personas, and that review needs to happen before engineering begins, not at launch.</p><p><strong>Key Takeaway:</strong> There are many open questions about what constitutes a person&#8217;s likeness, and further debates about what kinds of rights a person should have around their likeness, and how to balance these issues with freedom of speech. Broader questions around likeness after death, or how to enforce these things on a global scale are unlikely to be resolved any time soon.</p><h3>4. Policy Roundup</h3><p><strong>Anthropic vs. the Pentagon</strong></p><p>The Trump administration officially <a href="https://www.cnn.com/2026/03/09/tech/anthropic-sues-pentagon">labeled Anthropic a &#8220;supply-chain risk&#8221;</a> and banned government agencies and military contractors from using Claude<a href="https://www.cnn.com/2026/03/09/tech/anthropic-sues-pentagon"> </a>after contract negotiations broke down over two conditions Anthropic refused to drop. While they publicly threatened an aggressive stance on the issue, the formal notice was more limited in scope and only prohibits use of Claude to directly support DoD contracts. Despite the narrower designation, Anthropic <a href="https://techcrunch.com/2026/03/09/anthropic-sues-defense-department-over-supply-chain-risk-designation/">filed two suits against the DOD</a>, calling the actions &#8220;unprecedented and unlawful&#8221;. Major tech companies,<a href="https://www.axios.com/2026/03/16/tech-industry-rallies-anthropic-pentagon-fight"> including Microsoft, have voiced strong support for Anthropic. </a></p><p><strong>Our Take:</strong> Anthropic may have &#8216;lost&#8217; the battle, but in doing so, may be winning the war. At least PR wise, as consumer Claude downloads spiked as a result of the discussion. Organizations may need to ensure that they never get too deeply locked in with a single model provider in case LLM selections get further politicized over time. Organizations may also need to invest in processes for safely switching model providers.</p><p><strong>EU AI Act Amendments Near the Finish Line</strong></p><p>EU Parliament lawmakers <a href="https://iapp.org/news/a/meps-reach-preliminary-political-agreement-on-AI-omnibus">reached a political deal </a>within the Parliament that adds an explicit ban on AI-generated non-consensual intimate images and eases compliance rules for AI embedded in regulated products like medical devices. Separately, the <a href="https://www.consilium.europa.eu/en/press/press-releases/2026/03/13/council-agrees-position-to-streamline-rules-on-artificial-intelligence/">EU Council approved its negotiating position</a> on the Digital Omnibus, which would push back high-risk system deadlines by up to 16 months pending the availability of compliance standards.</p><p><strong>Our Take:</strong> Now that the 3 main bodies of the EU policy making apparatus (Commission, Council, Parliament) have solidified their positions, political negotiations will begin between them. Given the common positions, enforcement of the high risk AI requirements of the EU AI Act are unlikely in 2026. Enforcement in late 2027, or early 2028 is now likely.</p><p><strong>US Supreme Court Closes the Door on AI Copyright</strong></p><p>The US Supreme Court<a href="https://www.lexology.com/library/detail.aspx?g=d5a3a142-46eb-4c11-9e0f-1c743c8fd467"> declined to hear an appeal </a>on <em>Thaler v. Perlmutter</em>, leaving intact the rule that works generated entirely by AI are ineligible for copyright protection. For now, Human authorship will continue to be a requirement for receiving IP protections in the US.</p><p><strong>Our Take:</strong> Requiring human involvement to receive IP protections is a good balancing act in the AI ecosystem as it protects content creators in a fair way. However we expect more aggressive lobbying by tech firms on this issue over the next few years. </p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[AI Makes Work More Intense]]></title><description><![CDATA[Also, why repeating your prompt can improve accuracy, why context from the physical world is essential in healthcare AI, Trustible partnership announcement, and policy updates]]></description><link>https://insight.trustible.ai/p/ai-makes-work-more-intense</link><guid isPermaLink="false">https://insight.trustible.ai/p/ai-makes-work-more-intense</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Thu, 26 Feb 2026 12:03:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!35dW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ol><li><p>AI Tools Are Making Employees Work More, Not Less</p></li><li><p>Technical Explainer: Prompt Repetition</p></li><li><p>Trustible Joins Coalition for Health AI</p></li><li><p>AI Incident Spotlight - When AI Alerts Lack Sufficient Context</p></li><li><p>Policy Round-up</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!35dW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!35dW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!35dW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!35dW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!35dW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!35dW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!35dW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!35dW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!35dW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!35dW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F20c4a11f-0167-4158-ba6a-0a58d08b7529_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(Source: Google Gemini)</p><h2><strong>1.  AI Tools Are Making Employees Work More, Not Less</strong></h2><p>The sales pitch for enterprise AI adoption goes something like this: AI handles the tedious stuff, your employees focus on higher-value work, everyone&#8217;s happier and more productive. <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it?ab=HP-latest-text-8">New research described in the Harvard Business Review</a> suggests the reality is closer to the opposite. In an eight-month study of a 200-person tech company, researchers found that generative AI tools didn&#8217;t reduce workloads. They intensified them across three dimensions: employees expanded into tasks outside their roles (designers writing code, PMs debugging), work bled into breaks and off-hours as the low friction of &#8220;one more prompt&#8221; eroded boundaries, and constant multitasking across parallel AI workflows created persistent cognitive load. None of this was mandated. Workers did it voluntarily because AI made &#8220;doing more&#8221; feel accessible and even enjoyable.</p><p>For AI governance professionals, this matters because it complicates the risk calculus around enterprise AI deployment. The obvious governance concerns with AI tools, things like data leakage, IP exposure, and accuracy, are well understood. But work intensification introduces a quieter set of risks that most AI governance frameworks don&#8217;t account for. Employees operating outside their core competencies with AI assistance means more AI-generated or AI-assisted outputs flowing through an organization with less qualified review. The researcher&#8217;s finding that engineers spent increasing time correcting &#8220;vibe-coded&#8221; pull requests from non-engineering colleagues is a concrete example of how quality control can quietly degrade. Meanwhile, the burnout cycle the study describes, where early productivity gains give way to cognitive fatigue and lower decision quality, suggests that organizations measuring AI&#8217;s impact purely through short-term output metrics are likely overstating the long term benefits.</p><p><strong>Key Takeaway:</strong> Reviewing AI outputs across multiple tasks may be cognitively burning teams out, and expectations for productivity and outcomes from executives are rising. AI use policies may need to adapt to include acceptable patterns for taking a break from AI, and understand the cognitive impacts of rapid context switching and constant reviewing of AI outputs.</p><h2><strong>2. Technical Insight: Prompt Repetition</strong></h2><p><a href="https://arxiv.org/pdf/2512.14982">Google researchers recently published a paper showing</a> that simply copying and pasting a prompt twice into the input, with no other changes, consistently improves LLM accuracy across Gemini, GPT, Claude, and Deepseek models. The technique, called &#8220;prompt repetition,&#8221; won 47 out of 70 benchmark tests with zero losses, added no meaningful latency, and didn&#8217;t change the length or format of the model&#8217;s output. Because LLMs process tokens left to right, early tokens in a prompt can&#8217;t &#8220;see&#8221; later ones. Repeating the prompt gives every token a second pass where it can attend to the full context. There&#8217;s also a simpler intuition at play: repetition likely strengthens the internal representations of the input, effectively increasing the &#8220;weight&#8221; the model assigns to the prompt&#8217;s content relative to its prior training biases. The gains are modest on standard benchmarks but dramatic on tasks requiring attention to information buried in long inputs, exactly the kind of problem that plagues document-heavy enterprise workloads.</p><p>For governance teams, the more interesting implication is what this says about evaluation. If a trivial input transformation can meaningfully shift benchmark scores, it raises questions about how stable published model evaluations really are. Two organizations testing the same model with slightly different prompt formats could reach very different conclusions about its reliability. Prompt repetition works best when reasoning mode is off, which is how most enterprise API calls operate for classification, extraction, and structured output tasks, meaning it&#8217;s a real and essentially free improvement. But its bigger lesson is that small methodological choices in evaluation can have outsized effects on results.</p><p><strong>Key Takeaway:</strong> Prompt repetition is worth testing for non-reasoning API workloads, but governance teams should treat it as a reminder that model evaluations are more fragile than published results suggest, and should account for prompt sensitivity when comparing models or setting performance thresholds.</p><h3><strong>3. Trustible Joins the Coalition for Health AI</strong></h3><p>We&#8217;ve partnered with the Coalition for Health AI (CHAI) to bring CHAI&#8217;s AI Governance Framework directly into the Trustible platform. Healthcare organizations can now map their AI governance activities to CHAI&#8217;s guidance with structured workflows, healthcare-specific risk assessments, and audit-ready documentation. For health systems trying to move from AI ambition to confident deployment, this removes the need to build governance practices from scratch or adapt generic frameworks that miss healthcare&#8217;s context. Read more about the partnership<a href="https://trustible.ai/post/trustible-partners-with-coalition-for-health-ai-to-accelerate-responsible-ai-adoption-in-healthcare/"> here</a>.</p><h3><strong>4. AI Incident Spotlight - When AI Alerts Lack Sufficient Context (<a href="https://incidentdatabase.ai/cite/1374/">Incident 1374</a>)</strong></h3><p><strong>What Happened:</strong> A nurse at a Nevada hospital described an episode in which the facility&#8217;s AI sepsis alert system flagged an elderly patient with low blood pressure and triggered urgent protocol steps, including IV fluids. The nurse noticed the patient had a dialysis catheter, meaning her kidneys were already compromised. Pumping IV fluids into a patient who can&#8217;t process them risks dangerous fluid overload. When the nurse objected, he was told to proceed anyway because the AI had generated the alert. He refused, and a physician ultimately intervened with an alternative treatment that avoided the risk. The incident,<a href="https://www.scientificamerican.com/article/ai-is-entering-health-care-and-nurses-are-being-asked-to-trust-it/"> reported by Scientific American in February,</a> is part of a broader pattern of clinical AI systems generating recommendations that conflict with what&#8217;s observable at the bedside.</p><p><strong>Why it Matters:</strong> The model was working as designed. It detected signals consistent with sepsis and triggered the correct protocol. The problem is that it had no way to know about the dialysis catheter, a piece of real-world physical context visible to any clinician in the room but absent from the electronic health record the model was reading. This is a recurring blind spot: clinical AI systems operate on structured digital data, but a significant share of relevant information only exists at the bedside. What the alert did next is the governance concern. It created institutional momentum. Protocol kicked in, and the nurse&#8217;s clinical objection was initially treated as non-compliance. The AI didn&#8217;t just inform the decision, it set the default, and overriding it required escalation.</p><p><strong>How to Mitigate:</strong> Organizations deploying clinical AI alerts need workflows where alerts inform rather than direct. That means building override mechanisms that don&#8217;t require escalation and ensuring frontline staff have clear authority to act on evidence that contradicts a model&#8217;s output. This incident also highlights the limits of models that rely solely on digitized records. Physical context, like a dialysis catheter, is exactly the kind of information that&#8217;s hard for a model to ingest. Until that gap closes, human review is the only reliable way to catch what the system can&#8217;t see.</p><p><strong>Key Takeaway:</strong> An AI system is only as good as its inputs, which includes any and all relevant context. Missing context for a system is one of the biggest potential sources of error and one that often can be missed in many of the &#8216;evaluations&#8217; that are done to a system before deployment. Knowing what information an AI system can access, and what it&#8217;s input limits are is an essential part of governance.</p><h2>5. Policy Roundup</h2><p><strong>DOD and Anthropic.</strong> Anthropic has been in a <a href="https://www.nbcnews.com/tech/security/anthropic-pentagon-us-military-can-use-ai-missile-defense-hegseth-rcna260534">heated battle</a> with the Department of Defense (DoD) over the use of their models. The DoD has put pressure on Anthropic to drop safeguards and allow for their models to be used from a wider range of military purposes. </p><p><strong>Our Take: </strong>While AI safety has been prioritized under the Trump Administration, forcing a model provider to lower safeguards will have ripple effects across the ecosystem and may worsen the trust gap with AI. </p><p><strong>Utah&#8217;s own RAISE Act.</strong> Lawmakers in Utah have been quietly working on <a href="https://le.utah.gov/~2026/bills/static/HB0286.html">passing a law</a> that is very similar to California&#8217;s SB 53 and New York&#8217;s RAISE Act. The bill is currently making its way through the state house. One key difference is an added requirement for model providers to develop a child safety plan for their models.  </p><p><strong>Our Take:</strong> The Trump Administration has sought to deter state action on AI, but concerns with AI safety, as well as its impacts on jobs and children, have been relatively bipartisan.  </p><p><strong>India AI Summit. </strong>India recently held the latest global AI summit, which is a continuation of the work started in Paris last year and the UK in 2024. The focus was mainly on AI innovation and attendees <a href="https://www.politico.eu/article/world-weary-europe-eu-approach-ai-new-delhi-india/">scolded the EU</a> for its prescriptive approach to AI oversight.  </p><p><strong>Our Take: </strong>The shift in global attitudes towards AI regulations has been swift and the focus of these global AI summits showcases that dramatic turnabout. </p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><ul><li><p><strong>Asia. </strong>The newly elected Japanese government will <a href="https://www.nippon.com/en/news/yjj2026021900997/">host a ministerial meeting</a> to discuss how AI can be effectively utilized by the government. In Korea, the National AI Strategy Committee <a href="https://doc.msit.go.kr/SynapDocViewServer/viewer/doc.html?key=46e5193af9e54c478d6a0088bf2acf02&amp;convType=html&amp;convLocale=ko_KR&amp;contextPath=/SynapDocViewServer/">adopted a final AI Action Plan</a> at its second plenary session.</p></li><li><p><strong>Australia. </strong>The Australian government is <a href="https://www.abc.net.au/news/2026-02-24/ai-body-scrapped-15-months-spent-experts/106381560">scrapping its AI Advisory Board</a> after launching it 15 months ago. The Advisory was charged with setting out recommendations for AI safeguards. </p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[The Dangers of Desktop Agents]]></title><description><![CDATA[Also, what is &#8216;context engineering&#8217;, why understanding training data sources is important, the heaviest AI legislative proposal in the US Senate, and exciting Trustible announcements!]]></description><link>https://insight.trustible.ai/p/the-dangers-of-desktop-agents</link><guid isPermaLink="false">https://insight.trustible.ai/p/the-dangers-of-desktop-agents</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Thu, 05 Feb 2026 12:45:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RqEO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Thursday, and welcome to the latest edition of the Trustible AI Newsletter! It&#8217;s been a busy few weeks for us, and we&#8217;ve got a few very exciting partnership announcements to check out below.<a href="https://insights.trustible.ai/ai-monitoring"> </a>Here&#8217;s our team&#8217;s latest insights:</p><ol><li><p>The Dangers of Desktop Agents</p></li><li><p>Trustible Partnership Announcements</p></li><li><p>Tech Explainer: Context Engineering</p></li><li><p>Incident Roundup: CASM Found in Major Image Training Dataset</p></li><li><p>Trustible&#8217;s Top AI Policy Stories</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RqEO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RqEO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 424w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 848w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 1272w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RqEO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RqEO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 424w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 848w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 1272w, https://substackcdn.com/image/fetch/$s_!RqEO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79d258aa-db75-41d7-b599-336a7558997b_2048x1117.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(Source: Google Gemini w/ Nano Banana)</p><h2>1. The Dangers of Desktop Agents</h2><p>Two similar desktop agentic AI apps have made headlines in the past few weeks. Anthropic&#8217;s <a href="https://venturebeat.com/orchestration/claude-cowork-turns-claude-from-a-chat-tool-into-shared-ai-infrastructure">Claude Cowork</a> brings their Claude Code capabilities to non-developers via a sandboxed macOS app that can read, edit, and create files autonomously. <a href="https://openclaw.ai/">OpenClaw</a> (formerly Moltbot, formerly Clawdbot) takes a slightly different approach allowing users to interact through WhatsApp or Telegram and then connecting to 100+ services for arbitrary agentic flows. While these apps have desktop apps, they still usually interact with LLMs hosted in the cloud, as most consumer grade computers cannot yet host sufficiently powerful LLMs.</p><p>For most organizations, these tools will be difficult to greenlight anytime soon. Agentic browsers like OpenAI Atlas already present massive security and privacy risks, and these general purpose desktop apps take the risks a step further. The core problem is that an automated system acting on behalf of a human breaks assumptions baked into most security architectures. Access controls, audit logs, and anomaly detection are built around the idea that a human is on the other end. Agents blur that line in ways that aren&#8217;t easy to monitor or contain. They&#8217;re also vulnerable to hijacking via prompt injection. <a href="https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare">Cisco researchers have already </a>demonstrated a malicious OpenClaw &#8220;skill&#8221; that exfiltrated data while bypassing safety guidelines entirely. Furthermore, as both tools run as desktop apps, they can still send your sensitive local files and context to hosted LLMs in the cloud where data may be permanently stored or processed. For companies with strict data handling policies, that&#8217;s a non-starter.</p><p><strong>Key Takeaway:</strong> These tools are impressive and getting a lot of attention online, but they are far away from being enterprise-ready.Most organizations will need to wait for better sandboxing, clearer data handling policies, and security architectures that actually account for non-human actors. Many organizations will quickly move to block such desktop applications via existing device management tools.</p><h2>2. Trustible Announcements</h2><p>Here&#8217;s a quick recap of some major announcements by Trustible in the past few weeks</p><p><strong>Trustible Announces Strategic Partnership with Leidos</strong></p><p>We&#8217;ve partnered with Leidos to bring automated AI governance to government agencies. In a proof-of-concept engagement, Leidos used Trustible&#8217;s platform to compress governance intake processes from weeks to hours, demonstrating that automation can reduce friction while maintaining the oversight that mission-critical environments require. Read the full announcement <a href="http://trustible.ai/post/leidos-and-trustible-launch-joint-initiative-to-redefine-ai-governance-with-agents">here</a>.</p><p><strong>Trustible Partners with the AI Incident Database</strong></p><p>Trustible is now the lead corporate sponsor for the AI Incident Database (AIID), the most widely used public repository of real-world AI harms. Through this partnership, Trustible customers will be able to cross-reference their AI inventories against documented incidents and receive alerts when new incidents relate to models or vendors they&#8217;re tracking. Read more about the announcement <a href="http://trustible.ai/post/trustible-leads-inaugural-sponsor-cohort-for-the-ai-incident-database">here</a>.</p><p><strong>Trustible Publishes Pragmatic AI Policy Paper</strong></p><p>We&#8217;ve published A Pragmatic Blueprint for AI Regulation, a policy paper offering a middle-ground framework for AI governance built around shared liability, copyright balance, child protection, content provenance, and information sharing. The paper argues that closing the AI adoption gap requires trust, and trust requires clear rules without stifling innovation. We decided to take stances on a few core AI policy areas that are often ignored in the larger &#8216;doomer&#8217; vs &#8216;optimist&#8217; debates, and to advocate for regulation that businesses actually want. Check out the <a href="http://trustible.ai/post/a-pragmatic-blueprint-for-ai-regulation">whitepaper here</a>.</p><h2>3. Tech Explainer: Context Engineering</h2><p>In recent months, <em>context engineering</em> has replaced <em>prompt engineering</em> as the focus for building effective AI agents. While the focus of prompt engineering has been on developing techniques that make an LLM respond to a specific query effectively (e.g. telling the model to &#8220;think step by step&#8221; or giving it an example output), context engineering refers to the methodology of giving an agent the right set of information at the right time. This may be challenging because at any given point, an agent may have access to a broad range of assets like tools, internal memory, external data stores, and the conversation history; however, all of this information still needs to be processed by an LLM that can process a limited number of tokens at a time. While leading frontier models can process 250k words at a time, they may not digest and recall all the tokens properly. Common context management techniques include summarization (where a separate LLM is used to condense the context), sub-agents (where each agent only needs partial context) and the use of memory (where an agent adds information to an outside store that can be referenced as necessary).</p><p>While it enables agents to perform more complex tasks, the use of memory, in particular, introduces new governance concerns. A <a href="https://cdt.org/wp-content/uploads/2025/12/2025-12-10-CDT-AI-Gov-Lab-A-Roadmap-For-Responsible-Approaches-to-AI-Memory-final-1.pdf">recent study</a> by the Center for Democracy and Technology identified that users&#8217; key concerns around memory include:</p><ul><li><p>Persistence: Who has control over when and how memories are deleted?</p></li><li><p>Privacy: Can memories inadvertently be shared with additional tools/systems? </p></li><li><p>Transparency: Can a user review all the memories associated with them? </p></li></ul><p>There are not yet well established best practices for managing this, and many regulations and risk frameworks don&#8217;t have specific considerations for context or memory governance. </p><p><strong>Key Take-away: </strong>Organizations deploying AI agents will need to develop new practices that bridge traditional data governance with the dynamic nature of context management. This includes defining clear policies for memory lifecycles, implementing context boundaries between different use cases, and ensuring users maintain meaningful control over their data.</p><h2>4. AI Incident Spotlight: CSAM Found in Dataset Used to Train Content Moderation Tools <a href="https://incidentdatabase.ai/cite/1349/">(Incident 1349</a>)</h2><p><strong>What Happened:</strong> The Canadian Centre for Child Protection (C3P) discovered that NudeNet, a dataset of over 700,000 images used to train AI nudity detection tools, contained approximately 680 images of suspected or confirmed child sexual abuse material (CASM). More than 120 depicted identified or known victims. The dataset had been freely available on Academic Torrents since 2019, and C3P identified over 250 academic works that either cited or used NudeNet or classifiers trained on it. Researchers who downloaded the dataset unknowingly possessed and distributed illegal material. Following C3P&#8217;s takedown notice, Academic Torrents removed the dataset, but the classifiers and models derived from it remain in circulation.</p><p><strong>Why it Matters:</strong> This is the second major incident involving CSAM in AI training data following the 2023 discovery of similar material in <a href="https://laion.ai/blog/relaion-5b/">LAION-5B</a>.  This highlights the ongoing problem with large-scale web scraping without rigorous vetting that captures illegal and harmful content. But the NudeNet case is particularly troubling because the dataset was specifically designed for content moderation. Tools built to detect harmful imagery were themselves trained on it.</p><p>Given NudeNet&#8217;s wide distribution and six-year availability, the contamination likely extends beyond academic research. Foundational models and commercial content moderation systems may have incorporated NudeNet or its derivatives without disclosure. Without transparency into training data provenance, downstream deployers inherit risks they cannot assess. Model cards rarely disclose specific dataset sources, let alone whether those sources were vetted for illegal content.</p><p>The incident also illustrates a difficult dual-use problem. Building effective detection systems for harmful content often requires training on examples of that content. But assembling such datasets creates its own harms as it perpetuates distribution of illegal material, re-victimizes survivors whose images are included, and exposes researchers to legal liability. The goal of protecting against abuse inadvertently extends it without proper controls in place.</p><p><strong>How to Mitigate:</strong> Require training data documentation from model providers, particularly for content moderation systems. If a vendor cannot explain how their training data was sourced and vetted, treat that as a material risk factor. For organizations that must include toxic content in datasets for detection purposes, that data should only be handled under strict access controls, legal compliance frameworks, and coordination with organizations like C3P or NCMEC that maintain vetted hash databases for this purpose. The NudeNet dataset existed for six years before anyone flagged it. That&#8217;s a long time to be unknowingly distributing illegal content.</p><h2>Policy Roundup</h2><p><strong>The Trouble with Trump&#8217;s AI Policy.</strong> There is a growing divide between the Trump Administration&#8217;s position on AI and Republican lawmakers. Republicans at the state and federal level are currently at odds with the Trump Administration&#8217;s &#8220;AI innovation&#8221; ethos, with Republicans pushing for more oversight of the technology (e.g., Senator Marsha Blackburn&#8217;s <a href="https://www.blackburn.senate.gov/2025/12/technology/blackburn-unveils-national-policy-framework-for-artificial-intelligence">TRUMP AMERICA AI Act</a>).     </p><p><strong>Our Take: </strong>Outside of the Executive Branch, Republicans have been more skeptical of AI and have sought some safeguards around the technology. This divide will set up an interesting clash with the Department of Justice as it seeks to evaluate the constitutionality of state laws under the Trump AI Moratorium Executive Order.</p><p><strong>More Turmoil with the EU AI Act.</strong> Lawmakers in the EU continue to struggle with implementing the EU AI Act. The European Commission missed its deadline for publishing draft guidance for classifying high-risk AI systems, while France is pushing a behind-the-scenes effort to separate the AI Act amendments from the EU&#8217;s larger Digital Omnibus package.  </p><p><strong>Our Take:</strong> The continuing drama over the EU AI Act is making it difficult for companies to understand their obligations and deadlines under the law. It may also show that EU lawmakers tried to do too much in one law, with the consequences now on full display. </p><p><strong>Singapore&#8217;s New Agentic Governance Framework.</strong> Singapore&#8217;s Infocomm Media Development Authority published the &#8220;<a href="https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf">Model AI Governance Framework for Agentic AI</a>.&#8221; This framework is one of the first comprehensive governance frameworks for agents from a government entity. </p><p><strong>Our Take: </strong>Policymakers are usually behind the curve on technology but Singapore has been an outlier on AI guidance, producing a host of practical AI-related materials. </p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><ul><li><p><strong>Africa.</strong> Egypt will be hosting the first <a href="https://www.egypttoday.com/Article/3/144845/Egypt-to-Host-Inaugural-%E2%80%98AI-Everything%E2%80%99-Middle-East-and-Africa">AI Everything Middle East &amp; Africa Summit</a>, which is intended to emphasize the region&#8217;s focus on digital development.</p></li><li><p><strong>Asia.</strong> South Korea&#8217;s AI law came into effect last month, with a one year grace period for enforcement. The law has <a href="https://www.theguardian.com/world/2026/jan/29/south-korea-world-first-ai-regulation-laws">recently faced major pushback</a> from the tech industry for going &#8220;too far&#8221; and from advocacy groups for not going far enough.</p></li><li><p><strong>North America. </strong>The Mexican government<strong> </strong>released a <a href="https://secihti.mx/sala-de-prensa/presentan-declaracion-de-etica-y-buenas-practicas-para-el-uso-y-desarrollo-de-la-ia-en-mexico-secihti-y-atdt/'">Declaration of Ethics and Good Practices for the Use and Development of AI</a>. The declaration outlines ten fundamental principles that serve as &#8220;a non-binding guide for public institutions, government agencies, autonomous bodies, as well as actors from the private and social sectors.&#8221;</p></li><li><p><strong>South America. </strong>A <a href="https://www.courthousenews.com/brazils-ai-take-on-taylor-swift-tests-limits-of-copyright-law/">fake version</a> of Taylor Swift&#8217;s &#8220;Fate of Ophelia&#8221; is testing the limits of Brazil&#8217;s IP law. The song, &#8220;A Sina de Of&#233;lia,&#8221; was generated by AI using voices from two well-known Brazilian pop artists.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Why OpenAI and Anthropic Are Building Dedicated Health Applications]]></title><description><![CDATA[Also, how are AI healthcare tools evaluated, why Grok is getting bipartisan criticism, and the latest policy roundup.]]></description><link>https://insight.trustible.ai/p/why-openai-and-anthropic-are-building</link><guid isPermaLink="false">https://insight.trustible.ai/p/why-openai-and-anthropic-are-building</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 21 Jan 2026 13:03:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HhEY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HhEY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HhEY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HhEY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Happy Wednesday, and welcome to the latest edition of the Trustible AI Newsletter! We&#8217;ve got a lot of exciting news to share in the next few weeks, but for now, be sure to <a href="https://insights.trustible.ai/ai-monitoring">download our latest whitepaper on AI monitoring! </a>Here&#8217;s our team&#8217;s latest insights:</p><ol><li><p>Why OpenAI and Anthropic Built Dedicated Health Applications</p></li><li><p>How to Evaluate Healthcare AI</p></li><li><p>Grok&#8217;s Generation of Non-consensual Intimate Imagery</p></li><li><p>Trustible&#8217;s Top AI Policy Stories</p></li></ol><h2>1. Why OpenAI and Anthropic Built Dedicated Health Applications</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ysn3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ysn3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ysn3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png" width="1024" height="559" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:871262,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/185256326?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ysn3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 424w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 848w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 1272w, https://substackcdn.com/image/fetch/$s_!Ysn3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcda6ef06-2e6c-4973-ad8c-76acea2b07b0_1024x559.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(Source: Google Gemini)</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Trustible Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Within a few days of each other, both OpenAI and Anthropic announced dedicated &#8216;health&#8217; versions of their AI platforms, <a href="https://openai.com/index/introducing-chatgpt-health/">ChatGPT Health</a>, and <a href="https://www.anthropic.com/news/healthcare-life-sciences">Claude for Healthcare</a>. According to their own data, over 230 million health related requests are made per week on ChatGPT, and it&#8217;s one of the top use cases. The dedicated health applications will have specific connectors for healthcare related databases, integrations with fitness and wellness platforms, and health related questions will now be &#8216;routed&#8217; to the dedicated healthcare application instead of being answered.</p><p>There are several likely motivations behind this split: opening new revenue streams, competing against health-specific wrapper companies, and a desire for more data. But we&#8217;ll focus on regulatory and compliance motivations. Under many AI related laws such as the EU AI Act, an AI service providing medical advice would be categorized as &#8216;high risk&#8217;, triggering a number of heavy compliance obligations. Many existing privacy, liability, and security laws surrounding health data also apply. By carving out health related uses into a dedicated app, both companies can build dedicated guardrails, infrastructure, and processes around the sensitive application. This allows their non-health applications to still innovate quickly without getting bogged down by compliance. And by putting significant effort into routing users to a &#8216;safer&#8217; application for health related queries, these companies will be able to claim that the non-health version of their applications are not &#8216;intended&#8217; for this high risk domain and are not marketed as such. This is actually not the first carve-out, as both companies already support a <a href="https://openai.com/global-affairs/introducing-chatgpt-gov/">similarly dedicated platform for the US public sector.</a> Much like with healthcare, these platforms have dedicated infrastructure, customized security and privacy controls, and a different set of guardrails.</p><p><strong>Key Takeaway:</strong> Regulation has always shaped product architecture, and we should expect big AI companies to create dedicated platforms for each high-risk domain these laws identify. </p><h2>2. Tech Explainer: How to Evaluate Healthcare AI</h2><p>Given the recently announced healthcare applications from <a href="https://openai.com/index/healthbench/">OpenAI</a> and<a href="https://www.anthropic.com/news/healthcare-life-sciences"> Anthropic</a>, alongside Utah&#8217;s new<a href="https://commerce.utah.gov/2026/01/06/news-release-utah-and-doctronic-announce-groundbreaking-partnership-for-ai-prescription-medication-renewals/"> pilot program</a>, it&#8217;s worth digging into how these systems were evaluated. Obviously since there are bespoke capabilities, guardrails, and integrations in these applications, they require customized testing, and there is already a growing ecosystem of healthcare related benchmarks and evaluations, albeit with many limitations. Here&#8217;s a quick analysis from the information released so far:</p><p><strong>ChatGPT Health</strong></p><p>OpenAI evaluated using <strong>HealthBench</strong>, a benchmark of realistic healthcare conversations with physician-created rubrics. Key strengths of his benchmark are:</p><ul><li><p>Multi-turn conversations (superior to single-turn evals as quality often degrades over time)</p></li><li><p>Multi-faceted rubrics evaluating medical accuracy, communication quality, and jargon avoidance, scored via LLM-as-a-judge</p></li><li><p>Development by 262 physicians from 60 countries, improving cross-cultural validity</p></li></ul><p>However, the benchmark doesn&#8217;t fully account for varied input formats from healthcare record providers (these integrations are central to the platform). Performance for &#8220;ChatGPT Health&#8221; was not published; but <a href="https://arxiv.org/pdf/2601.03267">gpt-5-thinking scored </a><strong><a href="https://arxiv.org/pdf/2601.03267">67.2%</a></strong>. It is unclear how this score translates to real world outcomes.</p><p><strong>Claude for Healthcare</strong></p><p>Anthropic&#8217;s announcement covered two functionalities: supporting healthcare professionals with prior authorizations and care coordination, and helping individuals summarize medical history and prepare for appointments. They reported Claude-4.5-Opus performance on <a href="https://ai.nejm.org/doi/pdf/10.1056/AIdbp2500144">MedAgentBench</a>, which assesses agent capabilities in medical records contexts with pre-defined tools. While co-developed by physicians, it&#8217;s a proxy metric, and Claude&#8217;s actual tools differ from the sandbox environment. No evaluations addressed the personal healthcare scenario.</p><p><strong>Utah Auto-Refill Program</strong></p><p>Utah&#8217;s automated refill pilot uses Doctronic&#8217;s algorithm, which <a href="https://www.remio.ai/post/utah-ai-prescription-refills-how-doctronic-approves-meds-without-a-doctor#:~:text=The%20data%20presented%20to%20support%20this%20move%20is%20substantial.%20In%20a%20previous%20test%20involving%20500%20urgent%20care%20cases%2C%20Doctronic%E2%80%99s%20algorithm%20matched%20the%20treatment%20decisions%20of%20human%20doctors%2099.2%25%20of%20the%20time.%20The%200.8%25%20variance%20was%20not%20necessarily%20error%2C%20but%20difference%20in%20clinical%20judgment">showed 99.2% agreement</a> with physician decisions in testing. Unlike the broader AI systems, this narrowly scoped use case allows direct performance measurement on the actual task. However, testing used urgent care cases with physician-entered data, which may differ from patient chatbot inputs in production.</p><p><strong>Our Take: </strong>Benchmarks are a useful evaluation tool, but they imperfectly simulate real-world conditions because they imperfectly capture aspects of real-world conditions like external data formats and real agentic tools. In addition, for the open-ended consumer products, it is not clear how these scores will translate to improved clinical outcomes. While the current systems have been released with a number of safeguards, better reporting standards will be necessary as AI Healthcare tools become more commonplace.</p><h2>3. AI Incident Spotlight: Grok&#8217;s Generation of Non-Consensual Intimate Imagery (<a href="https://incidentdatabase.ai/cite/1329/">Incident 1329</a>)</h2><p><strong>What Happened:</strong> In late December 2025, users discovered that xAI&#8217;s Grok would readily &#8220;undress&#8221; women in photos, manipulating existing images to create sexualized deepfakes without consent. The flood of content included images of celebrities, private individuals, and minors. Viral prompts ranged from &#8220;put her in a bikini&#8221; to far worse. Despite reports, X was slow to respond. Even one of Musk&#8217;s ex-partners struggled to get deepfakes of herself removed. After international backlash, X announced partial restrictions in mid-January, but the standalone Grok Imagine app continues generating explicit imagery.</p><p><strong>Why it Matters:</strong> This isn&#8217;t a fringe product. <a href="https://www.pbs.org/newshour/show/musks-grok-ai-faces-more-scrutiny-after-generating-sexual-deepfake-images">Days after Secretary Hegseth announced that Grok would be integrated into Pentagon systems</a>, including classified networks, regulators in at least a dozen countries <a href="https://www.techpolicy.press/tracking-regulator-responses-to-the-grok-undressing-controversy/">launched investigations or outright bans</a>. The same model generating what California&#8217;s Attorney General <a href="https://www.axios.com/2026/01/16/xai-california-elon-musk-deepfakes-children-grok">called an &#8220;avalanche&#8221; of illegal content</a> is being deployed to 3 million DoD personnel.</p><p>The political dimension matters too. Even policymakers who <a href="https://x.com/SenTedCruz/status/2009005328709697848">oppose AI regulation have consistently carved out exceptions for child safety</a>. This is one area with genuine bipartisan consensus, and incidents involving minors accelerate legislative action. By pushing boundaries on content moderation, xAI may be generating exactly the public backlash that fuels demand for stricter AI regulation across the board. Every headline about AI-generated sexual material erodes trust in AI broadly, not just in Grok.</p><p><strong>How to Mitigate:</strong> Treat content moderation capabilities as a procurement criterion. Before deploying any image generation system, request documentation on what categories are blocked and how. For organizations considering Grok or X API integration, this incident warrants a serious risk assessment, particularly for customer-facing applications where generated content could create legal exposure. </p><h2>4. AI Policy Roundup</h2><p><strong>Next Steps on AI Moratorium EO.</strong> The Department of Justice <a href="https://www.justice.gov/ag/media/1422986/dl?inline">issued a memo</a> establishing a task force to challenge state AI laws, as directed under the <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Trump AI Moratorium EO</a>. The EO also calls for federal legislative proposals to regulate AI, but the director of the Office of Science and Technology Policy <a href="https://meritalk.com/articles/trump-ai-plan-faces-lawmaker-skepticism-over-state-preemption/">offered few details</a> on the Administration&#8217;s plans at a recent congressional hearing. </p><p><strong>Our Take: </strong>The EO spurred controversy even before it was signed because of the power it attempts to assert over states to regulate AI. It has not appeared to blunt momentum at the state level to pass AI laws, as several state legislatures have introduced bills in 2026.  </p><p><strong>ChatGPT&#8217;s Confidentiality Quagmire.</strong> Sam Altman <a href="https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/">recently asserted</a> that OpenAI does not have an obligation to keep sensitive information confidential when people use ChatGPT as a therapist. Altman acknowledged that privacy concerns with AI may hinder adoption.</p><p><strong>Our Take:</strong> Model providers are further blurring the lines between their products and privacy obligations. Health privacy laws like HIPAA and HITECH do not explicitly cover products like ChatGPT, but as the models expand into offering health services (e.g., digital therapy) that may change.  </p><p><strong>Congress Targets Deepfake Porn.</strong> Congress is considering <a href="https://www.axios.com/newsletters/axios-pm-35671d9d-0819-4de3-ab4a-1f5fc4ac6021.html?chunk=1&amp;utm_term=emshare#story1">bipartisan legislation</a> that would allow victims to sue over nonconsensual sexual images. The DEFIANCE Act passed the Senate unanimously and will head to the House.</p><p><strong>Our Take: </strong>Congress is responding to concerns over Grok&#8217;s capabilities to produce fake, sexually explicit deepfakes. This is one of the rare times that federal lawmakers agree on creating a private right of action, which allows individuals to sue. The law does not ban these images, though is thought of as a complement to the TAKE IT DOWN Act. </p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><ul><li><p><strong>Africa.</strong> Nigeria is working towards <a href="https://developingtelecoms.com/telecom-business/telecom-regulation/19609-nigeria-leading-the-way-on-ai-regulation-in-africa.html">passing a comprehensive AI law</a>, making it among one of the first countries in Africa to enact such a law. The law is primarily focused on safeguards for high-risk systems, and would allow regulators to demand information from providers for non-compliance. The law is expected to be enacted in March 2026.</p></li><li><p><strong>Asia.</strong> The Taiwanese legislature <a href="https://www.taipeitimes.com/News/front/archives/2025/12/24/2003849407">passed an AI basic law</a> towards the end of 2025 and that law came into effect on January 14, 2026. The law outlines a series of principles for AI development and deployment, though has no specific enforcement mechanism. </p></li><li><p><strong>Europe.</strong> Regulators in the EU and UK are considering consequences for AI tools that can create sexually explicit images. The concerns come in the wake of the controversy over Grok&#8217;s ability to produce sexualized images. EU lawmakers are <a href="https://www.politico.eu/article/european-parliament-lawmakers-call-for-full-ban-on-ai-nudifying-apps/">considering banning</a> the technology altogether and the <a href="https://www.bbc.com/news/articles/cq845glnvl1o">UK government</a> is threatening to revoke xAI&#8217;s ability to self-regulate.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Trustible Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Trustible's 2026 AI Predictions]]></title><description><![CDATA[2025 was a transformative year for AI - and we're forecasting even more consequential changes in 2026 across AI governance and the technical, policy, and business landscapes.]]></description><link>https://insight.trustible.ai/p/trustibles-2026-ai-predictions</link><guid isPermaLink="false">https://insight.trustible.ai/p/trustibles-2026-ai-predictions</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 07 Jan 2026 17:02:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wbbq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wbbq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wbbq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wbbq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1314269,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/183806928?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wbbq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!wbbq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9462c6c7-5cd0-4d89-9038-651517aecf3b_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Happy Wednesday, Happy New Year, and welcome to the first 2026 edition of the Trustible AI Newsletter! 2025 proved to be a critical - but tumultuous - year in the world of AI, and we don&#8217;t anticipate that trend changing in 2026. But, as we navigate what&#8217;s to come across the technical, policy, and business landscape of AI, we do believe in one constant: that 2026 will be a transformative year for AI governance, as it becomes the primary business imperative that will drive how enterprises will actualize the positive ROI of AI.</p><p>In this week&#8217;s edition, we&#8217;re sharing our 2026 predictions across what&#8217;s in store for AI governance, AI technical trends, AI incident trends, and what&#8217;s around the corner in the policy and regulatory sphere. </p><p>Let&#8217;s dig in.</p><div><hr></div><h3>1. Trustible&#8217;s 2026 AI Governance Predictions</h3><p>We aren&#8217;t alone in predicting that 2026 will be the &#8220;make or break&#8221; year for AI. There are a number of consequential questions that will likely be answered in 2026, including whether AI agents will be adopted at scale, whether major AI regulations in the EU and across U.S. states will actually come into force in their current form, and whether the AI bubble will burst, or continue to grow. These are monumental questions that policymakers and professional talking heads will continue to debate, but all of them also have implications for teams tackling AI governance. </p><p>Here are our predictions on AI governance for 2026:</p><ul><li><p><strong>AI Governance Beyond Intake</strong> - Many organizations now have robust policies, fully populated inventories, and initial risk assessments. What comes next is a lot of change management for systems already in place. This work may look very different from earlier governance work.</p></li><li><p><strong>AI Agents Become Mainstream</strong> - At this point last year, even many AI practitioners may have never heard of an MCP server, or had reviewed a proposed agentic AI system. This year, many organizations have mandates to deploy agentic AI workflows, and a lot of people are hoping that agentic AI provides the value that chatbot copilots did not.</p></li><li><p><strong>Growing Third Party Risks</strong> - As agentic AI rolls out inside organizations, knowing which tools and platforms are connected to AI systems will become its own inventorying challenge, and source of risks. In addition, many vendors may deploy their own agents for work, introducing new potential risks for their customers to stay on top of.</p></li><li><p><strong>Pressure for AI ROI</strong> - After several years of high budget experimentation, many organizations are now looking for tangible ROI from their AI systems. AI vendors will be under a lot of pressure to show revenue, and so prices for AI tools are likely to increase at the same time that organizations are looking to focus their efforts more on high value AI. Calculating that value will be a major challenge and narrative going forward.</p></li><li><p><strong>AI Policy Moves &#8216;Up the Stack&#8217;</strong> - 2025 had many AI policy proposals focused on the foundational model level, with the EU publishing their relevant Code of Practice for GPAI, and bills in California and New York being signed to regulate them. However, these types of regulations are being specifically targeted by the Trump administration, and trying to pass new ones is likely to face pushback. We think policymakers are more likely to target specific AI use cases or types of systems for further regulation, especially focused on protecting kids from AI, or regulating use of AI for mental health purposes. </p></li></ul><p>You can read <a href="https://trustible.ai/post/5-ai-governance-trends-heading-into-2026/">our full 2026 trends and prediction piece here</a>.</p><div><hr></div><h3>2. Technical Deep Dive - 2026 Technical Look-Ahead</h3><p>2025 brought a strong new generation of AI models from many providers, with an increased focus on training for reasoning and tool use. While some providers focused on creating increasingly large models, others continued to explore how smaller models trained on high-quality data can produce competitive results. We expect to see steady improvements across these areas, but for our 2026 predictions, we focus on the broader picture beyond just performance:</p><ul><li><p><strong>World Models</strong> are AI models that aim to understand and model the physical world directly (in contrast to LLMs that are trained to predict the next word and as a byproduct encode some knowledge about the world). In 2025, there were some early developments in this space from Fei-Fei Li&#8217;s company <a href="https://techcrunch.com/2025/11/12/fei-fei-lis-world-labs-speeds-up-the-world-model-race-with-marble-its-first-commercial-product/">World Labs</a> and China&#8217;s <a href="https://www.scmp.com/tech/big-tech/article/3332653/tencent-expands-ai-world-models-tech-giants-chase-spatial-intelligence">Tencent</a>; we expect to see some progress in this space, but these models are unlikely to overtake LLMs in usability and popularity in 2026, because they require large amounts of complex data that is not readily available.</p></li><li><p><strong>AI-Generated Videos</strong> will become impossible to distinguish from real videos; many examples from state of the art models, like Google&#8217;s <a href="https://aistudio.google.com/models/veo-3">Veo-3</a>, already lack tell-tale &#8220;AI&#8221; signs (like background objects that move in unrealistic ways). However, generating longer videos (&gt; 30 seconds) may remain difficult because of challenges with maintaining character and scene consistency. </p></li><li><p><strong>New Year, Same Risks:</strong> In late 2025, <a href="https://arxiv.org/abs/2511.15304">adversarial poetry</a>, a new jailbreaking technique, was able to overcome the defences of a large number of popular LLMs. At the same time, hallucination rates remained high on <a href="https://github.com/vectara/hallucination-leaderboard">multiple</a> <a href="https://research.aimultiple.com/ai-hallucination/">benchmarks</a>. We do not expect either problem to be &#8220;solved&#8221; in 2026, these risks are inherent to LLMs because LLMs are trained to generate text, not recognize factuality or the potential danger of the generated content.</p></li><li><p><strong>Model Transparency </strong>will continue to play an uncertain role in adoption and trust. While transparency decreased overall in 2025, according to the <a href="https://crfm.stanford.edu/fmti/December-2025/index.html">Stanford Transparency Index</a>, AI adoption increased. In the open model space, Qwen models, whose providers disclose little information about the training data, were <a href="https://aiworld.eu/story/chinese-developers-account-for-over-45-of-top-open-model-public-downloads">top downloads</a> from Hugging Face, while the highly transparent OLMO models did not receive much attention. In addition, new nuances have emerged around &#8220;transparency&#8221;: popular LLM providers release increasingly long System Cards, but many of the evaluation results now rely on LLM-as-a-Judge style automated evaluations which <a href="https://insight.trustible.ai/p/ai-copyright-conundrum-continues">introduce biases and a new layer of complexity.</a> </p></li></ul><p>None of these predictions point to a single dominant shift in 2026, but some of the most exciting developments may come from the non-LLM side of AI. More broadly, if 2025 was the year of AI pilots and experiments, 2026 will be the year of transforming them into hardened production-ready systems. Meanwhile the continued uncertainty around risks and transparency, points to a need for increased education around AI risks and evaluations.</p><div><hr></div><h3>3. AI Incident Spotlight - 2025 AI Incident Recap &amp; 2026 Predictions</h3><p>The AI Incident Database catalogued 345 distinct incidents in 2025, a record high. Our analysis of these incidents shows 3 major trends in 2025:</p><ul><li><p><strong>Deep Fake Scams</strong> - The majority of incidents in 2025 involved some form of scams, often involving deepfakes. These included using AI tools to <a href="https://incidentdatabase.ai/cite/1016/">generate flashy &#8216;phishing&#8217;</a> websites, using <a href="https://incidentdatabase.ai/cite/1189/">AI to create a massive web footprint</a> for a fraudulent company, and many instances of scammers exploiting fake videos of celebrities claiming to love the scam&#8217;s target (<a href="https://incidentdatabase.ai/cite/901/">Incident 901</a>, <a href="https://incidentdatabase.ai/cite/1126/">Incident 1126</a>, <a href="https://incidentdatabase.ai/cite/1185/">Incident 1185</a>).</p></li><li><p><strong>Chatbots &amp; Mental Health</strong> - Unfortunately there were many incidents involving deaths associated with chatbot use. These included instances of chatbots providing recommendations on how to <a href="https://incidentdatabase.ai/cite/1192/">tie a better noose</a>, affirming <a href="https://incidentdatabase.ai/cite/1204/">a man&#8217;s belief that his mother was plotting to kill him</a> leading to murder-suicide, and <a href="https://incidentdatabase.ai/cite/1259/">a teenager being induced to commit suicide</a> by a fake AI-powered Game of Thrones character. In a sign of how bad things have gotten, <a href="https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots">Wikipedia now has a dedicated page linked to them</a>. According to OpenAI&#8217;s own data, using AI for chats about mental health issues <a href="https://incidentdatabase.ai/cite/1253/">is one of the top user</a> use cases for ChatGPT. </p></li><li><p><strong>Early &#8216;Agentic&#8217; Incidents</strong> - 2025 saw some of the first incidents directly linked to AI agents, including a <a href="https://incidentdatabase.ai/cite/1152/">Replit agent deleting a production database</a>, <a href="https://incidentdatabase.ai/cite/1263/">Claude Code&#8217;s agent mode autonomously conducting</a> a cyber attack, and <a href="https://incidentdatabase.ai/cite/1313/">Wall Street Journal reporters successfully jailbreaking</a> an AI powered vending machine. These were attained not with direct improvements in the AI models themselves, but rather by connecting models with a lot of different tools capable of performing actions.</p></li></ul><p>Here are a few of our predictions for AI incidents in 2026:</p><ul><li><p><strong>Agentic AI</strong> - There were only a few incidents directly linked to AI agents in 2025, but as agentic AI gets more adoption, we expect to see many incidents linked to agents. There are a few good reasons to suspect this, including the poor security status of many MCP servers and the relatively early maturity on how to evaluate or red-team agents.</p></li><li><p><strong>AI in Healthcare</strong> - It&#8217;s estimated that <a href="https://www.ama-assn.org/practice-management/digital-health/2-3-physicians-are-using-health-ai-78-2023">over two-thirds of medical practitioners</a> are now using AI tools in their jobs, with notes transcription being the leading use case. However the <em>impact</em> of AI errors may take time to resolve. We expect to see some incidents linked to errors produced especially by earlier model versions that haven&#8217;t yet been acted upon.</p></li><li><p><strong>AI Videos</strong> - The quality of AI generated videos from tools like OpenAI Sora-2, or Google&#8217;s Nano Banana, is truly impressive, and many videos will become increasingly difficult for many people to identify as AI generated. We expect more scams and misinformation incidents specifically linked to hyper-realistic looking videos.</p></li></ul><div><hr></div><h3>4. Trustible&#8217;s 2026 Policy Predictions</h3><p>AI policy in 2025 was a roller coaster of new developments domestically and globally. The new Trump Administration upended AI safety work from the Biden Administration, the EU squabbled over whether to delay the AI Act (which it ultimately did), and governments at every level moved ahead with their own AI rules. Here are our top three thoughts on what to expect in AI policy in 2026:  </p><ul><li><p><strong>AI Lawsuits Will Test Regulatory Limits.</strong> Last year we tracked various lawsuits related to AI harms, from companion bot-related deaths to copyright infringement. We do not expect those battles to fade away, but a new one is about to heat up. States have been passing AI laws and those are in the crosshairs for new legal fights, as well the litigation coming from President Trump&#8217;s <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">AI moratorium Executive Order</a> (EO). These new legal fights will forge a new path on old laws and rights as they apply to AI rules.   </p></li><li><p><strong>Turnaround for the AI Trust Gap. </strong>The past year <a href="https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html">saw another dip</a> in AI trust among the general public even though adoption was on the rise. A large part of the distrust stems from a lack of regulation, which <a href="https://globalnews.ca/news/11354504/ai-poll-government-regulation/">some studies show</a> would help ease concerns over the technology. As some countries start to implement AI rules and safeguards, do not be surprised if there is a (slight) uptick on AI trust.   </p></li><li><p><strong>AI Innovation Hits a Roadblock in the US.</strong> The second Trump Administration started with a bang for AI innovation, effectively undercutting any efforts that could burden the American AI ecosystem. Expect that line of thought to shift ever so slightly in 2026, as the Trump Administration grapples with challenges that AI presents to national security and critical infrastructure. The AI moratorium EO acknowledges the need for a federal AI framework, which is a marked shift from where the Administration stood last January. Expect further guidance on AI security and resiliency, as well as guidelines for certain industries (NIST <a href="https://www.nist.gov/news-events/news/2025/12/nist-launches-centers-ai-manufacturing-and-critical-infrastructure">recently announced</a> a new workstream for AI and advanced manufacturing).</p></li></ul><p>Overall, this year will blend &#8220;more of the same&#8221; with some new challenges. We expect US states to continue regulating AI, even as the federal government tries to clamp down on the AI legal patchwork. We also expect to see more interest in agentic AI, though regulatory frameworks are still a few years away. What we see this year is an opportunity to clarify some legal uncertainty, while also increasing a push for basic AI governance to help address safety and security concerns.   </p><div><hr></div><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Black Friday or Black Mirror? ]]></title><description><![CDATA[Plus adversarial poetry proves the pen is mightier than the sword, (another) public sector consulting report includes hallucinated citations, and our regular policy roundup]]></description><link>https://insight.trustible.ai/p/black-friday-or-black-mirror</link><guid isPermaLink="false">https://insight.trustible.ai/p/black-friday-or-black-mirror</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 10 Dec 2025 13:03:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HhEY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HhEY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HhEY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1314269,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/181195582?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HhEY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HhEY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dfc5d9e-dbf8-475d-8303-a787689d1437_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Happy Wednesday, and welcome back to the Trustible AI Newsletter! The holiday season is upon us, and it appears the White House is poised to gift states a new executive order this week to revive the federal AI moratorium. We&#8217;ll share more thoughts if and when the order drops, but at a minimum, expect a busy few months in the courts as the questions this order raises will almost certainly be decided in front of a judge. In the meantime, in this week&#8217;s edition:</p><ol><li><p>Will AI Make Black Friday Become Black Mirror?</p></li><li><p>Technical Deep Dive - A Poet&#8217;s Key to Model Hacking</p></li><li><p>AI Incident Spotlight - Deloitte Publishes Citation Hallucination in Government Sponsored Report (<a href="https://incidentdatabase.ai/cite/1286">Incident 1286</a>)</p></li><li><p>Trustible&#8217;s Top AI Policy Stories</p></li></ol><div><hr></div><h3>1. Will AI Make Black Friday Become Black Mirror?</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NSER!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NSER!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NSER!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NSER!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NSER!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NSER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:240613,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/181195582?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NSER!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!NSER!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!NSER!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!NSER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcba2476c-6efc-484d-931a-2be6cd36d271_1600x896.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>What effect could AI agents have on pricing? E-commerce sites have long battled bots and scalpers that buy up <a href="https://www.pymnts.com/news/artificial-intelligence/2025/amazon-updates-code-keep-out-google-ai-shopping-tools/">limited goods for resale, with mixed success</a>. Platforms like Ticketmaster have become notorious for profiting off secondary markets, resulting in recent actions to curb secondary market activity. For many consumers, it feels like more goods than ever (from vintage clothes, to Pokemon cards, and Lego sets) are dominated by resellers.</p><p>The new wave of AI agents risk making this problem worse. Old school bots were often simple web &#8216;scrapers&#8217; that knew how to click specific buttons in order, and could be thwarted by certain types of activity blockers. AI agents can combine the reasoning powers of LLMs, with enhanced tool calling capabilities, to become more sophisticated, and are simpler to set up and deploy at scale. There are <a href="https://www.nytimes.com/2025/12/02/technology/artificial-intelligence-amazon-gmail.html">a slew of start-ups</a> already targeting how to train dedicated agents to do this by creating fully sandboxed digital replicas of websites like Amazon.</p><p>It&#8217;s worth considering what impacts this may have on prices, and the broader global economy. Many limited goods with resale value may get quickly bought, and then immediately posted for re-sale. Now, bots could be used to manipulate the price directly. A single item could be bought and resold to other bots before it ever leaves a physical warehouse. Marketplace platforms and credit card companies will have significant financial incentives to allow this so they get a cut of every resale. Pricing for goods could start to more quickly resemble the stock market where most trades are already done based on algorithms.</p><p>This will likely mean higher prices on many goods. This will especially be true if e-commerce platforms themselves use AI to adopt dynamic pricing strategies. A <a href="https://www.consumerreports.org/media-room/press-releases/2025/12/new-report-exposes-instacarts-hidden-price-games/'">recent expos&#233;</a> found that Instacart was using a hidden algorithm to test the upper limits of prices they were willing to pay for certain groceries. While we do not have enough information to fully understand the interplay between AI-enable pricing and constant bot activity, what we are seeing is that the incentive structure likely doesn&#8217;t favor the consumer.</p><p>Another unsettling dynamic that could emerge is dynamic pricing that acts as a proxy for social scoring. Consumers with certain buying histories or in certain geographic locations may be rewarded with access to certain pricing schemes to the detriment of other buyers. Characteristics like race, gender, sexuality, or disability could be inferred based on proxy data, which could in turn cause specific populations to burden higher costs because of the types of goods or services they are trying to access. For instance, rural consumers may pay higher grocery delivery prices if they live in a food desert or racial minorities may be quoted higher rent prices in certain neighborhoods.</p><p><strong>Key Takeaway:</strong> Given that &#8216;affordability&#8217; is currently a global concern, and that algorithms have already started to be a major contributor to that, negative impacts from AI-driven higher prices could become a large political force in the coming decade. Policymakers may feel the pressure to impose limited safeguards that help mitigate these issues.</p><div><hr></div><h3>2. Technical Deep Dive - A Poet&#8217;s Key to Model Hacking</h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><label class="hide-text" contenteditable="false">Text within this block will maintain its original spacing when published</label><pre class="text"><em>Safety alignment 
May succeed until
Poetry
Foils the plan</em></pre></div><p>Despite extensive alignment efforts, many LLM safety mechanisms can be brought down with a few lines of poetry. A recent research paper on &#8220;<a href="https://arxiv.org/pdf/2511.15304">Adversarial Poetry</a>&#8221; showed that using poetry to frame an adversarial request (e.g. advice on how to execute a cyber attack) resulted in a broad range of models producing unsafe outputs 62% of the time. The attack success rate (ASR) varied widely by model: all the GPT-5 models had an ASR of under 10%, while the Deepseek-3 and Gemini 2.5 models had an ASR of over 95%. While this isn&#8217;t the first method to cause models to ignore their built-in defences (e.g. the infamous <a href="https://learnprompting.org/docs/prompt_hacking/offensive_measures/dan">Do-Anything-Now prompt</a>), it does point to two broader themes.</p><p>First, while many providers now report extended safety evaluations, they are often focused on well-known attack vectors and may not paint a full picture. In addition, to creating a custom poetry dataset, the researchers in this study took a well-known dataset of adversarial prompts from MLCommons and translated them into poems, the average ASR went from 8% to 43% - suggesting that model developers may be overfitting to a known set of attacks during safety fine-tuning and not addressing the broader alignment problem.</p><p>Second, syntax has a big role in how LLMs process data. <a href="https://arxiv.org/pdf/2509.21155v2">Another recent study</a> showed that LLMs can rely on syntax over exact semantics of a sentence. For example, when asked &#8216;Where is Paris <em>undefined</em>?&#8217; A model may answer &#8216;France&#8217; despite the actual sentence being nonsensical. This may point to the success of the poetry attacks - the grammatical structure of poetry is not associated with adversarial behavior and thus may not trigger those safety protections. </p><p><strong>Key Take-away: </strong>LLMs will likely never be fully resistant to jailbreaking, which may take the form of complex multi-turn attacks or a simple poem (GPT-5 models seem particularly resistant to the later), because the training data contains unsafe content and reliable &#8220;unlearning&#8221; techniques do not exist. When building AI systems that may have adversarial users, relying on the model provider&#8217;s safety alignment is not sufficient and additional guardrails (e.g. output filtering) should be integrated into the systems.</p><div><hr></div><h3>3. AI Incident Spotlight - Deloitte Publishes Citation Hallucination in Government Sponsored Report (<a href="https://incidentdatabase.ai/cite/1286">Incident 1286</a>)</h3><p><strong>What Happened:</strong> A public report written by Deloitte, on behalf of the Provincial Government in Newfoundland, contained some hallucinated citations for key statistics and facts. Newfoundland reportedly paid Deloitte 1.6 million Canadian dollars for the report that outlined a human resources plan. The report had citations to publications that don&#8217;t actually exist, and supposed claims in those publications were used as justification for recommendations in the report. Deloitte claims the errors were strictly related to generating the citations, not the outcomes of or recommendations from the report. This comes only a few months <a href="https://incidentdatabase.ai/cite/1193/">after a highly similar incident in Australia</a> where Deloitte again cited poor citations in reports generated for public sector agencies.</p><p><strong>Why It Matters:</strong> There are a couple of things this incident highlights. The first is the challenge with generating citations in general. This is often a tedious task that many people want to automate, but it&#8217;s also one that is actually quite challenging for AI to do, unless it&#8217;s connected to some kind of &#8216;global database&#8217; of research upfront. Ironically, most publication archives are now heavily deploying anti AI scraping technology, which will actually make this problem <em>worse</em> in the short term despite model improvements. It also highlights one challenge some big consulting companies will have in the AI era. These recent incidents have caused reputational damage to Deloitte, and while they claim only the citations were AI generated, that is difficult to prove. Top services companies, like prestigious consulting firms, law firms, think tanks, etc often differentiate on the <em>quality</em> of their work, and often try to stack staff from elite universities to reinforce their brand. However these firms are also under huge pressure for productivity and AI can be a big contributor to that. The biggest risk for them is that &#8216;AI Slop&#8217; could undermine their chief competitive advantage. Few people would hire McKinsey at their normal price point if they can get the same level of insights and advice directly from ChatGPT. While top companies have swarmed on AI, we also expect there to be backlash that could suddenly make the value of truly &#8216;human&#8217; services <em>more valuable</em> in the AI era, especially for &#8216;elite/luxury&#8217; markets.</p><p><strong>How to Mitigate: </strong>Obviously the most reliable way to prevent hallucinated citations is to have them all manually reviewed, although that can be slow and tedious (hence why AI was used in the first place). There are some low hanging fruit ways for building some automated verification steps however. Simply running each citation itself through a non-API driven system to verify it exists is an option, and some organizations have started using a separate AI tool or model to run its own verification checks on generated content. It&#8217;s also important to have clear policies for employees to specify when they use AI for generating content, and to have a clear list of things to check in AI generated content in order to catch common AI mistakes like this.</p><div><hr></div><h3>4. Trustible&#8217;s Top AI Policy Stories</h3><p><strong>Trump&#8217;s New AI Executive Order. </strong>President Trump is <a href="https://www.axios.com/2025/12/08/trump-ai-executive-order-state-laws">expected to sign</a> a new executive order (EO) aimed at pausing state AI laws. A <a href="https://www.documentcloud.org/documents/26287992-trump-executive-order-on-ai-law-preemption/">draft</a> was previously leaked that outlines how the Administration will leverage the Department of Justice to challenge state AI laws.   <strong>  </strong></p><p><strong>Our Take:</strong> The Trump Administration has sought to pre-empt state AI laws but has not offered a federal replacement. We anticipate a fairly lengthy legal battle to ensue once the EO is signed.</p><p><strong>AI in the NDAA. </strong>Lawmakers <a href="https://armedservices.house.gov/uploadedfiles/rcp_text_of_house_amendment_to_s._1071.pdf">added amendments</a> to the National Defense Authorization Act that bans foreign models from use in the federal government and tasks the Department of Defense with creating an AI model evaluation framework. <strong> </strong></p><p><strong>Our Take: </strong>Congress is also considering legislation for an &#8220;AI in national security&#8221; playbook and these amendments would align with the targeted, security-focused approach to AI that we have seen from the federal government.</p><p><strong>New York Times and Perplexity.</strong> The New York Times <a href="https://www.theguardian.com/technology/2025/dec/05/new-york-times-perplexity-ai-lawsuit">joined a copyright lawsuit</a> against Preplexity.ai that alleges it illegally copied millions of its articles. Perplexity has been embroiled in legal battles for how it gathers and uses content for its AI search engine.<strong> </strong> </p><p><strong>Our Take: </strong>The ongoing copyright infringement cases highlight the growing need to update IP laws and regulations that can account for how AI systems can use protected content. </p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><p><strong>Africa. </strong>The Ugandan government <a href="https://www.monitor.co.ug/uganda/news/national/govt-developing-policy-to-regulate-ai-baryomunsi-4837498">announced</a> that it will release a draft plan to regulate AI. The decision marks one of the first significant efforts to regulate AI in Africa. </p><p><strong>Asia. </strong>The Japanese government will <a href="https://www.japantimes.co.jp/news/2025/12/07/japan/politics/japan-public-ai-use-strategy/">release a draft plan</a> to improve Japanese AI development and increase AI adoption. Japan has taken a lighter-touch regulatory approach with AI previously and is looking to develop its domestic AI ecosystem. </p><p><strong>Australia. </strong>The Australian government released its <a href="https://www.industry.gov.au/publications/national-ai-plan">National AI Plan</a>, which is intended to help grow the country&#8217;s AI industry. The plan seeks to support new AI infrastructure, increase AI adoption, and enact laws to protect its citizens from potential AI harms.  </p><p><strong>Europe. </strong>Lawmakers are <a href="https://www.theguardian.com/technology/2025/dec/08/scores-of-uk-parliamentarians-join-call-to-regulate-most-powerful-ai-systems">under pressure</a> in the UK to regulate &#8220;superintelligent&#8221; AI system development. Specifically, the latest regulatory push wants more safeguards imposed on frontier model providers to reign in developing potential superintelligent systems.  </p><p><strong>North America. </strong>The Canadian government <a href="https://www.canada.ca/en/treasury-board-secretariat/news/2025/11/canada-launches-first-register-of-ai-uses-in-federal-government.html">released</a> its first public sector inventory of AI systems. The government also <a href="https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems">published</a> the world&#8217;s first standard for developing equitable and accessible AI systems.</p><div><hr></div><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. <br></p><p>AI Responsibly, </p><p>- Trustible Team<br></p>]]></content:encoded></item><item><title><![CDATA[Transatlantic AI Uncertainty ]]></title><description><![CDATA[Plus, a deeper look at one of the first confirmed cyber attacks by AI agents, the challenges of open weight models, and our global policy roundup]]></description><link>https://insight.trustible.ai/p/transatlantic-ai-uncertainty</link><guid isPermaLink="false">https://insight.trustible.ai/p/transatlantic-ai-uncertainty</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 26 Nov 2025 13:15:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Sq6w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sq6w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sq6w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sq6w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:389637,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/179979079?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Sq6w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Sq6w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4930bded-b32c-4a65-b357-a17d651a8e3e_1600x896.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Happy Wednesday, and welcome to this edition of the Trustible AI Newsletter! Two weeks is an eternity in AI world, and in the past two weeks, we&#8217;ve seen a seismic shift in the AI regulatory environment both here in the U.S. and across the pond in the EU (more on that later in this edition.)</p><p>But, it&#8217;s also been a big couple of weeks for all of us here at Trustible; last week, <a href="https://trustible.ai/post/introducing-the-trustible-ai-governance-insights-center/">we launched</a> our new Trustible AI Governance Insights Center, an open source repository of our AI governance heuristics, from our risks taxonomy, to recommended mitigation strategies, AI benefits, and even more developments in our AI model ratings curated by our team of experts. Over time, we&#8217;ll be adding even more insights and resources, but as a public benefit corporation, this is an important step in advancing our mission to help society realize the transformative potential of AI. You can explore the insights center at <a href="http://trustible.ai/resource-center">trustible.ai/resource-center</a>.</p><p>We are also thrilled to share that we&#8217;ve been listed as a Representative Vendor in the 2025 Gartner&#174; Market Guide for AI Governance Platforms. We believe this is a milestone that signals the start of an inflection point, when AI governance is no longer optional, experimental, or theoretical; it&#8217;s now a business imperative for enterprises looking to realize the promise of AI. You can read all about the <a href="https://trustible.ai/post/trustible-recognized-in-the-2025-gartner-market-guide-for-ai-governance-platforms/">exciting news here.</a></p><p>In this week&#8217;s edition, we&#8217;re covering:</p><ol><li><p>Trustible&#8217;s Take - Transatlantic AI Uncertainty</p></li><li><p>AI Incident Spotlight - Cyber Attacks by AI Agents</p></li><li><p>Technical Explainer: Big challenges with open-weight models</p></li><li><p>Trustible&#8217;s Top AI Policy Stories</p></li></ol><div><hr></div><h3>1. Trustible&#8217;s Take - Transatlantic AI Uncertainty</h3><p>As a result of a flurry of regulatory proposals last week, there is now more uncertainty than ever about when AI regulations may kick-in and what those regulations will be. In Europe, the EU Commission <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal">proposed a &#8216;Digital Omnibus</a>&#8217; to reform several digital laws, including GDPR and the EU AI Act. The Commission is proposing a <a href="https://www.euractiv.com/news/commission-proposes-delaying-key-part-of-eus-ai-rules">delay for high risk AI system obligations</a> for a period ranging between 12 and 24 months. A driving force behind the delay comes from issues developing compliance standards for high risk systems. The Commission&#8217;s proposed delay stipulates that if standards are developed before the new deadlines hit, then high risk system requirements will take effect sooner. However, it is unclear if the Commission&#8217;s proposal will make it past a <a href="https://www.politico.eu/article/ursula-von-der-leyen-eu-parliament-showdown-digital-red-tape-crusade/">skeptical EU Parliament</a> (which has to agree) or if the compromise text would pass before the current deadline for high risk obligations comes into effect in August 2026.</p><p>Meanwhile in the US, <a href="https://www.axios.com/2025/11/21/republicans-proposal-block-state-ai-laws">congressional Republicans are moving forward</a> with plans to resurrect the <a href="https://trustible.ai/post/trustible-s-perspective-the-ai-moratorium-would-have-been-bad-for-ai-adoption/">state AI moratorium</a> by attaching it to the annual National Defense Authorization Act. President Trump <a href="https://www.axios.com/2025/11/18/state-ai-laws-trump-ban">fully supports the effort moratorium</a> and is considering an Executive Order (EO) that would effectively enact a moratorium without Congress, but the planned EO <a href="https://www.reuters.com/world/white-house-pauses-executive-order-that-would-seek-preempt-state-laws-ai-sources-2025-11-21/">appears to be on hold</a>. A federal moratorium on state AI laws is facing backlash from several prominent GOP elected officials, <a href="https://thehill.com/policy/technology/5616134-trump-executive-order-ai/">including Governor DeSantis and Senator Hawley</a>. Any moratorium without an actual superseding federal law would likely face immediate lawsuits, especially if done via an EO.</p><p>Ironically, the recent policy prerogatives in the EU and US aim to incentivize AI growth and adoption but instead have injected such a level of regulatory uncertainty as to undermine these goals. The EU and US have <a href="https://www.edelman.com/trust/2025/trust-barometer/flash-poll-trust-artifical-intelligence">substantially lower rates of trust in AI</a> than any other part of the world. Solving the AI trust problem is the key to growing AI adoption rates on both sides of the Atlantic.</p><p><strong>Key Takeaway:</strong> The increasing regulatory uncertainty paired with the perception that AI is unregulated is not going to help accelerate adoption. Backtracking on efforts that improve trust and adoption could just cause the &#8220;AI bubble&#8221; to burst, along with any potential to &#8220;win&#8221; the AI race against China.</p><div><hr></div><h3>2. AI Incident Spotlight - Cyber Attacks by AI Agents (<a href="https://incidentdatabase.ai/cite/1263/">Incident 1263</a>)</h3><p><strong>What Happened:</strong> Anthropic identified that a Chinese hacking group (GTG-1002) used its Claude Code platform to launch a fully autonomous cyber espionage attack against over 30 targets. The groups used several &#8216;jailbreaking&#8217; techniques to evade protections built into Claude Code. The attack notably included autonomous &#8216;multi-step&#8217; processes, where the agent first scanned for the target&#8217;s cloud resources, programmatically identified vulnerabilities, created exploits for them, and then successfully extracted data before being shut down by Anthropic.</p><p><strong>Why it Matters:</strong> There are several highly notable and severe aspects of this incident. The first is that it highlights a new dangerous paradigm for cybersecurity. The fact that an AI agent was able to successfully conduct a highly sophisticated multi-step attack, with limited human interaction confirms the fears of many in the cybersecurity world. The fact that it was a Chinese group, with links to the Chinese state, using an American AI system is also likely to create a massive reaction in DC. This is also one of the few voluntary first-party incidents reported by a large model creator, although it&#8217;s unclear what their longer term mitigation efforts may be beyond simply trying to ban the relevant parties.</p><p><strong>How to Mitigate:</strong> A lot of focus of these incidents is often on the underlying &#8216;model&#8217;. However LLMs only ever generate text, and even in agentic systems, they&#8217;re often simply generating text that instructs a system to do something, or to call some kind of external tool. The real dangers here came from all the other &#8216;system&#8217; components built into Claude Code as a &#8216;platform. Claude-4 the model can <em>generate the text command</em> to make an HTTP request, but Claude Code the platform can actually <em>make</em> the request. Claude Code then also was storing information in memory, running a process over several minutes (or hours), and executing generated computer code. For organizations hosting AI systems, limiting how much a system can independently access the internet, or run code, can heavily restrict the potential blast radius and capabilities of a system. For those worried about similar incoming cybersecurity attacks, tools like CloudFlare anti-AI bot tool will become an essential layer to help block sophisticated non-human interactions.</p><div><hr></div><h3>3. Technical Explainer: Big challenges with open-weight models</h3><p>Open-weight models, like Phi and Qwen, are available for download and unconstrained use by researchers, businesses, and consumers. Unlike closed-source models (e.g., GPT or Gemini) that use input/output filters, protections in open-weight models must be integrated directly or added by deployers. <a href="https://stephencasper.com/open-technical-problems-in-open-weight-ai-model-risk-management/">Recent research</a> highlights risk management challenges that are particularly salient for open-weight models.</p><p><strong>Training Data Curation</strong>: One key mitigation strategy is to avoid training on unsafe sources. However, current filtering methodologies are inconsistent and some &#8220;knowledge&#8221; may be useful for benign capabilities (e.g. designing cybersecurity defences). In addition, recent work suggests that harmful knowledge may emerge from a combination of benign data sources (e.g. knowledge of biorisks may be inferred from general knowledge of biology). Choosing an appropriate curation strategy may be challenging for deployers, and a lack of clear guidelines can make it difficult for developers to review models.</p><p><strong>Tamper-Resistant Training</strong>: Post-training methodologies can further reduce unsafe behavior, however, downstream modifications can intentionally or unintentionally remove these protections. In addition, as models become integrated into agentic systems, they may retrieve harmful knowledge from the internet, introducing a new attack vector.</p><p><strong>Model Tampering Evaluation:</strong> Open-weight developers often lack the resources for extensive audits, shifting the burden to deployers. Furthermore, standardized frameworks don&#8217;t exist for these evaluations making it difficult to compare and trust different models. (In our own AI Model Ratings, we&#8217;ve observed that many open-weight developers explicitly disclose a lack of adversarial evaluations)</p><p><strong>Model Provenance:</strong> Both researchers and deployers may want to study the lineage of specific open-weight models. While the former may want to understand the broader ecosystem, the latter needs to know whether a particular model is appropriate for an application. Currently, no reliable and scalable approach exists for tackling this challenge.</p><p><strong>Key Takeaway:</strong> The challenges outlined are exacerbated by a lack of transparency - most popular open-weight models largely do not address these topics in their documentation. While these models allow for more flexibility and control, they shift the burden to deployers to investigate model provenance, run safety evaluations and build additional safeguards while considering a lack of unified standards and solutions for each step.</p><p><strong>P.S. </strong>While this work emphasizes the lack of reporting around safety, we&#8217;ve recently collaborated with the EvalEval coalition on <a href="https://arxiv.org/abs/2511.05613">a new paper</a> that shows that societal impact evaluations are underreported across the LLM landscape.</p><div><hr></div><h3>4. Trustible&#8217;s Top AI Policy Stories</h3><p><strong>House Hearing on Chatbots. </strong>The House Subcommittee on Oversight and Investigations <a href="https://energycommerce.house.gov/posts/subcommittee-on-o-and-i-holds-hearing-on-artificial-intelligence-ai-chatbots">held a hearing</a> to better understand the risks posed by AI chatbots, particularly to minors, and hear recommendations from experts on potential regulatory solutions. </p><p><strong>Our Take:</strong> Congress continues to be lasered focused on AI harms posed to children and may be one of the few AI issues that sees bipartisan legislation pass.</p><p><strong>National Security Framework. </strong>A bipartisan group of lawmakers in the House and Senate are <a href="https://fedscoop.com/nsa-ai-playbook-senate-house-bill/?utm_campaign=FedScoop%20-%20Editorial&amp;utm_content=358591144&amp;utm_medium=social&amp;utm_source=linkedin&amp;hss_channel=lcp-1097874">working on legislation</a> that would require the National Security Agency to publish an AI security playbook, which is intended to help outline how AI systems are being protected from foreign adversarial threats.   <strong> </strong></p><p><strong>Our Take: </strong>AI security is a priority for the Trump Administration (as emphasized in the White AI Action Plan) and represents another area where lawmakers could plausibly pass legislation.</p><p><strong>United Nations and Healthcare. </strong>A <a href="https://news.un.org/en/story/2025/11/1166400">new report</a> from the World Health Organization warns that AI used in healthcare settings needs legal guardrails to protect patients and healthcare professionals.  </p><p><strong>Our Take: </strong>The concerns are not new but the report notably observes &#8220;there is a broad consensus on the policy measures&#8221; that could improve AI adoption. </p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><p><strong>Africa. </strong>The UAE <a href="https://www.reuters.com/world/middle-east/uae-announces-1-billion-initiative-expand-ai-africa-2025-11-22/">announced </a>the &#8220;AI for development initiative,&#8221; which will invest $1 billion to expand AI infrastructure in Africa. While US tech companies have been making active investments to help African countries develop AI technology and infrastructure, the US government has largely been absent.  </p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China.</strong> The Chinese government i<a href="https://www.reuters.com/world/china/china-bans-foreign-ai-chips-state-funded-data-centres-sources-say-2025-11-05/">ssued guidance</a> that would ban foreign-made AI chips from new data center projects. The decision comes amidst tensions with the US over advanced chips sales in China. </p></li><li><p><strong>South Korea.</strong> Korea&#8217;s Ministry of Science and ICT <a href="https://www.msit.go.kr/bbs/view.do?sCode=user&amp;mId=307&amp;mPid=208&amp;pageIndex=&amp;bbsSeqNo=94&amp;nttSeqNo=3186490&amp;searchOpt=ALL&amp;searchTxt=">released draft regulations</a> for the AI Basic Act, the country&#8217;s comprehensive AI law passed in December 2024. The proposed regulations clarify how covered entities will need to comply with the law, which takes effect in January 2026.</p></li></ul><p><strong>Australia. </strong>The Australian government <a href="https://www.digital.gov.au/policy/ai/australian-public-service-ai-plan-2025">released</a> the Australian Public Sector (APS) AI Plan, which is aimed at improving how AI is used by public sector agencies. While the Australian government has backed away from enacting AI rules for private companies, the ASP AI Plan could have far-reaching impacts on companies doing business with the federal government. </p><p><strong>Middle East. </strong>The Trump Administration <a href="https://www.whitehouse.gov/fact-sheets/2025/11/fact-sheet-president-donald-j-trump-solidifies-economic-and-defense-partnership-with-the-kingdom-of-saudi-arabia/">signed a new Memorandum of Understanding</a> with the Saudi Arabian government, which would allow the Saudi government to access US AI technology. The agreement is part of the Trump Administration&#8217;s broader AI policy goals with Middle Eastern countries. </p><p><strong>South America. </strong>The UN&#8217;s COP30 climate conference <a href="https://abcnews.go.com/Technology/wireStory/artificial-intelligence-sparks-debate-cop30-climate-talks-brazil-127628317">met in Brazil</a> and focused heavily on how AI is impacting climate change. While attendees acknowledged AI&#8217;s potential for helping address climate change, many raised concerns over AI&#8217;s strain on natural resources and energy consumption.</p><div><hr></div><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>. </p><p>AI Responsibly, </p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[The AI Age-Gate Conundrum]]></title><description><![CDATA[Plus the role of Guardrail Models, breaking down a deepfake endorsement scam, and our global AI policy roundup]]></description><link>https://insight.trustible.ai/p/the-ai-age-gate-conundrum</link><guid isPermaLink="false">https://insight.trustible.ai/p/the-ai-age-gate-conundrum</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 12 Nov 2025 13:15:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HUdv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! Last week, the Trustible team travelled to sunny San Jose for the <a href="https://events.govtech.com/GovAI-Coalition-Summit">GovAI Coalition Summit</a>, where state and local leaders met to discuss how to accelerate safe and responsible AI adoption within states, cities, counties, and municipalities across the country - and we successfully made it home (after a few delays.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HUdv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HUdv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HUdv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HUdv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HUdv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89771e15-c290-4df9-8a4f-899221b33c5a_1600x896.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In this week&#8217;s edition, we&#8217;re covering:</p><ol><li><p>Chatbot Teetertot: Balancing Child Safety &amp; AI Literacy</p></li><li><p>Tech Explainer: The Role of Guardrail Models in AI Systems</p></li><li><p>AI Incident Spotlight: Deep-Fake Endorsement Scam (Incident 1261)</p></li><li><p>Policy Round-Up</p></li></ol><div><hr></div><h3>1. Chatbot Teetertot: Balancing Child Safety &amp; AI Literacy</h3><p>Minors are increasingly using AI tools for <a href="https://www.apa.org/monitor/2025/10/technology-youth-friendships">social support and companionship</a>. However, it&#8217;s <a href="https://www.commonsensemedia.org/ai-ratings/social-ai-companions?gate=riskassessment">not difficult</a> for kids to elicit problematic or harmful content from AI tools, mainly AI chatbots and companion bots. The most notable examples are from <a href="http://character.ai">Character.ai</a>, whose companion bots have lead to children <a href="https://www.bbc.com/news/articles/ce3xgwyywe4o">committing self-harm or suicide</a>, as well as <a href="https://www.usatoday.com/story/life/health-wellness/2025/10/20/character-ai-chatbot-relationships-teenagers/86745562007/">sharing or generating illicit content</a>. Character.ai has since <a href="https://www.bbc.com/news/articles/cq837y3v9y1o">blocked kids</a> from accessing its chatbots.</p><p>Industry is trying to figure this out, with OpenAI&#8217;s <a href="https://openai.com/index/introducing-the-teen-safety-blueprint/">Teen Safety Blueprint</a> as one example of how the private sector can offer solutions. Policymakers in the US have also taken notice and are attempting to enact new laws that address these issues. For instance, a <a href="https://trustible.ai/post/everything-you-need-to-know-about-californias-new-ai-laws/">new California law</a> requires companies to implement safeguards to prevent companion chatbots from discussing suicide and must direct the user to self-harm resources if they express suicidal thoughts to the bot. Congress has also taken notice and the bipartisan <a href="https://outreach.senate.gov/iqextranet/iqClickTrk.aspx?&amp;cid=SenHawley&amp;crop=15476QQQ11203529QQQ8925856QQQ8301018&amp;report_id=&amp;redirect=https%3a%2f%2fwww.hawley.senate.gov%2fwp-content%2fuploads%2f2025%2f10%2fGUARD-Act-Bill-Text.pdf&amp;redir_log=175350999596526">GUARD Act</a> was introduced in the Senate, which would (among other things) ban minors from accessing companion bots.</p><p>Understandably, lawmakers and parents want to protect kids from the harms posed by chatbots. Age verification requirements have long been used to prevent minors from accessing inappropriate or illicit content. Yet, we must reflect on whether age-gating sufficiently addresses the problem (as those requirements can be gamed and also pose serious First Amendment issues) and whether these types of barriers to certain technologies will do more long term harm. Yes, it is essential to protect kids from potentially harmful systems and content, but it also highlights the challenge of effectively teaching them essential AI literacy skills to build a competitive next generation workforce.</p><p>While AI developers have a role to play with child safety, it does not stop with them. Organizations deploying public-facing chatbots need to understand that, even if children are not the intended audience, they may still use their chatbot products. It&#8217;s important for companies with public facing chatbots to implement safeguards around outputs and make appropriate disclosure (e.g., noting when a product should not be used by people under the age of 18). These mitigations are especially true as chatbots become more general purpose, which broadens the aperture on exposure.</p><p><strong>Our Take:</strong> Kids today are more tech savvy than the previous generation and we need to acknowledge that fact as we think about how to implement pragmatic protections while helping them build valuable AI skillsets. We also need to think about how new child safety laws are not so sweeping that they unintentionally stunt growth for other AI use cases.</p><div><hr></div><h3>2. Tech Explainer: The Role of Guardrail Models in AI Systems</h3><p>One way to mitigate a variety of AI system risks including processing PII, outputting unsafe (e.g. toxic language or specialized advice) content and prompt injections is by incorporating guardrail models that review and flag the inputs to and preliminary output from the system. Think of guardrail systems as guardians designed to be the first line of defense against potential threats. Designed specifically for detection rather than reasoning, guardrail models are typically smaller and faster than general-purpose LLMs. Both <a href="https://aws.amazon.com/blogs/machine-learning/amazon-bedrock-guardrails-image-content-filters-provide-industry-leading-safeguards-helping-customer-block-up-to-88-of-harmful-multimodal-content-generally-available-today/">AWS Bedrock</a> and <a href="https://learn.microsoft.com/en-us/azure/ai-services/content-moderator/overview">Azure AI</a> have off-the-shelf guardrail models that can be integrated into any endpoint, while on the open-source side, <a href="https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/">Llama-Guard</a> is a popular solution that can recognize 13 common harm categories.</p><p>Recently, OpenAI released a new open-source guardrail model, <a href="https://openai.com/index/introducing-gpt-oss-safeguard/">gpt-oss-safeguard</a>. Unlike the other solutions that can only detect a pre-defined set of harms, this model allows the user to input an arbitrary human-written policy (i.e. a set of rules to follow), meaning it can be used for domain-specific concerns (e.g. flagging spoilers on a movie forum or detecting regulatory non-compliance in legal text). The increased flexibility does not guarantee accuracy: the developers state that gpt-oss-safeguard is often less accurate than a custom model. As well, the documentation does not explore what kinds of &#8220;policies&#8221; will be effective; <a href="https://arxiv.org/pdf/2502.18695">recent research</a> suggests that existing human content moderation guidelines may need to be modified to work properly with LLMs. This type of model can be used as a rapid deployment solution, when off-the-shelf models do not cover the use case, and resources can not be devoted to building out an internal solution that may require thousands of labeled data points.</p><p><strong>Our Take:</strong> Guardrail models serve as a first and final line of defence against unsafe system outputs, but require careful evaluation to be used effectively. Existing experts in that domain will need to collaborate with content engineers to review off-the-shelf guardrail models or build bespoke policies for models like gpt-oss-safeguard.</p><div><hr></div><h3>3. AI Incident Spotlight: <a href="https://incidentdatabase.ai/cite/1261/">Deep-Fake Endorsement Scam</a> (Incident 1261)</h3><p><strong>What Happened:</strong> A deep fake of a senior official in the government of Western Australia was used to facilitate an online scam. The scammers generated an AI video showing Roger Cook, the official, falsely endorsing an investment service. The deep fake was described as hyper realistic in both voice and appearance.</p><p><strong>Why it Matters:</strong> Public officials are at particularly high risk of being &#8216;deep faked&#8217;, and leveraged by scammers. The latest systems can impersonate facial likeness and voice with only a few short high quality video or audio clips. These are particularly easy to get for public officials who are highly visible and often need to participate in televised events. Many public officials also don&#8217;t always receive the same degree of privacy protections, and certain activities like satire of politicians can actually be a protected activity, blurring the lines of what is acceptable or not. It&#8217;s likely that these types of deep fake impersonation attacks will become more common over time.</p><p><strong>How to Mitigate: </strong>Unfortunately there aren&#8217;t many ways that individuals or organizations can prevent this kind of activity on their own. Instead, the best options are to have mitigations set up to reduce the likelihood of people falling for these types of scams, and to support regulations on platforms that may potentially host, or benefit, from these types of schemes. For organizations, it is appropriate to consider regular training on how to detect deep fakes, and even run on-going exercises for it on employees.</p><div><hr></div><h3>4. Policy Round-Up</h3><h4>Trustible&#8217;s Top AI Policy Stories</h4><p><strong>ChatGPT&#8217;s New Lawsuits. </strong>OpenAI is facing a flurry of new allegations over ChatGPT&#8217;s impact on mental health. New lawsuits against the company allege that ChatGPT <a href="https://abcnews.go.com/US/lawsuit-alleges-chatgpt-convinced-user-bend-time-leading/story?id=127262203">induced psychosis</a>, as well as lead to <a href="https://nypost.com/2025/11/07/business/chatgpt-drove-users-to-suicide-psychosis-and-financial-ruin-california-lawsuits/?utm_source=flipboard&amp;utm_campaign=nypost&amp;utm_medium=social">financial ruin and suicide</a>.</p><p><strong>Our Take:</strong> It is important to clearly communicate how these tools should be used and make sure that users understand they are interacting with a machine, not a real person.</p><p><strong>Pausing the EU AI Act. </strong>The European Commission (EC) released <a href="https://www.theguardian.com/world/2025/nov/07/european-commission-ai-artificial-intelligence-act-trump-administration-tech-business">initial proposals</a> for its Digital Omnibus package, which includes loosening EU AI Act obligations and postponing enforcement for certain requirements.</p><p><strong>Our Take: </strong>The uncertainty is complicating compliance for many organizations but they should focus on EU AI Act compliance in case the EC opts against enforcement delays.</p><p><strong>UK Copyright Decision. </strong>A UK court decision is upending the interplay between AI data use and IP law after it <a href="https://www.theguardian.com/media/2025/nov/04/stabilty-ai-high-court-getty-images-copyright">ruled in favor of Stability AI</a> in a case brought by Getty Images for secondary copyright infringement.</p><p><strong>Our Take: </strong>The case adds a new dimension to how AI is trained on protected data because the alleged infringement did not occur in the UK. Organizations should make sure they have proper permission to use data when training their AI models and tools.</p><p>In case you missed it, here are a few additional AI policy developments making the rounds:</p><p><strong>United States Congress. </strong>Two pieces of bipartisan AI legislation have been introduced in the Senate. The <a href="https://www.banking.senate.gov/newsroom/minority/banks-warren-cotton-schumer-mccormick-coons-introduce-landmark-bipartisan-gain-ai-act-to-maintain-us-position-as-worlds-leader-in-critical-artificial-intelligence-chips">GAIN AI Act</a> would regulate how American AI chips were exported to China and other countries of concern. The <a href="https://www.warner.senate.gov/public/index.cfm/2025/11/warner-hawley-to-introduce-bipartisan-legislation-to-track-number-of-jobs-lost-to-ai">AI-Related Job Impacts Clarity Act</a> would require companies and the federal agencies to report when AI-related layoffs occur. House Democrats are also <a href="https://www.fiercehealthcare.com/regulatory/house-dems-make-push-roll-back-cms-ai-powered-prior-auth-model">attempting to rollback</a> the Trump Administration&#8217;s efforts to use AI at the Centers for Medicare &amp; Medicaid Services.</p><p><strong>Trump Administration. </strong>After OpenAI&#8217;s CFO <a href="https://www.cnn.com/2025/11/06/tech/openai-backtracks-government-support-chip-investments">caused controversy</a> by implying the federal government could be a &#8220;backstop&#8221; for AI infrastructure debt, the Trump Administration <a href="https://www.cnbc.com/2025/11/06/trump-ai-sacks-federal-bailout-openai-friar.html">dismissed the idea</a> of a federal bailout for frontier model companies. Sam Altman has <a href="https://finance.yahoo.com/news/openai-ceo-altman-denies-company-is-looking-into-government-bailout-202255408.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAALohZ2OYHdp8FuIDbai0Hx0iPzs28JRRY4aEj_5-9jLuLnBbcK7POtr5MUAr_g7o5oWl2pRdq82Eg5ojJeApmDEVhgTwDZnEae67QIRC_V6h_gjpPmnjpf-1S4rryLuUlXNA-OuffJMs1euo9lwv6ABrTmR03lqjr5nUgwT1gegv">denied</a> that OpenAI expects a bailout.</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>A <a href="https://economictimes.indiatimes.com/news/international/global-trends/us-news-no-degree-no-discussion-china-tightens-the-grip-on-influencers-and-its-new-law-has-sparked-massive-debate-online-check-details/articleshow/124929667.cms?from=mdr">new law</a> requires social media influencers to hold degrees for certain topics (e.g., medicine or finance) before posting about them. The law also requires influencers to disclose when their content is AI-generated.</p></li><li><p><strong>India.</strong> The Indian government is <a href="https://www.cxodigitalpulse.com/india-to-introduce-comprehensive-ai-law-following-deepfake-regulations/">preparing to release</a> a draft comprehensive AI law. The current content is unknown but is expected to be modeled after the Information Technology Act of 2000.</p></li></ul><p><strong>Europe. </strong>Nvidia and Deutsche Telekom <a href="https://www.telekom.com/en/company/management-unplugged/details/europes-most-modern-ai-factory-1099006">announced</a> plans to build Europe&#8217;s largest AI in Germany. The latest announcement emphasizes Europe&#8217;s desire to build its own AI ecosystem.</p><p><strong>Middle East. </strong>The Trump Administration <a href="https://thehill.com/policy/energy-environment/5588362-us-uae-ai-burgum/">signed a memorandum of understanding</a> with the UAE to further cooperation in AI and energy.</p><div><hr></div><p>As always, we welcome your feedback on content! Drop us a line with your thoughts to <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a> for a chance to win an exclusive set of Trustible AI Model Playing Cards! </p><p>AI Responsibly,</p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Trustible AI Newsletter #46: AI Needs Fallbacks]]></title><description><![CDATA[Plus an AI incident tests the boundaries of Section 230, why Reddit hold a special place in the eyes of LLMs, and our global AI policy roundup]]></description><link>https://insight.trustible.ai/p/trustible-ai-newsletter-46-ai-needs</link><guid isPermaLink="false">https://insight.trustible.ai/p/trustible-ai-newsletter-46-ai-needs</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 29 Oct 2025 13:48:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Kx9x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! In this week&#8217;s edition, we&#8217;re covering:</p><ol><li><p>Trustible&#8217;s Take - The Need for AI Fallbacks</p></li><li><p>AI Incident Spotlight - (<a href="https://incidentdatabase.ai/cite/1248/">AI Incident 1248</a>)</p></li><li><p>Reddit&#8217;s Hidden Hand in AI Training and Why It Matters</p></li><li><p>Policy Round-Up</p></li></ol><div><hr></div><h3>1. Trustible&#8217;s Take - The Need for AI Fallbacks</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Kx9x!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Kx9x!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Kx9x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Kx9x!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Kx9x!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2921a55f-527c-40e1-9da3-fab3056fb598_1600x896.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Several owners of Eight Sleep, a tech enabled &#8216;smart bed&#8217;, were <a href="https://www.nytimes.com/2025/10/24/business/amazon-aws-outage-eight-sleep-mattress.html">suddenly woken up early last Monday</a> by their beds going haywire. It turns out that a massive AWS outage related to a <a href="https://aws.amazon.com/message/101925/?tag=cnet-buy-button-20&amp;ascsubtag=c2e2900bc5e44d4888546409ba69821f%7Cb76c7969-e605-44a0-9694-d9c588f5bde2%7Cdtp%7Ccn">misconfigured Domain Name Service (DNS) system</a> was able to take down more than just websites and SaaS applications. This event highlights two core risks related to the evolving AI ecosystem.</p><p>The first, is that there is a lot of infrastructure that still supports AI systems. For example, all internet traffic relies on DNS to figure out what servers to talk to, AI systems need to read and write information to databases like AWS DynamoDB, and the physical hardware that supports the relevant computation is also at risk. One reason the AWS outage was so widespread is because there are only a handful of hyperscaled cloud service platforms, and the set of affected data centers (&#8216;us-east-1 based in northern Virginia) happens to be one of the earliest, and most used regions. As we start to incorporate more AI into everyday systems and rely on it more, the risk of creating &#8216;central points of failure&#8217; increases.</p><p>The second issue at play was that the devices with embedded &#8216;smart&#8217; capabilities did not have appropriate fallback mechanisms. Instead of simply disabling the smart features and acting as a plain old bed, some of the smart beds start increasing temperature endlessly. A few days <em>after</em> the outage, <a href="https://www.theverge.com/news/804289/eight-sleep-smart-bed-aws-outage-overheating-offline">Eight Sleep did roll out a dedicated &#8216;outage mode&#8217;</a> that is able to use local bluetooth to send instructions to the smart bed. At this point in time, we have a &#8216;non-AI&#8217; path and process for <em>most</em> things, however will that be true 10 years from now? One popular AI policy proposal is to ensure that there <em>are</em> non-AI fallbacks, bypass, or appeal processes, as a main mitigation towards this potential AI overreliance.</p><p><strong>Key Takeaway:</strong> The AI ecosystem is not yet mature enough to have &#8216;multi-cloud&#8217; deployments, partly because not all model providers are available across all major cloud providers. This maturity, and clear standards for &#8216;non-AI&#8217; modes are likely going to be necessary before AI is adopted for highly critical applications in fields like healthcare or national infrastructure.</p><div><hr></div><h3>2. AI Incident Spotlight - (<a href="https://incidentdatabase.ai/cite/1248/">AI Incident 1248</a>)</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nKbA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nKbA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nKbA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nKbA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!nKbA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F728d1694-5d65-41f9-a049-881779595458_1600x896.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>What Happened:</strong> US Conservative activist <a href="https://www.wsj.com/tech/ai/activist-robby-starbuck-sues-google-over-claims-of-false-ai-info-d0a8bdbe">Robby Starbuck has sued Google</a> claiming that Google&#8217;s AI models regularly defame him by accusing him of sexual assault. He filed a similar lawsuit against Meta earlier this year alleging similar things coming from Meta&#8217;s AI tools. That case was settled before going to trial. Many of his allegations date back to an earlier model, Bard, that Google has since deprecated.</p><p><strong>Why it Matters:</strong> The exact cause of the hallucination isn&#8217;t known. It&#8217;s possible that there was politically motivated intentional misinformation posted online that the systems incorporated (a form of data poisoning), or it could simply be a hallucination because of similarities between Robby and others. Regardless of the source of the misinformation, this case, and others like it, will likely test the precedent of applying &#8216;Section 230&#8217; to AI systems. The issue at the core is who is liable for false information like this. Under current interpretations, web platforms are not directly responsible for the defamatory content created by users of their platform. Whether an AI system counts as a &#8216;platform&#8217; (protected), or user (liable) could radically shift the liability scheme for LLM providers. So far, no one has been successful in winning a defamation case against AI in the US.</p><p><strong>How to Mitigate:</strong> Most of the most recent LLM systems have the ability to conduct real-time web searches in order to pull in &#8216;fresh&#8217; information from reliable sources that can help pull in facts. However this web searching capability is a feature of the <em>system</em> not the models themselves, and therefore many AI features built on just model APIs won&#8217;t have this ability, and allowing searches introduces additional privacy and cyber risks. System prompts that cover how to respond when doing research or answering questions about people can help ensure that only verified information is shared, and to avoid stating potentially negative information altogether.</p><div><hr></div><h3>3. Reddit&#8217;s Hidden Hand in AI Training and Why It Matters</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x6lM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x6lM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x6lM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x6lM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!x6lM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ff92551-7b19-46e1-805b-858c82ab75dc_1600x896.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Few platforms have shaped the modern internet like Reddit. Its open, chaotic mix of expertise, argument, and lived experience has become a goldmine for organic community discussion for brands, but also, for LLMs. Because Reddit posts are conversational, self-correcting, and wide-ranging, they provide the kind of nuanced human expression that makes AI sound more human. In fact, many of the citations or answers you see in AI-generated content trace their roots back to Reddit threads.</p><p>But this organic data source has become a flashpoint. Reddit&#8217;s recent lawsuits against Perplexity and other AI vendors highlight an emerging legal and ethical battleground: who owns public discourse when it fuels AI? The platform argues that scraping and repurposing Reddit data without consent undermines both its community and its business model, especially as Reddit now licenses its content to certain model developers under paid agreements (representing $35M in revenue for Reddit as of Q2 2025, up 24% year over year.)</p><p>Technically, Reddit&#8217;s data is particularly valuable because of its structure. Its posts and comment trees offer deeply nested, timestamped dialogues, rich with slang, reasoning chains, code snippets, and emotional context, all of which are perfect for pretraining and fine-tuning models to understand how humans argue, explain, and empathize. AI crawlers systematically follow thread hierarchies and metadata (like upvotes or subreddit topics) to learn which ideas communities endorse or reject, turning social feedback loops into learning signals. This makes Reddit data uniquely high-quality - and uniquely sensitive.</p><p>For enterprises, Reddit represents both opportunity and risk. It&#8217;s a valuable source for sentiment analysis, market research, and training domain-specific chatbots. Yet the same exposure that makes Reddit content powerful also makes it volatile. A viral post, an out-of-context quote, or an AI model trained on outdated or toxic Reddit data can easily amplify reputational risks.</p><p><strong>Why is it Relevant:</strong> Reddit&#8217;s fight with AI companies is a preview of how data ownership, consent, and community ethics will define the next phase of AI governance.</p><p><strong>Key Takeaway:</strong> AI governance professionals should be alert to how LLMs draw from social data ecosystems like Reddit, and monitor these channels for keyword and thematic discussions where misinformation may be shared. Engaging in these channels, setting the record straight, and controlling your organization&#8217;s own footprint (from ads to AMAs) helps protect your brand. These are not neutral channels, they are living communities whose norms, biases, and moderation dynamics shape AI behavior.</p><div><hr></div><h3>4. Policy Round-Up</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XES2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XES2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!XES2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!XES2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!XES2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XES2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XES2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!XES2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!XES2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!XES2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2802db90-1c36-4d3a-8879-d3f2772dbe9a_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Trustible&#8217;s Top AI Policy Stories</h3><p><strong>Anthropic vs the White House. </strong>Anthropic has been in an <a href="https://www.axios.com/2025/10/21/anthropic-ai-czar-white-house-david-sacks-jd-vance">interesting back-and-forth</a> with White House AI Czar David Sacks, who has <a href="https://www.cnbc.com/2025/10/21/anthropic-ceo-trump-sacks-woke.html">criticized the company</a> as promoting an agenda &#8220;to backdoor Woke AI and other AI regulations.&#8221;</p><p><strong>Our Take:</strong> Anthropic has been a leading voice on AI safety and supported state laws, including the recently enacted SB 53. The spat shows the fine line companies need to walk between promoting AI regulations, innovation, and navigating a tenuous political environment.</p><p><strong>EU Prepared to Expand Copyright Laws to AI. </strong>It appears MEPs are prepared to <a href="https://www.linkedin.com/posts/luca-bertuzzi-186729130_meps-are-preparing-to-call-for-eu-copyright-activity-7386011881951641600-G_gU?utm_medium=ios_app&amp;rcm=ACoAAAROw78Bd0_8xsIVj7o_JK1Kqowm_3tc4rI&amp;utm_source=social_share_send&amp;utm_campaign=copy_link">require EU copyright law</a> apply to AI training regardless of where the training occurs.</p><p><strong>Our Take: </strong>The decision would put US tech companies and their frontier models in the cross hairs. But depending on the actual text, it could have big implications for companies that are training models on external data sources.</p><p><strong>Frontier Model Providers Get EU AI Act Warning. </strong>The Dutch Data Protection Authority <a href="https://www.politico.eu/article/dont-ask-chatbots-how-vote-dutch-authorities-tell-voters-election/">warned four frontier model providers</a> (OpenAI, xAI, Google, and Mistral) that their chatbots&#8217; advice on the Dutch parliamentary elections could classify them as high-risk systems under the EU AI Act.</p><p><strong>Our Take: </strong>While the EU AI Act has not come into full effect yet, this is a good reminder that &#8220;low risk&#8221; systems can evolve into high-risk systems, which is why continuous oversight is necessary.</p><p>In case you missed it, here are additional AI policy developments:</p><p><strong>United States Congress. </strong>A <a href="https://www.grassley.senate.gov/imo/media/doc/ao_to_grassley_re_judiciary_use_of_ai.pdf">recent letter</a> to Federal courts Senate Judiciary Chair Senator Chuck Grassley (R-IA) revealed that interim guidance has been issued to federal courts, which addressed &#8220;non-technical suggestions on the use, procurement, and security of AI tools.&#8221; Grassley supports <a href="https://www.judiciary.senate.gov/press/rep/releases/grassley-calls-on-the-federal-judiciary-to-formally-regulate-ai-use">formal AI regulations</a> for the federal judiciary, given the errors and controversy that have arisen with AI and the courts.</p><p><strong>Trump Administration. </strong>The International Trade Administration (ITA), which sits within the Department of Commerce, is <a href="https://www.trade.gov/press-release/department-commerce-announces-american-ai-exports-program-implementation">launching a program</a> to develop full-stake AI export controls. ITA issued a <a href="https://www.federalregister.gov/documents/2025/10/28/2025-19674/american-ai-exports-program">request for information</a>, in accordance with President Trump&#8217;s <a href="https://www.federalregister.gov/documents/2025/07/28/2025-14218/promoting-the-export-of-the-american-ai-technology-stack">Executive Order</a> on Promoting the Export of the American AI Technology Stack, to seek industry input on how to establish and implement the program.</p><p><strong>Africa. </strong>Gebeya (an Ethiopia-based platform) <a href="https://techafricanews.com/2025/10/28/gebeya-unveils-gebeya-dala-an-ai-app-builder-designed-for-africas-unique-digital-landscape/">launched</a> Gebeya Dala, an AI-powered app builder designed with African cultural considerations. This is the latest in a series of culturally-specific AI tools that have launched this year that align with regional or non-western cultures.</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>China&#8217;s legislature <a href="https://www.chinadaily.com.cn/a/202510/28/WS6900868da310f735438b76a6.html">formally adopted amendments</a> to China&#8217;s cybersecurity law, which promotes stronger ethical AI standards, risk monitoring, and assessments.</p></li><li><p><strong>Japan. </strong>The Government of Japan entered into a <a href="https://www.whitehouse.gov/articles/2025/10/u-s-japan-technology-prosperity-deal/">Memorandum of Cooperation</a> with the U.S. government, in part to cooperate on accelerating AI adoption and innovation.</p></li><li><p><strong>Vietnam.</strong> The Government of Vietnam is working on <a href="https://vir.com.vn/vietnam-amends-law-on-intellectual-property-137386.html">amending its intellectual property</a> regulations to promote AI innovation. The government is also <a href="https://vietnamnet.vn/en/vietnam-backs-open-source-ai-to-empower-small-nations-2457154.html">supporting</a> an open source AI ecosystem as part of their broader strategic plans as a regional AI player.</p></li></ul><p><strong>Australia. </strong>The Labour government indicated that it <a href="https://www.musicbusinessworldwide.com/australia-rejects-proposal-that-would-have-exempted-ai-training-from-copyright-laws/">will not exempt</a> frontier model providers from copyright laws for text and data mining. The government also launched an investigation into how chatbot companies like Character.ai implement safeguards for children. The government is also <a href="https://tech.co/news/australia-sues-microsoft-ai-price-increases">suing Microsoft</a> because of deceptive price hikes related to copilot integrations into Microsoft 365.</p><p><strong>Europe. </strong>AI-related policy developments in Europe include:</p><ul><li><p><strong>Albania. </strong>Albania&#8217;s Prime Minister <a href="https://futurism.com/artificial-intelligence/rama-diella-albania-pregnant">announced</a> that its AI minister, Diella, is pregnant with &#8220;83 children.&#8221; The AI-generated offspring will serve members of parliament as their assistants.</p></li><li><p><strong>EU.</strong> The EU&#8217;s standards-setting body <a href="https://www.euractiv.com/news/fast-tracking-of-eu-ai-act-standards-writing-leads-to-revolt/">caused controversy</a> by announcing that it would &#8220;fast-track&#8221; the most delayed EU AI Act standards with a smaller group of experts. The move was characterized as &#8220;unprecedented.&#8221; The decision caused pushback from some members of the standards body because of &#8220;serious unintended consequences.&#8221;</p></li><li><p><strong>UK. </strong>The Labor Government <a href="https://www.innovationnewsnetwork.com/uk-and-openai-pen-landmark-deal-to-boost-ai-adoption/62852/">announced</a> a new partnership with OpenAI that will allow its UK business customers to host their data within the UK. A local news station also piloted an <a href="https://variety.com/2025/tv/news/ai-news-anchor-channel-4-1236557295/">AI newscaster</a> in a story about whether AI will replace humans in the workforce.</p></li></ul><p><strong>North America. </strong>The Canadian government is considering a series of AI-related laws that will address <a href="https://betakit.com/evan-solomon-teases-new-ai-laws-as-experts-warn-canada-is-behind-international-peers/">deepfake, data transfers</a> and <a href="https://www.biometricupdate.com/202510/canadas-ai-minister-considering-age-assurance-requirements-for-chatbots">age assurances for chatbots</a>. Canada <a href="https://trustible.ai/post/what-does-the-global-pause-on-ai-laws-mean-for-ai-governance/">abandoned its efforts</a> to pass a comprehensive AI law after their federal elections earlier this year.</p><p><strong>Middle East. </strong>AI-related policy developments in the Middle East include</p><ul><li><p><strong>Saudi Arabia.</strong> Saudi-based AI company Humain is <a href="https://www.jpost.com/middle-east/article-871953">making plans</a> to be listed on the Saudi stock exchange as well as the NASDAQ. Humain and Qualcomm also <a href="https://www.qualcomm.com/news/releases/2025/10/humain-and-qualcomm-to-deploy-ai-infrastructure-in-saudi-arabia-">announced</a> a partnership on deploying advanced AI infrastructure in Saudi Arabia.</p></li><li><p><strong>UAE. </strong>G42 (UAE-based AI provider) and Cisco <a href="https://investor.cisco.com/news/news-details/2025/Cisco-and-G42-Deepen-US-UAE-Technology-Partnership-to-Build-Secure-End-to-End-AI-Infrastructure-in-the-UAE/default.aspx">announced</a> a partnership to build &#8220;secure, trusted and high-performance [AI] infrastructure.&#8221;</p></li></ul><p><strong>South America.</strong></p><ul><li><p><strong>Argentina.</strong> A <a href="https://latamjournalismreview.org/articles/argentinas-newsrooms-are-leading-the-ai-revolution-but-risk-getting-devoured-by-it/">recent study</a> showed that a third of media professionals in Argentina use AI to assist with their jobs, which includes &#8220;help write and edit articles, craft headlines and translate text.&#8221; There are some concerns that the lack of regulations for how journalists use AI could hurt the industry financially, as well as a need for AI-related journalistic standards.</p></li><li><p><strong>Chile.</strong> The Chilean government is dealing with public backlash over <a href="https://www.webpronews.com/chiles-ai-ambitions-spark-resource-wars/">resource issues</a> posed by AI infrastructure, specifically as the government seeks to build more data centers. The outcry over AI energy and water consumption has been brewing in other countries, including the U.S.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Trustible AI Newsletter 45: Why SB 53 Won’t Have a Big Impact ]]></title><description><![CDATA[Plus Armilla AI and Trustible&#8217;s new integrated risk offering, AI agents aren&#8217;t always leaving behind a paper trail, and model specs 101]]></description><link>https://insight.trustible.ai/p/trustible-ai-newsletter-45-why-sb</link><guid isPermaLink="false">https://insight.trustible.ai/p/trustible-ai-newsletter-45-why-sb</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 15 Oct 2025 12:45:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TUD1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter!</p><p>At Trustible, we&#8217;re on a mission, giving AI Governance professionals the information, insights, and tools they need. Our legal and technical analysts help filter through the noise and identify what&#8217;s meaningful for enterprises, and what&#8217;s just hype. To ensure we&#8217;re advancing that mission, we&#8217;ve been listening to your feedback over the past few months, and we&#8217;re going to be revamping our newsletter going forward.</p><p>Here&#8217;s what we&#8217;re changing and what you can expect going forward:</p><ul><li><p>Technical Insights</p><ul><li><p>AI technology is evolving at a rapid pace and it&#8217;s hard for anyone to keep up, let alone contextualize what new developments will mean for enterprise use of AI. Our machine learning experts will use this section to translate technical into plain english and connect it to the challenges organizations face.</p></li></ul></li><li><p>Policy Round-Up</p><ul><li><p>Our regular policy roundup will continue as an overview of the major AI policy headlines from the past two weeks. We won&#8217;t be able to cover everything around the world, but we&#8217;ll focus on the developments that most impact practitioners.</p></li></ul></li><li><p>AI Incident Spotlight</p><ul><li><p>This new section will be a deep dive explainer on a recent incident captured in the AI Incident Database. Our goal will be to provide actionable recommendations on how to prevent similar incidents for enterprises.</p></li></ul></li><li><p>Trustible&#8217;s Take</p><ul><li><p>This will be our editorial team&#8217;s take on the most notable news relevant to AI Governance professionals. This section will focus a lot on trying to be the voice of pragmatic AI, and to try and cut through the hype found on traditional and social media platforms.</p></li></ul></li><li><p>News &amp; Updates</p><ul><li><p>We&#8217;ll regularly publish summaries of our more in-depth whitepapers, research, and blog, along with new announcements from Trustible. We want our newsletter to be insightful and actionable for anyone working in AI Governance, but we also want to let you know how we&#8217;re building solutions to many of the issues discussed!</p></li></ul></li></ul><p>With that, in today&#8217;s edition (5-6 minute read):</p><ol><li><p>Trustible&#8217;s Take: Why SB 53 Won&#8217;t Have a Big Impact</p></li><li><p>AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management</p></li><li><p>AI Incident Spotlight - AI Agents Aren&#8217;t Always Leaving a Paper Trail</p></li><li><p>Technical Insight - What To Know about &#8216;Model Specs&#8217;</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li></ol><div><hr></div><h2>1. Trustible&#8217;s Take: Why SB 53 Won&#8217;t Have a Big Impact</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TUD1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TUD1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TUD1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:103681,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/176190336?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TUD1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TUD1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f16d214-8070-4b75-898f-cfce6ef21de3_1600x896.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There has been a lot of fanfare about Governor Newsom signing SB 53, a frontier AI safety bill. Many proponents argue it will have a major impact in regulating AI. We&#8217;re not sure. Especially after its various amendments and changes, we think it&#8217;s very unlikely to make a large impact in the AI space. Here&#8217;s why:</p><ul><li><p>SB 53 is almost entirely a &#8216;subset&#8217; of the EU AI Act&#8217;s requirements</p><ul><li><p>While there are a few differences included in SB 53, including clear whistleblower protections for frontier model provider employees, many of the &#8216;core&#8217; safety framework requirements are extremely similar to the &#8216;safety and security&#8217; requirements in Chapter 3 of the EU&#8217;s Code of Practice for GPAI providers. A major sign that this alignment was intentional was the use of the same computer thresholds to establish &#8216;frontier models&#8217; (SB 53), as &#8216;GPAI Models with Systemic Risk&#8217; (EU AI Act). <a href="https://openai.com/global-affairs/letter-to-governor-newsom-on-harmonized-regulation/">OpenAI actively lobbied Newsom </a>to align these requirements, and notably neither endorsed nor denounced the bill.</p></li></ul></li><li><p>Most frontier labs are already compliant, and proposed enforcement is weak</p><ul><li><p>The requirements of the &#8216;frontier AI framework&#8217; described in SB 53 reads exactly like <a href="https://www.anthropic.com/news/the-need-for-transparency-in-frontier-ai">Anthropic&#8217;s proposal for it</a>, in alignment with <a href="https://openai.com/index/updating-our-preparedness-framework/">OpenAI&#8217;s </a>and <a href="https://deepmind.google/discover/blog/strengthening-our-frontier-safety-framework/">Google&#8217;s framework</a>, and there&#8217;s even a clear path for <a href="https://data.x.ai/2025-08-20-xai-risk-management-framework.pdf">xAI to update theirs </a>to align with the requirements. Given the laws&#8217; very high threshold for &#8216;frontier&#8217; models (10^26 FLOPS), that&#8217;s <em>likely</em> the whole list. In addition, only the California AG is able to enforce the law and can only impose <em>civil</em> <em>penalties</em> for non-compliance. It&#8217;s unclear whether the frontier labs will need to do anything in order to comply, and the risks of doing so are relatively low in the short term.</p></li></ul></li><li><p>It doesn&#8217;t address any meaningful AI issues, nor clarify the legal environment</p><ul><li><p>SB 53 doesn&#8217;t address many of the immediate AI policy issues that downstream deployers and users are struggling with. For example, despite frontier model requirements, the bill does not address copyright issues, liability transfers, or content watermarking. The focus on only &#8216;catastrophic risks&#8217; is unlikely to make a big impact for high risk sectors trying to understand how to simply not break existing customer relationships and laws when deploying AI systems.</p></li></ul></li></ul><p><strong>Key Takeaway:</strong> It&#8217;s unclear whether SB 53 is &#8216;regulation&#8217; or &#8216;regulatory capture&#8217; by frontier model providers. For most downstream AI system builders, the biggest impact will likely be receiving 300 pages of documentation, instead of the current 150 pages.</p><div><hr></div><h2>2. AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Odtd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Odtd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Odtd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:909044,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/176190336?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Odtd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Odtd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f69c8b1-d3b8-4abd-842d-70eeb69d0ba9_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>AI adoption is accelerating, but readiness still lags behind. Nearly 59% of large enterprises are already working with AI and plan to expand investment, yet only 42% have deployed AI at scale. At the same time, incidents of AI failure are rising sharply; the Stanford AI Index recorded a 26&#215; increase in AI incidents since 2012, and more than 140 AI-related lawsuits are currently pending in U.S. courts.</p><p>The message is clear: as organizations race to integrate AI into products, operations, and decisions, risk management has to evolve just as quickly. That&#8217;s why last week, Trustible and Armilla AI <a href="https://trustible.ai/post/ai-governance-meets-insurance-why-trustible-armilla-are-joining-forces-on-ai-risk-management/">announced a new partnership</a> to tackle these challenges.</p><p>Together, we&#8217;re connecting the dots between AI governance and AI insurance, helping enterprises both prevent and protect against emerging AI risks. Trustible helps organizations operationalize responsible AI governance, while Armilla, provides affirmative AI insurance, explicitly covering risks that traditional cyber or E&amp;O policies often exclude, such as model errors, generative AI copyright and libel issues, and regulatory penalties.</p><p>By working together, Trustible and Armilla create a feedback loop between good governance and improved insurability, enabling organizations to innovate confidently while minimizing and transferring residual risk.</p><p>You can learn more about the <a href="https://trustible.ai/armilla/">partnership here</a>.</p><div><hr></div><h2>3. AI Incident Spotlight - AI Agents aren&#8217;t always leaving a paper trail (<a href="https://incidentdatabase.ai/cite/1218/">AI Incident 1218</a>)</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KhLl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KhLl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KhLl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:128328,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/176190336?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KhLl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KhLl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7114121f-ff2f-4e03-aab0-26675e581cde_1600x896.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>What Happened:</strong> Cybersecurity researchers have identified that in some instances, searches from Microsoft 365 Copilot don&#8217;t get properly registered in a document&#8217;s audit log. Any human &#8216;look-up&#8217; or access to a file in Microsoft 365 is logged in a dedicated &#8216;audit trail&#8217; which is an essential part of appropriate access controls. However despite Copilot citing answers from a certain document, there is not always a permanent record that Copilot read information from that file.</p><p><strong>Why it Matters:</strong> Generative AI &#8216;answering&#8217; systems can accidentally break normal access control rules and share information from documents a user may not otherwise have access to (data leakage). Not logging this access appropriately can exacerbate this issue, or even encourage this attack vector because. Most enterprise IT/ security policies require strict access controls and audit logs to help detect unauthorized access or use, and at least in some instances, Copilot may not obey the normal control expectations.</p><p><strong>How to Mitigate:</strong> Without additional information, our theory is that documents get stored in their &#8216;embedding representation&#8217; inside of 365 so that they can be searched over by an LLM. This means the information being accessed by 365 Copilot is not the &#8216;original&#8217; document that stores the audit log. In addition, registering every &#8216;system&#8217; access may bloat the audit log too heavily and there are not yet standards for logging AI system vs human accesses. For now, we recommend keeping highly sensitive documents out of files/folders indexed by 365 until this issue is fixed.</p><div><hr></div><h2>4. Technical Insight - What To Know about &#8216;Model Specs&#8217;</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jzci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jzci!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!jzci!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!jzci!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!jzci!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jzci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:135110,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/176190336?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jzci!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!jzci!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!jzci!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!jzci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc5800035-aad4-4dbd-91c2-b1b56a83d19a_1600x896.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>OpenAI recently made some <a href="https://github.com/openai/model_spec/blob/main/CHANGELOG.md">significant changes to their &#8216;Model Spec&#8217;</a>. The latest changes focus on better behavior for agents, trying to reduce &#8216;sycophancy&#8217;, and incorporated insights from a recent <a href="https://openai.com/index/collective-alignment-aug-2025-updates/">&#8216;public alignment&#8217; project OpenAI</a> has been running. Their model spec outlines how OpenAI has fine-tuned their models to enforce certain nuanced situations. It&#8217;s the best tactical representation of both their top level AI principles, and the specific guardrails built into their systems. While OpenAI is the only provider to use this exact format, other frontier model providers publish their versioned &#8216;System Prompts&#8217; (<a href="https://docs.claude.com/en/release-notes/system-prompts#september-29-2025">Anthropic</a>, <a href="https://github.com/xai-org/grok-prompts">Grok</a>), which serve a similar purpose, although are not as structured.</p><p><strong>Why is it relevant:</strong> OpenAI&#8217;s model spec is one of the most detailed documents about how frontier model providers are trying to align their models. While system cards give in depth insights into <em>technical</em> details, the model spec is consumable by non-technical experts, and contains more actionable information. Knowing what a model is supposed to do is essential for helping establish if the model is malfunctioning (acting outside of the spec), or if it allows certain behaviors that the deployer may want to block on their own. While recent legislation like SB 53 focus only on &#8216;catastrophic risks&#8217; and will require reports on mitigations efforts towards those, the model spec contains relevant information for understanding whether the system has specific guardrails against things like data leakage, generating images based on a person, or how to handle chats about sensitive topics like sexuality. Publishing model specs, or similar documents, could be the next type of &#8216;transparency&#8217; document frontier models may be required to publish in the future. The author of the Trump Administrations&#8217; &#8216;AI Action Plan, Dean Ball, recently <a href="https://www.hyperdimensional.co/p/be-it-enacted">proposed such an idea in his newsletter</a>, even while arguing for federal pre-emption of other AI regulations.</p><p><strong>Key Takeaway:</strong> OpenAI&#8217;s model spec is better documentation to examine than their system cards for non-technical AI governance professionals trying to understand the risks of deploying or using OpenAI models.</p><div><hr></div><h2>5. Policy Round-Up</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zExb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zExb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!zExb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!zExb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!zExb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zExb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:557744,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://insight.trustible.ai/i/176190336?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zExb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!zExb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!zExb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!zExb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4c1e7c84-c839-4c65-8498-76a60d297083_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Trustible&#8217;s Top AI Policy Stories</h3><p><strong>The Problems with Sora 2. </strong>OpenAI <a href="https://openai.com/index/sora-2/">launched</a> its new video generation model, Sora 2, a couple of weeks ago. Since then, Sora 2 has raised fresh concerns over its environmental impact, ability to spread misinformation, and IP infringement.</p><p><strong>Our Take:</strong> When using new models, AI governance professionals should consider metrics like the model&#8217;s impact on resources (e.g., energy and environment), as well as understand what types of outputs are being generated and the appropriate ways to use them.</p><p><strong>The EU&#8217;s AI Breakup with the US. </strong>The European Commission released the <a href="https://digital-strategy.ec.europa.eu/en/policies/apply-ai">Apply AI Strategy</a>, which will invest approximately &#8364;1 billion in the EU&#8217;s AI industry to reduce its reliance on the US and China.</p><p><strong>Our Take: </strong>A new EU ecosystem will provide companies with new choices for AI models and tools that take a different approach to AI safety and security than the US.</p><p><strong>Fears Over the AI Bubble. </strong>There have been growing concerns over an &#8220;<a href="https://www.bbc.com/news/articles/cz69qy760weo">AI bubble</a>&#8221; in the economy that is reminiscent of the dot-com bubble from the late-1990s.</p><p><strong>Our Take: </strong>AI is here to stay (whether or not a bubble exists or bursts) and that means AI governance is not going anywhere.</p><p><strong>California Regulates Companion Chatbots. </strong>Governor Newsom signed <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243">SB 243</a> into law, which protects minors and other vulnerable groups from AI companions.</p><p><strong>Out Take: </strong>Chatbots are generally thought to be low risk use cases, but the new law underscores how companies need to have insights into the safeguards around their chatbots.</p><p>In case you missed it, here are additional AI policy developments:</p><p><strong>United States Congress. </strong>Two bipartisan AI bills were recently introduced in the Senate. The <a href="https://www.judiciary.senate.gov/imo/media/doc/OLL25B47.pdf">AI LEAD Act</a> would impose a &#8220;duty of care&#8221; standard for AI system developers and would classify AI systems as products, as opposed to platform. The <a href="https://outreach.senate.gov/iqextranet/iqClickTrk.aspx?&amp;cid=SenHawley&amp;crop=15759QQQ11147926QQQ8888007QQQ8019990&amp;report_id=&amp;redirect=https%3a%2f%2fwww.hawley.senate.gov%2fwp-content%2fuploads%2f2025%2f09%2fHawley-Blumenthal-Artificial-Intelligence-Risk-Evaluation-Act.pdf&amp;redir_log=071212710711839">AI Risk Evaluation Act</a> would establish the advanced AI evaluation program through the Department of Energy.</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>The Chinese government is <a href="https://www.reuters.com/world/china/china-steps-up-customs-crackdown-nvidia-ai-chips-ft-reports-2025-10-10/">attempting to crackdown</a> on Nvidia chip imports as it seeks to promote its own homegrown chip industry. It has also been reported that Chinese government officials <a href="https://www.cnn.com/2025/10/07/politics/china-chatgpt-surveillance">were caught using ChatGPT</a> to create tools for mass surveillance and social media monitoring.</p></li><li><p><strong>Vietnam. </strong>The Ministry of Science and Technology is <a href="https://www.mlex.com/mlex/artificial-intelligence/articles/2396124/vietnam-releases-draft-ai-law-for-public-comment">seeking public input</a> on a comprehensive AI law.</p></li></ul><p><strong>Europe. </strong>ASML&#8217;s Chief Financial Officer <a href="https://www.politico.eu/article/dutch-chips-giant-asml-executive-roger-dassen-slams-eu-ai-overregulation/">criticized the EU</a> for overregulating AI, claiming that the difficulty with AI in Europe is &#8220;because [the EU] started with regulating, to keep AI under the thumb.&#8221;</p><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>OpenAI is looking to <a href="https://www.cbc.ca/news/business/open-ai-canada-data-centres-digital-sovereignty-9.6935195">Canada for cheaper energy</a> and as part of the deal would help build new data centers in Canada as it pushes to expand its sovereign AI industry.</p></li><li><p><strong>Mexico.</strong> Salesforce <a href="https://www.reuters.com/world/americas/salesforce-spend-1-billion-mexico-over-next-five-years-drive-ai-adoption-2025-10-08/">announced</a> that it would invest approximately $1 billion in Mexico over the next five years in an effort to expand AI adoption.</p></li></ul><p><strong>South America. </strong>OpenAI <a href="https://www.reuters.com/world/americas/openai-sur-energy-weigh-25-billion-argentina-data-center-project-2025-10-10/">signed a letter of intent</a> that would invest up to $25 billion for a large-scale data center, which is expected to be built in the Argentine Patagonia.</p><div><hr></div><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[Trustible AI Governance Newsletter #44: Model Swaps & Data Trust ]]></title><description><![CDATA[Plus what makes for a perfect use case intake process, and our global policy roundup]]></description><link>https://insight.trustible.ai/p/trustible-ai-governance-newsletter</link><guid isPermaLink="false">https://insight.trustible.ai/p/trustible-ai-governance-newsletter</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 01 Oct 2025 14:39:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!01W9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! It&#8217;s been a busy week in the AI regulatory landscape, as California enacts new legislation targeted towards model builders, and the E.U. continues to ponder the next steps for AI Act enforcement (we wrote about the pros and cons of this <a href="https://trustible.ai/post/should-the-eu-stop-the-clock-on-the-ai-act/">on our blog</a>.)</p><p>In today&#8217;s edition (5-6 minute read):</p><ol><li><p>Model Swap</p></li><li><p>Good Models Still Require Credible Data</p></li><li><p>What Makes for the &#8220;Perfect&#8221; AI Use Case Intake Process?</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li></ol><div><hr></div><h2><strong>1. Model Swap</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!01W9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!01W9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!01W9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!01W9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!01W9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!01W9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!01W9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!01W9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!01W9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!01W9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa6142da8-9e9c-4947-ae96-0e372e6b9300_1600x896.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><a href="https://openai.com/index/openai-anthropic-safety-evaluation/">OpenAI</a> and <a href="https://alignment.anthropic.com/2025/openai-findings/">Anthropic</a> recently ran a cross-evaluation of each other&#8217;s models. Both highlighted different risks, and demonstrated how task framing shapes evaluation outcomes.</p><p>OpenAI leaned on instruction-following and jailbreak resilience as core safety markers, noting for instance that Claude refused up to 70% of hallucination probes rather than risk giving a wrong answer. Anthropic emphasized long-horizon misuse and sycophancy, finding that OpenAI&#8217;s o3 resisted harmful misuse better than GPT-4o, GPT-4.1, or o4-mini, which were often willing to cooperate with simulated bioweapon or drug synthesis requests.</p><p>While some research labs like <a href="https://epoch.ai/">Epoch AI</a>, and government sponsored institutes like the UK&#8217;s <a href="https://www.aisi.gov.uk/">AI Security Institute (AISI)</a> have done independent model evaluations, this was the first instance of the 2 frontier model leaders conducting this kind of evaluation swap. The differences in evaluation approaches, and respective strengths/weaknesses of each other&#8217;s models highlights how the philosophies, and incentives, of the model providers can be reflected in their models. It also highlights the need to have a wide range of evaluations, and now just rely on self reporting. The evaluations focused on &#8216;frontier capability&#8217; assessments however, not assessments on more &#8216;day to day&#8217; AI tasks. OpenAI more recently introduced <a href="https://openai.com/index/gdpval/">GDPEval</a> to assess model performance on industry specific tasks, and this type of benchmark could quickly become an industry standard relevant to AI deployers and users.</p><p><strong>Key Takeaway:</strong> Evaluations that just rely on a single number, can easily mask some of the more nuanced differences between foundation models. Once you dig into the details, the priorities, culture, and incentives of a model provider may be more clear, and may be an issue for certain types of use cases.</p><div><hr></div><h2><strong>2. Good Models Still Require Credible Data</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fz2Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 424w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 848w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 1272w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png" width="300" height="224" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:224,&quot;width&quot;:300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 424w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 848w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 1272w, https://substackcdn.com/image/fetch/$s_!Fz2Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90120b3c-c24c-4c88-8ed6-96f457958add_300x224.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>A <a href="https://onlinelibrary.wiley.com/doi/10.1002/leap.2018">recent study</a> showed that ChatGPT may return answers based on claims in redacted scientific studies, even when information about the redaction appears in the same document. Unlike hallucinations, where the model fabricates a claim, this study points to the models inability to properly contextualize information in the training data and recognize when it&#8217;s inaccurate. This challenge can&#8217;t be solved easily: first, the pre-training process breaks up the documents into multiple pieces, thus a retraction statement may have been associated with the title of the paper, but not with individual statements in another part of the paper. Second, modem LLMs are trained on a vast corporas of data that are not manually reviewed, thus removing all erroneous information from pre-training data isn&#8217;t feasible (even if it was - this process would be biased and subjective).</p><p>While this study focused on GPT 4o-mini&#8217;s internal knowledge, many modern systems integrate external information through web searches or connection to internal knowledge bases (i.e. RAG). These integrations can help the model analyze the document as a whole and consider retractions published on external platforms. We tested a couple example claims from the study and found that during a web search the model recognized the retraction and corrected its original answer.. However, this process still relies on the model having access exclusively to up-to-date information; an out-of-date document that isn&#8217;t annotated as such can cause similar erroneous assertions. This failure mode may have been behind Air Canada&#8217;s chatbot <a href="https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know">stating false advice</a> on bereavement travel last year. Our own team at Trustible has encountered similar challenges using AI for coding tasks, where old, outdated code (tech debt), regularly confuses the model.</p><p><strong>Key Takeaways: </strong>LLMs can not reliably identify if information is up-to-date, especially if updates or retractions are not clearly associated with a specific fact. Practitioners should take care to maintain accurate data sources for training, fine-tuning and RAG. In addition, detailed prompting can help the system explicitly check for potential inconsistencies.</p><div><hr></div><h3><strong>3. What Makes the &#8220;Perfect&#8221; AI Use Case Intake Process?</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FPE7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FPE7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 424w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 848w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FPE7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FPE7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 424w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 848w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!FPE7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12c98937-c3e8-451e-869c-ef3a7268f269_2048x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>AI intake is the front door to governance. In a panel at the IAPP AI Governance Global North America in Boston a few weeks ago, Trustable led a discussion with leaders from Leidos and Nuix on AI use case intake processes. The room quickly aligned on a core truth: there&#8217;s no &#8220;perfect&#8221; intake process, only the one that fits your org&#8217;s risk profile, scale, and speed. The real work is choosing trade offs you can live with.</p><p>The conversation unpacked six design levers teams should tune, not max out:</p><ul><li><p>Granularity: are you tracking whole use cases, or features within products?</p></li><li><p>Heaviness: what&#8217;s the minimum set of questions that still surfaces risk?</p></li><li><p>Outcomes: does intake simply route, or also drive mitigations and decisions?</p></li><li><p>Participation: who owns what&#8212;privacy, legal, security, product, HR, IT?</p></li><li><p>Implementation: start with forms/spreadsheets, but plan for workflow and automation.</p></li><li><p>Timing: catch ideas early without slowing experimentation.</p></li></ul><p>Practitioners shared pragmatic moves: start where you are (even if it&#8217;s messy), iterate fast, and reframe intake from &#8220;audit&#8221; to &#8220;risk reduction.&#8221; Build muscle memory with short cycles and clear handoffs. As volume grows, expect ad hoc docs to buckle; that&#8217;s your signal to standardize fields, centralize your inventory, and automate routing so triage and transparency don&#8217;t degrade. Perhaps most importantly, make intake a shared habit&#8212;invite cross-functional partners in before the first pilot, not after the first incident. Culture change is the glue that keeps the process from backsliding.</p><p><strong>Key Takeaway: </strong>fit beats perfection. A right-sized intake gives leaders visibility, lets teams move quickly with guardrails, and creates artifacts that stand up to regulatory and stakeholder scrutiny.</p><p><a href="https://trustible.ai/post/what-is-the-perfect-ai-use-case-intake-process/">You can read the full recap on our blog</a>.</p><div><hr></div><h2><strong>4. Global &amp; U.S. Policy Roundup</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nk8y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nk8y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nk8y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nk8y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Nk8y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F469091fb-c2f0-41e5-b408-ec9c8600d9db_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Trustible&#8217;s Top AI Policy Stories</h3><p><strong>OSTP Request for Information.</strong> As part of the Trump Administration&#8217;s AI Action Plan, OSTP is <a href="https://public-inspection.federalregister.gov/2025-18737.pdf">seeking input</a> on existing rules that may hinder AI deployment or adoption.</p><p><strong>Our Take:</strong> Beyond commenting on rules that industry does not like, this proceeding will allow companies to identify redundancies among existing rules that can help streamline AI governance processes should the federal government tweak them.</p><p><strong>Anthropic settlement approved.</strong> Judge Alsup <a href="https://apnews.com/article/anthropic-authors-copyright-judge-artificial-intelligence-9643064e847a5e88ef6ee8b620b3a44c">approved</a> the $1.5 billion settlement agreement between Anthropic and authors who say the company infringed on their copyrights.</p><p><strong>Our Take: </strong>The copyright cases continue to highlight why companies need to know where data used in their AI tools come from and they have the requisite permission to use IP in those tools.</p><p><strong>United Nations AI Dialogue.</strong> The UN <a href="https://www.nytimes.com/2025/09/25/business/un-artificial-intelligence.html?login=google&amp;auth=login-google">announced</a> the &#8220;global dialogue on AI governance,&#8221; which would create a panel of experts to study best practices for AI governance.</p><p><strong>Our Take: </strong>The UN is trying to stake its claim on AI standards but the final recommendations will likely remain at a high-level, making it difficult to operationalize at the enterprise level.</p><p><strong>AI Bills in California.</strong> Governor Gavin Newsom signed <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53">SB 53</a> into law, which is a watered down version of last year&#8217;s SB 1047.</p><p><strong>Our Take: </strong>The new law imposes new governance obligations for frontier model providers with some limited downstream impacts.</p><p>In case you missed it, here are additional AI policy developments:</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China.</strong> Deepseek <a href="https://www.livemint.com/technology/tech-news/ai-is-transforming-how-software-engineers-do-their-jobs-just-dont-call-it-vibecoding-11759168172565.html">launched DeepSeek-V3.2-Exp</a>, a new model billed as an &#8220;intermediate step toward [their] next-generation architecture.&#8221; The new model is likely to put new pressure on other Chinese model providers like &#8203;Alibaba, which launched their new Qwen 3 Max model earlier this month.</p></li><li><p><strong>Japan.</strong> The Ministry of Defense is <a href="https://thedefensepost.com/2025/09/22/japan-military-ai-rules/">taking a hard line</a> on AI in the military, allowing it to help with defense operations but maintaining that humans must be in charge of lethal force decisions. Japan&#8217;s new policy comes as it formalized the <a href="https://www.safia.hq.af.mil/IA-News/Article/4302028/us-and-japan-formalize-samurai-project-arrangement-to-advance-ai-safety-in-unma/">SAMURAI Project</a> with the U.S., which is intended to advance AI safety in unmanned aerial vehicles.</p></li><li><p><strong>India. </strong>The government <a href="https://www.mexc.com/en-GB/news/india-venezuela-unveil-ai-pact-ghana-advances-digital-id/108745">signed a new partnership agreement</a> with Venezuela to &#8220;jointly explore the integration of [AI] and digital public infrastructure in sectors such as health, payments, and education.&#8221;</p></li></ul><p><strong>Australia.</strong> The Digital Economy Minister <a href="https://www.mlex.com/mlex/articles/2388339/our-ai-regulation-will-be-light-touch-australian-minister-tells-tech-companies">reiterated</a> that Australia will take a light touch approach to AI regulation. The Australian government was interested in a more comprehensive approach but have since <a href="https://www.mlex.com/mlex/articles/2388339/our-ai-regulation-will-be-light-touch-australian-minister-tells-tech-companies">abandoned that effort</a>.</p><p><strong>Europe. </strong>AI-related policy developments in the Europe include:</p><ul><li><p><strong>EU. </strong>The European Commission (EC) <a href="https://trustible.ai/post/should-the-eu-stop-the-clock-on-the-ai-act/">may pause implementing</a> the AI Act, after strongly rejecting the idea back in July. The potential pause will be discussed at an upcoming AI Board meeting in October primarily because of implementation delays at the national level. The EC also published <a href="https://digital-strategy.ec.europa.eu/en/consultations/ai-act-commission-issues-draft-guidance-and-reporting-template-serious-ai-incidents-and-seeks">draft guidelines</a> for reporting serious incidents as required under the AI Act.</p></li><li><p><strong>Italy. </strong>The Italian government <a href="https://www.theguardian.com/world/2025/sep/18/italy-first-in-eu-to-pass-comprehensive-law-regulating-ai">passed a new AI law </a>that criminalizes certain uses of AI, such as creating deepfakes or assisting with committing crimes. The law also requires that children under the age of 14 get consent from their parents to access AI.</p></li></ul><p><strong>Middle East.</strong></p><ul><li><p><strong>UAE. </strong>Sam Altman <a href="https://timesofindia.indiatimes.com/technology/tech-news/sam-altman-meets-uae-president-sheikh-mohamed-bin-zayed-al-nahyan-to-boost-its-ai-research-and-usage/articleshow/124192588.cms">met</a> with President Sheikh Mohamed bin Zayed Al Nahyan to discuss how to foster closer cooperation on AI.</p></li><li><p>Saudi Arabia. Representatives from South Korea <a href="https://www.arabnews.com/node/2617056/business-economy">met with officials</a> in Saudi Arabia to discuss closer collaboration on building more innovative environments for SMEs and expanding new market opportunities.</p></li></ul><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>The AI and Digital Innovation Minister <a href="https://www.digitaljournal.com/tech-science/canada-launches-ai-task-force-with-30-day-sprint-for-national-strategy/article">announced</a> a new AI Task Force that will work over the next 30 days to make recommendations for Canada&#8217;s national AI strategy. It is unclear whether these recommendations will turn into actual regulations.</p></li><li><p><strong>Mexico.</strong> CloudHQ <a href="https://www.newsnationnow.com/business/tech/us-tech-company-to-build-4-8-billion-data-center-in-mexico/">announced</a> a $4.8 billion data center in a state just north of Mexico City.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[AI Copyright Conundrum Continues As the Era of AEO & GEO Dawns ]]></title><description><![CDATA[Plus judging LLM-as-a-Judge as a methodology and our global policy and industry roundup]]></description><link>https://insight.trustible.ai/p/ai-copyright-conundrum-continues</link><guid isPermaLink="false">https://insight.trustible.ai/p/ai-copyright-conundrum-continues</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 17 Sep 2025 11:05:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LwjB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome back to another edition of the Trustible AI Governance Newsletter! The Trustible team is shipping up to Boston this week for the <strong><a href="https://iapp.org/conference/iapp-ai-governance-global-north-america/register-now-aiggna25/">2025 IAPP AI Governance Global North America Conference</a></strong>, and teaming up with AI governance leaders from Nuix and Leidos to talk through how to build the perfect AI intake workflow - <em>or are we?</em> If you&#8217;re in Boston this week, you&#8217;ll have to attend our session on Friday to find out.</p><p>In the meantime, in today&#8217;s edition (5-6 minute read):</p><ol><li><p>The Great AI Copyright Conundrum Part II</p></li><li><p>Judging LLM-as-a-Judge</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li><li><p>AEO / GEO - The New SEO</p></li></ol><div><hr></div><h3><strong>1. The Great AI Copyright Conundrum Part II</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LwjB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LwjB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LwjB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:150540,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/173808175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LwjB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!LwjB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa584b45b-1163-415f-88dc-c424323b6aca_1600x896.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The summer saw a flurry of developments on the interplay between AI and IP. <strong><a href="https://trustible.substack.com/p/trustible-and-databricks-team-up?utm_source=publication-search">Two other cases</a></strong>, one against Meta and another involving Anthropic, handed significant victories to big tech and how their AI systems use protected works. As we move into the fall, there is another avalanche of news on AI and copyright litigation.</p><p>We previously <strong><a href="https://trustible.substack.com/p/trustible-and-databricks-team-up?utm_source=publication-search">discussed a lawsuit</a></strong> brought by a group of authors who sued Anthropic for copyright infringement by alleging that the company trained Claude on their protected works without their permission. Anthropic sought to put an end to the case with a <strong><a href="https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai">$1.5 billion settlement</a></strong>, which would have paid roughly $3,000 per book that they pirated. However, Judge William Alsup rejected the deal because of his concerns with how the deal was struck and the claims process. Judge Alsup will review the deal again on September 25 to &#8220;see if [he] can hold my nose and approve it.&#8221; Meanwhile, Apple is <strong><a href="https://www.cnet.com/tech/services-and-software/apple-gets-hit-with-ai-copyright-lawsuit-days-before-iphone-17-event/">subject to a new lawsuit</a></strong> by authors who claim that their protected works were used to develop Apple Intelligence. Perplexity <strong><a href="https://www.theverge.com/news/777344/perplexity-lawsuit-encyclopedia-britannica-merriam-webster">was also recently sued</a></strong> by Encyclopedia Britannica and Merriam-Webster over how it uses their material for its &#8220;answer engine.&#8221;</p><p>The Anthropic settlement sought to provide an off-ramp for the dispute at a cost lower than prolonging the litigation. Yet the broader concern (as raised by Judge Alsup) is how this deal came about, given it was done behind closed doors. Generally settlements are not bad, but when genuine legal questions need resolution, a quick fix cash settlement does not quite meet the moment. Moreover, determining the value for using someone&#8217;s IP to train an AI system is not a cut and dry equation. Paying IP owners the same amount (as seen in the Anthropic settlement) ignores the fact that some IP is more valuable than others. A recent <strong><a href="https://www.nytimes.com/2025/09/13/opinion/culture/a-chatbot-ate-my-books-jackpot.html?unlocked_article_code=1.mE8.mbEF.zyP26IpwbG1J&amp;amp;smid=url-share">New York Times op-ed</a></strong> raises legitimate issues over how to put a price on someone&#8217;s IP and how someone could effectively &#8220;game the system&#8221; if all IP is worth the same price.</p><p><strong>Key Takeaway:</strong> First and foremost - know where your data comes from because you do not want to be accused of violating IP law. Second, we need policymakers to intervene and actually fix the issue because relying on court decisions to effectively make law is not a long-term sustainable solution. If there are growing concerns over the &#8220;patchwork of state AI laws&#8221; in the US, then there should be greater concerns with the patchwork of legal decisions and outcomes, especially when the US Supreme Court is less inclined to settle these types of questions.</p><div><hr></div><h3><strong>2. Judging LLM-as-a-Judge</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E7yD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E7yD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E7yD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:109200,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/173808175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E7yD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 848w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E7yD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6d6742c-9869-403f-bd8f-3ce6a383fb73_1600x896.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Over the last two years, the LLM-as-a-Judge (LLJ) methodology has become a go-to technique in AI development - but a recent paper titled <em><strong><a href="https://arxiv.org/pdf/2508.18076">Neither Valid nor Reliable? Investigating the Use of LLMs as Judges</a></strong> </em>by Chenbouni et al. highlights critical weaknesses of this approach. LLJ refers to the practice of using existing LLMs to support development of new AI systems. Common uses include:</p><ul><li><p><strong>Performance Evaluation</strong>: Replacing human annotators for reviewing outputs of AI systems. For example, LLJ can be used to <strong><a href="https://www.databricks.com/blog/announcing-mlflow-28-llm-judge-metrics-and-best-practices-llm-evaluation-rag-applications-part">judge fluency and professionalism</a></strong> of model outputs or to evaluate safety alignment (many of the tests in <strong><a href="https://cdn.openai.com/gpt-5-system-card.pdf">GPT-5&#8217;s system card</a></strong> rely on this technique).</p></li><li><p><strong>Model Enhancement: LLJs can assist with training of new models through processes like reward modeling and/or replacing humans in RLHF (where the goal is to select a preferred output from a model).</strong></p></li><li><p><strong>Data Annotation: LLJs can be used to annotate datasets for model training and evaluation.</strong></p></li></ul><p>LLJs became popular for these tasks because they are considered a proxy for human judgement that can be used cheaply, quickly and at-scale; however, the limitations of this approach are often overlooked. Two key factors to consider are:</p><ul><li><p><strong>Validity: </strong>When assessing whether LLJs do in fact agree with human judgement, it is important to consider whether humans themselves can agree on this task. On tasks with a high degree of ambiguity, both the method for measuring agreement and an appropriate threshold may not be clear. For example, the GPT-5 system card mentions a 75% agreement between the LLJ and human assessors - this threshold may suggest that relying on the LLJ alone may not be sufficient.</p></li><li><p><strong>Reliability: </strong>LLJs exhibit a variety of biases ranging from <strong><a href="https://arxiv.org/pdf/2406.11939">preferring outputs from itself</a></strong> (when used to compare multiple candidate models) to <strong><a href="https://dl.acm.org/doi/pdf/10.1145/3715275.3732204">skewed racial and gender preferences in reward modeling</a></strong>. Furthermore, while some modern models can output an &#8220;explanation&#8221; for their label, this text may not be faithful to the underlying processes. With these factors in mind, a rating for &#8220;professionalism&#8221; (as in the above example) of text may reflect a number of subjective factors beyond the user-specificed criteria.</p></li></ul><p>Chenbouni&#8217;s paper outlines a deeper list of interconnected challenges with the use of LLJs and my. Many of the challenges outlined parallel human evaluation on difficult tasks. Mitigations can include using LLM-as-a-Jury (an ensemble of diverse LLJs) and carefully reviewing LLJ outputs for biases and consistency before deploying them at scale.</p><p><strong>Key Takeaway: </strong>In an ecosystem where the pressure to deploy AI systems quickly is high, LLJs can be seen as a quick win for evaluation; however, concerns around validity and reliability suggest that they themselves should be carefully evaluated for appropriateness for a given task. When using LLJs, practitioners should examine the task itself for subjectivity (for example by evaluating human inter-annotator scores on a sample), evaluate the model outputs against a large gold-standard dataset, analyze the errors for systematic biases, and when possible uses an ensemble of LLJs (i.e. LLMs-as-a-Jury.)</p><div><hr></div><h3><strong>3. Global &amp; U.S. Policy Roundup</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2bWA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2bWA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2bWA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:229307,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/173808175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2bWA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!2bWA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F97f04ddc-595e-462d-a7e2-e095cc6e43a2_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>U.S. Federal Government.</strong> AI-related policy developments across the federal government include:</p><ul><li><p><strong>White House.</strong> Office of Science and Technology Policy (OSTP) director Michael Kratsios <strong><a href="https://www.axios.com/2025/09/12/kratsios-white-house-ai-plans">teased</a></strong> the White House&#8217;s AI policy plans. OSTP will solicit input from the public on &#8220;federal regulations that they think hold back the development and deployment of AI.&#8221; President Trump also <strong><a href="https://www.whitehouse.gov/articles/2025/09/president-trump-tech-leaders-unite-american-ai-dominance/">hosted</a></strong> CEO&#8217;s from all the major tech companies to discuss AI innovation.</p></li><li><p><strong>Federal Agencies. </strong>The FTC <strong><a href="https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions">launched an inquiry</a></strong> into how certain companies deploy their chatbots, with a particular focus on harms posed to teens and children. The inquiry is not a formal investigation and seeks to obtain information from seven AI companies. The Government Accountability Office <strong><a href="https://www.gao.gov/products/gao-25-107933">released a report</a></strong> outlining 94 AI-related requirements for the federal government.</p></li><li><p><strong>Congress.</strong> Senator Ted Cruz (R-TX) is back in the news <strong><a href="https://thehill.com/policy/technology/5506290-ted-cruz-ai-provision/">declaring</a></strong> that the AI moratorium on state and local laws is &#8220;not dead at all.&#8221; It is unclear what Cruz meant by his comments, as the moratorium language was removed by the Senate from the Republicans reconciliation bill earlier this summer. Cruz also <strong><a href="https://www.commerce.senate.gov/2025/9/sen-cruz-unveils-ai-policy-framework-to-strengthen-american-ai-leadership">introduced the SANDBOX Act</a></strong>, which would permit AI developers and deployers to seek waivers for regulations that could &#8220;impede their work.&#8221; OSTP would coordinate with federal agencies to evaluate requests within the scope.</p></li></ul><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>California. </strong>The state legislature <strong><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53">passed SB 53</a></strong> with a veto-proof majority. The bill is a reincarnation of last year&#8217;s SB 1047 with a far narrower scope that requires foundation model providers to implement and disclose safety and security protocols for their models.</p></li><li><p><strong>Michigan. </strong>The Michigan Chamber of Commerce <strong><a href="https://www.michamber.com/news/house-panel-considers-sweeping-bill-to-regulate-ai/">came out against</a></strong> a <strong><a href="https://www.legislature.mi.gov/Bills/Bill?ObjectName=2025-HB-4668">proposed bill</a></strong> in the state legislature that would comprehensively regulate AI in the same vein as Colorado&#8217;s AI law. The Chamber called the bill &#8220;well-intentioned&#8221; but favored a federal AI law.</p></li><li><p><strong>Ohio. </strong>The state&#8217;s Department of Homeland Security <strong><a href="https://www.wdtn.com/news/ohio/ohio-homeland-security-unveils-new-ai-suspicious-activity-reporting-system/">launched a new reporting system</a></strong> that uses AI to disseminate information about potential threats of violence. Users can upload photo, video and audio of alleged suspicious activity, which is reported and reviewed by analysts at the Statewide Terrorism Analysis and Crime Center.</p></li><li><p><strong>Texas. </strong>Secretary of Education, Linda McMahon, <strong><a href="https://www.kvue.com/article/news/education/secretary-education-artificial-intelligence-austin/269-10b95d3b-2ce5-4648-9c27-34c293f0ed7c">visited a private school</a></strong> in Austin, TX that is relying on AI to help teach its students. The Secretary also participated in a roundtable discussion about &#8220;AI literacy and the evolving role of technology in education.&#8221;</p></li></ul><p><strong>Africa. </strong>The South African government <strong><a href="https://iafrica.com/south-africa-moves-to-establish-national-ai-network-of-experts/">hosted a forum</a></strong> to discuss creating a &#8220;National Artificial Intelligence Network of Experts.&#8221; Uganda <strong><a href="https://africa.businessinsider.com/local/markets/another-first-ai-factory-in-africa-uganda-is-said-to-have-made-its-entry-into-the-ai/n47mrec">launched the Aeonian Project</a></strong>, which intends to build Africa&#8217;s first &#8220;AI factory.&#8221;</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>The Beijing Internet Court <strong><a href="https://natlawreview.com/article/early-jurisprudence-beijing-intersection-artificial-intelligence-copyright-and#google_vignette">released decisions</a></strong> in eight AI-related cases. Notably, the court found that AI-generated content can be protected under the country&#8217;s copyright laws and that platforms using algorithms to detect and remove AI-generated content must provide &#8220;reasonable explanations&#8221; for their decisions. While these cases are significant, they are non-binding because China follows a civil law system.</p></li><li><p><strong>Kazakhstan.</strong> The President of Kazakhstan <strong><a href="https://eurasianet.org/kazakhstans-president-proposes-creating-a-ministry-of-artificial-intelligence">announced plans</a></strong> for a new Ministry of Artificial Intelligence and Digital Development to address some of the emerging threats posed by AI technologies.</p></li><li><p><strong>South Korea. </strong>The Ministry of Science and Information and Communication Technology <strong><a href="https://www.korea.kr/briefing/pressReleaseView.do?newsId=156706814&amp;pageIndex=1&amp;repCodeType=&amp;repCode=&amp;startDate=2024-09-09&amp;endDate=2025-09-09&amp;srchWord=AI%EA%B8%B0%EB%B3%B8%EB%B2%95%20%ED%95%98%EC%9C%84%EB%B2%95%EB%A0%B9%20%EC%A0%9C%EC%A0%95%EB%B0%A9%ED%96%A5&amp;period=year">released a draft decree</a></strong> to implement South Korea&#8217;s AI Basic Act. The draft decree requires certain disclosures for developers of generative and high-impact AI, as well as establishes safety assurance standards for high-performance AI.</p></li></ul><p><strong>Europe. </strong>AI-related policy developments in the Europe include:</p><ul><li><p><strong>Albania.</strong> The Prime Minister <strong><a href="https://www.nbcnews.com/world/europe/albanias-prime-minister-appoints-ai-generated-minister-rcna230963">appointed an AI-generated minister</a></strong> to address corruption and improve government transparency. The minister, known as Diella, was created with help from Microsoft.</p></li><li><p><strong>EU. </strong>The European Commission <strong><a href="https://www.linkedin.com/posts/luca-bertuzzi-186729130_just-in-case-anyone-still-had-doubts-that-activity-7373656925340655616-ulUU?utm_medium=ios_app&amp;rcm=ACoAAAROw78Bd0_8xsIVj7o_JK1Kqowm_3tc4rI&amp;utm_source=social_share_send&amp;utm_campaign=copy_link">announced</a></strong> that the forthcoming "digital omnibus&#8221; will cover &#8220;targeted adjustments&#8221; to the EU AI Act. At the same time, the &#8220;stop the clock&#8221; movement got a <strong><a href="https://www.euronews.com/my-europe/2025/09/16/draghi-calls-for-pause-to-ai-act-to-gauge-risks">notable boost</a></strong> from former Italian Prime Minister, Mario Draghi, who argued that the EU AI Act should be paused to assess potential &#8220;drawbacks.&#8221; Poland also is <strong><a href="https://www.linkedin.com/posts/luca-bertuzzi-186729130_poland-has-proposed-delaying-the-eu-ai-act-activity-7373654130181111808-1MHH?utm_medium=ios_app&amp;rcm=ACoAAAROw78Bd0_8xsIVj7o_JK1Kqowm_3tc4rI&amp;utm_source=social_share_send&amp;utm_campaign=copy_link">proposing a 6 to 12 month delay</a></strong> for high-risk AI system penalties under the EU AI Act. Mistral also <strong><a href="https://www.rfi.fr/en/france/20250909-france-s-mistral-ai-soars-to-%E2%82%AC11-7bn-in-value-after-record-investment-drive">announced</a></strong> that it raised &#8364;1.7 billion in its most recent funding round and secured a strategic partnership with Dutch semiconductor company, ASML.</p></li><li><p><strong>UK. </strong>OpenAI and Nvidia <strong><a href="https://www.ft.com/content/522c141a-39dc-4fb7-a7d8-8fa01e6ef27d">plan to announce</a></strong> a large investment in AI infrastructure as part of President Trump&#8217;s state visit towards the end of September. The exact dollar amount has not been disclosed, but the deal includes commitments from the UK government to supply energy, whereas OpenAI will provide &#8220;access to its AI tools and technology&#8221; and Nvidia will supply &#8220;the chips used to power AI models.&#8221;</p></li></ul><p><strong>Middle East. </strong>The UAE <strong><a href="https://www.linkedin.com/redir/suspicious-page?url=https%3A%2F%2Fwww%2emiddleeastainews%2ecom%2Fp%2Fuae-sends-ai-chiefs-to-silicon-valley">sent local and federal chief AI officers</a></strong> to meet with representatives from the US tech industry in Silicon Valley. The Institute of Foundation Models at Mohamed bin Zayed University of Artificial Intelligence and G42 also <strong><a href="https://mbzuai.ac.ae/news/mbzuai-and-g42-launch-k2-think-a-leading-open-source-system-for-advanced-ai-reasoning/">launched K2</a></strong>, described as a &#8220;leading open-source system for advanced AI reasoning.&#8221;</p><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>The Canadian federal government <strong><a href="https://globalnews.ca/news/11404246/federal-government-artificial-intelligence-registry/">intends to create</a></strong> a public registry for its AI projects. OpenAI is also <strong><a href="https://globalnews.ca/news/11406736/openai-chatgpt-lawsuit-ontario-news-outlets/">seeking to move its copyright case</a></strong> out of Canada and to the US, arguing that it does not do business there and is not subject to Canadian copyright law.</p></li><li><p><strong>Mexico. </strong>The Chamber of Deputies and Senate are <strong><a href="https://mexicobusiness.news/cloudanddata/news/mexico-congress-starts-bicameral-talks-unified-ai-law">working on a comprehensive AI law</a></strong> that leverages existing law with new requirements to address risks from AI technology. Mexico&#8217;s work on a national AI law comes at a time when <strong><a href="https://trustible.ai/post/what-does-the-global-pause-on-ai-laws-mean-for-ai-governance/">other countries</a></strong> are abandoning or delaying their efforts on similar legislation.</p></li></ul><div><hr></div><h3><strong>4. AEO / GEO - The New SEO</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ji3D!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ji3D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ji3D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:828667,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/173808175?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ji3D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ji3D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F47fb1397-3f7e-41b5-9308-d2cf8500102c_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In the pre-AI internet era, most businesses lived or died at the hands of algorithms owned by search giants that determined how webpages showed up in internet searches. An entire field of tools and services, called Search Engine Optimization (SEO), developed to backwards engineer the search algorithms and optimize a website&#8217;s ability to rank high in search results. As we enter the AI era, a brand new category is being created around optimizing content for AI-answer engines.</p><p>There are two variants to AI-optimization, but are often conflated with each other: Answer Engine Optimization (AEO), and Generative Engine Optimization (GEO). AEO is geared towards appearing in the &#8216;answer box&#8217; that now shows above many traditional search platforms. These aim to provide a single answer for any given query. GEO in contrast is focused on the experience inside of AI chat platforms entirely. Most major consumer AI platforms like ChatGPT have the ability to pull in various search results or have existing indexed content, and use them to generate answers. Pulling data from the internet can both <strong><a href="https://openai.com/index/introducing-chatgpt-search">reduce hallucinations</a></strong>, and (probably) also help provide some legal liability mitigations in the US, and search platforms that show user generated content have traditionally been protected under Section 230.</p><p>While there is no shortage of potential market impacts from this, there are two major points of importance here for AI Governance professionals. The first is that the way content is written and optimized for the web will have to change. Whereas SEO was heavily focused on backlinks for page authority, AI <strong><a href="https://www.searchenginejournal.com/google-antitrust-case-ai-overviews-use-fastsearch-not-links/555220">systems may be optimized</a></strong> for different kinds of direct information. <strong><a href="https://www.siegemedia.com/strategy/best-answer-engine-optimization-aeo-agencies?utm_source=chatgpt.com">Dozens of startups have emerged</a></strong> to help organizations with this task, and unsurprisingly many groups are also looking for AI to help. How a brand is presented in an AI tool, could become an area of concern for AI governance teams who may have the expertise on how to support the business as it navigates AEO/GEO issues. The second, is to be aware that the answers from an AI chat system can be influenced in the same ways as search. <strong><a href="https://searchengineland.com/google-tests-ads-ai-overviews-440649">Google has already discussed potential &#8216;sponsored answer results&#8217;</a></strong>, there likely will be over time, and in the meantime, any &#8216;answer&#8217; from an AI system that conducts internet searches should be viewed with the same level of scrutiny as a regular search engine.</p><p>On the inverse, GenAI content is a cause that many brands are seeing their content visibility decline sharply (both in traditional SEO and GEO.) Search engines and LLMs are prioritizing human, organically written content for rank due to its unique and novel quality, where content developed by LLMs are ranked as a lower quality. We&#8217;ve <strong><a href="https://trustible.substack.com/p/trustible-and-databricks-team-up?utm_source=publication-search">covered this in the past</a></strong> on AI our coverage on AI slop. This presents a balancing act, where AI can help accelerate content development and visibility in the era of AEO/GEO, but overreliance can actually do more harm than good.</p><p><strong>Key Takeaway:</strong> AI tools may quickly become the &#8216;gatekeepers&#8217; of information. What they say about companies, brands, or services, could quickly be accepted as canonical. Many AI tools will only give a few answers to questions like &#8216;What&#8217;s the best newsletter platform&#8217;, and so the competition to be in the AI&#8217;s answers will be fierce, and could quickly impact entire markets.</p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <strong><a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a></strong>.</p><p>AI Responsibly,</p><p>- Trustible Team</p>]]></content:encoded></item><item><title><![CDATA[The Problem with LLMs Always Saying ‘Yes’ ]]></title><description><![CDATA[Plus, How Mode Safeguards Degrade with GenAI Use, our Global Policy Roundup, and the U.S. Federal Government&#8217;s Shifting AI Plans]]></description><link>https://insight.trustible.ai/p/the-problem-with-llms-always-saying</link><guid isPermaLink="false">https://insight.trustible.ai/p/the-problem-with-llms-always-saying</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 03 Sep 2025 13:27:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0ih7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy Wednesday, and welcome to this week&#8217;s edition of the Trustible AI Newsletter! September&#8217;s arrival means school&#8217;s back in session (along with more effort from the White House to bring AI literacy to the classroom), Congress is back in Washington after summer recess, and we&#8217;re back to counting down the 13 remaining legislative days to avoid (another) government shutdown.</p><p>In the meantime, in today&#8217;s edition (5-6 minute read):</p><ol><li><p>The Hidden Danger of Chatbots: Why &#8220;Yes&#8221; Can Be Deadly</p></li><li><p>How Model Safeguards &amp; Performance Degrade With Use</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li><li><p>Shifting Winds: What Federal Moves Mean for U.S. AI Hegemony</p></li></ol><div><hr></div><h2><strong>1. The Hidden Danger of Chatbots: Why &#8220;Yes&#8221; Can Be Deadly</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0ih7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0ih7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0ih7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/51054820-3483-402f-92cf-69f4ce814f87_1600x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0ih7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!0ih7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51054820-3483-402f-92cf-69f4ce814f87_1600x896.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Chatbots are designed to be agreeable. Their instinct to say &#8220;yes&#8221; makes them engaging, but it also makes them dangerous in high-risk applications&#8212;especially when the stakes involve mental health, financial decisions, or employment outcomes.</p><p>There has been increasing attention given to people using AI enabled chatbots for &#8216;self service mental health&#8217;. <a href="https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf">A recent CommonSenseMedia study</a> suggests that of the 72% of US teens who regularly use AI systems, up to an eighth of that cohort may have used AI for therapeutic or mental health purposes. The dangers of using AI systems for this use case became evident recently after the family of a recently deceased teenager, Adam, <a href="https://cdn.sanity.io/files/3tzzh18d/production/5802c13979a6056f86690687a629e771a07932ab.pdf">sued OpenAI for gross negligence and wrongful death</a> after Adam committed suicide. The lawsuit alleges ChatGPT did everything from giving feedback on how to construct a stronger noose, why he shouldn&#8217;t feel guilt about the suicide, and how to steal alcohol from his parents to dull the pain. The lawsuit points out that if any human gave the feedback that ChatpGPT gave Adam, there would not be any doubt about their complicity. Cases such as this could quickly set clear precedents for the liability that chatbot creators may face.</p><p>There are many potential takeaways from this unfortunate incident, and plenty of reasons why nascent AI systems should not be used for mental health purposes. For an in-depth breakdown of some of these, we recommend <a href="https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care">a recent Stanford study on the issue.</a> However, we'll focus on just one particular element of the AI system that makes them particularly dangerous: their built-in willingness to say &#8216;yes&#8217; to anything and everything. This tendency has been described in extreme cases as &#8216;sycophancy&#8217;, but outside of that sensationalist term, a system that is conditioned to always say &#8216;yes&#8217; should not be used in certain circumstances. On the one hand, an AI system that is constantly pushing back or saying &#8216;no&#8217; will likely not be widely adopted, and training for that may be difficult. On the other hand, in certain domains such as health, law, or finance, we often expect, and even pay massive sums, to people to tell us &#8216;no&#8217;. No, you likely don&#8217;t have a rare disease, &#8216;no&#8217;, that action likely isn&#8217;t legal, and &#8216;no&#8217;, that investment strategy isn&#8217;t going to make you rich. A system that will always tell you &#8216;yes&#8217; is a dangerously tempting tool for a wide variety of use cases because it will always appeal to our own desire for confirmation and validation. There are not yet clear principles or regulations on when an AI should reject instructions, and when it may be equally important to always accept them.</p><p><strong>Key Takeaway:</strong> LLMs are conditions to say &#8216;yes&#8217; to as many things as possible, and there are market drivers that encourage that. However, that makes them uniquely unsuitable for certain domains where having a bias for &#8216;yes&#8217; could cause physical, financial, or reputational harm.</p><div><hr></div><h3><strong>2. How Model Safeguards &amp; Performance Degrade With Use</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!W0hM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!W0hM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!W0hM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!W0hM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!W0hM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34b38576-abc6-447d-a2a7-82289f1116f0_1600x896.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Following the wrongful death lawsuit against OpenAI alleging that ChatGPT is associated with multiple reported instances of self-harm and attributed as a cause of suicide in recent cases, OpenAI released a <a href="https://openai.com/index/helping-people-when-they-need-it-most/">blog</a> detailing their current and future safeguards. What&#8217;s notable is their acknowledgment that existing protections work best on short exchanges, not extended conversations. This effect is not unique: its been observed across both models (e.g. Claude, Llama and Grok) and violation types.</p><p>Until now this failure-mode has not been given significant attention: Companies like OpenAI publish extensive System Cards that detail types of safety tests run, but most of them focus on single-turn conversations (e.g. Given a specific question in isolation, will the model return an inappropriate response). While GPT-5&#8217;s System Card does mention a manual red-teaming exercise that tested for psychological harms, which included &#8216;multi-turn, tailored attacks [that] may occasionally succeed&#8217; - the severity of the violations was low. These tailored attacks were unlikely to include realistic conversations that spanned multiple months.</p><p>Generative AI systems used two broad categories of safeguards - both of which may fail to capture nuance of long-term use.</p><p><strong>Post-Training:</strong> Many AI Systems use SFT (supervised fine-tuning) and RL (reinforcement learning) to teach the model to produce correct responses to user prompts. Both of these methods are applied to single-turn conversations - meaning models aren&#8217;t explicitly trained to respond correctly in a multi-turn context. This may not be evident during basic testing that, also, only tests single-turn conversations.</p><p><strong>Output Monitoring:</strong> OpenAI uses classifiers to check if both user prompts and model outputs should be flagged for potential violations. In the blogpost, they mention that the classifiers have a particular issue with underestimating the severity of what it's seeing. One particular difficulty may be that the violations are subtle and require the context of a longer conversation to properly gauge severity.</p><p><strong>Key Takeaway:</strong> Both the performance and protections of models during long conversations (we estimate at over 20 turns) are understudied, but anecdotal evidence suggests that they degrade significantly. For enterprises and consumers, especially those relying on LLMs for high-risk use cases, it&#8217;s important to ensure that users are aware of this risk and the negative outcomes, and to consider the outputs accordingly. It&#8217;s equally important to monitor and test your systems&#8217; performance as part of your overall AI governance program.</p><div><hr></div><h3><strong>3. Global &amp; U.S. Policy Roundup</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Y-wu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Y-wu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Y-wu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Y-wu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Y-wu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9c8b6bb7-507e-4af1-8041-208f8c714d84_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Major International AI New: </strong>The United Nations&#8217; General Assembly <a href="https://www.un.org/en/delegate/two-new-mechanisms-promote-cooperation-ai-governance">established</a> the UN Independent International Scientific Panel on AI and the Global Dialogue on AI Governance.</p><p><strong>U.S. Federal Government.</strong> First Lady Melania Trump <a href="https://apnews.com/article/melania-trump-artificial-intelligence-student-contest-7e8cefa4a614a4bfeee4be1b1c998149">announced</a> a national challenge for K-12 AI students to create innovative AI solutions for community problems. The Department of Labor also <a href="https://www.dol.gov/newsroom/releases/osec/osec20250826">announced new guidance</a> to help states understand how the Workforce Innovation and Opportunity Act grants can help &#8220;bolster [AI] literacy and training across the public workforce system.&#8221;</p><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>California.</strong> The <a href="https://www.ksbw.com/article/state-superintendent-to-convene-artificial-intelligence-in-education-workgroup/65935988">State Superintendent</a> hosted the first meeting of a new AI workgroup that was established under legislation from 2025. The group will help create guidance for AI in K&#8211;12 education.</p></li><li><p><strong>Illinois.</strong> A <a href="https://blockclubchicago.org/2025/08/28/the-great-lakes-could-be-at-risk-due-to-data-centers-powering-ai-study-warns/">recent report</a> from the Alliance for the Great Lakes raised alarm bells for Chicago&#8217;s drinking water because of AI data centers&#8217; increasing strain on Lake Michigan and other local water systems. Illinois is home to approximately 187 data centers, most of which are located near Chicago.</p></li><li><p><strong>Michigan. </strong>Governor Gretchen Whitmer <a href="https://www.fox2detroit.com/news/new-michigan-laws-ban-ai-generated-porn-real-people">signed</a> a package of bills into law that prohibits using AI to create non-consensual explicit images of real people. The new laws also set sentence guidance for violators, which can include jail time.</p></li><li><p><strong>Virginia.</strong> Google <a href="https://www.wusa9.com/article/news/local/virginia/google-expands-virginia-data-centers-9-billion-investment-youngkin-loudoun-prince-william/65-0a32b126-95bb-46fb-8bb1-238a41552228">plans to invest</a> an additional $9 billion for AI infrastructure in Northern and Central Virginia through 2026. A new <a href="https://www.thecentersquare.com/virginia/article_892918c1-6d97-4be3-98fa-e466e525dff8.html">AI-power private school</a> also opened its doors in Northern Virginia.</p></li></ul><p><strong>Africa. </strong>CNBC Africa <a href="https://www.cnbcafrica.com/events/ai-summit/">held their first AI Summit</a> in Johannesburg, South Africa. The summit brought together over 300 industry professionals to discuss the &#8220;impact of AI across Africa's key sectors.&#8221;</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>An official from the National Development and Reform Commission <a href="https://www.msn.com/en-us/money/markets/china-warns-against-disorderly-ai-race-as-local-tech-giants-chase-us-rivals/ar-AA1LtbXZ?ocid=Peregrine">echoed comments</a> from Chinese President Xi Jinping that focused on preventing &#8220;disorderly competition&#8221; with new AI models. The Chinese government is attempting to prevent duplicative efforts that could lead to deflationary pressures within the AI sector.</p></li><li><p><strong>South Korea.</strong> The South Korean government is <a href="https://asia.nikkei.com/business/technology/artificial-intelligence/south-korea-triples-ai-budget-to-7bn-amid-intense-global-competition">proposing</a> an 8 percent increase in AI investment under its 2026 budget. In addition to spending increases for R&amp;D, the proposed budget also directs more funds towards AI startups.</p></li><li><p><strong>Thailand.</strong> Bangkok has proven to be a <a href="https://www.dcbyte.com/news-blogs/bangkok-data-centre-market-analysis/">strategic hub for data centers</a>, fueled in part because of major investments from tech companies such as AWS, Google, Microsoft, and Alibaba.</p></li></ul><p><strong>Australia. </strong>The Commonwealth Bank of Australia (CBA) <a href="https://www.abc.net.au/news/2025-08-21/cba-backtracks-on-ai-job-cuts-as-chatbot-lifts-call-volumes/105679492">recently reversed</a> a decision to lay off employees due to AI. The CBA noted that the customer service jobs it eliminated after introducing its AI-powered &#8220;voice-bot&#8221; were not redundant.</p><p><strong>Europe. </strong>AI-related policy developments in the Europe include:</p><ul><li><p><strong>EU. </strong>EU Industry is <a href="https://www.euractiv.com/section/tech/news/eu-ai-rules-lagging-because-european-industry-isnt-showing-up-says-standards-leader/">being accused</a> of dragging their feet on helping write EU AI Act standards. Piercosma Bisconti, one of the experts who is helping write EU AI Act standards, claimed that companies who support delaying the EU AI Act&#8217;s implementation &#8220;should be contributing [to standards development], and they are not&#8221; and added that "EU industry is barely at the table.&#8221;</p></li><li><p><strong>Switzerland. </strong>The Swiss Federal Technology Institute of Lausanne <a href="https://www.engadget.com/ai/switzerland-launches-its-own-open-source-ai-model-133051578.html?src=rss&amp;utm_source=flipboard&amp;utm_content=topic%2Fartificialintelligence">released</a> Apertus, an open-source LLM that is trained only on publicly available data. The Swiss hope that Apertus can be a plausible alternative to proprietary models, like OpenAI&#8217;s GPT models.</p></li></ul><p><strong>Middle East. </strong>Saudi owned Humain <a href="https://san.com/cc/saudi-arabia-releases-most-advanced-ai-chatbot-with-islamic-values/">released</a> &#8220;HUMAIN Chat,&#8221; a chatbot designed to &#8220;comply with Islamic values.&#8221; The latest announcement comes as Saudi Arabia angles to be the <a href="https://english.alarabiya.net/News/saudi-arabia/2025/08/27/saudi-arabia-aims-to-be-third-biggest-ai-provider-in-the-world-humain-ceo">third largest model provider</a> in the world, after the US and China.</p><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>The Canadian federal government <a href="https://www.intelligentcio.com/north-america/2025/09/01/canada-partners-with-cohere-to-advance-artificial-intelligence-leadership/">signed a memorandum of understanding</a> with Cohere to help accelerate AI adoption for public services and reinforce its position as global leader in AI. Meanwhile, the <a href="https://www.bnnbloomberg.ca/business/technology/2025/08/28/poll-suggests-85-of-canadians-want-governments-to-regulate-ai/">vast majority of Canadians</a> want some type of AI regulation as many Canadians express concern with AI safety and risks.</p></li><li><p><strong>Mexico. </strong>The Supreme Court <a href="https://mexiconewsdaily.com/news/mexico-works-created-ai-cannot-be-granted-copyright/">unanimously ruled</a> that content generated exclusively by AI is copyrightable under the country&#8217;s current laws. The Court found that &#8220;automated systems do not possess the necessary qualities &#8230; for authorship.&#8221;</p></li></ul><p><strong>South America. </strong>Uruguay became the <a href="https://www.coe.int/en/web/portal/-/uruguay-signs-council-of-europe-s-global-ai-treaty">first Latin American country</a> to sign the Council of Europe&#8217;s AI treaty, which seeks to promote the use of AI systems that respect human rights, democracy and the rule of law. 16 other countries, including the US, have signed the treaty. </p><div><hr></div><h2><strong>4. Shifting Winds: What federal moves mean for U.S. AI hegemony</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aJDw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aJDw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aJDw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png" width="1456" height="815" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aJDw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 424w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 848w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 1272w, https://substackcdn.com/image/fetch/$s_!aJDw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ccc7192-299d-4043-b5b5-c9a300cfd150_1600x896.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In the high-stakes international battle for AI leadership, the U.S. federal government&#8217;s recent AI decisions, ranging from GSA&#8217;s procurement acceleration to export-levy tactics, are not just about innovation; they're complete strategic pivots.</p><p>The White House&#8217;s broader AI Action Plan now prioritizes infrastructure, innovation, and international leadership, elevating AI to the status of critical U.S. infrastructure. This deregulatory posture, emphasizing "permissionless innovation", positions tech firms to benefit. But critics warn it increases risks from misinformation, ethical bias, and global tensions.</p><p>GSA&#8217;s inclusion of OpenAI, Anthropic, and Google models on its Multiple Award Schedule (MAS), with heavily discounted &#8220;OneGov&#8221; pricing of $1 or less, widens federal access to frontier AI technologies cheaply and quickly.. But this move has drawn protests over preordained pricing and lack of competition, raising concerns among governance officials. From a geopolitical standpoint, it ensures U.S. federal infrastructure remains closely tied to American AI ecosystems, reducing foreign reliance, yet may compromise procurement transparency and preparedness.</p><p>For large organizations and system integrators supporting government contracts, the procurement bonanza, from government-wide acquisition to USAi.gov&#8217;s sandbox, accelerates onboarding. USAi, a free GSA-hosted AI evaluation suite, offers agencies unified API access to multiple models in a secure environment. Yet its pilot nature, according to the GSA, is only temporary in nature; the GSA doesn&#8217;t want to be in the business of providing tools for the long-term.</p><p>Complementing that, the National Security Memorandum on AI underscores AI&#8217;s strategic role in defense, intelligence, and allied collaboration, while insisting government AI use must uphold civil rights, transparency, and democratic values. These policies together reflect an urgent understanding: global competitors like China and the UAE are racing ahead in compute, sovereign AI, and infrastructure. Enter initiatives such as the Stargate Project, a potential $500 billion-plus U.S. investment in AI infrastructure through partnerships with OpenAI, Oracle, SoftBank, and MGX, framed as a Manhattan Project&#8211;scale response to global pressure.</p><p>Moreover, the U.S. now taps into chip-based revenues: a previously unthinkable 15% levy on Nvidia/AMD AI chip sales to China, tied to export approval. This gives Washington financial leverage, disincentivizes adversary acceleration, and incentivizes domestic supply chains, all signals to enterprises that hardware cost projections must now include geopolitical premiums.</p><p><strong>Why it Matters:</strong> The U.S. drive for AI global dominance through deregulation, infrastructure building, and enhanced AI literacy in schools and across the workforce contrasts sharply with the EU&#8217;s regulation-heavy path and China&#8217;s authoritarian deployment model. We are now deep in the throes of a fractious and alternative set of emerging visions for what global AI hegemony should look like and whose vision should come out on top. For the U.S. to truly succeed in scaling trusted AI adoption and putting both private and public sector organizations on a trajectory for AI global dominance, clarity in policy, strategy, and commercial support should be the priorities.</p><div><hr></div><p><em>P.S. We&#8217;re hosting a webinar at 1 p.m. ET today with Databricks, Schellman, and Trustible to deep-dive on how frameworks and standards, like the Databricks AI Governance Framework and ISO 42001, can help build actionable guardrails that accelerate enterprise AI adoption. You can <a href="https://app.livestorm.co/trustible/building-guardrails-for-enterprise-ai-exploring-the-databricks-ai-governance-framework-and-beyond?utm_campaign=19007660-DG-2025-09-DatabricksSchellmanWebinar&amp;utm_content=342790006&amp;utm_medium=social&amp;utm_source=linkedin&amp;hss_channel=lcp-88638823">register here</a>, or sign up to receive the recording later.</em></p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe to receive our latest posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI adoption breaking significantly across demographic groups]]></title><description><![CDATA[Plus, Trustible launches partner program, how to effectively and safely use AI, our global policy and industry round up, and what GPT-5 can tell us about AI evaluations]]></description><link>https://insight.trustible.ai/p/ai-adoption-breaking-significantly</link><guid isPermaLink="false">https://insight.trustible.ai/p/ai-adoption-breaking-significantly</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 20 Aug 2025 14:59:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MeHl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! We have a lot to cover this week; let&#8217;s dive right in.</p><p>In today&#8217;s edition (5-6 minute read):</p><ol><li><p>AI adoption uneven across demographic groups</p></li><li><p>Trustible launches global partner program</p></li><li><p>What you need to know to effectively and safely use AI</p></li><li><p>Global &amp; U.S. policy roundup</p></li><li><p>What GPT-5 can tell us about AI evaluations?</p></li><li><p>Webinar: Building Guardrails for Enterprise AI - Exploring the Databricks AI Governance Framework and Beyond</p></li></ol><div><hr></div><h4>1. AI adoption uneven across demographic groups</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MeHl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MeHl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 424w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 848w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 1272w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MeHl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png" width="1456" height="823" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:823,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MeHl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 424w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 848w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 1272w, https://substackcdn.com/image/fetch/$s_!MeHl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdbebb583-15bc-4aea-9c63-0112db6a7bcb_1472x832.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Recent studies from <a href="https://www.hbs.edu/ris/Publication%20Files/GenderGapsGenerativeAI%20(5)_9a0023d1-eb7a-4466-b6d1-bc27fb11e1ad.pdf">HBS</a> and <a href="https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/">Pew</a> show that AI adoption is uneven across demographics in ways that challenge expectations. HBS&#8217;s meta-analysis of 18 studies finds women are significantly less likely than men to use generative AI tools, with heightened skepticism over bias and risk being key factors. Pew reports that younger and more educated adults are adopting AI far faster than older adults or those with less educational attainment, with a notable 30-point gap in usage between those with a post-graduate degree (48%), and those with only a high school level (17%).</p><p>Meanwhile, an <a href="https://www.elon.edu/u/news/2025/03/12/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt/">Elon University survey found some racial and ethnic gaps in LLM usage, </a>with Black and Hispanic adults saying they use AI <em>more</em> often (57% and 66% respectively) than the general population (52%) of adults.</p><p>While there&#8217;s a variety of potential sources or reasons behind these disparities ranging from risk sensitivity, perceptions of bias, availability of education resources, etc, the potential consequences of sustained disparities are clear: negative feedback loops. When groups hold back from using AI, whether from distrust or lack of access, their perspectives become underrepresented in the very systems that are learning from user data and being fine-tuned on it. That dynamic could make future tools feel even less relevant to them and further undermine trust.</p><p><strong>Key takeaway: </strong>While the tech industry is often quick to attack any form of regulation as potential burdens on or obstacles to AI growth, regulation may also be one of the key elements that unlocks adoptions from the most skeptical AI groups. This unlock is necessary to maximize the benefits of AI.</p><div><hr></div><h4>2. Trustible Launches Global Partner Program</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lAVm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lAVm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lAVm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lAVm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!lAVm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1aac1278-c626-4c0d-818c-bc2e9e7f17fc_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Today, we&#8217;re announcing the global launch of the Trustible Partner Program, an ecosystem purpose built to weave AI governance through every stage of the AI lifecycle. Our program brings together technology alliances, system integrators, resellers and distributors, auditors, and insurers around a single objective: help organizations adopt AI responsibly, prove compliance, and scale value with governance at the core.</p><p>Accelerating responsible AI adoption isn&#8217;t an isolated activity by a single organization or vendor - it takes a coalition of partners to bring trusted AI from being just words on a page to operationalization. The Trustible Partner Program isn&#8217;t a passive commitment: we&#8217;re creating a working coalition of builders committed to getting AI governance done for organizations&#8212;starting today. By aligning leaders across the AI value chain, we&#8217;re delivering a connected AI governance experience that accelerates the time&#8209;to&#8209;value of AI and truly delivering trusted AI.</p><p>In the coming weeks, we&#8217;ll share our initial launch partners, with more to follow throughout the year. We&#8217;re opening applications today for Technology Alliances, Resellers/Distributors, System Integrators, and Strategic Alliances. If you&#8217;re building, advising, or delivering AI, and you believe governance should be built&#8209;in, not bolted&#8209;on, join us.</p><p>You can read more about <a href="https://www.trustible.ai/post/trustible-partner-program">the program on our blog</a>.</p><div><hr></div><h4>3. What You Need to Know to Effectively and Safely Use AI</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eOF1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eOF1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 848w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eOF1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg" width="520" height="272" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:272,&quot;width&quot;:520,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eOF1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 424w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 848w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!eOF1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37108a58-b5c1-41f2-b33c-638679ba0b08_520x272.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The Trump Administration's AI Action Plan proposes several policy recommendations around AI education. In this context, it&#8217;s not about the use of AI within the current education system, but rather educating our entire current and future workforce about how to use AI. Massive investments in data center infrastructure, growing model size and complexity, and agentically integrating into existing applications won&#8217;t yield returns if there are bottlenecks in human expertise for deploying, using, and governing these systems. While governments will likely take point in enacting AI education within schools, it&#8217;s worth considering what an AI literacy and education program should look like inside of an organization, and how to retrain their existing workforce to be expert &#8216;AI users&#8217;.</p><p><strong>Basic Literacy</strong> - This should teach the basic vocabulary about AI, and basic tenants about the system for users. The exact technical details of AI models themselves don&#8217;t need to be covered in depth, but rather this should focus on more basic concepts like what a prompt is, how to provide appropriate context in a prompt, and what different AI tools may be broadly capable of.</p><p><strong>Prompting Skills</strong> - Describing the exact desired output you want from an AI system isn&#8217;t always easy. Much like with computer programming, it can be challenging to figure out the exact steps you want an AI system to take, or describe your desired outcome with enough clarity for the AI system to understand. AI models don&#8217;t always have a full understanding of the world, and some relevant information about this needs to be directly provided. There are a lot of prompting techniques that can help</p><p><strong>Risk &amp; Governance Education</strong> - Much like how every technology enabled employees has to do an annual cybersecurity training, there should be an equivalent for using AI tools. There are some distinct AI risks, ranging from hallucinations, to system biases. Ensuring that users simply know these things are possible can itself be a challenge. Many organizations want &#8216;humans in the loop&#8217; of AI systems as their main risk mitigation, but that still requires training on what to look for.</p><p><strong>Agentic Awareness</strong> - As various forms of Agentic AI get developed and deployed, it will become essential to teach staff on what kinds of tasks are ready to be automated, and which ones still need to be done by humans because of AI system limitations, governance concerns, or even because of the business strategy and brand reputation of your organization. There will be a lot of hype from AI agents for years to come, and knowing what can be automated efficiently, will be half the battle.</p><p><strong>Key Takeaway:</strong> LLMs can be very powerful and extremely accessible because their inputs can be natural language, but that can be a double edged sword. Most organizations won&#8217;t be able to hire easily for these AI skills, and will likely need to develop education and training programs for them themselves.</p><div><hr></div><h4>4. Global &amp; U.S. Policy Roundup</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bdux!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bdux!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Bdux!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Bdux!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!Bdux!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcce56ab9-18b7-47fb-835b-3d4b2f80ae77_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here is our quick synopsis of the major AI policy developments:</p><p><strong>U.S. Federal Government.</strong> The Trump Administration is touting a deal cut with AI chip manufacturers&#8217; <a href="https://www.cnbc.com/2025/08/11/trump-nvidia-amd-china-chip-revenue-deal-implications.html">Nvidia and AMD</a>, which stipulates that the U.S. government would receive 15% of revenue from their chip sales to China. Congressional Democrats have asked the Administration to <a href="https://www.cnbc.com/2025/08/16/senate-democrats-letter-trump-advanced-ai-chip-sales-china.html">reconsider the deal</a> due to national security concerns. The General Service Administration also <a href="https://www.politico.com/news/2025/08/14/ai-launches-across-the-government-00508993">announced</a> a new program aimed at encouraging AI adoption across the federal government. The move is part of a push by the Trump Administration to modernize and automate more aspects of the federal government.</p><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>California. </strong>OpenAI <a href="https://openai.com/global-affairs/letter-to-governor-newsom-on-harmonized-regulation/">asked</a> Governor Gavin Newsom to &#8220;consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency.&#8221;</p></li><li><p><strong>Colorado. </strong>The Colorado state legislature will return for a <a href="https://coloradosun.com/2025/08/06/colorado-special-session-big-beautiful-bill/">special session</a> on August 21, 2025, during which it is expected to address potential changes to its AI law (SB 205). Proposed bills have been <a href="https://www.cpr.org/2025/08/19/how-to-update-colorado-ai-law-special-session/">released</a> to amend the current law, all of which would narrow its current scope. It is unclear whether a deal will be struck, as the regular legislative session adjourned without an agreement.</p></li><li><p><strong>Florida. </strong>The Florida Bar is <a href="https://www.floridabar.org/the-florida-bar-news/florida-bar-explores-ai-guardrails/">looking into new rules</a> that address AI risks in the legal profession. The Bar has taken some action on how AI can be used in the practice of law, but recent advancements with AI technology have caused Bar leaders to consider additional guidance.</p></li><li><p><strong>Texas. </strong>Texas Attorney General Ken Paxton <a href="https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-investigates-meta-and-characterai-misleading-children-deceptive-ai">launched an investigation</a> into Meta and Character.ai over controversies with their chatbots. Attorney General Paxton emphasized that the companies may have engaged in &#8220;deceptive trade practices and misleadingly market[ed] themselves as mental health tools.&#8221; The Texas investigation comes shortly after Congressional Republicans <a href="https://techcrunch.com/2025/08/15/sen-hawley-to-probe-meta-after-report-finds-its-ai-chatbots-flirt-with-kids/">announced</a> their own investigation into Meta&#8217;s chatbots.</p></li></ul><p><strong>Africa. </strong>Google <a href="https://www.barrons.com/news/google-commits-37-million-to-ai-development-in-africa-aaf0c167">announced</a> that it is committing $37 million to AI development across countries in Africa. The new investment is aimed at AI research and supporting local AI projects. The announcement comes as Ghana and Lesotho <a href="https://iafrica.com/ghana-lesotho-forge-ai-and-digital-partnership-to-strengthen-africas-tech-future/">launch a new digital partnership</a> to facilitate cooperation on setting digital standards and frameworks, as well as leveraging AI for agricultural support.</p><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China. </strong>Sam Altman <a href="https://www.cnbc.com/2025/08/18/openai-altman-china-ai.html">expressed concerns</a> over where the U.S. stands against China on AI. During a recent interview, Altman noted that he is &#8220;worried about China&#8221; and that the AI arms race between the two countries is more complex than meets the eyes. Specifically, he highlighted China&#8217;s capacity to build and flaws in current U.S. policy (i.e., chip export controls).</p></li><li><p><strong>Malaysia.</strong> The Association of Southeast Asian Nations (ASEAN) convened the <a href="https://asean.org/malaysia-to-host-inaugural-asean-ai-malaysia-summit-2025/">ASEAN Malaysia AI Summit</a> in Kuala Lumpur. Secretary-General of ASEAN, Dr. Kao Kim Hourn, <a href="https://tvbrics.com/en/news/asean-leaders-to-adopt-ai-safety-network-framework/">announced</a> during the summit that he expects the ASEAN AI safety framework will be established &#8220;by early 2026.&#8221; The Malaysian government also <a href="https://www.cloudcomputing-news.net/news/malaysia-to-launch-cloud-policy-at-asean-ai-summit/">launched</a> their National Cloud Computing Policy at the summit. Simultaneously, Huawei hosted the Huawei Cloud AI Ecosystem Summit APAC 2025 at which they <a href="https://aimagazine.com/news/huawei-cloud-targets-30-000-ai-talents-in-malaysia-push">announced</a> their intent to train 30,000 AI professionals in Malaysia over the next three years. Huawei has been a source of controversy in the U.S. because of its close ties to the Chinese government.</p></li><li><p><strong>Japan.</strong> NTT Data, a multinational Japanese information technology company, <a href="https://technologymagazine.com/news/google-ntt-ai-partnership-to-boost-cloud-modernisation">announced a strategic partnership</a> with Google to &#8220;accelerate enterprise adoption of agentic AI and cloud modernisation.&#8221; This is the first time a Japanese company has signed a contract of this nature with Google.</p></li></ul><p><strong>Middle East. </strong>AI-related policy developments in the Middle East include:</p><ul><li><p><strong>Saudi Arabia. </strong>The Saudi government is doubling-down on AI skills development with a recent <a href="https://education.economictimes.indiatimes.com/news/saudi-arabia-launches-ai-engineering-camp-in-partnership-with-oxford-university/123300278">partnership announced</a> between the Saudi Data and Artificial Intelligence Authority and Oxford University. The new venture creates an &#8220;intensive artificial intelligence application engineering camp aimed at training both Saudi and international graduates in advanced AI technologies.&#8221; The Saudi Ministry of Education also <a href="https://cairoscene.com/Buzz/Saudi-Arabia-Launches-AI-Curricula-and-Global-University-Partnerships">launched</a> new AI-related curricula for students and training programs educators.</p></li><li><p><strong>UAE.</strong> Representatives from the UAE Council for Fatwa <a href="https://timesofindia.indiatimes.com/world/middle-east/uae-explains-how-it-plans-to-use-ai-in-issuing-fatwas-to-uphold-islamic-values-and-prevent-misuse/articleshow/123299088.cms">previewed plans</a> to utilize AI for issuing fatwa (legal opinions on points of Islamic law). The discussion came amidst a conference held in Cairo on the intersection between religious scholarship and AI.</p></li></ul><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>Quebec's government healthcare corporation, Sant&#233; Qu&#233;bec, is <a href="https://www.cbc.ca/news/canada/montreal/sante-quebec-ai-scribe-doctors-1.7606998">rolling out</a> a pilot program that uses AI to help doctors transcribe medical notes from patient visits. AI transcription tools remain popular, however they continue to raise concerns with <a href="https://www.npr.org/2025/08/15/g-s1-83087/otter-ai-transcription-class-action-lawsuit">confidentiality</a> and <a href="https://www.ap.org/news-highlights/best-of-the-week/honorable-mention/2024/researchers-say-an-ai-powered-transcription-tool-used-in-hospitals-invents-things-no-one-said/">hallucinations</a>.</p></li><li><p><strong>Mexico. </strong>BBVA Mexico is moving <a href="https://medium.com/@martareyessuarez25/bbva-m%C3%A9xico-adopts-generative-artificial-intelligence-to-transform-customer-service-and-eliminate-68786558f46a">to eliminate touch-tone options</a> in its phone service and replacing it with a generative AI system called Blue. The system is meant to help reduce the time it takes for customers to find the assistance they need.</p></li></ul><div><hr></div><h4>5. What GPT-5 can tell us about AI Evaluations?</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ilo5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ilo5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ilo5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ilo5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!ilo5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9e566133-68fa-4427-aa28-067c94852f3f_1200x630.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Amidst controversies about misleading graphs and GPT-5&#8217;s dramatic change in personalities, Trustible took a deeper look at its System Card and other supporting documentation for the Trustible Model Ratings. As with previous ratings, OpenAI scored poorly on the Data section (covering the topic in only a few vague sentences), but received a High Transparency rating on several Evaluation categories. Their analysis points to a key few good and bad practices for model evaluation:</p><ul><li><p>Real-World Data: OpenAI used ChatGPT production data to complement existing benchmarks. Testing on realistic data provides a more accurate insight into model functionality. It also reduces the likelihood of test data leakage, as many off-the-shelf benchmarks inadvertently end up in training datasets. This works assuming the evaluation production data was explicitly excluded from the training data - GPT models are trained on ChatGPT data (excluding enterprise accounts and people who opted out).</p></li><li><p>Learning from Audiences: The System Card introduces several new types of evaluations: one for sycophancy (given how an update in May resulted in overly sycophantic behavior) and one for measuring conversations specifically in health conversations. These choices reflect adaptations to concerns discovered during real-world use.</p></li><li><p>LLM-as-a-Judge: Many of the benchmarks are evaluated using LLM-as-a-Judge. This evaluation method uses a second LLM to check if GPT-5 produced a correct output. The method is more scalable than human review and more flexible than an exact text matching strategy that checks if the model output matches a reference output exactly. However, there are several drawbacks with OpenAI&#8217;s approach: they only use o3 as the grader - while best practice recommends using a model from a different family or better yet &#8220;LLM-as-a-Jury&#8221; aka an ensemble of several models. An additional best practice would be to capture an error range, since LLM judges can also make mistakes.</p></li><li><p>Underspecified Methodology: Many key details of the actual evaluation process, especially for benchmarks in the press release and not the system card were not included. Small details about prompt formatting or whether each example was run once or several times (LLMs have a lot of variance and some evaluations will run each input multiple times to get a better sense of performance) can significantly affect the final score on a benchmark. Once &#8220;tool use&#8221; is introduced, results get even more complicated to recreate. While OpenAI acknowledges the latter issue and intentionally chooses to not compare their models to other developers - it&#8217;s still not the most transparent approach. Ultimately, external evaluators are the best source of consistent results because they can use the same exact set-up when testing each model.</p></li></ul><p><strong>Key Takeaway:</strong> Improved performance on multiple benchmarks does not directly translate to better user experience - many users were unhappy with the GPT-5 update. For practitioners, it is important to evaluate models on data reflective of your unique task. Our review of GPT-5 showed a mix of good and bad practices that can be applied or avoided when doing your own analyses. <a href="https://aimodelratings.com/gpt-5/">You can read the full model rating here</a>.</p><div><hr></div><h4>6. Webinar: Building Guardrails for Enterprise AI - Exploring the Databricks AI Governance Framework and Beyond</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DafO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DafO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!DafO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!DafO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!DafO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DafO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DafO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!DafO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!DafO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!DafO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b28b0a-9b70-4a47-9141-15329736bd38_1600x900.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On September 3rd at 1 p.m. ET, join experts from Databricks, Schellman, and Trustible for an exploration of the Databricks AI Governance Framework, including its principles, architecture, and role in helping enterprises scale AI responsibly. We&#8217;ll examine how it aligns with emerging regulations and standards (like ISO 42001, NIST AI RMF, and the EU AI Act), and discuss practical considerations for implementing governance controls and assurance programs across the AI lifecycle, while creating a feedback loop between governance and compliance teams and AI internal stakeholders that can increase and accelerate safe AI adoption.</p><p>This 45-minute session is designed to equip technology, risk, and compliance leaders with insights they can apply to operationalize AI governance in their organizations.</p><p><strong>Key Highlights:</strong></p><ul><li><p>An In-Depth Look at the Databricks AI Governance Framework and Databricks AI Governance tools: Explore its components, objectives, and how it addresses the unique risks of enterprise AI adoption. Focus on how tools like Unity Catalog and MLFlow can help on the highly technical aspects of AI governance.</p></li><li><p>Bridging Frameworks to Practice: How organizations can align the Databricks framework, and other standards such as ISO 42001, with other emerging global standards and regulatory obligations.</p></li><li><p>Operational and Assurance Considerations: Practical insights into implementing governance controls, testing for compliance, and providing assurance over AI systems.</p></li><li><p>Real-World Perspectives: Lessons from industry practitioners, auditors, and governance experts on avoiding common pitfalls and building resilient AI governance programs.</p></li></ul><p>You can register to <a href="https://app.livestorm.co/trustible/building-guardrails-for-enterprise-ai-exploring-the-databricks-ai-governance-framework-and-beyond">save your seat here</a>.</p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[How Trump’s AI Action Plan Reshapes Enterprise AI ]]></title><description><![CDATA[Plus, the hidden cost of &#8220;Almost Right&#8221; AI, a policy roundup from around the globe, and Virginia DOGE&#8217;s AI-enabled regulatory cull]]></description><link>https://insight.trustible.ai/p/how-trumps-ai-action-plan-reshapes</link><guid isPermaLink="false">https://insight.trustible.ai/p/how-trumps-ai-action-plan-reshapes</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 06 Aug 2025 12:35:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9rLT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! Yesterday was a busy one for major model update announcements, with OpenAI launching their very first <a href="https://openai.com/index/introducing-gpt-oss/">Open Weight</a> models, and Anthropic waiting in the wings with <a href="https://www.bloomberg.com/news/articles/2025-08-05/anthropic-unveils-more-powerful-model-ahead-of-gpt-5-release?embedded-checkout=true">Claude 4.1</a>. But, all eyes are waiting for the long-awaited, impending release of OpenAI&#8217;s ChatGPT-5, expected any day now. We&#8217;re hard at work at Trustible analyzing the model cards for these new releases, and we&#8217;ll be publishing our findings and updated ratings on <a href="http://aimodelratings.com">aimodelratings.com</a> soon.</p><p>In the meantime, in today&#8217;s edition (5-6 minute read):</p><ol><li><p>What the Trump Administration&#8217;s AI Action Plan means for enterprises</p></li><li><p>The Hidden Cost of &#8216;Almost Right&#8217; AI</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li><li><p>DOGE and Virginia using AI to eliminate regulatory rules</p></li></ol><div><hr></div><h2><strong>1. What the Trump Administration&#8217;s AI Action Plan Means for Enterprises</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9rLT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9rLT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9rLT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9rLT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!9rLT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc0d45ec-6cf4-4441-846d-3f2090ba51d0_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Recently, the Trump Administration released "Winning the AI Race: America&#8217;s AI Action Plan" (AI Action Plan), following a January executive order aimed at enhancing U.S. AI leadership. This plan proposes roughly 90 policy recommendations within three key pillars: AI innovation, infrastructure, and national security with international engagement. Although primarily focused on federal actions, several recommendations could significantly impact private-sector companies involved in AI development, deployment, or utilization.</p><p>Notably, three themes emerge regarding enterprise implications:</p><p>Firstly, the administration introduces uncertainty by challenging existing regulatory frameworks. Recommendations such as a "shadow" moratorium on state AI regulations by restricting federal funding could disrupt businesses navigating state-level rules. Similarly, proposed FTC reviews of AI-related investigations could further cloud compliance expectations.</p><p>Secondly, contradictions arise between some AI priorities and other administrative goals, notably around talent and energy policies. For example, recommendations to streamline energy infrastructure permitting for AI data centers clash with existing sustainability efforts. Similarly, proposed removal of Diversity, Equity, and Inclusion (DEI) references from NIST&#8217;s AI Risk Management Framework could complicate internal workforce policies and talent attraction.</p><p>Thirdly, federal interest in setting AI standards could result in cascading obligations for government contractors. Updates to federal procurement guidelines emphasizing ideologically unbiased AI models could complicate AI vendor selection, while requirements for critical infrastructure cybersecurity and AI incident response guidance necessitate organizations to adopt stringent federal standards.</p><p>The AI Action Plan also highlights opportunities for private-sector influence, such as industry-specific stakeholder engagement aimed at accelerating national AI standards adoption. Recommendations to bolster AI literacy and reskilling via tax incentives further provide tangible benefits to businesses.</p><p>Finally, despite significant ambitions, the Plan&#8217;s effectiveness relies heavily on congressional action and federal agency capacity, both of which face uncertainties, particularly after recent workforce reductions and potential political shifts in 2026. Nonetheless, the administration&#8217;s stance sends clear signals shaping the broader AI regulatory and operational environment, compelling businesses to proactively adapt strategies in anticipation of changing federal priorities.</p><p>You can read our full analysis <a href="https://www.trustible.ai/post/what-the-trump-administration-s-ai-action-plan-means-for-enterprises">on our blog here</a>.</p><div><hr></div><h2><strong>2. The Hidden Cost of &#8216;Almost Right&#8217; AI</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cZtQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cZtQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cZtQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cZtQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cZtQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61720cab-45e6-4a6e-b1e5-bf439409d3eb_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>For all the hype around AI as a productivity tool, a stubborn truth is emerging: using AI doesn&#8217;t always save time. In some cases, it might even make things worse.</p><p>The risk isn&#8217;t just in bad outputs&#8212;it&#8217;s in over-relying on AI that sounds confident but gets the details wrong. The more trust you place in these systems without sufficient oversight, the more likely you are to spend your time cleaning up after them.</p><p>Two recent studies underscore this dynamic. METR,<strong> </strong>an AI research lab, recently found that <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">experienced engineers working on open source developmen</a>t were often <em>slowed down</em> by AI code suggestions. Even with expert users, &#8220;plausible but wrong&#8221; outputs led to wasted debugging time and reduced net productivity. Similarly, <a href="https://survey.stackoverflow.co/2025/ai#developer-tools-ai-frustration">Stack Overflow&#8217;s recent annual developer survey data</a> revealed a hidden productivity tax tied to AI-assisted coding. Developers often spent more time fixing or validating answers than they would have spent solving the problem themselves. Finally, <a href="https://www.atlassian.com/blog/developer/developer-experience-report-2025">Atlassian&#8217;s recent developer survey</a> found that while engineers self-reported development speed up from AI, all the downstream processes, such as code reviews, manual testing, deployment etc, were getting slowed down because they were now overwhelmed with more code changes and bugs than before. This highlights a bit of a &#8216;pipeline&#8217; bottleneck problem that is caused by uneven AI use along a value chain.</p><p>If teams aren't careful, they may spend more time double-checking and redoing AI outputs than if they&#8217;d just done the work manually. Worse, over-reliance can create blind spots&#8212;leading people to accept wrong answers without realizing it.</p><p><strong>Our Take:</strong> AI performs best on constrained tasks where outcomes are easy to verify&#8212;like short text generation or specific coding challenges. For complex tasks like multi-step coding, the validation cost can outweigh the generation benefit compared to a human baseline.</p><div><hr></div><h3><strong>3. Global &amp; U.S. Policy Roundup</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!42zd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!42zd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!42zd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!42zd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!42zd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!42zd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!42zd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!42zd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!42zd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!42zd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F224566b0-e989-46c2-8775-9e89fcba1551_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here is our quick synopsis of the major AI policy developments:</p><p><strong>U.S. Federal Government.</strong> Beyond releasing the AI Action Plan, the Trump Administration is leaning on Asian countries to develop an AI approach that differs from the EU. The Securities and Exchange Commission also <a href="https://www.sec.gov/newsroom/press-releases/2025-103-sec-creates-task-force-tap-artificial-intelligence-enhanced-innovation-efficiency-across-agency">announced</a> an AI task force aimed at improving AI adoption across the agency. In Congress, Senator Mike Rounds (R-UT) introduced a <a href="https://www.rounds.senate.gov/newsroom/press-releases/rounds-reintroduces-legislation-supporting-ai-innovation-in-financial-services">bipartisan bill</a> on setting guardrails for AI usage in financial services.</p><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>California. </strong>ADM regulations.The California Privacy Protection Agency voted unanimously to <a href="https://iapp.org/news/a/cppa-board-finalizes-long-awaited-admt-risk-assessment-rules">finalize regulations</a> on cybersecurity audits, risk assessments, and automated decisionmaking technology. Rules for automated decisionmaking technology will take effect on January 1, 2027.</p></li><li><p><strong>Florida. </strong>Governor Ron Destantis (R-FL) <a href="https://www.tallahassee.com/story/news/local/state/2025/08/04/desantis-artificial-intelligence-policy/85473530007/">indicated</a> that he will unveil legislation related to AI safeguards. DeSantis was a critic of the attempted federal AI moratorium on state AI laws. The expected legislative proposals would continue to put the Governor at odds with the Trump Administration, which generally views AI regulations as unnecessarily burdensome and hampering AI innovation.</p></li><li><p><strong>Illinois. </strong>Governor JB Pritzker <a href="https://idfpr.illinois.gov/news/2025/gov-pritzker-signs-state-leg-prohibiting-ai-therapy-in-il.html#:~:text=The%20Wellness%20and%20Oversight%20for,for%20licensed%20behavioral%20health%20professionals.">signed</a> the Wellness and Oversight for Psychological Resources Act, which prohibits using AI to provide mental health and therapeutic decision-making. The law would allow licensed behavioral health professionals to use AI tools for administrative purposes and supplementary support services. The new law comes as OpenAI <a href="https://www.nbcnews.com/tech/tech-news/chatgpt-adds-mental-health-guardrails-openai-announces-rcna222999">announced</a> new mental health guardrails for ChatGPT.</p></li><li><p><strong>Michigan. </strong>The Michigan Unemployment Insurance Agency <a href="https://www.michigan.gov/leo/news/2025/07/30/uia-launches-ai-chatbot-to-provide-information-for-workers-employers">launched </a>a new chatbot to help &#8220;deliver quick and accurate responses to questions from workers and employers.&#8221; This is the first Michigan state agency to utilize a chatbot on its public facing website.</p></li><li><p><strong>Texas.</strong> A <a href="https://www.newsweek.com/texas-data-center-water-artificial-intelligence-2107500">recent report</a> found that two data centers outside of San Antonio consumed approximately 463 million gallons of water between 2023 and 2024. The water usage was particularly jarring given that Texas residents were under water restrictions during the same period of time due to an ongoing drought. Data centers in Texas are expected to account for almost 7% of total water usage by 2030.</p></li></ul><p><strong>Asia. </strong>AI-related policy developments in Asia include:</p><ul><li><p><strong>China.</strong> During the annual World Artificial Intelligence Conference in Shanghai, the Chinese government unveiled its <a href="https://www.theguardian.com/technology/2025/jul/26/china-calls-for-global-ai-cooperation-days-after-trump-administration-unveils-low-regulation-strategy">AI Action Plan</a>. Notably, the Chinese AI Action Plan was released 3 days after the Trump Administration released its AI Action Plan. China&#8217;s plan emphasizes more participation in international fora to shape AI standards and increase AI adoption.</p></li><li><p><strong>Singapore. </strong>Microsoft and Digital Industry Singapore <a href="https://news.microsoft.com/source/asia/2025/08/01/microsoft-and-disg-launch-agentic-ai-accelerator-to-help-300-singapore-businesses-in-ai-transformation-as-part-of-the-enterprise-compute-initiative/">announced</a> a new Agentic AI Accelerator program. The announcement comes as <a href="https://www.straitstimes.com/singapore/ai-investments-in-singapore-over-the-last-12-months">more AI companies</a> have set-up shop in Singapore over the past year, in part due to the country's business-friendly climate.</p></li><li><p><strong>Thailand.</strong> The government&#8217;s Department of Special Investigation (DSI) is <a href="https://eastasiaforum.org/2025/07/31/thailand-shows-how-ai-might-expose-political-misconduct/">using AI</a> to substantiate claims of cheating Thailand's 2024 Senate elections. Specifically, the DSI is using AI to analyze &#8220;14 terabytes of CCTV footage and other voting data&#8221; as part of its investigation.</p></li></ul><p><strong>EU. </strong>The next set of EU AI Act obligations <a href="https://artificialintelligenceact.eu/implementation-timeline/">took effect</a> on August 2, the most notable provisions being <a href="https://digital-strategy.ec.europa.eu/en/news/eu-rules-general-purpose-ai-models-start-apply-bringing-more-transparency-safety-and-accountability">obligations for general purpose AI (GPAI) systems</a>. The new set of obligations kick-in as Google <a href="https://blog.google/around-the-globe/google-europe/eu-ai-code-practice/">announced</a> it would sign on to the EU&#8217;s GPAI Code of Practice, whereas xAI <a href="https://www.euractiv.com/section/tech/news/elon-musks-xai-to-sign-safety-part-of-eus-generative-ai-code/">agreed</a> to sign on to only the security and safety chapter.</p><p><strong>Middle East. </strong>As Middle East countries continue to lead on AI infrastructure, those countries are <a href="https://restofworld.org/2025/gulf-ai-water-crisis/">facing water issues</a>. The United Arab Emirates (UAE) in particular is one of the most water-stressed countries in the world, yet it is estimated that it will use up to 61 billion liters of water annually by 2030.</p><p><strong>North America. </strong>AI-related policy developments in outside of the U.S. in North America include:</p><ul><li><p><strong>Canada. </strong>The Canadian government is committing $1 million to a <a href="https://ca.finance.yahoo.com/news/canada-commits-funding-joint-ai-070034050.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAALohZ2OYHdp8FuIDbai0Hx0iPzs28JRRY4aEj_5-9jLuLnBbcK7POtr5MUAr_g7o5oWl2pRdq82Eg5ojJeApmDEVhgTwDZnEae67QIRC_V6h_gjpPmnjpf-1S4rryLuUlXNA-OuffJMs1euo9lwv6ABrTmR03lqjr5nUgwT1gegv">joint AI safety initiative</a> with the U.K. The announcement comes as part of a <a href="https://www.pm.gc.ca/en/news/statements/2025/06/15/joint-statement-prime-minister-mark-carney-and-prime-minister-sir-keir-starmer">broader collaborative effort</a> on AI between Canada and the U.K.</p></li><li><p><strong>Mexico.</strong> The Mexican government <a href="https://mexiconewsdaily.com/business/mexicos-ambitious-ai-project-set-to-launch/">announced</a> that it is developing its own LLM. The government intended to make its LLM deployable to &#8220;5 million university students and more than 5 million businesses,&#8221; however the country does face some infrastructure constraints. The move to develop a more culturally appropriate LLM has gained traction this year, after a bloc of Latin American countries <a href="https://www.nbcnews.com/news/latino/latamgpt-aims-create-ai-better-represents-regions-diversity-rcna197523">unveiled plans</a> to develop a Latin America-specific LLM.</p><div><hr></div></li></ul><h2><strong>4. DOGE and Virginia using AI to eliminate regulatory rules</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6d5k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6d5k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6d5k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg" width="692" height="405" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:405,&quot;width&quot;:692,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6d5k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6d5k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a7ff685-2587-452e-a7d6-02b736bd4c8f_692x405.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Virginia and the U.S. Department of Government Efficiency (DOGE) are both rolling out agentic AI tools to scan regulatory code&#8212;flagging outdated, redundant, or conflicting rules for removal. <a href="https://www.pymnts.com/artificial-intelligence-2/2025/virginia-becomes-first-state-to-use-agentic-ai-for-regulatory-streamlining/">Virginia&#8217;s system is already combing</a> through the state&#8217;s administrative code. <a href="https://www.washingtonpost.com/business/2025/07/26/doge-ai-tool-cut-regulations-trump/">DOGE says its tool will review over 200,000 federal rules</a>, and agencies like HUD and CFPB are already testing it. DOGE&#8217;s initiative is supposedly only targeting rules that are no longer required by law, although that determination requires legal expertise to make.</p><p>This is one of the first real deployments of AI into the public sector with direct regulatory implications. While these tools are technically &#8220;advisory,&#8221; reports suggest agencies are treating the AI outputs as strong signals. At HUD, some regulations were flagged as outside statutory authority, even when they weren&#8217;t. It&#8217;s not clear what evaluation criteria are being used&#8212;or who has final say. If AI-generated flags trigger removals or policy shifts, these could qualify as high-risk or decision-making systems under several AI governance frameworks.</p><p>Nothing in <a href="https://www.governor.virginia.gov/newsroom/news-releases/2025/july/name-1053152-en.html">Governor Youngkin&#8217;s announcement</a> details what AI system is being used, or how it may be adapted for the nuances of regulatory text which have specific stylistic elements that could cause issues for AI systems.</p><p><strong>Our take:</strong> This is likely only the beginning of AI being applied directly to regulatory tasks and informing decisions made by regulators. Many organisations may be looking to see how successful these initiatives are, as analyzing policy documents and suggesting simplification could be hugely valuable for large enterprises with overlapping and conflicting policies.</p><div><hr></div><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI Browsers Emerge to Redefine the State of Browsing ]]></title><description><![CDATA[Plus Meta charting a new course, AI models have a legal Ship of Theseus problem, and a global policy roundup]]></description><link>https://insight.trustible.ai/p/ai-browsers-emerge-to-redefine-the</link><guid isPermaLink="false">https://insight.trustible.ai/p/ai-browsers-emerge-to-redefine-the</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 23 Jul 2025 14:03:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ksjW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! We&#8217;re awaiting the Trump administration&#8217;s three expected AI Executive Orders to drop today, including a possible Moratorium 2.0 push impacting state-level AI regulatory autonomy. We saw this approach play out earlier this month in Congress, and <strong><a href="https://www.trustible.ai/post/trustible-s-perspective-the-ai-moratorium-would-have-been-bad-for-ai-adoption">don&#8217;t feel</a></strong> this approach serves anyone&#8217;s best interests. But, stay tuned on the Trustible blog for updates as we learn more about what this means for AI adoption.</p><p>In the meantime, in today&#8217;s edition (5-6 minute read):</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><ol><li><p>Building A Moat: Agentic AI Web Browsers</p></li><li><p>AI Model Fine Tuning&#8217;s Ship of Theseus Problem</p></li><li><p>Global &amp; U.S. Policy Roundup</p></li><li><p><em>Meta</em> Analysis</p></li><li><p>FaaCT Finding: AI Takeaways from ACM FAccT 2025</p></li></ol><div><hr></div><h3><strong>Building A Moat: Agentic AI Web Browsers</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ksjW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ksjW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 424w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 848w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 1272w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ksjW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!ksjW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 424w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 848w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 1272w, https://substackcdn.com/image/fetch/$s_!ksjW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbccf85a9-ce44-44e4-99ba-184044d63390_1488x837.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Several prominent AI companies, including Perplexity and <strong><a href="https://www.reuters.com/business/media-telecom/openai-release-web-browser-challenge-google-chrome-2025-07-09/">OpenAI, have announced</a></strong> the creation of their own web browsers. Perplexity&#8217;s browser, <strong><a href="https://comet.perplexity.ai/">Comet</a></strong>, has been in preview recently and the first few positive reviews (<strong><a href="https://www.techradar.com/computing/artificial-intelligence/perplexitys-comet-is-here-and-after-using-it-for-48-hours-im-convinced-ai-web-browsers-are-the-future-of-the-internet">TechRadar</a></strong>, <strong><a href="https://mashable.com/article/perplexity-ai-browser-comet-features-to-try">Mashable</a></strong>, <strong><a href="https://www.fastcompany.com/91370139/ai-browsers-perplexity-comet-reshape-internet-media">FastCompany</a></strong>) have some in highlighting how integrating AI directly with a browser&#8217;s capabilities can allow for creating agentic workflows, summarizing and highlighting key information in websites, or conducting deep-research type tasks with the advantage of being logged into systems. While AI features directly integrated with web browsers could offer a more seamless experience than separate applications with ChatGPT, it&#8217;s also worth considering some of the additional risks, or potential motives for these initiatives. Aside from competing with Google, as <strong><a href="https://www.businessinsider.com/perplexity-ceo-google-ai-agent-dilemma-suffer-browser-war-2025-7">Perplexity&#8217;s CEO seeks to do</a></strong>, here are our theories on why a web browser could be massively beneficial to big AI companies:</p><ul><li><p>Bypassing Anti-AI Scraping - Many websites have rolled out tools to try and prevent scraping by AI bots, and <strong><a href="https://www.cloudflare.com/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/">Cloudflare recently made this a lot easier</a></strong>. However, anti-scraping systems are meant to allow humans to view the content and fully render it on your browser. If the browser then had the rendered html/text, it can capture that and share copies of it for training, therefore bypassing the anti-scraping bots and evading issues around robots.txt. This can give a huge competitive advantage to the browser creator, while creating massive privacy risks.</p></li><li><p>Building Popularity Metrics - Search Engine Optimization (SEO) has long been a major focus for marketing teams everywhere, as search result rankings can make or break entire companies. However now orgs are thinking about the idea of <strong><a href="https://thehustle.co/news/is-ai-focused-aeo-is-the-new-seo">&#8216;Answer Engine Optimization</a></strong>&#8217;, ensuring that your results come recommended by AI. The era of &#8216;back-links&#8217; being prominent may be over, but there is still a lot of value in finding the websites others find the most useful or popular. Browsers that collect and track the top websites and other activity could deliver this information at scale, improving the quality of AI responses.</p></li><li><p>Collecting Human-generated Content - As the internet becomes flooded with more AI generated content, it will become increasingly difficult to find &#8216;human written&#8217; content to fuel additional model quality growth. This is partly because AI models trained on content they themselves generate seem to suffer from &#8216;model collapse&#8217;. However, if the browser can track what a user writes, and whether they used AI, or did it from scratch, they could identify and capture the human generated content from AI generated ones, and then further train on that content.</p></li></ul><p><em><strong>Disclosure: </strong>These are potential ways a browser could be exploited, and hypotheses about risks. Trustible has not reviewed the systems in question, nor their terms of service, to know whether these practices would be allowed.</em></p><p><strong>Our Take:</strong> AI web browsers could become a powerful tool for personal productivity and reduce some of the friction and clunkiness of using a separate platform. However, a browser could also be the perfect tool for solving a number of current and emerging challenges faced by major AI companies, and could heavily expose people to massive privacy risks. The winner of the AI browser wars could emerge with an insurmountable advantage in their ability to collect and evaluate content.</p><div><hr></div><h3><strong>AI Round Up &#8216;Round the World</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FfM1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FfM1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FfM1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!FfM1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!FfM1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7d48016-0b98-4f19-accd-d19d2ae99d06_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here is our quick synopsis of the major AI policy developments:</p><ul><li><p><strong>U.S. Federal Government.</strong> The Trump Administration is expected to release its <strong><a href="https://www.axios.com/pro/tech-policy/2025/07/17/whats-inside-trumps-20-page-ai-action-plan">AI Action Plan</a></strong>, which was mandated under the President&#8217;s AI Executive Order, alongside <strong><a href="https://www.reuters.com/legal/litigation/white-house-unveil-plan-push-us-ai-abroad-crack-down-restrictive-rules-document-2025-07-22/">3 new AI-related Executive Orders</a></strong>. The Trump Administration is also continuing to invest in AI infrastructure, recently <strong><a href="https://www.forbes.com/sites/michaeltnietzel/2025/07/20/nsf-gives-georgia-tech-20-million-to-build-ai-focused-supercomputer/">announcing</a></strong> that Georgia Tech would receive $20 million to build a new supercomputer that will use AI for scientific research. Meanwhile, a <strong><a href="https://www.axios.com/pro/tech-policy/2025/07/21/hawley-blumenthal-introduce-ai-protection-bill">bipartisan bill</a></strong> was introduced in the Senate to protect certain types of data from being used to train AI models.</p></li><li><p><strong>U.S. States</strong>. AI-related policy developments at the state level include:</p></li><li><p><strong>California. </strong>The California Judicial Council adopted a <strong><a href="https://jcc.legistar.com/View.ashx?M=F&amp;ID=14303119&amp;GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC">rule</a></strong> requiring courts to develop policies for the use of generative AI by judges and court employees. The new policies must be in place by September 1, 2025.</p></li><li><p><strong>New York.</strong> Mayor Eric Adams emphasized that he would <strong><a href="https://nypost.com/2025/07/19/us-news/nyc-mayor-adams-vows-to-use-ai-blockchain-tech-to-boost-services-if-re-elected/">leverage more AI</a></strong> technologies to improve city services if he were re-elected to a second term.</p></li><li><p><strong>Pennsylvania</strong>. Tech and energy companies plan on <strong><a href="https://www.cnn.com/2025/07/15/tech/ai-energy-summit-90-billion-trump-anthropic-meta-google">investing</a></strong> over $90 billion in Pennsylvania as part of a broader effort to turn the state into an AI hub. The investments are primarily aimed at securing new energy sources for AI infrastructure.</p></li><li><p><strong>Argentina. </strong>Netflix <strong><a href="https://www.theguardian.com/media/2025/jul/18/netflix-uses-generative-ai-in-show-for-first-time-el-eternauta">recently announced</a></strong> that it used generative AI in its first TV show, an Argentinian sci-fi series called El Eternauta (The Eternaut). Using AI remains a contentious issue within the entertainment industry due to concerns over job cuts.</p></li><li><p><strong>Asia</strong>. AI-related policy developments in Asia include:</p></li><li><p><strong>China.</strong> Beijing hosted China International Supply Chain Expo, which featured over 650 companies from 60 countries. Nvidia CEO, Jensen Huang, praised China during the expo as a &#8220;catalyst for global progress&#8221; because of its open source AI. The comments came one day after Nvidia <strong><a href="https://www.cnbc.com/2025/07/15/nvidia-says-us-government-will-allow-it-to-resume-h20-ai-chip-sales-to-china.html">announced</a></strong> it would resume sales of its H20 chips to China. While the U.S. has generally been concerned about chip exports to China, the Trump Administration was <strong><a href="https://www.cnbc.com/2025/07/15/howard-lutnick-says-china-is-only-getting-nvidias-4th-best-ai-chip.html">less worried</a></strong> because the H20 chips are Nvidia&#8217;s &#8220;fourth best&#8221; AI chips.</p></li><li><p><strong>Kazakhstan.</strong> In a bid to boost its AI industry, Kazakhstan <strong><a href="https://www.euronews.com/next/2025/07/20/the-most-powerful-supercomputer-in-central-asia-launches-in-kazakhstanin-bid-for-ai-boost">launched</a></strong> central Asia&#8217;s most powerful supercomputer. While the current government has expressed an interest in bolstering AI investments, the inability to retain appropriate talent may hinder the country&#8217;s ambitions.</p></li><li><p><strong>EU. </strong>Most of the major providers are agreeing to sign the EU&#8217;s Code of Practice for general purpose AI models, with Microsoft <strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/microsoft-likely-sign-eu-ai-code-practice-meta-rebuffs-guidelines-2025-07-18/">likely</a></strong> to sign and Anthropic<strong> <a href="https://www.anthropic.com/news/eu-code-practice">announcing</a></strong> it would sign. Meta continues to be outlier as the only model provider to <strong><a href="https://www.politico.eu/article/meta-wont-sign-eu-ai-code/">publicly state</a></strong> it will not sign the Code of Practice.</p></li><li><p><strong>Middle East. </strong>The U.S. <strong><a href="https://www.timesofisrael.com/israel-and-us-to-forge-200m-tech-hub-for-ai-and-quantum-science-development/">launched an initiative</a></strong> with Israel to build a strategic tech hub that focuses on AI and quantum development in an effort to counter influences from China, Iran, and Russia. The partnership will likely expand to include other Gulf and Central Asian nations. The announcement comes as a billion dollar AI chips deal was <strong><a href="https://techcrunch.com/2025/07/17/uaes-deal-to-buy-nvidia-ai-chips-reportedly-on-hold/">placed on hold</a></strong> between Nvidia and the UAE over concerns that the technology may end up in China.</p></li><li><p><strong>North America.</strong> AI-related policy developments in outside of the U.S. in North America include:</p></li><li><p><strong>Canada.</strong> Canadian AI company, <strong><a href="https://www.msn.com/en-ca/money/other/ai-company-cohere-wants-canada-to-use-its-g7-presidency-to-set-a-global-artificial-intelligence-agenda/ar-AA1FGbDn?apiversion=v2&amp;noservercache=1&amp;domshim=1&amp;renderwebcomponents=1&amp;wcseo=1&amp;batchservertelemetry=1&amp;noservertelemetry=1">Cohere</a></strong>, is reportedly lobbying Canadian government officials in an effort to have Canada influence AI policy for the G7, as it currently holds the bloc&#8217;s presidency.</p></li><li><p><strong>Mexico.</strong> Voice actors in Mexico are <strong><a href="https://www.yahoo.com/news/mexican-voice-actors-demand-regulation-080910321.html?guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&amp;guce_referrer_sig=AQAAALohZ2OYHdp8FuIDbai0Hx0iPzs28JRRY4aEj_5-9jLuLnBbcK7POtr5MUAr_g7o5oWl2pRdq82Eg5ojJeApmDEVhgTwDZnEae67QIRC_V6h_gjpPmnjpf-1S4rryLuUlXNA-OuffJMs1euo9lwv6ABrTmR03lqjr5nUgwT1gegv&amp;guccounter=2">demanding</a></strong> that the Mexican government enact regulations that would prohibit voice cloning without consent.</p></li><li><p><strong>Industry.</strong> Delta <strong><a href="https://www.theverge.com/news/709556/delta-air-lines-ai-ticket-price-rollout">announced</a></strong> that it would increase the volume of ticket prices that are influenced by AI from about 3% to 20%. Lawmakers and privacy groups have <strong><a href="https://fortune.com/2025/07/16/delta-moves-toward-eliminating-set-prices-in-favor-of-ai-that-determines-how-much-you-personally-will-pay-for-a-ticket/">voiced concerns</a></strong> over the initiative, characterizing it as invasive and predatory.</p></li></ul><div><hr></div><h3><strong>AI Model Fine Tuning&#8217;s Ship of Theseus Problem</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aykD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aykD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 424w, https://substackcdn.com/image/fetch/$s_!aykD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 848w, https://substackcdn.com/image/fetch/$s_!aykD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 1272w, https://substackcdn.com/image/fetch/$s_!aykD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aykD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png" width="1040" height="780" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:780,&quot;width&quot;:1040,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!aykD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 424w, https://substackcdn.com/image/fetch/$s_!aykD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 848w, https://substackcdn.com/image/fetch/$s_!aykD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 1272w, https://substackcdn.com/image/fetch/$s_!aykD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f26da5d-8f3f-4c0b-be31-4b2e97dc5747_1040x780.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On July 18, 2025, the European Commission <strong><a href="https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act">released guidelines</a></strong> to help clarify obligations for general purpose AI model providers and complement the recently published Code of Practice. While the focus has primarily been on obligations for frontier model providers, there are concerns about how model fine-tuning may pass those obligations from the model providers to the modifying organization.</p><p>Think of this from the lens of the Ship of Theseus thought experiment: for a wooden ship to have each of its planks and nails replaced piece by piece over time, is the whole of the entirely replaced ship still the same ship? The European Commission&#8217;s guidelines raise a parallel real-world conundrum for AI model providers and fine tuners.</p><p>The latest Commission guidance clarifies that when a downstream organization is considered to be the model provider when the training compute used to modify the model is &#8220;<strong>greater than a third of the training compute of the original model.</strong>&#8221; This threshold was chosen, instead of a fixed number of FLOPs, because the amount of compute needed to significantly modify a model is relative to its size. In contrast to previous texts, this document provides some specific methods for calculating compute and provides placeholder values when compute for the original model is not disclosed (as is the case for a majority of systems).</p><p>Many common applications are not likely to reach the specified thresholds. For instance, LLama-4 Maverick would need roughly 5.5 trillion words of fine-tuning data to meet the modification threshold. In contrast, common guidance recommends fine-tuning with a much smaller data set, typically suggesting starting with several thousand examples. Moreover, organizations who provide applications that require substantial modification may have a more difficult time understanding how they may reach these thresholds. The Act&#8217;s annex emphasizes that almost all forms of compute used should be tallied with some enumerated exceptions. However, not all compute can be easily ascertained. For example, reinforcement learning processes are often used to instill &#8220;helpful&#8221; and &#8220;harmless&#8221; behavior into models that use larger amounts of compute but do not have a direct formula to estimate it.</p><p><strong>Our Take: </strong>Many organizations may not need to be concerned with becoming general purpose model providers because they modify an existing GPAI system. However, the Act misses the mark on how smaller changes could substantially impact the model, especially when some techniques like parameter efficient fine-tuning can modify models with a much smaller amount of compute. The Act&#8217;s threshold may ultimately punish organizations that fine-tune at scale without making significant changes to the model as compared to those organizations that make smaller modifications that significantly affect alignment and alter model behavior.</p><div><hr></div><h3><em><strong>Meta</strong></em><strong> Analysis</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!liKJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!liKJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 424w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 848w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 1272w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!liKJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png" width="1200" height="797" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:797,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!liKJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 424w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 848w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 1272w, https://substackcdn.com/image/fetch/$s_!liKJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8f6dc0f-ad43-44de-929b-d8d61db7b73b_1200x797.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Meta has dominated several AI news cycles over the past few weeks with major headlines of <strong><a href="https://techcrunch.com/2025/06/27/meta-is-offering-multimillion-dollar-pay-for-ai-researchers-but-not-100m-signing-bonuses/">$100+ million signing bonuses</a></strong> for top AI researchers, and <strong><a href="https://techcrunch.com/2025/06/13/scale-ai-confirms-significant-investment-from-meta-says-ceo-alexandr-wang-is-leaving/">a purchase/investment in Scale AI</a></strong> that made Scale&#8217;s CEO the new head of Meta&#8217;s AI division. However, hidden amongst some of these market moves, there have also been two core stories that may impact organizations using Meta&#8217;s AI technologies. It&#8217;s worth looking into these key events, and understanding what it means for technology and policy professionals.</p><ul><li><p><strong>Meta refuses to sign EU AI Act Code of Practice - </strong>As of the time of writing, Meta is the only major AI lab to <strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/microsoft-likely-sign-eu-ai-code-practice-meta-rebuffs-guidelines-2025-07-18/">confirm that they</a></strong> <em><strong><a href="https://www.reuters.com/sustainability/boards-policy-regulation/microsoft-likely-sign-eu-ai-code-practice-meta-rebuffs-guidelines-2025-07-18/">won&#8217;t</a></strong></em> sign the newly released EU AI Act General Purpose AI Code of Practice. Competitors Anthropic, OpenAI, Mistral have confirmed signatures, and others like Microsoft are considered likely to do so. The Code outlines copyright, transparency, and safety testing protocols for frontier models, and is technically voluntary until full AI Act enforcement begins in August of 2026.</p></li><li><p><strong>Key Takeaway:</strong> Meta has had repeated frustrations with the EU over digital regulations, and could simply withdraw or prohibit their AI systems in the EU altogether. However, they have the world&#8217;s top leading <strong><a href="https://www.llama.com/">open-weight models</a></strong>, which makes avoiding the EU, or enforcing bans, particularly hard. Ironically, when <strong><a href="https://www.tomsguide.com/ai/apples-refusing-to-launch-apple-intelligence-in-the-eu-heres-why">Apple previously withheld their AI</a></strong> systems from the EU over regulatory concerns, they still landed in regulatory issues as it was considered an unfair use of market power under the DMA.</p></li><li><p><strong>Meta considers going closed source</strong> - Meta&#8217;s new AI head, Alexandr Wang, is reportedly <strong><a href="https://www.nytimes.com/2025/07/14/technology/meta-superintelligence-lab-ai.html">considering a major strategy pivot</a></strong> away from the open-weight Llama models, into closed proprietary models. Releasing open source (or open weight) models has been a major issue in AI policy discussions, as it can both enable innovation, but also cannot ever be &#8216;reigned in&#8217; or controlled once released.</p></li><li><p><strong>Key Takeaway:</strong> While a closed model could allow for some more flexibility to compete for Meta, there may be National Security aspects to consider. Most Chinese LLMs are also open weight, including Alibaba&#8217;s Qwen 3, and DeepSeek&#8217;s R1. While Llama is seen as less capable than these for now, halting its development would remove the leading US-centric open weight model which is still preferred by people for a range of applications. Chinese open models must meet testing requirements by Chinese authorities and align with CCP viewpoints making them unsuitable for a wide range of purposes in the US. We think it&#8217;s unlikely that Meta will pull back on their Llama models, especially if they&#8217;re going to want the Trump Administration&#8217;s help pushing back on heavy EU digital regulations, and avoiding current anti-trust scrutiny.</p></li></ul><p><strong>Our Take: </strong>Meta clearly has big ambitions to become a foundational layer amongst the likes of OpenAI and Anthropic, and their investments in talent and M&amp;A makes that clearer than ever. Meta is distinct amongst the landscape as the best performing Western open source LLM on the market, something until recently Meta has highlighted as their stand out differentiator. <strong><a href="https://www.cnbc.com/2025/03/06/meta-is-targeting-hundreds-of-millions-of-businesses-for-agentic-ai.html">Many organizations</a></strong> have indicated they want to build on top of Llama directly, or fine-tune it to create their own AI intellectual property, and to host and run entirely themselves on-prem, offering higher privacy and cyber protections. It&#8217;s also one of the most common models used in academia for research, as smaller versions of Llama can actually run on your average desktop computer without specialized AI hardware. But Meta&#8217;s recent actions have created uncertainty and unease on the apparent pivot, which may actually harm them in the short and long term. It&#8217;s unclear whether a closed-ecosystem system will be compliant in the EU, whether building on Llama will be further developed, or whether Meta&#8217;s systems will benefit from the improved security and reliability that accompany being a heavily-researched open system.</p><div><hr></div><h3><strong>FAccT Finding: AI Takeaways from ACM FAccT 2025</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4CIs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4CIs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4CIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Article content&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Article content" title="Article content" srcset="https://substackcdn.com/image/fetch/$s_!4CIs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!4CIs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdbff403-5e3d-469c-959d-999fefb327a3_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In June, while policy makers debated the finer points of the US AI Moratorium and the EU AI Act Code of Practice, researchers from around the world (including our Director of Machine Learning, Anastassia Kornilova) gathered at the ACM FAccT Conference to discuss the finer points of building responsible AI systems. What we learned reinforced the importance of involving affected populations in AI system development and testing, and the breadth and depth of testing necessary to build safer AI systems. At the same time, we heard that gaps persist between the richness of this research and how policy is crafted both at an organizational and government levels.</p><p>You can read more about our takeaways <strong><a href="https://www.trustible.ai/post/facct-finding-2025">on our blog</a></strong>.</p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <strong><a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a></strong>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Trustible and Databricks team up to operationalize the DAGF]]></title><description><![CDATA[Plus why the AI moratorium (RIP) would have backfired, why AI slop is making human-generated content a premium, and the inflection point in the AI copyright debate]]></description><link>https://insight.trustible.ai/p/trustible-and-databricks-team-up</link><guid isPermaLink="false">https://insight.trustible.ai/p/trustible-and-databricks-team-up</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 09 Jul 2025 13:32:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ECV5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! The newscycle in AI never slows down, and that goes for holiday weekends for all of us here in the U.S. Between last week&#8217;s AI regulatory moratorium eventually failing to pass, separate lawsuits involving Anthropic, Meta, and Midjourney being both filed and resolved on the topic of AI copyright infringement, and the European Commission signaling its full steam ahead for implementation of the EU AI Act, we&#8217;re watching history being made in real-time.</p><p>In today&#8217;s edition (5-6 minute read):</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><ol><li><p>Trustible operationalizing Databricks AI Governance Framework</p></li><li><p>Why the AI moratorium would have backfire</p></li><li><p>AI policy &amp; regulatory roundup</p></li><li><p>Why AI slop matters</p></li><li><p>The great AI copyright conundrum</p></li></ol><div><hr></div><ol><li><p><strong>Trustible Announces Databricks AI Governance Framework Implementation</strong></p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ECV5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ECV5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ECV5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:703090,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/167903507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ECV5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ECV5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc0b8bc7-7836-42d4-af40-023681f5b866_1024x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Last week, <strong><a href="https://www.trustible.ai/post/ai-governance-at-scale-trustible-becomes-official-databricks-technology-partner?utm_campaign=16247900-CN-2025-06-DatabricksFramework&amp;utm_medium=email&amp;_hsenc=p2ANqtz-9HxC5vBUrOVfnIMmDZZ21fGDQEpQzkefBIlsq2tD5tYY9LvHxs2da9G9bMZ171gVCrVf7Z99GYyKdOWbigpkUHz8uANA&amp;_hsmi=370311913&amp;utm_content=370311913&amp;utm_source=hs_email">our partner Databricks</a></strong> introduced their <strong><a href="https://www.databricks.com/blog/introducing-databricks-ai-governance-framework?utm_campaign=16247900-CN-2025-06-DatabricksFramework&amp;utm_medium=email&amp;_hsenc=p2ANqtz--ah8slUZfdNaMF5Fx36tZRDaE8kKM37d0-qRezk1WFIlUcoUEchxQixC2acoA1arkPPwob3_3_suS8rIPswH_YA4CeeA&amp;_hsmi=370311913&amp;utm_content=370311913&amp;utm_source=hs_email">AI Governance Framework</a></strong> (DAGF v1.0), a structured and practical approach to governing AI adoption across the enterprise. The DAGF acknowledges what many organizations are already discovering: AI governance is not simply a technical exercise. It&#8217;s about aligning people, processes, policies, and platforms to ensure that AI systems are trustworthy, compliant, and scalable.</p><p>The Databricks AI Governance Framework marks a pivotal step in helping organizations balance innovation with responsible deployments. But success depends on operationalizing them effectively across people, processes, and technology.</p><p>Yesterday, <strong><a href="https://www.trustible.ai/post/trustible-becomes-official-implementation-partner-for-the-databricks-ai-governance-framework-dagf?utm_campaign=16247900-CN-2025-06-DatabricksFramework&amp;utm_medium=email&amp;_hsenc=p2ANqtz-8P7Up976uxLfQYoIWGvJYbVBU0NztyXEIi8Gst0cWxk0aqEvgPUW60tAUOmBERSffcrAHi9SaXon4dFlq1XytojsW8Kg&amp;_hsmi=370311913&amp;utm_content=370311913&amp;utm_source=hs_email">we announced</a></strong> that Trustible is proud to serve as the official Technology Implementation Partner of the Databricks AI Governance Framework and a key contributor alongside leading organizations such as Capital One, Meta, Netflix, Grammarly, and others. DAGF offers a practical, flexible framework designed to help enterprises embed AI governance into day-to-day operations, regardless of where they are in their AI maturity journey. You can read more about how we&#8217;ve interpreted the DAGF in the <strong><a href="https://insights.trustible.ai/hubfs/Databricks%20AI%20Governance%20Framework%20-%20Trustible.pdf">Trustible platform in our whitepaper here</a></strong>.</p><p><strong>Key Takeaway:</strong> Starting this week, Trustible customers will be able to align their AI governance efforts directly to the framework through a dedicated DAGF module within the Trustible platform and help embed AI governance into the fabric of your AI strategy so you can build, deploy, procure, and scale with confidence. This is the first of many partnerships to come with AI deployers, infrastructure providers, and ecosystem partners to ensure enterprises of all size and shapes have access to ready-to-deploy governance solutions that adapt as quickly as the market.</p><div><hr></div><p><strong>2. Why the AI Moratorium would have backfired</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vGxg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vGxg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 424w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 848w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 1272w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vGxg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png" width="1170" height="780" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:780,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:543429,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/167903507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vGxg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 424w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 848w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 1272w, https://substackcdn.com/image/fetch/$s_!vGxg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2f283e6-7b0f-4532-a2b6-3c3c9ad6fd38_1170x780.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>While we now know the ultimate fate of the proposed State AI Legislative Moratorium that was included in the &#8216;One Big Beautiful Bill&#8217; budget (<strong><a href="https://apnews.com/article/congress-ai-provision-moratorium-states-20beeeb6967057be5fe64678f72f6ab0">it was removed by the Senate in a 99-1 vote</a></strong>), the idea is likely to stick around, and similar proposals may appear in the future. We supported its removal for a variety of reasons, but our biggest argument against it was that it would have backfired. Specifically, we think banning AI regulations in the absence of any federal clarity would have hurt the very startups and innovative environment that its proponents were trying to protect. Here&#8217;s a brief outline of why:</p><ul><li><p><strong>Trust</strong> - The majority of Americans don&#8217;t trust AI. Even the perception that AI will become <em>less</em> regulated, would hurt that trust even more. Trust in AI systems is directly proportional to revenue for AI companies which can fuel further innovation.</p></li><li><p><strong>Level Playing Field</strong> - Big companies and Big Tech have dozens of lawyers, machine learning experts, and marketing teams that are able to deal with the uncertainty of the current environment. Smaller companies and startups don&#8217;t. A clear set of AI standards can help a startup be on even footing as Big Tech from a compliance perspective.</p></li><li><p><strong>Legal Uncertainty</strong> - Let&#8217;s be honest, with so much opposition from State Governors and Attorney Generals, this Moratorium would be challenged almost instantly, and would likely take years to work through the federal circuits. During that time some State laws in Colorado and Texas would be in a State of uncertainty which businesses and investors hate.</p></li><li><p><strong>A Solution Before A Problem</strong> - The overwhelming majority of the &#8216;1000+&#8217; AI bills at the State level don&#8217;t actually regulate AI directly (most simply mention the term &#8216;Artificial Intelligence), and many have common sense overlapping requirements around issues like AI disclosure, and protecting against Deep Fakes. Perhaps ironically, Big Tech&#8217;s State lobbying efforts have already created a fairly lightweight and consistent State regulatory environment.</p></li><li><p><strong>Reduced Information Sharing</strong> - At the moment, there is virtually no information sharing within the AI ecosystem for AI vulnerabilities and incidents. That is primarily because of legal liability issues where companies do not want to admit on paper about any incident. This can be done with a regulator or standards body to get around competition issues. This lack of information sharing hurts everyone because it means we&#8217;re not learning from the mistakes of others and innovating as fast.</p></li></ul><p>For a bigger deep-dive analysis, we have <strong><a href="https://www.trustible.ai/post/trustible-s-perspective-the-ai-moratorium-would-have-been-bad-for-ai-adoption">a more in-depth post here.</a></strong></p><p><strong>Our Take:</strong> The idea that all regulation is bad for business is a little simplistic, and can often just be a form of regulatory capture. A balanced federal framework would be the best approach for everyone involved, but pre-empting things before that framework has even been discussed would hurt the ability of all but the biggest AI companies.</p><div><hr></div><p><strong>3. Policy Round-Up</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wHpf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wHpf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wHpf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:163471,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/167903507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wHpf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!wHpf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64cbb6b-23b3-473d-92ca-e78b44e5ee80_1024x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here is our quick synopsis of the major AI policy developments:</p><ul><li><p><strong>U.S. Federal Government.</strong> The failed federal AI moratorium (see our write-up for more details) has <strong><a href="https://www.axios.com/2025/07/03/artificial-intelligence-moratorium-future-regulation">some speculating</a></strong> that it may renew a push for federal legislation that explicitly preempts state laws, though the contours of potential legislation remain murky.</p></li><li><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>California. </strong>The California Civil Rights Council <a href="https://calcivilrights.ca.gov/2025/06/30/civil-rights-council-secures-approval-for-regulations-to-protect-against-employment-discrimination-related-to-artificial-intelligence/">approved</a> a final regulation that clarifies how existing discrimination laws apply to AI tools for employment decisions. The new rules will take effect on October 1, 2025.</p></li><li><p><strong>New York.</strong> Governor Kathy Hochul <a href="https://www.theguardian.com/us-news/2025/jun/23/new-york-nuclear-power-plant">announced</a> the construction of a new nuclear power plant in upstate New York to fulfill new energy demands, which is caused in part by growing AI usage. The announcement comes amidst a push from the <a href="https://www.whitehouse.gov/fact-sheets/2025/05/fact-sheet-president-donald-j-trump-deploys-advanced-nuclear-reactor-technologies-for-national-security/">Trump Administration</a> and <a href="https://www.cbsnews.com/news/big-techs-big-bet-on-nuclear-power-to-fuel-artificial-intelligence/">big tech companies</a> to power AI computing infrastructure with nuclear energy.</p></li></ul></li><li><p><strong>Canada.</strong> It appears Canada&#8217;s new government <strong><a href="https://thelogic.co/news/exclusive/canada-ai-regulation-copyright-evan-solomon/">will not revive</a></strong> the <strong><a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">Artificial Intelligence and Data Act</a></strong>, which Parliament terminated ahead of the federal elections in April 2025. Canadian lawmakers are considering which aspects of the former legislation they may want to pursue, such as addressing issues with copyright and AI. The move aligns with a broader global movement away from AI regulation and towards AI innovation.</p></li><li><p><strong>South America. </strong>AI-related policy developments in South America include:</p><ul><li><p><strong>Chile. </strong>Chilean lawmakers are <a href="https://www.bnamericas.com/en/news/tech-and-business-sector-concerns-over-chiles-artificial-intelligence-bill">facing opposition</a> to their proposed comprehensive AI law. The proposed bill is an EU-inspired framework, which critics claim will harm technological investments and development if enacted.</p></li><li><p><strong>Brazil.</strong> Leaders of the BRICS countries (Brazil, Russia, India, China, and South Africa) are <a href="https://www.reuters.com/world/china/brics-leaders-call-data-protections-against-unauthorized-ai-use-2025-07-06/">expected</a> to release a statement that calls for data protections against unauthorized AI use. The push will come as part of a two-day summit among BRICS leaders in Rio de Janeiro. BRICS serves as a diplomatic forum for developing countries and has recently been <a href="https://time.com/7300395/trump-tariffs-threat-brics-anti-american-concerns/">accused</a> by President Trump of promoting "anti-American policies."</p></li></ul></li><li><p><strong>EU.</strong> The next set of EU AI implementation deadlines is approaching on August 2 and the European Commission <strong><a href="https://techcrunch.com/2025/07/04/eu-says-it-will-continue-rolling-out-ai-legislation-on-schedule/">squashed rumors</a></strong> that it may pause enforcement obligations for certain EU AI Act provisions. Tech companies have been working behind the scenes to delay the Act&#8217;s enforcement timelines, with the obligations for general purpose AI models being top of mind (which kick-in on August 2). Tech companies argued that delays with the voluntary Codes of Practice warranted the postponement, which <strong><a href="https://www.reuters.com/business/media-telecom/code-practice-help-companies-with-ai-rules-may-come-end-2025-eu-says-2025-07-03/">may not be released</a></strong> until the end of 2025.</p></li><li><p><strong>Industry.</strong> Microsoft <strong><a href="https://www.bbc.com/news/articles/cdxl0w1w394o">announced</a></strong> that it would layoff approximately 4% of its workforce as it seeks to make heavier investments in developing its own AI. The move comes as big tech remains locked in an AI arms race, which has seen companies like Meta offer $100 million signing bonuses for top AI talent.</p></li></ul><div><hr></div><p><strong>4. Why AI Slop Matters</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OK-S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OK-S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 424w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 848w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 1272w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OK-S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png" width="1084" height="1072" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1072,&quot;width&quot;:1084,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1936449,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/167903507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OK-S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 424w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 848w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 1272w, https://substackcdn.com/image/fetch/$s_!OK-S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01e4ad95-fc48-4f2f-97e4-e460e7cc3e38_1084x1072.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>With recent improvements in the quality and cost-effectiveness of AI generated content, it seems impossible to escape <strong><a href="https://en.wikipedia.org/wiki/AI_slop">&#8216;AI Slop&#8217;</a></strong> - low quality generated content used mainly for driving online engagement. We see it on our social media feeds, in the content we read, and even increasingly in our professional work. The public is becoming increasingly aware of it as well, with recent news stories covering its impact on events<strong><a href="https://gizmodo.com/people-hated-the-squid-game-ending-so-theyre-using-ai-to-make-new-ones-2000624999"> like the most recent season of Squid Games</a></strong>, or even <strong><a href="https://www.theguardian.com/technology/2025/jun/29/fake-diddy-ai-videos-youtube">the Sean Combs trial</a></strong>. The topic was even covered in-depth by <strong><a href="https://www.youtube.com/watch?v=TWpg1RmzAbc&amp;pp=ygUibGFzdCB3ZWVrIHRvbmlnaHQgd2l0aCBqb2huIG9saXZlcg%3D%3D">John Oliver in a recent &#8216;Last Week Tonight&#8217; episode.</a></strong> How big of an issue is &#8216;AI Slop&#8217; however? Is it simply the new &#8216;Spam&#8217;, something to be ignored that will eventually fade into the background, or are there major governance implications for it? Here&#8217;s a brief overview of why &#8216;AI Slop&#8217; may be relevant to organizations using AI.</p><p><strong>Wasteful Spend</strong> - Even before the current &#8216;Age of AI&#8217;, there was a conspiracy theory called the &#8216;<strong><a href="https://en.wikipedia.org/wiki/Dead_Internet_theory">Dead Internet Theory</a></strong>&#8217; that postulated that the majority of online content and interactions were driven by automated bots. The biggest challenge with this is that many organizations spend massive amounts of money on ads for &#8216;engagement&#8217;, or derive market insights from it. Big Tech platforms unfortunately <strong><a href="https://lifehacker.com/tech/meta-is-experimenting-with-ai-generated-comments">have an incentive to &#8216;boost&#8217; engagement artificially</a></strong>, even though it will yield poor ROI for the advertisers.</p><p><strong>Degraded Reputation</strong> - Many organizations differentiate themselves based on the quality of the services they provide. Consider every top tier law firm, editorial publication, or consulting business that charges high rates for access to top thinkers. The problem: What if those &#8216;top thinkers&#8217; are using the same AI as everyone else? The temptation to use AI systems may win-out, even as studies show that the <strong><a href="https://www.nature.com/articles/s41562-025-02173-x">diversity of AI content is actually quite low</a></strong>, and that <strong><a href="https://arxiv.org/abs/2506.08872?ref=404media.co">overuse can degrade our own cognitive abilities over time.</a></strong> It will be a constant fight for organizations that seek to differentiate based on <em>quality</em> to avoid their reputation degrading as a result of too much slop.</p><p><strong>Key Takeaway:</strong> While AI generated content is quick and easy to create, there may be a persistent bias against such content, and the internet will likely become overwhelmed for it. This could create a market for authentic human-generated content, but then maintaining that quality and output could become difficult to maintain. For enterprises that rely on AI to generate content, especially as part of their marketing strategy, expect that lack of authenticity ultimately to reduce the effectiveness of their strategies. It&#8217;s an important reminder that while AI is a transformative technology, it&#8217;s also a tool, not to replace human content generation.</p><div><hr></div><p><strong>5. The Great AI Copyright Conundrum</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hPqx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hPqx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 424w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 848w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hPqx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png" width="1456" height="966" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:966,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3020784,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/167903507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hPqx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 424w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 848w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 1272w, https://substackcdn.com/image/fetch/$s_!hPqx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ce86cee-1cdc-4c03-940c-c0909d3db81c_1746x1158.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Source: New York Times</p><p>Over the span of three days, two major cases on AI and copyright law were released. On June 23, 2025, a judge <strong><a href="https://apnews.com/article/anthropic-ai-fair-use-copyright-pirated-libraries-1e5cece51c2e4bd0bb21d94de2abb035">ruled in favor of Anthropic</a></strong> after a group of authors sued Anthropic for copyright infringement by alleging that the company trained Claude on their protected works without their permission. The judge found that, while Anthropic may have broken the law when it trained Claude on millions of pirated books, the books that were legally purchased for model training purposes did not violate copyright law. The judge reasoned training the model on the books was a fair use because Claude&#8217;s outputs generated new text that was "quintessentially transformative&#8221; from the original material.</p><p>Two days later, a judge <strong><a href="https://apnews.com/article/meta-ai-copyright-lawsuit-sarah-silverman-e77968015b94fbbf38234e3178ede578">found in favor of Meta</a></strong> after a separate group of authors sued the company for using their copyrighted books to train Llama. The judge found that Llama&#8217;s outputs did not cause sufficient market harm to the authors because Llama was not able to generate &#8220;any meaningful portion&#8221; of the authors&#8217; books that would threaten the books&#8217; market value. Moreover, the authors did not present meaningful evidence that Meta&#8217;s use of their books diluted their value. The judge seemingly left open the door to further litigation on this issue by noting that the ruling only impacted &#8220;the rights of [the] 13 authors&#8212;not the countless others whose works Meta used to train its models &#8230; this ruling does not stand for the proposition that Meta&#8217;s use of copyrighted materials to train its language models is lawful.&#8221;</p><p>The big tech companies are heralding this as a win, however it does not solve the broader issues related to how AI model providers are using protected works to train their models. Companies should also take note that this does not alleviate them from infringing on someone&#8217;s intellectual property (IP) rights. Not all models are created equal and it is important to understand whether the underlying model(s) for a company&#8217;s AI products or services have guardrails in place to avoid violating IP laws. Companies should understand how a model handles IP in their training data. For instance, <strong><a href="https://aimodelratings.com/Amazon_Nova_Family/">Trustible Model Ratings</a></strong> identifies when a model has policies around IP in their training data. Moreover, groups like Creative Commons are working on <strong><a href="https://techcrunch.com/2025/06/25/creative-commons-debuts-cc-signals-a-framework-for-an-open-ai-ecosystem/">frameworks</a></strong> that help balance the IP rights with the need to have high quality datasets available to train AI systems.</p><p><strong>Our Take:</strong> Big tech won the battle but is far from winning the war when it comes to how AI uses IP. Policymakers are thinking about how best to balance IP law with AI innovation, but do not expect an answer in the foreseeable future. In the meantime, companies should be taking steps to ensure that training data is properly licensed when appropriate, selecting models that have IP safeguards in place, and reviewing outputs to avoid using potentially infringing materials.</p><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <strong><a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a></strong>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Future of Jobs in the AI Era and the Risk of AI Over-Reliance]]></title><description><![CDATA[Plus an overview of AI risk management strategies, AI benchmarking needs an uplevelling, and a global policy and regulatory roundup]]></description><link>https://insight.trustible.ai/p/the-future-of-jobs-in-the-ai-era</link><guid isPermaLink="false">https://insight.trustible.ai/p/the-future-of-jobs-in-the-ai-era</guid><dc:creator><![CDATA[Trustible]]></dc:creator><pubDate>Wed, 25 Jun 2025 20:17:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!02Ui!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! Last week, the Trustible team was out in force at the Responsible AI Summit North America in Reston, Virginia, connecting with AI governance practitioners from leading enterprises around the world scaling their efforts. (And this week, we&#8217;re trying not to melt every time we step outside our HQ in Rosslyn, Virginia.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!02Ui!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!02Ui!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 424w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 848w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!02Ui!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3769100,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trustible.substack.com/i/166841461?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!02Ui!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 424w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 848w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!02Ui!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F094f6490-ad9c-42b5-a261-8562739a92dc_4283x4283.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In today&#8217;s edition (5-6 minute read):</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><ol><li><p>The Future of Jobs in AI Age</p></li><li><p>How to Respond to AI Risks</p></li><li><p>The plague of test data contamination (AI Cheating)</p></li><li><p>The Growing Problem of Over-Relying on AI</p></li><li><p>AI Policy &amp; Regulatory Roundup</p></li></ol><div><hr></div><ol><li><p>The Future of Jobs in the AI Age</p></li></ol><p>The jury is still out on how much AI will impact the workforce. Plenty of leaders, including those at <a href="https://www.washingtonpost.com/technology/2025/06/17/amazon-jobs-ai-workforce-reduction/">AWS</a> and <a href="https://www.entrepreneur.com/business-news/jpmorgan-to-cut-headcount-in-some-divisions-due-to-ai/491864?utm_source=chatgpt.com">JPMorgan</a>, have started laying the groundwork for shrinking white-collar staff as a result of AI. Some groups argue that this is yet another <a href="https://www.cnn.com/2025/06/18/business/ai-warnings-ceos">&#8216;fear&#8217; tactic aimed</a> at trying to encourage <a href="https://x.com/tobi/status/1909231499448401946">teams to learn AI skills</a>, drive down salary costs, and increase productivity. Policymakers have also been acutely aware of potential negative employment impacts and have sought to offer some pushback, such as requiring employers to disclose when <a href="https://news.bloomberglaw.com/daily-labor-report/ais-power-to-replace-workers-faces-new-scrutiny-starting-in-ny">AI contributes to layoffs</a>. There are some leaders that argue AI will not initially replace workers, but rather a human who is highly skilled in <em>using</em> AI tools, and thus more productive, will do so. There are even some who argue that <a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/05/why-economists-may-be-severely-underestimating-ais-jobs-impact/">economists are </a><em><a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/05/why-economists-may-be-severely-underestimating-ais-jobs-impact/">underestimating the impact</a></em><a href="https://www.forbes.com/councils/forbestechcouncil/2025/06/05/why-economists-may-be-severely-underestimating-ais-jobs-impact/"> AI</a> may have on jobs and the workforce, and the pace at which it could happen. <a href="https://www.nytimes.com/2025/05/30/technology/ai-jobs-college-graduates.html">New college graduates seem to be the first impacted</a>, as AI cannot yet replace expertise and experience, but can be better at some tasks than entry level knowledge workers. This of course begs the question of how we will keep training up the experts of the future without these early experiences, especially <a href="https://www.fastcompany.com/91355975/your-reliance-on-chatgpt-might-be-really-bad-for-your-brain">if using AI itself may reduce critical thinking.</a></p><p>However it&#8217;s not all doom and gloom. The counter-argument on these impacts is that humans will adapt their skills and preferences in the AI era, and new jobs will also emerge as a result. A recent <a href="https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html?unlocked_article_code=1.RU8.QOOs.nnt4HAayO6ks&amp;smid=url-share">New York Times Magazine article dug deep into 22</a> new roles that will be created <em>because</em> of AI. The article breaks down the 22 roles into the categories of Trust (those involved in overseeing, governing, and auditing AI systems), Integration (people doing the &#8216;last mile&#8217; work of connecting data and systems to AI), and Taste (creative roles that are specifically aimed to finding that that AI is <em>not</em> capable of doing). It&#8217;s unclear how many roles in this category there will be, and many do still require a high skill level, and ironically, a deep understanding of how AI systems work, and their limitations.</p><p><strong>Key Takeaway:</strong> A big question for corporate leaders, and policy makers, is whether these impacts will be fairly sudden, or gradual over time, and how fast people will be able to retrain (ironically using AI) towards new economic opportunities, or whether &#8216;AI replacement&#8217; will become the next big political issue much like &#8216;globalized of production&#8217; has become recently in the US.</p><ol start="2"><li><p><strong>How to respond to AI risks</strong></p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!viuO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!viuO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 424w, https://substackcdn.com/image/fetch/$s_!viuO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 848w, https://substackcdn.com/image/fetch/$s_!viuO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!viuO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!viuO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!viuO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 424w, https://substackcdn.com/image/fetch/$s_!viuO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 848w, https://substackcdn.com/image/fetch/$s_!viuO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!viuO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d0cb3f2-e383-4cda-8ab7-bd00a6b05964_1600x1200.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>AI risk management is a key (perhaps the key) part of an AI governance strategy. Identifying and managing risks can be an incredibly difficult task for many enterprises given the fast pace of change in AI systems and the evolving landscape of AI regulations (something help with at<a href="https://www.linkedin.com/company/trustible/"> Trustible</a>). Moreover, you may not even be able to control every risk yourself since so many use cases rely on leveraging third-party platforms.</p><p>There are four types of Enterprise Risk Management techniques that every AI governance professional should know about.</p><ul><li><p><strong>Avoid &#8211; </strong>Sometimes the safest move is to say &#8220;no.&#8221; If an AI system can&#8217;t meet your privacy or cybersecurity standards, don&#8217;t use it. However, saying &#8220;no&#8221; of course may not always be an option - let&#8217;s face it, AI is incorporated into more and more SaaS products each and every day, and IT and risk professionals are becoming more aware of the &#8220;Shadow AI&#8221; problems swiftly emerging across enterprise tech stacks.</p></li><li><p><strong>Mitigate &#8211; </strong>Many risks can be reduced. Steps like removing personal data, checking for bias, adding human-in-the-loop review, and monitoring the system can bring risk down to an acceptable level. But not all mitigations are technical in nature; human mitigation, such as <a href="https://www.trustible.ai/ai-literacy-training">AI literacy training</a>, is an effective form of mitigation that forms a key part of a comprehensive mitigation approach.</p></li><li><p><strong>Accept &#8211; </strong>After careful review, some leftover risk may be worth taking. Conduct the right risk assessment, document the decision, get the right approvals, and move forward with implementation.</p></li><li><p><strong>Transfer &#8211; </strong>When risk remains high, shift it elsewhere. In the AI value chain, risk is transferred downstream to groups building a model, those hosting the infrastructure supporting that model, the developers potentially customizing the model, and the end users consuming outputs. From a third-party risk management perspective, contracts that require vendors to cover certain losses or specialized insurance policies can carry part of the load and shield some risk.</p></li></ul><p>Our take: Too many organizations simply avoid the risks altogether. This slows down or even halts AI adoption inside of their organizations. These four categories of risk response strategies can help you gain confidence in your organization&#8217;s ability to manage the risk and move forward with your AI innovation strategy.</p><ol start="3"><li><p><strong>The Plague of Test Data Contamination</strong></p></li></ol><p>While students have increasingly been accused of cheating using LLMs, models are doing the same through test data contamination. A <a href="https://arxiv.org/pdf/2506.12286">recent study</a> shows that models that excel at SWE-Bench, a popular software engineering benchmark, may be doing so through memorization, not a genuine understanding of coding tasks. The memorization occurs due to test data contamination, wherein the answers to the benchmark question are present in the training data. This invalidates the results, because the goal of the benchmarks is to measure how models perform to <strong>unseen</strong> data. The phenomena is not new - the MMLU benchmark has largely been deprecated due to <a href="https://aclanthology.org/2024.naacl-long.482.pdf">major contamination issues</a>. Unlike traditional machine learning, where training and test datasets consist of datapoints that can be easily enumerated, LLMs are trained on enormous datasets that are harder to filter. If benchmark data is public, it may inadvertently be included in a web scrape (especially if a copy of it is published on an unofficial website).</p><p>Model providers use benchmarks to demonstrate on increasingly harder tasks; however, contamination issues put into question the construct validity of these assessments. Novel tasks, like those in Apple&#8217;s <a href="https://machinelearning.apple.com/research/illusion-of-thinking">recent reasoning paper</a>, raise a question about the actual abilities of LLMs. Several recent benchmarks have attempted to protect against contamination, including <a href="https://arxiv.org/pdf/2506.11928">LiveCodeBenchPro</a>, which is regularly updated with novel coding questions, and MLCommon&#8217;s <a href="https://ailuminate.mlcommons.org/benchmarks/?language=en_US">AILuminate</a>, a closed benchmark run by the creator against the models&#8217; APIs. For such benchmarks, it is crucial that the creators are impartial parties (unlike FrontierMath that was <a href="https://www.lesswrong.com/posts/8ZgLYwBmB3vLavjKE/some-lessons-from-the-openai-frontiermath-debacle">partially funded by OpenAI</a>), and that each model is only evaluated once (to avoid gamifying the results). In addition to reducing contamination, such benchmarks can improve reproducibility, as LLM performance is sensitive to prompt formatting and parameters. Closed benchmarks have limitations, too, like reduced visibility into potential biases and bugs in their data.</p><p><strong>Key Takeaway: </strong>Many popular LLM benchmarks are likely overestimating model performance due to test data contamination. Some groups are starting to implement semi-private benchmarks that can combat this concern, but this industry still has room for improvement, as a broader set of <a href="https://arxiv.org/html/2411.12990v1">better benchmark criteria</a> is being developed.</p><ol start="4"><li><p><strong>The Growing Problem of Over-Relying on AI</strong></p></li></ol><p>When we generally think about AI risks, the primary focus tends to be on bias or discrimination in the AI systems. However, these risks are realized and cause the most impact when applied in high-risk use cases (e.g., creditworthiness or employment-related decisions). Yet, the vast majority of use cases are more likely than not <a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025">lower risk uses</a>. When we think about risks through the dimensions of likelihood and severity, we may overlook one of the more likely risks that arise within lower risk use cases that has an evolving severity level: overreliance on AI.</p><p>Trustible&#8217;s risk taxonomy explains that overreliance on AI &#8220;occurs when users start to accept incorrect AI system outputs due to excessive trust in the system.&#8221; AI&#8217;s widespread use, especially generative AI systems, exacerbates the likelihood that this risk will occur. For instance, attorneys are increasingly relying on AI to help write legal filings but are <a href="https://www.sltrib.com/news/politics/2025/05/29/lawyer-punished-filing-brief-with/">falling prey</a> to the AI&#8217;s confidence in legal citations. While the system&#8217;s hallucinations are the primary risk, it is the attorney&#8217;s <a href="https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/">confidence in content</a> that manifests the harm. The education system is also seeing <a href="https://www.axios.com/2025/05/26/ai-chatgpt-cheating-college-teachers">major upheavals</a> as more students rely on generative to complete assignments, which has led to an increase in cheating or plagiarized assignments. Higher education professors have also relied on generative AI for their lessons, which raised questions over the <a href="https://www.newsweek.com/college-ai-students-professor-chatgpt-2073192">value of college classes</a>.</p><p>The severity of harm from overreliance on AI is the other dimension to consider, though one that is still coming into focus. A recent <a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">MIT study</a> indicates that overreliance on AI systems has negative cognitive impacts, such as with memory or executive function. This may have an outsized impact on students who use generative AI to complete school work, but can also have negative consequences for adults who over-rely on the technology to accomplish tasks that require a certain level of cognitive ability.</p><p><strong>Our Take:</strong> Overreliance on AI is a far-reaching risk but one that is preventable. Implementing AI literacy programs can help users understand how to leverage the technology without blindly accepting its outputs and excluding personal expertise or thought. Moreover, having an explanation accompany system outputs can help users easily identify source material to determine its validity and applicability.</p><ol start="5"><li><p><strong>AI Policy &amp; Regulatory Roundup</strong></p></li></ol><p>Here is our quick synopsis of the major AI policy developments:</p><ul><li><p><strong>U.S. Federal Government.</strong> The federal AI moratorium took a big step towards becoming law when the Senate Parliamentarian <a href="https://www.politico.com/news/2025/06/22/senate-parliamentarian-greenlights-state-ai-law-freeze-in-gop-megabill-00416499">determined</a> it did not violate Senate procedure and could remain in the Republican&#8217;s reconciliation bill. The moratorium survived after Senate Republicans amended the language to condition states&#8217; access to certain broadband funds on their temporarily pausing AI laws. The language could still be stricken from the bill via floor amendment. It also faces opposition by House Freedom Caucus members.</p></li><li><p><strong>U.S. States. </strong>AI-related policy developments at the state level include:</p><ul><li><p><strong>New York. </strong>The state legislature passed the <a href="https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A">Responsible AI Safety and Education (RAISE) Act</a>, which is a modified version of California's SB 1047. Notable differences from SB 1047 include not requiring a &#8220;kill switch&#8221; for frontier models and caps on penalties. Legislators have not yet sent the bill to Governor Kathy Hochul, but can do so at any time in 2025. Once it arrives on the Governor&#8217;s desk, she will have 30 days to act on it.</p></li><li><p><strong>Texas. </strong>On June 22, 2025, Governor Greg Abbott signed the <a href="https://capitol.texas.gov/BillLookup/Text.aspx?LegSess=89R&amp;Bill=HB149">Texas Responsible AI Governance Act</a> into law. It will <a href="https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&amp;Bill=HB149">take effect</a> on January 1, 2026.</p></li></ul></li><li><p><strong>European Union.</strong> <a href="https://www.politico.eu/article/swedish-pm-calls-to-pause-eu-ai-rules/">Sweden&#8217;s Prime Minister</a>, Ulf Kristersson, is the latest (and first EU government leader) to support pausing the EU AI Act. His comments come as the EU considers a <a href="https://single-market-economy.ec.europa.eu/single-market/simplification_en">digital omnibus package</a> that aims to simplify regulations and could include amendments to the AI Act.</p></li><li><p><strong>Latin America.</strong> A group of 12 Latin American countries, led by Chile, are working to <a href="https://www.reuters.com/world/americas/latin-american-countries-launch-own-ai-model-september-2025-06-17/">develop an LLM</a> focused on the cultural and linguistic diversity of the region. Latam-GPT is an open source project that is intended to help boost AI accessibility across Latin American countries.</p></li><li><p><strong>United Nations (UN). </strong>The United Nations Working Group on Business and Human Rights released a <a href="https://www.ohchr.org/en/documents/thematic-reports/ahrc5953-artificial-intelligence-procurement-and-deployment-ensuring">report</a> warning that AI systems must be developed to align with the UN Guiding Principles on Business and Human Rights. The report was presented as part of the 59th session of the Human Rights Council.</p></li></ul><p>&#8212;</p><p>As always, we welcome your feedback on content! Have suggestions? Drop us a line at <a href="mailto:newsletter@trustible.ai">newsletter@trustible.ai</a>.</p><p>AI Responsibly,</p><p>- Trustible Team</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://insight.trustible.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading The Trustible AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>