Trustible AI Newsletter #46: AI Needs Fallbacks
Plus: an AI incident tests the boundaries of Section 230, why Reddit holds a special place in the eyes of LLMs, and our global AI policy roundup
Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! In this week’s edition, we’re covering:
Trustible’s Take - The Need for AI Fallbacks
AI Incident Spotlight - (AI Incident 1248)
Reddit’s Hidden Hand in AI Training and Why It Matters
Policy Round-Up
1. Trustible’s Take - The Need for AI Fallbacks
Several owners of Eight Sleep, a tech-enabled 'smart bed', were suddenly woken up early last Monday by their beds going haywire. It turns out that a massive AWS outage, caused by a Domain Name System (DNS) misconfiguration, was able to take down more than just websites and SaaS applications. This event highlights two core risks related to the evolving AI ecosystem.
The first is that AI systems still depend on a large stack of supporting infrastructure. For example, all internet traffic relies on DNS to figure out which servers to talk to, AI systems need to read and write information to databases like AWS DynamoDB, and the physical hardware that performs the underlying computation is also at risk. One reason the AWS outage was so widespread is that there are only a handful of hyperscale cloud platforms, and the affected region ('us-east-1', based in Northern Virginia) happens to be one of the oldest and most heavily used. As we incorporate more AI into everyday systems and rely on it more, the risk of creating 'central points of failure' increases.
The second issue at play was that the devices with embedded 'smart' capabilities did not have appropriate fallback mechanisms. Instead of simply disabling the smart features and acting as a plain old bed, some of the smart beds started increasing their temperature endlessly. A few days after the outage, Eight Sleep rolled out a dedicated 'outage mode' that uses local Bluetooth to send instructions to the bed. For now, we have a 'non-AI' path and process for most things, but will that still be true 10 years from now? One popular AI policy proposal is to require non-AI fallback, bypass, or appeal processes as a primary mitigation against this potential overreliance on AI.
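To make the idea concrete, here is a minimal sketch of what a 'graceful degradation' path can look like in software. The client object and function names are hypothetical, not Eight Sleep's or AWS's actual APIs; the point is simply that a cloud or AI failure should resolve to a safe, boring local default rather than runaway behavior.

import logging

logger = logging.getLogger("smart_device")

SAFE_DEFAULT_TEMP_C = 29.0  # hypothetical neutral setpoint used when the cloud is unreachable

def get_target_temperature(cloud_client, user_id):
    """Ask the cloud 'smart' service for a setpoint, but never let an outage
    leave the device in a runaway state."""
    try:
        # Hypothetical call: a DNS failure, timeout, or server error all land in the except block.
        return cloud_client.recommend_temperature(user_id, timeout_seconds=2)
    except Exception as exc:
        logger.warning("Cloud service unavailable (%s); falling back to safe default", exc)
        return SAFE_DEFAULT_TEMP_C

The same pattern applies to AI features more broadly: a deterministic rule, a cached answer, or a human process should be the documented default whenever the model, or the infrastructure beneath it, is unreachable.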
Key Takeaway: The AI ecosystem is not yet mature enough for 'multi-cloud' deployments, partly because not all model providers are available across all major cloud providers. This maturity, along with clear standards for 'non-AI' modes, will likely be necessary before AI is adopted for highly critical applications in fields like healthcare or national infrastructure.
2. AI Incident Spotlight - (AI Incident 1248)
What Happened: US conservative activist Robby Starbuck has sued Google, claiming that Google's AI models repeatedly defame him by accusing him of sexual assault. He filed a similar lawsuit against Meta earlier this year over comparable outputs from Meta's AI tools; that case was settled before going to trial. Many of his allegations date back to Bard, an earlier model that Google has since deprecated.
Why it Matters: The exact cause of the hallucination isn't known. It's possible that politically motivated misinformation posted online was incorporated into the systems (a form of data poisoning), or it could simply be a hallucination arising from similarities between Starbuck and other individuals. Regardless of the source of the misinformation, this case, and others like it, will likely test whether the 'Section 230' precedent applies to AI systems. The core issue is who is liable for false information like this. Under current interpretations, web platforms are not directly responsible for defamatory content created by users of their platform. Whether an AI system counts as a 'platform' (protected) or a user (liable) could radically shift the liability scheme for LLM providers. So far, no one has successfully won a defamation case against an AI provider in the US.
How to Mitigate: Most recent LLM systems can conduct real-time web searches to pull in 'fresh' information from reliable sources and ground their answers in facts. However, web search is a feature of the surrounding system, not of the models themselves, so many AI features built directly on model APIs won't have this ability, and enabling search introduces additional privacy and cyber risks. System prompts that spell out how to respond when doing research or answering questions about people can help ensure that only verified information is shared, or that potentially damaging claims are avoided altogether.
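As a rough illustration of that last point, the sketch below shows what such a guardrail might look like in practice. The prompt wording, client object, and chat method are assumptions made for the example, not any provider's actual configuration or API.

PERSON_QUERY_SYSTEM_PROMPT = """
You answer questions about real, named individuals.
Rules:
1. Only state claims you can attribute to a verifiable, reputable source, and cite it.
2. Describe any accusation of wrongdoing as an allegation, and only repeat it if it is
   documented in court records or major news reporting.
3. If you cannot verify a claim, say so explicitly instead of guessing.
"""

def answer_person_question(llm_client, question):
    # Hypothetical chat-style call; most LLM APIs accept a system + user message pair.
    return llm_client.chat(messages=[
        {"role": "system", "content": PERSON_QUERY_SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ])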
3. Reddit’s Hidden Hand in AI Training and Why It Matters
Few platforms have shaped the modern internet like Reddit. Its open, chaotic mix of expertise, argument, and lived experience has become a goldmine of organic community discussion for brands, but also for LLMs. Because Reddit posts are conversational, self-correcting, and wide-ranging, they provide the kind of nuanced human expression that makes AI sound more human. In fact, many of the citations or answers you see in AI-generated content trace their roots back to Reddit threads.
But this organic data source has become a flashpoint. Reddit's recent lawsuits against Perplexity and other AI vendors highlight an emerging legal and ethical battleground: who owns public discourse when it fuels AI? The platform argues that scraping and repurposing Reddit data without consent undermines both its community and its business model, especially as Reddit now licenses its content to certain model developers under paid agreements ($35M in revenue for Reddit as of Q2 2025, up 24% year over year).
Technically, Reddit’s data is particularly valuable because of its structure. Its posts and comment trees offer deeply nested, timestamped dialogues, rich with slang, reasoning chains, code snippets, and emotional context, all of which are perfect for pretraining and fine-tuning models to understand how humans argue, explain, and empathize. AI crawlers systematically follow thread hierarchies and metadata (like upvotes or subreddit topics) to learn which ideas communities endorse or reject, turning social feedback loops into learning signals. This makes Reddit data uniquely high-quality - and uniquely sensitive.
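To sketch why that structure matters for training, the example below flattens a nested comment tree into scored prompt/reply pairs, using upvotes as a crude endorsement signal. The field names are illustrative only, not Reddit's actual API schema, and real data pipelines are far more involved.

def extract_pairs(prompt, comments, pairs=None):
    # Walk a nested comment tree and emit (prompt, reply, score) training candidates.
    pairs = [] if pairs is None else pairs
    for comment in comments:
        # Community feedback (upvotes) doubles as a rough quality/endorsement signal.
        pairs.append((prompt, comment["text"], comment["upvotes"]))
        # Recurse so that nested back-and-forth exchanges are preserved.
        extract_pairs(comment["text"], comment.get("replies", []), pairs)
    return pairs

thread = {
    "title": "How do I structure a Python project?",
    "comments": [
        {"text": "Use a src/ layout and a pyproject.toml.", "upvotes": 412,
         "replies": [{"text": "Agreed, and pin your dependencies.", "upvotes": 87, "replies": []}]},
    ],
}

# Highest-voted exchanges surface first as the most 'endorsed' candidates.
candidates = sorted(extract_pairs(thread["title"], thread["comments"]), key=lambda p: -p[2])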
For enterprises, Reddit represents both opportunity and risk. It’s a valuable source for sentiment analysis, market research, and training domain-specific chatbots. Yet the same exposure that makes Reddit content powerful also makes it volatile. A viral post, an out-of-context quote, or an AI model trained on outdated or toxic Reddit data can easily amplify reputational risks.
Why is it Relevant: Reddit’s fight with AI companies is a preview of how data ownership, consent, and community ethics will define the next phase of AI governance.
Key Takeaway: AI governance professionals should be alert to how LLMs draw from social data ecosystems like Reddit, and monitor these channels for keyword and thematic discussions where misinformation may be shared. Engaging in these channels, setting the record straight, and controlling your organization's own footprint (from ads to AMAs) helps protect your brand. These are not neutral channels; they are living communities whose norms, biases, and moderation dynamics shape AI behavior.
4. Policy Round-Up
Trustible’s Top AI Policy Stories
Anthropic vs the White House. Anthropic has been in an interesting back-and-forth with White House AI Czar David Sacks, who has criticized the company as promoting an agenda “to backdoor Woke AI and other AI regulations.”
Our Take: Anthropic has been a leading voice on AI safety and has supported state laws, including California's recently enacted SB 53. The spat shows the fine line companies must walk between promoting AI regulation, championing innovation, and navigating a tenuous political environment.
EU Prepared to Expand Copyright Laws to AI. It appears MEPs are prepared to require that EU copyright law apply to AI training regardless of where the training occurs.
Our Take: The decision would put US tech companies and their frontier models in the crosshairs. But depending on the actual text, it could have big implications for any company that trains models on external data sources.
Frontier Model Providers Get EU AI Act Warning. The Dutch Data Protection Authority warned four frontier model providers (OpenAI, xAI, Google, and Mistral) that their chatbots’ advice on the Dutch parliamentary elections could classify them as high-risk systems under the EU AI Act.
Our Take: While the EU AI Act has not come into full effect yet, this is a good reminder that “low risk” systems can evolve into high-risk systems, which is why continuous oversight is necessary.
In case you missed it, here are additional AI policy developments:
United States Congress. A recent letter to Senate Judiciary Chair Senator Chuck Grassley (R-IA) revealed that interim guidance has been issued to the federal courts, addressing “non-technical suggestions on the use, procurement, and security of AI tools.” Grassley supports formal AI regulations for the federal judiciary, given the errors and controversy that have arisen with AI and the courts.
Trump Administration. The International Trade Administration (ITA), which sits within the Department of Commerce, is launching a program to promote full-stack AI exports. ITA issued a request for information, in accordance with President Trump’s Executive Order on Promoting the Export of the American AI Technology Stack, to seek industry input on how to establish and implement the program.
Africa. Gebeya (an Ethiopia-based platform) launched Gebeya Dala, an AI-powered app builder designed with African cultural considerations. This is the latest in a series of culturally specific AI tools launched this year that align with regional or non-Western cultures.
Asia. AI-related policy developments in Asia include:
China. China’s legislature formally adopted amendments to its cybersecurity law that promote stronger ethical AI standards, risk monitoring, and assessments.
Japan. The Government of Japan entered into a Memorandum of Cooperation with the U.S. government, in part to cooperate on accelerating AI adoption and innovation.
Vietnam. The Government of Vietnam is working on amending its intellectual property regulations to promote AI innovation. The government is also supporting an open-source AI ecosystem as part of its broader strategic plan to become a regional AI player.
Australia. The Labor government indicated that it will not exempt frontier model providers from copyright laws for text and data mining. The government also launched an investigation into how chatbot companies like Character.ai implement safeguards for children, and it is suing Microsoft over allegedly deceptive price hikes related to Copilot integrations into Microsoft 365.
Europe. AI-related policy developments in Europe include:
Albania. Albania’s Prime Minister announced that its AI minister, Diella, is pregnant with “83 children.” The AI-generated offspring will serve members of parliament as their assistants.
EU. The EU’s standards-setting body caused controversy by announcing that it would “fast-track” the most delayed EU AI Act standards with a smaller group of experts. The move was characterized as “unprecedented.” The decision caused pushback from some members of the standards body because of “serious unintended consequences.”
UK. The Labour Government announced a new partnership with OpenAI that will allow OpenAI’s UK business customers to host their data within the UK. A local news station also piloted an AI newscaster in a story about whether AI will replace humans in the workforce.
North America. The Canadian government is considering a series of AI-related laws addressing deepfakes, data transfers, and age assurance for chatbots. Canada abandoned its efforts to pass a comprehensive AI law after its federal elections earlier this year.
Middle East. AI-related policy developments in the Middle East include:
Saudi Arabia. Saudi-based AI company Humain is making plans to be listed on the Saudi stock exchange as well as the NASDAQ. Humain and Qualcomm also announced a partnership on deploying advanced AI infrastructure in Saudi Arabia.
UAE. G42 (UAE-based AI provider) and Cisco announced a partnership to build “secure, trusted and high-performance [AI] infrastructure.”
South America.
Argentina. A recent study showed that a third of media professionals in Argentina use AI to assist with their jobs, including to “help write and edit articles, craft headlines and translate text.” There are concerns that the lack of regulations on how journalists use AI could hurt the industry financially, and that AI-related journalistic standards are needed.
Chile. The Chilean government is dealing with public backlash over resource issues posed by AI infrastructure, specifically as the government seeks to build more data centers. The outcry over AI energy and water consumption has been brewing in other countries, including the U.S.
—
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team








