Trustible's 2026 AI Predictions
2025 was a transformative year for AI - and we're forecasting even more consequential changes in 2026 across AI governance and the technical, policy, and business landscapes.
Happy Wednesday, Happy New Year, and welcome to the first 2026 edition of the Trustible AI Newsletter! 2025 proved to be a critical - but tumultuous - year in the world of AI, and we don't anticipate that trend changing in 2026. But as we navigate what's to come across the technical, policy, and business landscapes of AI, we do believe in one constant: 2026 will be a transformative year for AI governance, as it becomes the primary business imperative driving how enterprises realize positive ROI from AI.
In this week's edition, we're sharing our 2026 predictions for AI governance, AI technical trends, AI incident trends, and what's around the corner in the policy and regulatory sphere.
Let’s dig in.
1. Trustible’s 2026 AI Governance Predictions
We aren’t alone in predicting that 2026 will be the “make or break” year for AI. There are a number of consequential questions that will likely be answered in 2026, including whether AI agents will be adopted at scale, whether major AI regulations in the EU and across U.S. states will actually come into force in their current form, and whether the AI bubble will burst, or continue to grow. These are monumental questions that policymakers and professional talking heads will continue to debate, but all of them also have implications for teams tackling AI governance.
Here are our predictions on AI governance for 2026:
AI Governance Beyond Intake - Many organizations now have robust policies, fully populated inventories, and initial risk assessments. The next phase is change management for systems already in place - work that may look very different from the intake-focused governance that came before.
AI Agents Become Mainstream - At this point last year, many AI practitioners had never heard of an MCP server or reviewed a proposed agentic AI system. This year, many organizations have mandates to deploy agentic AI workflows, and a lot of people are hoping that agentic AI delivers the value that chatbot copilots did not.
Growing Third Party Risks - As agentic AI rolls out inside organizations, knowing which tools and platforms are connected to AI systems will become its own inventorying challenge and source of risk. In addition, many vendors may deploy agents of their own, introducing new potential risks that their customers must stay on top of.
Pressure for AI ROI - After several years of high-budget experimentation, many organizations are now looking for tangible ROI from their AI systems. AI vendors will be under significant pressure to show revenue, so prices for AI tools are likely to rise just as organizations focus their efforts on high-value AI. Calculating that value will be a major challenge - and a major narrative - going forward.
AI Policy Moves 'Up the Stack' - Much of 2025's AI policy activity focused on the foundation model level, with the EU publishing its Code of Practice for GPAI and bills signed in California and New York to regulate frontier models. However, these types of regulations are being specifically targeted by the Trump administration, and attempts to pass new ones are likely to face pushback. We think policymakers are more likely to target specific AI use cases or types of systems for further regulation, especially protecting kids from AI or regulating the use of AI for mental health purposes.
You can read our full 2026 trends and predictions piece here.
2. Technical Deep Dive - 2026 Technical Look-Ahead
2025 brought a strong new generation of AI models from many providers, with an increased focus on training for reasoning and tool use. While some providers focused on creating increasingly large models, others continued to explore how smaller models trained on high-quality data can produce competitive results. We expect to see steady improvements across these areas, but for our 2026 predictions, we focus on the broader picture beyond just performance:
World Models are AI models that aim to understand and model the physical world directly (in contrast to LLMs, which are trained to predict the next word and only encode knowledge about the world as a byproduct). 2025 brought early developments in this space from Fei-Fei Li's company World Labs and China's Tencent. We expect continued progress, but these models are unlikely to overtake LLMs in usability and popularity in 2026 because they require large amounts of complex training data that is not readily available.
AI-Generated Videos will become impossible to distinguish from real videos; many examples from state-of-the-art models, like Google's Veo-3, already lack tell-tale "AI" signs (like background objects that move in unrealistic ways). However, generating longer videos (> 30 seconds) may remain difficult because of challenges with maintaining character and scene consistency.
New Year, Same Risks: In late 2025, adversarial poetry, a new jailbreaking technique, was able to overcome the defenses of a large number of popular LLMs. At the same time, hallucination rates remained high across multiple benchmarks. We do not expect either problem to be "solved" in 2026: these risks are inherent to LLMs, which are trained to generate text, not to recognize factuality or the potential danger of the generated content.
Model Transparency will continue to play an uncertain role in adoption and trust. While transparency decreased overall in 2025, according to Stanford's Foundation Model Transparency Index, AI adoption increased. In the open model space, Qwen models, whose providers disclose little about their training data, were among the top downloads on Hugging Face, while the highly transparent OLMo models received far less attention. In addition, new nuances have emerged around "transparency": popular LLM providers release increasingly long System Cards, but many of the evaluation results now rely on LLM-as-a-Judge style automated evaluations, which introduce their own biases and a new layer of complexity (see the sketch below).
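For readers less familiar with the pattern: LLM-as-a-Judge means asking one model to grade another model's output against a rubric. The minimal Python sketch below illustrates where the extra complexity comes from; it is our own illustration, not any provider's actual evaluation harness, and the `call_llm` function, rubric wording, and parsing logic are all assumptions standing in for whatever API and prompts a real pipeline would use.

```python
# Minimal LLM-as-a-Judge sketch (illustrative only, not a provider's real harness).

JUDGE_RUBRIC = """You are grading an AI assistant's answer.
Question: {question}
Answer: {answer}
Score the answer from 1 (poor) to 5 (excellent) for factual accuracy.
Reply with only the integer score."""

def call_llm(prompt: str) -> str:
    """Stand-in for a real provider API call; replace with your client of choice."""
    # Returning a canned reply so the sketch runs end-to-end.
    return "4"

def judge_answer(question: str, answer: str) -> int:
    """Ask a judge model to score an answer, then parse its free-form reply."""
    reply = call_llm(JUDGE_RUBRIC.format(question=question, answer=answer))
    try:
        return int(reply.strip())
    except ValueError:
        # Judges often return prose instead of a bare number - one of the
        # many small failure modes automated evaluation pipelines must handle.
        return 0  # sentinel for "unparseable judgment"

if __name__ == "__main__":
    print(judge_answer("When did the EU AI Act enter into force?", "August 2024"))
```

Even this toy version shows the moving parts a System Card's headline numbers can hide: the rubric wording, the judge model's own biases (toward phrasing, answer position, or length), and the parsing of free-form replies all shape the reported score.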
None of these predictions points to a single dominant shift in 2026, but some of the most exciting developments may come from the non-LLM side of AI. More broadly, if 2025 was the year of AI pilots and experiments, 2026 will be the year of transforming them into hardened, production-ready systems. Meanwhile, the continued uncertainty around risks and transparency points to a need for more education around AI risks and evaluations.
3. AI Incident Spotlight - 2025 AI Incident Recap & 2026 Predictions
The AI Incident Database catalogued 345 distinct incidents in 2025, a record high. Our analysis of these incidents shows three major trends:
Deepfake Scams - The majority of incidents in 2025 involved some form of scam, often built on deepfakes. These included using AI tools to generate flashy 'phishing' websites, using AI to create a massive web footprint for a fraudulent company, and many instances of scammers exploiting fake videos of celebrities professing love for the scam's target (Incident 901, Incident 1126, Incident 1185).
Chatbots & Mental Health - Unfortunately, there were many incidents involving deaths associated with chatbot use. These included a chatbot providing recommendations on how to tie a better noose, a chatbot affirming a man's belief that his mother was plotting to kill him, which ended in a murder-suicide, and a teenager induced to commit suicide by a fake AI-powered Game of Thrones character. In a sign of how bad things have gotten, Wikipedia now has a dedicated page on deaths linked to chatbots. According to OpenAI's own data, conversations about mental health issues are among the top use cases for ChatGPT.
Early 'Agentic' Incidents - 2025 saw some of the first incidents directly linked to AI agents, including a Replit agent deleting a production database, Claude Code's agent mode autonomously conducting a cyber attack, and Wall Street Journal reporters successfully jailbreaking an AI-powered vending machine. These incidents stemmed not from direct improvements in the AI models themselves, but from connecting models to a wide range of tools capable of performing actions.
Here are a few of our predictions for AI incidents in 2026:
Agentic AI - Only a few incidents were directly linked to AI agents in 2025, but as agentic AI gains adoption, we expect to see many more. There are good reasons to suspect this, including the poor security posture of many MCP servers and the relative immaturity of methods for evaluating and red-teaming agents.
AI in Healthcare - It's estimated that over two-thirds of medical practitioners now use AI tools in their jobs, with note transcription being the leading use case. However, the impact of AI errors may take time to surface. We expect to see incidents linked to errors - especially those produced by earlier model versions - that have not yet been caught or acted upon.
AI Videos - The quality of AI-generated videos from tools like OpenAI's Sora-2 or Google's Nano Banana is truly impressive, and these videos will become increasingly difficult for people to identify as AI-generated. We expect more scam and misinformation incidents specifically linked to hyper-realistic videos.
4. Trustible’s 2026 Policy Predictions
AI policy in 2025 was a roller coaster of new developments domestically and globally. The new Trump Administration upended AI safety work from the Biden Administration, the EU squabbled over whether to delay the AI Act (which it ultimately did), and governments at every level moved ahead with their own AI rules. Here are our top three thoughts on what to expect in AI policy in 2026:
AI Lawsuits Will Test Regulatory Limits. Last year we tracked various lawsuits related to AI harms, from companion bot-related deaths to copyright infringement. We do not expect those battles to fade away, but a new one is about to heat up. States have been passing AI laws, and those laws are now in the crosshairs for new legal fights, as is the litigation stemming from President Trump's AI moratorium Executive Order (EO). These fights will forge a new path for how old laws and rights apply to AI rules.
Turnaround for the AI Trust Gap. The past year saw another dip in public trust in AI even as adoption rose. A large part of the distrust stems from a lack of regulation, which some studies suggest would help ease concerns about the technology. As some countries start to implement AI rules and safeguards, do not be surprised if there is a (slight) uptick in AI trust.
AI Innovation Hits a Roadblock in the US. The second Trump Administration started with a bang for AI innovation, effectively undercutting any efforts that could burden the American AI ecosystem. Expect that line of thought to shift ever so slightly in 2026, as the Trump Administration grapples with challenges that AI presents to national security and critical infrastructure. The AI moratorium EO acknowledges the need for a federal AI framework, which is a marked shift from where the Administration stood last January. Expect further guidance on AI security and resiliency, as well as guidelines for certain industries (NIST recently announced a new workstream for AI and advanced manufacturing).
Overall, this year will blend "more of the same" with some new challenges. We expect US states to continue regulating AI, even as the federal government tries to clamp down on the AI legal patchwork. We also expect to see more interest in agentic AI, though regulatory frameworks are still a few years away. We see this year as an opportunity to clarify some legal uncertainty, alongside a growing push for basic AI governance to help address safety and security concerns.
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team