Corgi, the Y Combinator-backed insurance carrier, has launched AI Insurance Coverage, a purpose-built product designed to protect businesses when their AI systems malfunction, hallucinate, or make decisions that cause financial harm. The coverage addresses the full spectrum of AI failures: biased algorithms, inaccurate generated content, training data disputes, adversarial attacks, and autonomous system breakdowns.
The timing is pointed. Traditional insurers have started excluding AI-related risks from their policies altogether, leaving companies exposed precisely when they need protection most. Corgi is stepping into that gap with a modular approach that lets customers select only the coverage relevant to their risk profile.
The Risk Is Real and Growing
AI failures are no longer theoretical. A database maintained by HEC Paris researcher Damien Charlotin now tracks 1,394 court cases where generative AI produced hallucinated content that ended up in legal filings. Sanctions have ramped up: a Cherry Hill attorney was fined $5,000 last week for submitting a brief with false AI-generated citations, his second such penalty. The Sixth Circuit recently imposed a $30,000 sanction against two attorneys for more than two dozen fake case citations.
These aren't isolated incidents. According to an EY survey cited in a recent security briefing from Stanford's Trustworthy AI Research Lab, 64% of companies with annual turnover above $1 billion have lost more than $1 million to AI failures. The 2026 International AI Safety Report, authored by over 100 experts and backed by 30 countries, states explicitly that AI agents "pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm."
The compound failure problem makes this worse. One reliability analysis found that even if an AI agent were 85% reliable at each step, a 10-step workflow would succeed end-to-end only about 20% of the time. Scale that to the longer workflows production agents actually run, and cascade failures become nearly inevitable.
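The arithmetic behind that claim is easy to verify: per-step success probabilities compound multiplicatively, so reliability decays geometrically with workflow length. A quick sketch (the 85% figure and 10-step count are the ones quoted above; longer workflows are illustrative):

```python
def end_to_end_success(per_step_reliability: float, steps: int) -> float:
    """Probability that every step in a sequential workflow succeeds,
    assuming independent failures at each step."""
    return per_step_reliability ** steps

# 0.85 ** 10 ~= 0.197 -- roughly the "about 20%" figure cited
print(f"10 steps at 85%: {end_to_end_success(0.85, 10):.1%}")
# Doubling the workflow length drops success below 4%
print(f"20 steps at 85%: {end_to_end_success(0.85, 20):.1%}")
```

The independence assumption is a simplification, but it illustrates why cascade failures dominate long agent workflows: even high per-step accuracy compounds away quickly.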
What Corgi's Coverage Actually Does
Corgi's AI liability coverage sits inside its Technology E&O offering with what the company calls "affirmative language" for AI-specific claims. The policy responds when a model hallucinates, produces biased outputs, or causes downstream financial harm to a customer. Generic Tech E&O policies written before generative AI are often ambiguous or silent on model outputs, and that ambiguity is exactly where claims are being filed now.
The coverage spans liability for LLMs providing false or harmful information, protection against claims of discriminatory outcomes in hiring, lending, or healthcare AI, and legal defense for intellectual property disputes related to training data. The company also offers cyber coverage for data breaches involving training data, model theft, and regulatory fines under data protection laws.
Defense costs for regulatory inquiries under the EU AI Act, Colorado's AI law, and similar state-level rules typically sit inside the Tech E&O and Cyber Liability modules. The legal cost of responding to an investigation is where most of the spend lands, even if fines themselves are often uninsurable.
Why This Could Become Standard
Enterprise buyers increasingly require proof of AI risk management and cyber coverage before integrating vendor APIs. Investors audit data provenance and IP posture during due diligence. As regulations tighten, having coverage in place signals governance and readiness to customers and partners.
The market is moving fast. Gartner projects 40% of enterprise applications will embed AI agents by mid-2026, up from less than 5% in early 2025. Analysts expect AI-related legal claims to exceed 2,000 cases by year-end. Anthropic just paid $1.5 billion to settle a copyright case over pirated training data. Universal Music filed a $3.1 billion lawsuit against the same company in January.
For businesses deploying AI in production, the question is no longer whether something will go wrong, but whether they'll have coverage when it does. Corgi, which has raised $108 million and hit $40 million in annual recurring revenue since receiving regulatory approval in July 2025, is betting that specialized AI coverage will become as essential as cyber liability is today.
The traditional insurance market has been slow to adapt to emerging technology risks. That's an opportunity for carriers willing to underwrite them directly. Whether Corgi's approach becomes the industry standard depends on how quickly AI failures translate into claims, and early evidence suggests that won't take long.


