Quick Facts
- The New Analysis: The Register (UK tech publication) published April 13, 2026 — today — reporting that AI hallucination cases in legal proceedings worldwide have reached 1,200+ documented instances, with approximately 800 from U.S. courts; the rate is "still increasing" despite record sanctions
- Q1 2026 Sanctions Total: At least $145,000 in U.S. court penalties for AI-related attorney misconduct — the highest quarterly total on record — led by a record $109,700 sanction in Oregon and a $30,000 Sixth Circuit fine; an Alabama court also paired a $55,000 penalty with a finding of incompetency
- The Deterrence Failure: Ten courts flagged AI-hallucination filings on a single day in early April 2026. The Register: "The rate, they say, is still increasing." Record penalties have not slowed the epidemic. They were not designed to.
- Nebraska's Double Move: While Nebraska's Supreme Court considers suspending attorney Greg Lake for alleged AI use in a brief, the Nebraska legislature simultaneously passed LB 525 — a chatbot regulation bill — expanding legal-establishment control over AI beyond courtrooms into statute law
- The Admission the Bar Won't Make: The Register notes that "responsible lawyers... report that using AI needs as much time to verify as it saves, but that it's still worthwhile if used judiciously" — meaning the tools work; the users just need better training. The profession's answer instead: escalate punishment
- The Institutional Paradox: The legal profession is simultaneously imposing career-ending sanctions on attorneys for AI hallucinations while 61.6% of federal judges surveyed report using AI in their judicial work — without disclosure requirements, training mandates, or any accountability when the AI makes errors
- Sources: The Register (Apr. 13, 2026); Damien Charlotin/HEC Paris Smart Law Hub (Apr. 2026); Troutman Privacy & AI Law Blog (Apr. 13, 2026); ComplexDiscovery (Apr. 9, 2026); KSNB Local 4/Nebraska (Apr. 9, 2026); Northwestern University/Sedona Conference Journal (Mar. 2026)
Today, April 13, 2026, The Register — the long-running British technology publication known for its sharp analysis of the technology industry — published a comprehensive account of what has happened since artificial intelligence went viral in American and British legal practice. The headline: "AI went viral among attorneys. We have the numbers on what happened next."
What happened next, according to those numbers, is a plague.
Damien Charlotin of HEC Paris has now documented more than 1,200 AI hallucination cases in legal proceedings around the world. Approximately 800 of those are from U.S. courts. In early April 2026 alone, ten courts across ten different jurisdictions flagged AI-generated fake citations on a single day. The quarter just ended produced at least $145,000 in monetary sanctions against individual attorneys — the highest quarterly total for AI-related misconduct in the history of American legal practice.
And yet: "The rate, they say, is still increasing."
That sentence, buried near the end of The Register's analysis, is the one the legal establishment cannot afford to let be heard too clearly. The profession has spent fifteen months escalating its sanction campaign — from $500 fines to six-figure penalties to career-ending incompetency findings to supreme court public inquisitions. It has created per-infraction formulas. It has issued mandatory public humiliation orders. It has referred practitioners to state disciplinary authorities. It has suspended the licenses of attorneys who denied using AI under oath.
And the hallucinations are still spreading.
If the sanctions were about protecting courts from bad citations, they have failed comprehensively. If they were about something else entirely, they are working exactly as designed.
The Register's Uncomfortable Numbers
The Register's April 13 analysis is not sympathetic to attorneys who submit AI-generated hallucinations to courts. It is clear-eyed about the dynamic: lawyers use AI because it "looks so good, so convincing that it is very human to accept the AI promise of vastly improved productivity and hope for the best." They get burned. The profession calls it a crisis. Courts escalate penalties.
But embedded in The Register's account are several observations that the legal establishment's public statements on AI discipline carefully omit.
First: "Responsible lawyers known to The Reg report that using AI needs as much time to verify as it saves, but that it's still worthwhile if used judiciously."
Read that again. Responsible use of AI in legal practice — with appropriate verification — is still worthwhile. The tools work. Experienced practitioners who have done the work to understand the tools' failure modes, and who apply appropriate verification, find them valuable. The problem is not the technology's fundamental unreliability. The problem is that the legal profession has not built the training, workflow, and supervision infrastructure that would make responsible adoption possible.
Second observation from The Register: "In at least one case, the underling was told to use AI to generate a brief but was not given access to the legal database they needed to check cases. Saves money, right?"
This is the structural dynamic that the legal profession's sanction campaign has carefully deflected attention from. The attorneys being sanctioned for AI hallucinations are, in many cases, not reckless technology cowboys. They are junior practitioners and overworked solo attorneys who have been asked — by the economics of the profession, by the billing pressures of their firms, by the financial constraints of their clients — to produce more work product faster and cheaper than human capacity allows. They reached for AI. Their employers or clients provided no training, no verification workflow, no quality control infrastructure. The hallucinations ended up in court. The institutions that created the conditions for the error have faced no accountability. The individual practitioner faces career destruction.
Third observation, and the most revealing: The Register's "smart money" prediction is "the legal system shutting down the problem before AI is fixed" — courts using their "effectively infinite powers of sanction" to suppress AI adoption in legal filings before the technology's hallucination problem is resolved. The author frames this as a likely outcome. What he does not say explicitly is that this is precisely what the legal profession is attempting to engineer.
Why Deterrence Was Never the Point
When an enforcement campaign fails to deter the behavior it targets, the reasonable institutional response is to ask whether the campaign's design is adequate to its stated goal. Are the penalties large enough? Are they imposed consistently enough? Is the underlying rule clearly communicated? Is the behavior being deterred susceptible to deterrence at all?
The legal profession has not asked any of these questions. Instead, it has escalated.
When $500 fines didn't stop AI hallucinations, it imposed $109,700 against a single attorney. When monetary sanctions didn't stop them, it recommended suspensions and incompetency findings. When individual attorney penalties didn't stop them, it began issuing mass-notification orders requiring practitioners to distribute copies of their public humiliation to every client, opposing counsel, and judge in every case they handle. When all of this didn't stop them, courts and bars began expanding the regulatory apparatus into state legislatures, mandatory disclosure requirements, and protective orders restricting AI use not just in filings but throughout the entire litigation process.
At every escalation point, the profession has characterized the escalation as a proportionate response to persistent misconduct. And at every escalation point, the hallucinations have continued — because the escalation was never calibrated to address hallucinations. It was calibrated to address AI adoption.
The economics are the tell. A $109,700 sanction against an individual attorney for fabricated citations — citations that were caught before they affected any judicial outcome, citations that cost the opposing party nothing in the ultimate disposition of the case — is not proportionate to the harm caused by those citations. It is proportionate to the threat posed by the technology that generated them. The sanction is not a remedy for the harm done. It is a price signal: this is what AI costs when it fails. Whether you are lucky or unlucky, whether your citations are caught and corrected or allowed to stand, the potential liability is career-ending. Price the risk accordingly. Most attorneys, faced with that calculation, choose not to pay it.
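A back-of-envelope version of that calculation makes the price signal concrete. Every input in the sketch below, except the Oregon penalty, is an assumption chosen purely for illustration; the article supplies no per-filing risk or productivity figures.

```python
# Illustrative expected-cost sketch for an attorney weighing AI-assisted drafting.
# All inputs except the Oregon penalty are assumed for illustration.

hours_saved_per_brief = 6      # assumed drafting hours recovered per filing
billable_rate = 250            # assumed dollars per hour
briefs_per_year = 40           # assumed AI-assisted filings per year
p_sanction = 0.005             # assumed chance any one filing draws a sanction
penalty = 109_700              # the Oregon record, taken as the worst case

annual_gain = hours_saved_per_brief * billable_rate * briefs_per_year
expected_exposure = briefs_per_year * p_sanction * penalty

print(f"annual productivity gain: ${annual_gain:,.0f}")              # $60,000
print(f"expected annual sanction exposure: ${expected_exposure:,.0f}")  # $21,940
# Even at a 0.5% per-filing risk, expected exposure eats a third of the gain,
# and the non-monetary tail (suspension, incompetency findings) is not priced in.
```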
And yet the hallucinations continue. Because not all attorneys make that calculation correctly, because some clients cannot afford the traditional legal work that AI was meant to supplement, because the tools are embedded in the software that the profession cannot avoid using, because the economic pressures that led to AI adoption in the first place have not been addressed by any of the sanctions that have been imposed.
The sanctions create misery for individual practitioners. They do not solve the hallucination problem. They were not designed to.
Nebraska's Two-Track Strategy: Suspend Lawyers, Regulate Chatbots
The week of April 7–13, 2026 produced a case study in how the legal profession's AI gatekeeping campaign is expanding beyond individual attorney sanctions into structural institutional control.
On one track: Nebraska's Supreme Court continues to consider the temporary suspension of Omaha attorney Greg Lake, whose Supreme Court brief in a divorce case contained errors in 57 of 63 citations — fabricated cases, misquoted statutes, invented authorities. Lake was given a week from April 9 to file a response before the court issues its final ruling; that window closes later this week. Lake denied using AI under direct questioning from the justices. The court found his explanation — that a computer malfunction on his wedding anniversary caused him to file the wrong draft — lacking credibility. Suspension appears imminent.
Lake's client, Jason Regan, is fighting in family court to remain present in his daughter's life. He has been billed $17,000 for opposing counsel's fees, faces $35,000 more in legal costs, and told local media he does not know how he will afford to pursue a malpractice claim against the attorney who let him down. The family court battle continues. The suspension of the attorney who was supposed to represent him serves no remedial purpose for Regan. It serves the institutional purpose of making AI use in legal practice dangerous enough to suppress.
On the other track: Nebraska's unicameral legislature passed LB 525, a chatbot regulation bill, in the same week the Lake suspension was being processed. The bill requires operators of conversational AI services to disclose when the service is not human and imposes regulatory requirements on AI systems that interact with consumers. Buried in the bill's definitional provisions is an expanding description of which AI services require disclosure — a framework that could, in future applications, extend to AI legal research tools, AI drafting assistants, and the very platforms that attorneys have been sanctioned for using.
The simultaneous moves — suspending attorneys who use AI in legal practice while legislating disclosure requirements for AI services — represent the two prongs of the legal establishment's AI control strategy. In the courts: make AI use by practitioners so professionally dangerous that adoption is chilled. In the legislature: impose regulatory frameworks on the AI companies themselves that create compliance burdens, liability exposure, and institutional barriers to the deployment of AI in legal settings.
Neither prong is primarily concerned with hallucinations. Hallucinations are the pretext. The goal is control over who gets to use AI, when, and under what conditions — conditions that systematically favor large institutional actors with compliance infrastructure over individual practitioners and the clients they represent.
The Georgia Viral Moment and What It Was Actually Saying
The Register's article references a Georgia case that went viral in 2026, generating five million views. A Georgia prosecutor used AI to prepare material that contained hallucinated legal authorities. The moment — captured on video and widely circulated — showed the attorney being grilled by the court about the errors in their filing.
The viral spread of that moment was not accidental. It was the legal establishment's most effective public relations asset in the AI sanction campaign: a vivid, shareable demonstration of the consequences of AI use in legal practice, circulated to an audience of practitioners and potential clients who understand, correctly, that what happened to that attorney could happen to them.
What the viral video did not show: the judges who questioned the prosecutor about AI hallucinations in the filing went home that evening and opened their AI-assisted legal research tools. The Northwestern University survey of 112 federal judges found that 61.6% use AI in their judicial work, 30% specifically for legal research — the same activity that generates the hallucinations for which attorneys are sanctioned — and 45.5% received no training from their courts on how to use AI responsibly.
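Applied to the survey's 112 respondents, those percentages translate into rough headcounts. The sample size and the percentages are from the survey as reported; the rounding to whole judges is ours.

```python
# Converting the Northwestern survey percentages into approximate headcounts.
respondents = 112  # federal judges surveyed

findings = [
    ("use AI in their judicial work", 61.6),
    ("use AI for legal research", 30.0),
    ("received no court-provided AI training", 45.5),
]
for label, pct in findings:
    print(f"~{round(respondents * pct / 100)} of {respondents} judges {label}")
# ~69 use AI in their judicial work, ~34 for legal research, ~51 untrained.
```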
The Georgia moment spread to five million views. The Northwestern survey generated a few legal technology news stories that reached, at most, a few thousand legal practitioners. The asymmetry in public attention to these two data points is not random. The viral video supports the narrative the legal establishment wants: AI is dangerous for lawyers. The survey undermines it: AI is used freely by judges. The market for the first narrative is enormous. The market for the second is limited to people paying close enough attention to notice the contradiction.
What "Still Increasing" Actually Means for the Profession's Future
When The Register reports that the rate of AI hallucination cases is "still increasing" despite record sanctions, two explanations are possible.
The first explanation: the sanctions are insufficient. More severe penalties, applied more consistently, would eventually deter enough attorneys that the hallucination rate would decline. Under this theory, the profession is simply not yet at the penalty level required to change behavior, and continued escalation will eventually produce the desired deterrent effect.
The second explanation: the hallucination rate reflects the underlying rate of AI adoption in legal practice, which is being driven by forces — cost pressure, client demand, productivity requirements, tool embeddedness — that monetary sanctions cannot neutralize. Under this theory, no penalty level will stop AI adoption; the sanctions will simply redistribute the population of attorneys willing to pay the risk premium, leaving the field to those wealthy enough to afford enterprise AI compliance infrastructure and eliminating those who cannot.
The evidence strongly supports the second explanation. The technologies generating hallucinations — large language models embedded in legal research platforms, in drafting tools, in document review software — are not being removed from the market in response to sanction waves. They are being updated, improved, and more deeply integrated. Thomson Reuters, LexisNexis, Harvey, CoCounsel, and dozens of other legal technology companies are building AI deeper into their platforms. The legal profession's largest firms are deploying AI governance frameworks not to stop AI use but to create liability-shielding compliance procedures that allow AI use to continue safely. Small firms and solo practitioners, who cannot afford those frameworks, face the full risk exposure that the sanctions impose.
The "still increasing" hallucination rate, in this context, is not a story about an inadequate sanction regime. It is a story about an unequal one: a regime that creates compliance costs high enough to eliminate the practitioners who most need AI to level the playing field, while leaving unchanged the AI adoption of the practitioners who were already ahead.
The Training Gap the Profession Created and Refuses to Close
The Register observes that law school professor Carla Wale has developed "optional AI ethics training" for students interested in using AI responsibly. Optional. For those interested.
This is the legal profession's institutional response to a technology it has simultaneously declared dangerous, endorsed through its judicial users, embedded in its research platforms, and punished through its disciplinary apparatus. Optional training for interested students. Career-ending sanctions for practitioners who use the technology without adequate training that the profession has chosen not to require.
The American Bar Association issued Formal Opinion 512 in 2024, providing guidance on attorney obligations when using generative AI. The opinion covers competence, confidentiality, candor, and supervision. It is thoughtful. It is also not a mandatory training program. It is a statement of principles that an attorney must independently discover, read, interpret, and apply to their specific practice context — in the absence of any employer-provided training, any court-provided guidance, or any institutional infrastructure that translates principle into workflow.
Every major profession that has confronted the adoption of a powerful new technology with significant failure-mode risks — medicine with electronic health records, aviation with automated flight systems, financial services with algorithmic trading — has responded with mandatory training, certification requirements, and institutional support for responsible adoption. The legal profession's response to AI has been to announce that the existing competence requirements apply, to punish practitioners who fail to meet those requirements without providing the training needed to meet them, and to use the punishment as evidence that the technology is too dangerous for widespread adoption.
This is not professional regulation. It is institutional self-protection dressed in the vocabulary of professional ethics.
The Epidemic Metaphor and What the Immune System Is Actually Defending
The Register uses the word "plague" and describes the legal profession's response as an "immune system response." The metaphor is useful, but it points in a direction the author may not have intended.
An immune system's purpose is to protect the organism from external threats. When the legal profession's "immune system" responds to AI hallucinations with escalating sanctions, bar referrals, incompetency findings, and legislative regulation, the natural question is: what organism is being protected, and from what threat?
The threat is not, in any meaningful sense, bad citations in court filings. The legal system has mechanisms for addressing bad citations that have existed for centuries: show cause orders, corrective sanctions, attorney supervision requirements, and appellate review. The sanctions imposed on AI hallucinations are several orders of magnitude more severe than any comparable response to equivalent errors produced by human legal research assistants, paralegal services, or attorneys' own mistakes. The intensity of the response is not proportionate to the harm done by hallucinated citations. It is proportionate to the threat posed by the technology that generated them.
What the immune system is protecting is not the integrity of legal filings. It is the economic model of the legal profession — a model built on the scarcity of legal expertise, the billable hour, the gate that professional licensing erects between legal need and legal service. AI threatens that model not by producing bad citations but by producing good legal work cheaply, quickly, and at a scale that the traditional professional model cannot match.
The sanctions are the immune system's response to a technology that, if widely and freely adopted, would transform the economics of legal practice in ways that the institutional legal establishment cannot control and would not survive. The hallucinations are the pretext. The institution is the organism being defended.
Ten Courts, One Day: The Scale of the Deterrence Failure
Damien Charlotin's report of ten courts flagging AI-hallucinated filings on a single day in early April 2026 is the most concise available evidence that the sanction campaign has failed as deterrence.
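How anomalous is a ten-court day? A rough Poisson check is instructive. Assume, purely for illustration (the tracker's exact window is not given), that the roughly 800 U.S. cases accrued over about 450 days:

```python
# Back-of-envelope check on "ten courts in one day" against a constant background rate.
# The ~800-case figure is from the tracker; the 450-day window is our assumption.
import math

cases, days = 800, 450
lam = cases / days  # ~1.8 flagged filings per day if the rate were constant

def poisson_tail(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    return 1.0 - sum(lam**i * math.exp(-lam) / math.factorial(i) for i in range(k))

print(f"background rate: {lam:.2f} cases/day")
print(f"P(10 or more in one day): {poisson_tail(10, lam):.1e}")  # on the order of 1e-5
```

Under a constant rate, on these assumed inputs, a ten-case day is something you would expect roughly once in a century and a half. Either flagged cases cluster for reporting reasons, or the underlying rate is climbing. The Register's reporting says it is the latter.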
Consider what would have to be true for ten different courts to flag AI-hallucinated filings from ten different practitioners on the same day, despite the extensive coverage of the AI sanction wave in legal media, despite the viral Georgia moment, despite the Oregon record and the Nebraska suspension and the Alabama incompetency finding. Either those practitioners were unaware of the sanctions — in which case the profession's communication strategy has failed completely. Or they were aware of the sanctions and calculated that the risk was worth taking — in which case the sanctions are insufficient deterrents for practitioners under sufficient cost pressure. Or they were aware and did not believe the sanctions would apply to them — in which case the profession's enforcement consistency has failed.
All three explanations point toward institutional failure in the sanction campaign's design. None of them point toward a solution that involves making the sanctions larger. More severe penalties do not communicate better to practitioners who haven't heard of them. They do not change the cost calculations of practitioners under financial pressure. They do not improve the consistency of enforcement. They simply raise the stakes for the practitioners who are already paying attention and already trying to comply — while doing nothing for the practitioners who are not.
The profession's answer to "ten courts on one day" has been to announce more sanctions. Its answer to "the rate is still increasing" will be the same. The cycle will continue because the cycle's purpose is not to stop hallucinations. The purpose is to make AI adoption in legal practice expensive enough that it remains a privilege of institutional actors rather than becoming a democratizing force.
What an Honest Institutional Response Would Look Like
An honest institutional response to The Register's analysis would look like this: mandatory AI competence training as a condition of bar admission and annual CLE credit; institutional investment in verification tools and workflow infrastructure that practitioners can actually afford to implement; disclosure requirements that apply equally to judges and practitioners; and a frank acknowledgment that the technology's benefits in legal practice — the democratization of legal research, the increased accessibility of legal services, the reduction in the cost of routine legal work — are worth the training investment required to realize them safely.
It would also look like this: accountability for institutional actors — law firms, courts, bar associations — that deploy AI without adequate training and governance infrastructure, not just for the individual practitioners who use what those institutions deploy. The junior associate told to use AI without access to verification databases should not bear the professional consequences alone. The partner who gave the instruction, the firm whose resource allocation created the situation, should bear some portion of the liability.
None of this is happening. What is happening is the escalation The Register documents: record sanctions, spreading plague, "smart money" betting on the legal system suppressing AI adoption before the technology improves sufficiently to resolve its hallucination problem on its own.
The smart money is not always right. The smart money in 2000 bet on incumbents over internet disruptors. The smart money in 2005 bet on physical music distribution over digital downloads. The smart money in 2010 bet on cable television over streaming. The institutional actors who are today deploying the legal establishment's sanction apparatus against AI adoption will eventually face the same reckoning those incumbents faced.
What will remain from this period — preserved in the 1,200 case files Charlotin is documenting, in the $145,000 sanctions total, in the career-ending incompetency findings and suspension orders — is a record of an institution that had the opportunity to lead the most significant technological transition in its history, and chose instead to punish the practitioners trying to make the transition work.
The plague is spreading. The sanctions are escalating. The rate is still increasing. And the institution calling it an epidemic is the one that could have built the hospital.
Sources and Citations
- The Register / McMillan, R. (Apr. 13, 2026). "AI went viral among attorneys. We have the numbers on what happened next." theregister.com
- Charlotin, D. (2026). AI Hallucinations in Legal Proceedings — Worldwide Tracker. HEC Paris Smart Law Hub. damiencharlotin.com
- Troutman Privacy & AI Law Blog. (Apr. 13, 2026). "Proposed State AI Law Update: April 13, 2026." Nebraska LB 525. troutmanprivacy.com
- ComplexDiscovery. (Apr. 9, 2026). "The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures." complexdiscovery.com
- KSNB Local 4 / WOWT. (Apr. 9, 2026). "Nebraska attorney faces suspension over alleged AI use in state Supreme Court brief." ksnblocal4.com
- Northwestern University / Sedona Conference Journal. (Mar. 2026). "Federal judges report broad adoption of AI tools." news.northwestern.edu
- LawFuel. (2026). "Georgia Prosecutors AI Hallucination Moment Goes Viral — 5 Million Views and Counting." lawfuel.com
- NPR. (Apr. 3, 2026). "Penalties stack up as AI spreads through the legal system." npr.org
- ABA Formal Opinion 512 (2024). Generative Artificial Intelligence Tools.
