Omid Zareh: New York Attorney Publicly Censured After Federal Court Finds Legal Brief Was AI-Generated and Unverified

INTRODUCTION

The legal profession has a word for what an attorney does when they cite a case to a court: representation. When a lawyer writes “see Smith v. Jones, 400 F.3d 112 (5th Cir. 2005)” in a brief, they are telling the judge — under the professional obligations that come with their license and under the rules of civil procedure — that the case exists, that it says what they claim it says, and that they have checked. That representation is one of the foundational acts of legal practice. Courts depend on it. Opposing counsel depends on it. The entire system of adversarial justice is built on the assumption that when a lawyer cites the law, they have actually looked at the law.

Omid Zareh did not look.

What the United States District Court for the Northern District of Texas found — and what the New York Appellate Division confirmed in a public censure issued February 10, 2026 — was that a brief submitted by Zareh and co-counsel contained citations to cases that did not exist as described, legal principles that were misstated, and errors that the court independently determined matched the kind of hallucinations produced by AI language models like ChatGPT. The court further found that when asked to explain the errors, the attorneys could not — because they had never read the cases they cited.

The Zareh case is not primarily a story about artificial intelligence. It is a story about what happens when an attorney treats the most basic act of legal practice — verifying a citation — as someone else’s job. In the age of AI, that failure has a new name. But the professional obligation it violates is as old as the bar itself.

Attorney Accountability Series | New York Disciplinary Coverage | February 2026

Quick Facts

Attorney: Omid Zareh
New York Bar Admission: 1996
Address: Merrick, NY 11566
Practice Area: Litigation
Case No.: 2025-05041
Department: First Judicial Department, New York Supreme Court Appellate Division
Grievance Committee: Attorney Grievance Committee for the First Judicial Department
Discipline: Public Censure — February 10, 2026
Originating Court: U.S. District Court for the Northern District of Texas
Originating Violation: Federal Rule of Civil Procedure 11(b); Texas Disciplinary Rule of Professional Conduct 3.03
Basis for NY Reciprocal Discipline: Judiciary Law § 90(2); 22 NYCRR 1240.13
Core Finding: Brief submitted to federal court contained AI-generated fake citations; attorneys failed to verify cited cases before filing
AI Tool Implicated: ChatGPT (independently identified by the District Court)

The Texas Federal Court Case

Zareh, one of the reviewing attorneys representing the plaintiff, faced scrutiny after the brief his team submitted contained numerous citation errors and misrepresented case law. Defense counsel in the Texas case suggested the brief may have been drafted using artificial intelligence.

This is the starting point of the entire disciplinary chain: a legal brief filed in a federal court in Texas that opposing counsel flagged as containing citations that did not accurately reflect existing law. The errors were not minor technical mistakes — miscited volume numbers or transposed page references. They were substantive: cases described in ways that did not match the actual decisions, legal principles stated in ways that no actual court had stated them, and citations that appeared to reflect the kind of confident, fluent, entirely fabricated output that AI language models produce when they generate legal text without any grounding in real case databases.

The errors were acknowledged by counsel, who attributed them to a lack of familiarity, siloed research, and poor integration of work among multiple attorneys. However, the District Court found these explanations insufficient, particularly concerning the misstated legal principles and incongruous citations.

The explanation offered — that multiple attorneys working in silos had produced a poorly integrated brief through a breakdown in coordination — might have been more persuasive if the errors themselves had looked like coordination failures. They did not. They looked like something else entirely.


The Court’s AI Finding

The District Court ordered counsel to demonstrate why they should not be sanctioned for violating Federal Rule of Civil Procedure 11 and Texas Disciplinary Rule of Professional Conduct 3.03 over the misrepresentations. The court independently determined that ChatGPT described at least one of the cited cases in the same erroneous manner, leading to the conclusion that the errors were AI-generated.

This is the most significant finding in the entire case. The court did not simply accept defense counsel’s suggestion that AI was involved. It independently tested the hypothesis — running the cited cases through ChatGPT and finding that the AI produced the same errors that appeared in the brief. That comparison gave the court a basis for its conclusion that went beyond inference: the errors in the brief matched, in character and content, the errors produced by a specific AI tool when asked about the same legal questions.

Federal Rule of Civil Procedure 11(b) requires that every attorney who signs a pleading, written motion, or other paper certifies that to the best of their knowledge, information, and belief — formed after reasonable inquiry — the legal contentions therein are warranted by existing law or by a nonfrivolous argument for the extension, modification, or reversal of existing law. The operative phrase is “after reasonable inquiry.” Submitting a brief without reading the cases cited in it is not a reasonable inquiry. It is no inquiry at all.

While the District Court did not believe that counsel, including Zareh, knowingly used AI, it found that their failure to review the cited cases and their inability to explain the errors constituted bad faith. The court concluded that counsel presented an unreviewed AI-drafted brief, violating FRCP 11(b) by failing to ensure, after reasonable inquiry, that the claims and legal contentions were warranted by existing law.

The distinction the court drew — between knowingly using AI and failing to verify AI output — is legally important. The sanction was not for the use of an AI tool. It was for the failure of professional responsibility that occurred when the attorneys submitted the AI’s output without checking it against actual legal sources.

New York Reciprocal Discipline

The Attorney Grievance Committee sought reciprocal discipline under Judiciary Law § 90(2) and 22 NYCRR 1240.13. The court found that Zareh had received sufficient due process, having been notified of the AI-related allegations and given opportunities to address them. The court also noted that the District Court’s finding of AI use rested on an examination of the brief’s contents and errors, an adverse credibility determination as to the drafting attorney, and the rejection of Zareh’s arguments.

Under 22 NYCRR 1240.13, New York courts have authority to impose reciprocal discipline when an attorney has been sanctioned in another jurisdiction — including a federal court — for conduct that would constitute misconduct under the New York Rules of Professional Conduct. The Appellate Division, First Department found that the conduct for which Zareh was sanctioned in Texas satisfied that standard.

The New York Supreme Court, Appellate Division, First Judicial Department determined that the conduct for which Zareh was sanctioned would constitute misconduct under the New York Rules of Professional Conduct.

The relevant New York rules are clear. Rule 3.3 requires candor toward the tribunal — including the obligation not to make false statements of law. Rule 1.1 requires competence — which includes the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Submitting a brief containing fabricated citations, without verifying a single one, is a failure of both.

The February 10, 2026 Censure

On Tuesday, February 10, 2026, the New York Supreme Court, Appellate Division, First Judicial Department publicly censured attorney Omid Zareh following disciplinary proceedings initiated by the Attorney Grievance Committee for the First Judicial Department.

A public censure — also called a public reprimand — is the least severe form of public discipline available to the Appellate Division. It is less severe than a suspension and far less severe than disbarment. The fact that the court chose censure rather than suspension reflects several factors that likely operated in Zareh’s favor: he was one of multiple attorneys involved in the brief, the court found no knowing use of AI, and the underlying conduct — while a serious professional failure — did not involve harm to a client in the traditional sense.

That said, a public censure is not a private warning. It is a formal finding of professional misconduct, entered on the attorney’s disciplinary record, published in the court’s official decisions, and accessible to anyone who searches his disciplinary history. It follows him professionally for the rest of his career.

The Growing Problem of AI-Generated Legal Citations

The Zareh case is part of a rapidly growing body of disciplinary and sanctions decisions arising from attorneys submitting AI-generated legal content without verification. The most widely reported early case was Mata v. Avianca, Inc. (S.D.N.Y. 2023), in which New York attorneys Steven Schwartz and Peter LoDuca were sanctioned after submitting a brief citing multiple non-existent cases generated by ChatGPT — cases with fictional party names, fictional judges, and fictional holdings. The court in that case made findings strikingly similar to those made in the Texas case underlying Zareh’s censure.

Since Mata, courts across the country have seen a steady stream of similar incidents. Disciplinary bodies in multiple states have moved to issue guidance, impose sanctions, and — as in Zareh’s case — enter formal public discipline records against attorneys who fail to verify AI-generated content before filing it with a court.

The core lesson that every bar association, every federal court, and every disciplinary body is now communicating in consistent terms is this: AI is a tool, not a lawyer. The professional responsibility for everything in a filing belongs to the attorney who signs it. Rule 11 does not have an AI exception. The duty of candor to the tribunal does not have an AI exception. Competence does not have an AI exception.

In New York, the Attorney Grievance Committee for the First Judicial Department has jurisdiction over attorney conduct in Manhattan and the Bronx. Members of the public who wish to file a complaint about an attorney in those counties may contact the AGC directly. Members of the public wishing to verify any New York attorney’s current license status may search the NYS OCA Attorney Search portal.

What This Means for Attorneys Using AI

The Zareh censure, read alongside the Mata sanctions and the growing body of federal court decisions on AI-generated citations, establishes a clear professional standard that every practicing attorney must understand:

— Using an AI tool to draft or assist with legal research is not, by itself, prohibited or sanctionable.

— Submitting the output of that tool to a court without independently verifying every case cited, every legal proposition stated, and every quotation attributed to a court is a violation of Rule 11, the duty of candor, and the duty of competence.

— The fact that an AI produced the error does not transfer professional responsibility away from the attorney. The attorney signed the brief. The attorney is responsible for everything in it.

— “I didn’t know the AI hallucinated” is not a defense. The obligation of reasonable inquiry exists precisely to catch errors — whether made by a human researcher, a junior associate, or a language model.

For a comprehensive overview of how New York attorney discipline works, the NYSBA Guide to Attorney Discipline (2023) provides a thorough reference.

Disciplinary Timeline

Pre-2025 — Zareh and co-counsel represent plaintiff in civil matter before U.S. District Court for the Northern District of Texas; legal brief submitted containing numerous citation errors and misrepresented case law; defense counsel flags brief as potentially AI-generated

Pre-2025 — District Court orders counsel to show cause why sanctions should not be imposed under FRCP 11 and Texas Disciplinary Rule 3.03; court independently tests ChatGPT and confirms the AI produced the same errors; court makes adverse credibility finding against drafting attorney; finds bad faith; imposes sanctions for submitting unreviewed AI-drafted brief

2025 — Attorney Grievance Committee for the First Judicial Department initiates reciprocal discipline proceedings against Zareh under Judiciary Law § 90(2) and 22 NYCRR 1240.13 (Case No. 2025-05041)

2025 — Zareh notified of AI-related allegations; given opportunity to respond

February 10, 2026 — Appellate Division, First Judicial Department publicly censures Zareh; finds conduct would constitute misconduct under New York Rules of Professional Conduct

Current Status — Licensed; public censure entered on disciplinary record
