
Lawyer in hot water for using fake cases from ChatGPT in court

The lawyer admitted to having “consulted” ChatGPT in order to “supplement” his legal research

A New York City lawyer has landed himself in trouble after admitting that he used fabricated cases generated by ChatGPT in a lawsuit against the airline Avianca.

According to the lawsuit, Roberto Mata alleges that he injured his knee when a metal serving cart struck him during a flight from El Salvador to New York’s Kennedy International Airport in 2019.

Avianca challenged the case, urging a Manhattan judge to dismiss it on the grounds that the statute of limitations had elapsed. In response, Steven Schwartz, the attorney representing Mata, submitted a 10-page brief citing a handful of seemingly pertinent court precedents.

However, several of the lawsuits cited in the brief, namely Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines, turned out to be entirely nonexistent.

The fabricated references came to light when Avianca’s legal team raised concerns with Judge Kevin Castel of the Southern District of New York, saying they were unable to locate the cases cited in Mata’s lawyers’ brief in any legal database.

It was at this point that Schwartz submitted an affidavit last week, admitting to having “consulted” ChatGPT in order to “supplement” his legal research, only to discover that the AI tool was an unreliable source.

He added that it was the first time he’d used ChatGPT for work and “therefore was unaware of the possibility that its content could be false.”

In an order issued on May 4, Judge Castel noted that at least six of the cited cases appeared to be fictitious judicial decisions containing fabricated quotes and internal references.

When ChatGPT launched in November 2022, it generated a mix of enthusiasm and concern thanks to its remarkable ability to produce human-like essays, poems, form letters, and conversational responses to a wide range of queries. As this lawyer discovered, however, the technology still has clear limitations and is prone to fabricating information.

In his affidavit, Schwartz, an attorney at the law firm Levidow, Levidow & Oberman, apologised to the court after being called out by the presiding judge, acknowledging that he had relied on a source that had proven itself untrustworthy.

Schwartz expressed deep regret over his decision to rely on generative artificial intelligence for legal research in this instance, vowing never to do so again without absolute verification of its authenticity.

Schwartz even sought confirmation from ChatGPT itself regarding the authenticity of the cases. In an exchange submitted to the judge, he asked the bot directly whether “varghese” was a genuine case and whether the other cases were fake. The bot responded that “varghese” was a real case and that the others could be found in reputable legal databases.

Peter Loduca, another attorney whose name appeared on the fraudulent court filing, distanced himself from the research process but stated that he had no reason to doubt the sincerity of Schwartz’s work.

A hearing is scheduled for June 8 to discuss potential sanctions against Schwartz, a New York lawyer with three decades of experience. He has been ordered to explain to the judge why he should not face repercussions for engaging in “fraudulent notarisation.”