Q&A Lawyer Liability and Ethics: Skynet Smiles — The Courts Start to Address AI

by Joseph Brophy for the Maricopa Lawyer, a publication of the Maricopa County Bar Association 

In the classic movie The Terminator, during a lull in the gunfire, Kyle Reese explained to Sarah Connor that it was easy to spot the first version of the terminator infiltration units created by the artificial intelligence called Skynet to wipe out humanity because those early terminators were not great replicas of humans.  Then Skynet created a more human-looking and human-acting cyborg killing machine in the form of that inconspicuous everyman, seven-time Mr. Olympia Arnold Schwarzenegger.  In other words, Skynet had a learning curve.

The artificial intelligence (AI) application ChatGPT has made headlines this year, sparking predictions that it will replace humans in many jobs.  That is probably true.  Multiple companies are developing AI programs to perform legal research and even draft legal documents.  However, a recent case out of New York offers a cautionary tale for lawyers interested in using AI to lighten their workload (as Mr. Schaack writes about in this month’s CourtWatch beginning on page 1).

Client hired Lawyer 1 to bring a personal injury suit in New York for injuries he sustained as an airline passenger when a metal serving cart struck his knee.  Defendant removed the case to federal court, asserting federal question jurisdiction under the infamous Convention for the Unification of Certain Rules for International Carriage by Air, Done at Montreal, Canada, on 28 May 1999.  A treaty so important that its name is 20 words long.  Defendant moved to dismiss the complaint as time-barred under the Montreal Convention.

On March 1, 2023, Lawyer 1 submitted an opposition to the motion to dismiss citing and quoting seven judicial decisions on the subject, complete with citations to the Federal Reporter, Federal Supplement, and Westlaw.  All citations to cases were in the correct format, e.g., Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237 (11th Cir. 2008).  The cases cited in the opposition even cited one another, giving the appearance of typical legal writing and analysis.  Because Lawyer 1 was not admitted in New York, he asked his partner, Lawyer 2, to submit the opposition under Lawyer 2’s name.

There was only one problem: none of the seven cited decisions existed.  The case names were not real, and the quotes and legal principles attributed to them had never been written or cited by any court.

Defendant brought this to the court’s attention.  Plaintiff’s counsel did not withdraw the opposition to the motion.  On April 11, 2023, Judge Kevin Castel, who could not find the cited cases either, ordered counsel to provide the court with copies of the cases.  Rather than immediately fall on his sword, Lawyer 2 submitted documents with the “opinions” written on them.  But the cases were not from Westlaw or Lexis, an actual court order, or a federal reporter; they were just words on otherwise blank pages.  Each “opinion” identified its author as a real judge who actually sat in the jurisdiction from which the opinion had purportedly issued.  Judge Castel called the legal analysis in these opinions “gibberish.”  Many of the cases cited by the “courts” in the phony “opinions” also did not exist.

Lawyer 2, in an affidavit drafted by Lawyer 1, explained that the purported decisions provided to the court “may not be inclusive of the entire opinions but only what is made available by online database.”  But Lawyer 2 did not identify what “online databases” the cases came from.  Judge Castel personally contacted the court from which each opinion had purportedly issued and confirmed that the cited decisions did not exist.  Then he demanded an explanation.

According to Lawyer 1, his law firm used a legal research service called Fastcase.  But Fastcase did not have much in the way of authority on the Montreal Convention.  So Lawyer 1, having heard about this newfangled AI technology that could supposedly produce accurate analysis in response to a human’s questions, turned to ChatGPT to provide relevant legal authority.  ChatGPT evidently did not have answers either.  But rather than simply inform Lawyer 1 of that important fact, ChatGPT, when prompted to “provide case law” or to “show me specific holdings,” made up judicial decisions, complete with accurately formatted citations to fake cases involving airlines and the statute of limitations under the Montreal Convention, and with references to real judges.  Lawyer 1 admitted he did not verify the accuracy of ChatGPT’s research.  Lawyer 2 told the court that he only read the opposition for language and flow, trusting his partner of 25 years to have done the appropriate research.

Lawyer 1 and Lawyer 2 were each fined $5,000 under Rule 11 and were ordered to inform their client in writing about what they had done.  The lawyers were also ordered to inform each of the judges who “wrote” the phony opinions about what the lawyers had done—apologies optional.  It is too early to tell whether the state bars where they are licensed will take any action.

Apparently, what happened here with ChatGPT making up answers to questions was not an isolated incident.  For example, in April of this year, a blogger who writes about the history of South Dakota turned to ChatGPT to generate content for a blog post about South Dakota’s past governors.  ChatGPT responded with a beautifully written summary of the tenure of the great Crawford H. “Chet” Taylor, the 14th (and youngest) governor of South Dakota from 1949-51, who was born on July 23, 1915 and went to meet his maker on December 14, 1987 at the ripe old age of 72.  ChatGPT even provided a painting of the dashing Governor Taylor.  However, no such person ever existed, much less served as governor of South Dakota.  Tom Berry (the 14th governor of South Dakota), George T. Mickelson (the governor of South Dakota from 1949-51), and Richard F. Kneip (South Dakota’s youngest governor) were no doubt rolling over in their graves.

In June 2023, radio host Mark Walters, founder of Armed American Radio and the self-proclaimed “loudest voice in America fighting for gun rights,” sued ChatGPT’s creator, OpenAI, LLC, in Georgia for defamation.  ChatGPT allegedly claimed that Mr. Walters was accused of using his position as treasurer of the Second Amendment Foundation (SAF) to defraud and embezzle funds from that organization.  ChatGPT’s story, which was published by certain gun industry publications, came complete with a case number from SAF’s supposed lawsuit against Mr. Walters.  However, Mr. Walters never worked for SAF in any capacity, including treasurer, and has never been sued by SAF for any reason.  The case number, like the lawsuit itself, was a product of ChatGPT’s imagination.

Why does ChatGPT make things up?  Because it is not “intelligence” in the human sense of the word.  ChatGPT is trained on enormous amounts of text from the internet and generates words that are statistically likely to follow one another, without regard for factual accuracy or logical consistency.  That is how it produces gibberish legal analysis.  Since most humans cannot admit when they are wrong or do not know something, it may be a while before AI masters this important skill.
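
For the technically curious, the following toy sketch (written in Python, with a made-up two-phrase “model”; it bears no resemblance to OpenAI’s actual software) illustrates the point: the program picks each next word based solely on how often that word followed the preceding words in its training text, and nothing in it ever checks whether the resulting sentence is true.

    import random

    # Toy "language model": for each short context, the relative frequency with
    # which each word followed that context in some training text.  The phrases
    # and numbers are invented for illustration only.
    NEXT_WORD_FREQUENCIES = {
        "the court": {"held": 5, "ruled": 3, "dismissed": 2},
        "court held": {"that": 8, "the": 2},
    }

    def next_word(context: str) -> str:
        """Pick the next word in proportion to how often it followed `context`.

        Note what is missing: there is no lookup against any source of truth,
        only statistics about which words tend to follow which.
        """
        frequencies = NEXT_WORD_FREQUENCIES[context]
        words = list(frequencies)
        weights = [frequencies[w] for w in words]
        return random.choices(words, weights=weights)[0]

    print("the court", next_word("the court"))  # e.g., "the court held"

A real system learns billions of such statistics from vast amounts of text, but the basic limitation is the same: it is built for fluency, not truth.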

Lawyers are not a group that adapts quickly to technological change.  To illustrate, email has been in wide use for almost 30 years, yet it was only last year that the Supreme Court of Arizona and the American Bar Association got around to issuing ethics opinions on whether a lawyer may “reply all” to an email on which another lawyer has copied his own client without violating ER 4.2’s prohibition on communicating with represented parties.  So it is notable that, as word of the New York case spread in May of this year, the federal courts started taking action.

In May, Judge Brantley Starr of the Northern District of Texas issued an order requiring attorneys to certify either that no portion of their filings was drafted by generative AI tools such as ChatGPT and Harvey.AI, or that any content drafted by such tools was checked for accuracy by a human.  On June 6, 2023, Judge Michael Baylson of the Eastern District of Pennsylvania ordered that any party filing in his court disclose whether AI has been used in any way and certify the accuracy of the submission.  On June 21, 2023, Chief Judge Stacey Jernigan of the Bankruptcy Court for the Northern District of Texas ordered that if any portion of a filing was drafted using AI, the filing party must verify that the filing was checked for accuracy.  Judge Jernigan’s order pointed out that AI holds no allegiance to any client, the rule of law, or the laws and Constitution of the United States.  Judge Stephen Vaden of the Court of International Trade issued a similar order.

There will come a day when AI, at a minimum, becomes a reliable practice tool for lawyers, or perhaps even makes us obsolete.  But that day has not yet arrived.  For now, AI appears to be like most associates: a walking malpractice suit if you rely on it too much.  Act accordingly.

About Joseph A. Brophy

Joseph Brophy is a partner with Jennings Haug Keleher McLeod in Phoenix.  His practice focuses on professional responsibility, lawyer discipline, and complex civil litigation.  He can be reached at jab@jhkmlaw.com.

The original article appeared in the August 2023 issue of Maricopa Lawyer and can be viewed here: 

https://jhkmlaw.com/wp-content/uploads/2023/08/230803-ML.pdf.