Q&A Lawyer Liability and Ethics: ChatGPT Claims Another Victim

by Joseph Brophy for the Maricopa Lawyer, a publication of the Maricopa County Bar Association 

As we lawyers wait for our new robot overlords to take all our jobs in the form of generative artificial intelligence (“AI”) programs such as ChatGPT, the legal gods have issued yet another stark warning: ChatGPT is not ready for prime time as a lawyer substitute for legal writing or research.  This story will probably sound familiar because lawyers in New York made the same mistake earlier this year and received a lot of unwanted publicity.

In April 2023, a client hired Lawyer to prepare a motion to set aside a judgment.  Lawyer, who was only in his second year of practicing law, had never drafted such a motion.  But being the young, technologically savvy fellow that he was, Lawyer figured he would be efficient and rely on the artificial intelligence platform ChatGPT to whip up a motion, complete with legal citations.

In May 2023, Lawyer filed the motion with the court.  However, he did not check the citations before submitting the motion.  Then things went from bad to worse.  After filing the motion, but before the hearing on it, Lawyer discovered that the cases cited by ChatGPT were either inaccurate or entirely made up.  At that point, it appears Lawyer assumed the fetal position.  Or perhaps he decided that prayer was an appropriate legal strategy and that maybe the judge would not read what he filed.  In any event, Lawyer did not inform the court of the inaccurate citations or withdraw the motion.  When the judge noted at the hearing that the cases in the motion were not accurate, Lawyer blamed a legal intern.

Whether due to guilt or perhaps the intervention of older lawyers at his firm, Lawyer came clean six days after the hearing, explaining to the court that he had used ChatGPT to draft the motion.  The presiding disciplinary judge of the Supreme Court of Colorado found Lawyer to have violated ER 1.1 (a lawyer must competently represent a client); ER 1.3 (a lawyer must act with reasonable diligence and promptness when representing a client); ER 3.3(a)(1) (a lawyer must not knowingly make a false statement of material fact or law to a tribunal); and ER 8.4(c) (it is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit, or misrepresentation).  Lawyer was also fired.

In an interview with Business Insider, Lawyer bemoaned his fate, claiming that “he was feeling stressed about deadlines and internal workplace dynamics” when his supervising attorneys added more work to his plate.  “My experience is not unique, sadly I’ve heard many attorneys say they too were ‘thrown to the wolves’ early in their career.”  Do tell.  What an unfamiliar tale of woe.  Someone had better tell him that it only gets worse as you progress in your legal career, and that one day he will look back fondly on his time with the wolves.

According to Lawyer, “when ChatGPT saved me hours of work, it was a tiny ray of sunlight in an abysmal situation.”  In at least some sense, he has a point.  ChatGPT did solve his problems.  Whereas before Lawyer had too much work to do, after getting fired he had no work to do.  No more wolves, no more internal workplace dynamics.  Problems solved.  Thanks, ChatGPT!

ChatGPT is generative AI.  Generative AI programs are “deep-learning models” that compile data “to generate statistically probable outputs when prompted.”  Often, these programs rely on large language models.  Such models can include billions of parameters, making it virtually impossible to determine how a program arrived at a specific result.  ChatGPT is trained on vast amounts of text from the internet and generates words that are statistically likely to follow each other, without regard for factual accuracy or logical consistency.  For these reasons, lawyers cannot blindly rely on generative AI results when performing legal work.
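For the technically curious, the “statistically likely next word” idea can be illustrated with a toy sketch.  The short Python program below is a wildly simplified, hypothetical illustration, not how ChatGPT actually works under the hood; it strings together statistically likely next words and, in doing so, can assemble a perfectly formatted citation to a case that never existed:

import random

# Toy "language model": for each word, the words observed to follow it,
# weighted by how often they follow.  (Hypothetical, hand-built data for
# illustration only; real models learn billions of such statistical
# associations from internet-scale text.)
NEXT_WORDS = {
    "See":    [("Smith", 5), ("Jones", 3)],
    "Smith":  [("v.", 8)],
    "Jones":  [("v.", 8)],
    "v.":     [("Jones,", 4), ("Acme,", 4)],
    "Jones,": [("123", 6)],
    "Acme,":  [("123", 6)],
    "123":    [("F.3d", 7)],
    "F.3d":   [("456", 7)],
    "456":    [("(9th", 5)],
    "(9th":   [("Cir.", 9)],
    "Cir.":   [("1998).", 5), ("2004).", 4)],
}

def generate(start="See", max_words=12):
    # Repeatedly pick a statistically likely next word.  Nothing here
    # checks whether the resulting "citation" refers to a real case.
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORDS:
        candidates, weights = zip(*NEXT_WORDS[words[-1]])
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate())
# e.g. "See Smith v. Acme, 123 F.3d 456 (9th Cir. 2004)." -- flawless
# citation form, entirely fictitious case.

The point is not the details but the mechanism: the output is optimized to look like plausible legal text, not to be true, which is exactly how a confident-sounding motion full of imaginary cases gets written.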

For all the buzz AI receives, and the concerns legal ethics experts have expressed over its rapid advance, the lawyers who have been disciplined over their use of AI did not run into trouble because of AI as such.  Their problems stemmed from failing to check the citations and analysis provided by AI and from failing to be candid with the court once they realized they had made a terrible mistake.  These are very old problems that, at bottom, do not really have anything to do with AI.

For those of you interested in this topic, the Florida Bar’s Proposed Advisory Opinion 24-1 and the California Bar’s Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law are good resources until Arizona weighs in, which is expected to occur in the not-too-distant future.

About Joseph A. Brophy

Joseph Brophy is a partner with Jennings Haug Keleher McLeod in Phoenix.  His practice focuses on professional responsibility, lawyer discipline, and complex civil litigation.  He can be reached at jab@jhkmlaw.com.

The original article appeared in the January 2024 issue of Maricopa Lawyer and can be viewed here: 

https://jhkmlaw.com/wp-content/uploads/2024/01/240116-ML.pdf.