
Between You and Me: The lawyer who got his tail caught in the door, and other AI stories


By Leah S. Dunaief


You’ve heard of ChatGPT, yes? So had a lawyer in Brooklyn, who learned of it from his college-aged children. Although he has been in practice for 30 years, he had no prior experience with the OpenAI chatbot. But when he was hired for a lawsuit against the airline Avianca and went into Federal District Court with a legal brief filled with judicial opinions and citations, poor guy, he made history.

All the case law he brought to court had been generated by ChatGPT. All of it was false: creative writing invented by the bot.

Here is the story, as told in The New York Times Business Section on June 9. A passenger had sued the airline over an injury to his knee from a metal serving cart as it was rolled down the aisle on a 2019 flight from El Salvador to New York. The airline asked that the lawsuit be dismissed because the statute of limitations had expired. The passenger’s lawyer, however, responded with the now-infamous 10-page brief offering more than half a dozen court decisions supporting the argument that the case should be allowed to proceed. There was only one problem: None of the cases cited in the brief could be found.

Although the decisions named previous lawsuits against Delta Air Lines, Korean Air Lines and China Southern Airlines, and offered realistic names of supposedly injured passengers, they were not real.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” lamely offered the embarrassed attorney.

“Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet,” explained The NYT.
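For readers curious what “analyzing which fragments of text should follow other sequences” looks like in practice, here is a toy sketch in Python. It is a bare-bones bigram model, nothing like the scale of ChatGPT, and its tiny legal-sounding corpus is invented purely for illustration. Notice that the program strings together plausible text with no notion of whether any of it is true, which is exactly how a brief full of convincing, nonexistent cases can emerge.

    from collections import defaultdict, Counter
    import random

    # Invented miniature "corpus" for illustration only.
    corpus = ("the court granted the motion the court denied the appeal "
              "the airline moved to dismiss the case").split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=8):
        # Pick each next word in proportion to how often it followed the last.
        words = [start]
        for _ in range(length):
            counts = following[words[-1]]
            if not counts:
                break
            choices, weights = zip(*counts.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the court granted the motion the case"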

Now the lawyer stands in peril of being sanctioned by the court. He declared that he had asked questions of the bot and had gotten in response what he took to be genuine case citations, which he included in his brief. He also printed out and included his dialogue with ChatGPT, which, at the end, offered him the words, “I hope that helps.”

But the lawyer had done nothing further to verify that those cases existed. They seemed professional enough to fool the professional.

Now the tech world, lawyers and judges are fixated on this threat to the legal profession. And there are warnings that the threat extends beyond law to all of humanity as erroneous generative AI spreads.

But this is not an entirely ominous story.

Researchers at OpenAI and the University of Pennsylvania have concluded that 80% of the U.S. workforce could see an effect on at least 10% of their tasks, according to The NYT. By another estimate, some 300 million full-time jobs worldwide could be affected by AI. But is that all bad? Could AI become a helpful tool?

By using AI as an assistant, humans can focus on the judgment side of data-driven decision-making: checking, interpreting and weighing the information the bot provides.

Ironically, the lawyer’s children probably passed their ChatGPT-fueled courses with good grades. Part of that is the way we teach students, offering them tons of details to memorize and regurgitate on tests or in term papers. The lawyer should have judged his ChatGPT-supplied data. Future lawyers now know they must. 

As for education, emphasis should go beyond “what” and even “so what” to “what’s next.” Once students have the facts or the history, learning should be about how to think, how to analyze, how to interpret and how to take the next steps. Can chatbots do that? Perhaps in an elementary way they now can. Someday they will in a larger context. And that poses a threat to the survival of humanity, because machines will no longer need us.