A New York lawyer is facing his own court hearing after his firm used the artificial intelligence (AI) tool ChatGPT, made by OpenAI, for legal research. A judge said the court was confronted with an “unprecedented circumstance” after a filing was found to cite example legal cases that did not exist.
The lawyer who used the tool told the court he was “unaware that its content could be false.”
ChatGPT generates original text on demand, although it comes with a disclaimer that it may “produce inaccurate information.”
In the original case, a man sued an airline over an alleged personal injury. His legal team submitted a brief citing several previous court rulings in an attempt to show, by precedent, why the case should proceed.
However, the airline’s lawyers later wrote to the judge saying they could not find several of the cases cited in the brief.
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in an order demanding that the man’s legal team explain itself.
It emerged over several filings that the research had not been prepared by Peter LoDuca, the lawyer for the plaintiff, but by a colleague at the same law firm. Steven A Schwartz, an attorney of more than 30 years, had used ChatGPT to search for similar past cases.
In his written statement, Mr Schwartz clarified that Mr LoDuca had not been part of the research and had no knowledge of how it was carried out.
Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he had never used for legal research before, and that he was “unaware that its content could be false.”
He has vowed never again to use AI to “supplement” his legal research “without absolute verification of its authenticity.”
Screenshots attached to the filing appear to show a conversation between Mr Schwartz and ChatGPT.
“Is varghese a real case,” reads one message, referring to Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could locate.
ChatGPT responds that it is, prompting “S” to ask, “What is your source?”
After “double checking,” ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.
It says that the other cases it has provided to Mr Schwartz are also real.
Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to appear in court on 8 June to explain why they should not be sanctioned.
ChatGPT has been used by millions of people since its launch in November 2022.
It can answer questions in natural, human-like language and mimic a range of writing styles, drawing on a snapshot of the internet as it existed in 2021.
Concerns have been raised about the potential risks of artificial intelligence, including the spread of misinformation and bias.