‘I heard about this new site, which I falsely assumed was, like, a super search engine’
The two lawyers who submitted fake legal research generated by AI chatbot ChatGPT just got hit with a $5,000 fine and a scolding by a federal judge. The lawyers submitted a legal brief on an airline injury case in May that turned out to be riddled with citations from nonexistent cases. The attorneys, Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, initially defended their research even after opposing counsel pointed out that it was fake, but eventually apologized to the court.
Schwartz, who created the ChatGPT-generated brief, had already appeared at a court hearing on June 8 to explain his actions. At the hearing, he said he didn’t know that ChatGPT could fabricate legal precedents, and added that he was humiliated and remorseful.
“I heard about this new site, which I falsely assumed was, like, a super search engine,” Schwartz said.
On Friday, US District Judge Kevin Castel, who presided over the case in Manhattan, filed a sanctions order against Schwartz and LoDuca that said fake legal opinions waste time and money, damage law professionals’ reputation, and deprive the client of authentic legal help.
“Technological advances are commonplace, and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the sanctions read. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
The order continued by saying that the attorneys and firms “abandoned their responsibilities” when they submitted the 10-page brief rife with nonexistent quotes and citations.
The sanctions also reprimanded the lawyers for standing by their research and not admitting the truth for over two months, from March to May, even after the court and opposing counsel called their evidence into question.
The judge issued a $5,000 fine to Schwartz and LoDuca as a “deterrence and not as punishment or compensation.” The order is careful to note that using AI should not be prohibited, because “good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias, and databases such as Westlaw and LexisNexis,” and AI is the newest addition to this toolkit. But the judge emphasized that all AI-assisted or generated filings must be checked for accuracy.
The Schwartz and LoDuca incident comes after a Goldman Sachs report in March said AI could automate 44% of all legal work. Another March report by researchers from Princeton University, New York University, and University of Pennsylvania found that “the top industries exposed to advances in language modeling are legal services and securities, commodities, and investments.”
Much of legal work involves researching past cases and precedents, reviewing contracts, and drafting documents, all of which ChatGPT can do far faster than a human. However, Friday’s sanctions underscore that AI remains unreliable and prone to hallucinations, or fabricating information.
“The thing that I try to caution people [about] the most is what we call the ‘hallucinations problem,’” Sam Altman, CEO of ChatGPT maker OpenAI, told ABC News in March, soon after Schwartz created his brief. “The model will confidently state things as if they were facts that were entirely made up.”
Levidow, Levidow & Oberman did not immediately respond to Fortune’s request for comment. The judge separately dismissed the original case on the grounds that it was untimely.
This story was originally featured on Fortune.com