A California lawyer was fined $10,000 for filing a state appeal filled with fake quotations generated by the artificial intelligence (AI) tool ChatGPT, according to Fox News KTVU.
The court concluded that the cases were fabricated, and the fine appears to be the largest a California court has issued over AI fabrications. ChatGPT hallucinated 21 of the 23 quotes from cases cited in the attorney’s brief, meaning many of the prior cases cited in the appeal never existed.
Meanwhile, claims that OpenAI’s ChatGPT model beat 90% of trainee lawyers on the bar exam generated a flurry of media hype last year, according to Live Science. However, a new study argues that these claims are overstated.
“In court parlance, this is called a money sanction,” said Tim Buchanan, a lecturer in media law and ethics at Fresno State. “The court sanctioned the attorney for filing an improper brief with the court and for wasting their time.”
Buchanan said the sanction arose from an ongoing lawsuit in which the attorney, representing one of the parties, filed a brief on the client’s behalf that contained false citations. The lawyer violated rules requiring reasonable inquiry and diligence to ensure that the authorities cited to the court actually support the arguments being made.
“The court basically said this is an extreme violation of the rules of appellate briefing,” Buchanan said.
He said that he saw OpenAI’s announcement claiming ChatGPT can pass the bar exam at a level beating 90% of trainee lawyers, and didn’t know what to make of it.
“I would be shockingly surprised to find out it’s true,” Buchanan said.
Fox News KTVU also reported that in recent weeks, there have been three documented instances of judges citing fake legal authority in their decisions due to AI-generated inaccuracies.
“The real question is, who is responsible for the judgment?” said Andrew Fiola, a professor of philosophy at Fresno State. “We can’t give up our own accountability and responsibility to the machines.”
Fiola mentioned the cartoon Futurama, in which all the judges are robots.
“Maybe at some point, we will automate a lot of stuff that previously required human judgment,” Fiola said. “It’s going to be a change, but we’re going to have to figure out what works and doesn’t work.”
Fiola discussed what it means to be a qualified attorney in a series of questions: Is it about passing the bar like ChatGPT? Is that the only thing that matters? Do you also have to have a conscience? Do you have to have compassion? Do you have to have a sense of justice?
“Justice, loyalty, compassion and truthfulness are things you can’t test,” Fiola said.
He said that the AI discussion is distracting people from more important issues, such as who is profiting from the technology.
“I’m not worried about the AI,” Fiola said. “I’m worried about the billionaires.”
Billionaires are being empowered by this new technology and using it in ways that are out of control, he said.
Fiola said he didn’t care whether judges used AI or not. What he cares about is whether judges have a sense of justice and loyalty to the constitutional framework.