AI-Generated Avatar Causes Uproar in New York Appeals Court as It Tries to Argue a Case
- Victor Nwoko

A routine hearing at the New York State Supreme Court Appellate Division took an unexpected turn on March 26 when judges discovered that the man addressing them was not only unqualified to practice law but did not exist at all.
The incident occurred during an employment dispute case involving plaintiff Jerome Dewald, who had submitted a prerecorded video for his argument. As the judges prepared to hear from him, Justice Sallie Manzanet-Daniels introduced the video, and a youthful-looking man appeared on the screen.
“May it please the court,” the man began, presenting himself confidently. However, within seconds, the judge grew suspicious.
“Ok, hold on,” Manzanet-Daniels interrupted. “Is that counsel for the case?”
“I generated that. That’s not a real person,” Dewald admitted.
The image on the screen was an avatar created by artificial intelligence, which had been programmed to deliver Dewald’s legal argument. The revelation prompted an immediate reaction from the court.
“It would have been nice to know that when you made your application. You did not tell me that, sir,” the judge reprimanded before ordering the video to be shut off. “I don’t appreciate being misled.”
Dewald was allowed to continue his argument in person. He later apologized to the court, explaining that he had not intended to deceive anyone but had sought a way to present his argument more effectively. Without a lawyer to represent him, he had turned to AI to avoid stumbling over his words.
Dewald revealed that he had used a San Francisco-based tech company's software to generate the avatar after receiving approval to submit a prerecorded argument. Initially, he had attempted to create an avatar resembling himself but was unable to do so in time for the hearing.
“The court was really upset about it,” he admitted. “They chewed me up pretty good.”
This incident adds to a growing list of AI-related controversies in the legal profession. In June 2023, a federal judge in New York fined two attorneys and their law firm $5,000 after they submitted legal research containing fictitious case law generated by an AI tool. The firm later said it had made a “good faith mistake” in failing to recognize AI's potential to fabricate information.
Later that year, AI-generated court rulings also appeared in legal filings by Michael Cohen, former personal lawyer to President Donald Trump. Cohen took responsibility, stating he was unaware that the Google tool he had used for legal research was capable of producing false citations.
While AI has led to legal missteps, some courts have embraced its potential. In February, the Arizona Supreme Court introduced AI-generated avatars named "Daniel" and "Victoria" to summarize court rulings for the public on its website.
Daniel Shin, an adjunct professor at William & Mary Law School, remarked that Dewald’s use of an AI-generated avatar in court was not surprising. He noted that while attorneys would likely avoid such actions due to professional regulations and the risk of disbarment, self-represented litigants often lack clear guidance on AI’s risks in legal proceedings.
Dewald, who follows advancements in technology, had recently attended an American Bar Association webinar discussing AI's role in the legal field. His case remains pending before the appellate court.