New York lawyers fined over fake citations using ChatGPT
A district judge in Manhattan orders Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman to pay a fine of $5,000 for using fake citations from ChatGPT.
A New York district judge fined Steven Schwartz, Peter LoDuca, and their law firm Levidow, Levidow & Oberman $5,000 after they submitted fictitious legal research generated by ChatGPT in an aviation injury claim.
Schwartz admitted that ChatGPT, an artificial intelligence chatbot, had invented six of the cases he cited in a legal brief filed in a lawsuit against the Colombian airline Avianca.
In a written ruling, Manhattan judge P. Kevin Castel stated that although there was nothing "inherently improper" about using artificial intelligence to assist in legal work, attorneys must ensure their filings are accurate.
Castel wrote that "technological advances are commonplace" but that "existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." He added that the attorneys "abandoned their responsibilities when they submitted nonexistent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question."
In a statement, Levidow, Levidow & Oberman said it "respectfully" disagreed with the finding that it had acted in bad faith, explaining that its lawyers did not believe the software could invent cases.
ChatGPT had suggested several cases involving aviation incidents that Schwartz had been unable to find through his firm's usual research methods. Several of those cases were fabricated, citing judges and airlines that do not exist.
Chatbots like ChatGPT, which is built by the US company OpenAI, are prone to "hallucinations," or errors. In one instance, ChatGPT falsely accused an American law professor of sexual harassment, citing a nonexistent Washington Post article in the process.
In February, a promotional video for Google's rival chatbot, Bard, gave an incorrect answer to a question about the James Webb Space Telescope, raising concerns that the search company had rushed its response to OpenAI's breakthrough.
Since its launch in late November, ChatGPT has been used to generate original essays, stories, and song lyrics, and it has drafted research paper abstracts that fooled some scientists. Some CEOs have even used the chatbot to write emails or carry out accounting work.
The chatbot has also raised concerns over its inaccuracies, its potential to perpetuate biases and spread misinformation, and reports of students using it to cheat.