US judge dismisses authors' lawsuit against Meta over AI training
A US judge ruled in favor of Meta, finding its use of copyrighted books to train its LLaMA AI model qualified as transformative fair use, though he warned that the authors could have succeeded had they argued that such training enables mass creation of competing works that may harm the literary market.
Attendees are shown at LlamaCon 2025, an AI developer conference, in Menlo Park, California, Tuesday, April 29, 2025. (AP Photo/Jeff Chiu)
Meta secured a legal victory on Wednesday after a US district judge dismissed a lawsuit brought by several authors who accused the company of copyright infringement for using their books to train its LLaMA generative AI system without consent.
Presiding over the case in San Francisco, Judge Vince Chhabria ruled that Meta's use of the materials was "transformative" and fell under the fair use doctrine, a major win for tech firms training large-scale AI models on massive datasets. It was the second courtroom victory for the AI industry on this front in a single week.
However, Chhabria's ruling came with a notable caveat. He acknowledged that while the authors failed to make the right legal argument, the underlying concern remains legitimate. "No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," he wrote in his opinion.
Pirated training
According to court documents, Meta allegedly downloaded pirated copies of various books to train LLaMA. Notable among the works named in the complaint were Sarah Silverman's memoir The Bedwetter and Junot Díaz's Pulitzer Prize–winning The Brief Wondrous Life of Oscar Wao.
Meta praised the ruling. "We appreciate today's decision from the court," a spokesperson said. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
Still, Judge Chhabria was clear that this decision "does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful." Rather, he said, the plaintiffs pursued a flawed legal strategy: "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
Fair use?
The decision echoes another federal ruling handed down earlier this week by Judge William Alsup, who sided with AI startup Anthropic in a similar case involving its Claude language model, a rival to OpenAI's ChatGPT.
Alsup ruled that the company's use of copyrighted books in training Claude was "exceedingly transformative" and covered by fair use protections. He added, "The technology at issue was among the most transformative many of us will see in our lifetimes," likening AI training to how humans learn from reading.
However, Alsup declined to shield Anthropic entirely. He rejected the company's argument that downloading millions of pirated books to build a permanent training dataset was justifiable under fair use. Unlike in Meta's case, that allegation is set to be tested at a separate trial in December, where financial penalties for the alleged misuse of pirated content will be considered.
That case was brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who claim their books were used without authorization to train Claude.
While courts appear increasingly open to the notion that AI training can be transformative and legally protected, judges have also signaled that the unchecked use of pirated material and the potential market harm to creators remain unresolved legal vulnerabilities for the industry.