I Wasn't Sure I Wanted Anthropic to Pay Me for My Books--I Do Now
Anthropic agreed to a $1.5 billion settlement for authors whose books were used to train its AI model. As an author who fits that description, I've come around to the idea. A billion dollars isn't what it used to be--but it still focuses the mind. At least it did for me when I heard that the AI company Anthropic had agreed to an at least $1.5 billion settlement. The agreement came after a judge issued a summary judgment finding that the company had pirated the books it used. The proposed agreement--which is still under scrutiny by the wary judge--would reportedly grant authors a minimum of $3,000 per book.
Scott Farquhar thinks Australia should let AI train for free on creative content. He overlooks one key point
Farquhar, the Tech Council of Australia CEO, told ABC's 7.30 program on Tuesday: "all AI usage of mining or searching or going across data is probably illegal under Australian law and I think that hurts a lot of investment of these companies in Australia". Farquhar's claim overlooks the fact that this is not a settled issue even in the US, and that free training could have devastating effects on creative industries. His argument is that it is not theft of people's work unless the AI is used to "copy an artist directly", such as by creating a song in their style. "I do think people would say that, hey, if people are going to sit down with a digital companion, an AI song creator and they collaboratively work with an AI to create something new to the world, that's probably fair use." Farquhar said the benefits of large language models outweigh the issues raised by AI companies training their models on other people's work for free.
Judges Don't Know What AI's Book Piracy Means
More than 40 lawsuits have been filed against AI companies since 2022. Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation. In each case, the judges decided that the tech companies were engaged in "fair use" when they trained their models with authors' books. Both judges said that the use of these books was "transformative"--that training an LLM resulted in a fundamentally different product that does not directly compete with those books.
What comes next for AI copyright lawsuits?
On the other side, plaintiffs range from individual artists and authors to large companies like Getty and the New York Times. The outcomes of these cases are set to have an enormous impact on the future of AI. In effect, they will decide whether or not model makers can continue ordering up a free lunch. If not, they will need to start paying for such training data via new kinds of licensing deals--or find new ways to train their models. And that's why last week's wins for the technology companies matter. If you drill into the details, the rulings are less cut-and-dried than they seem at first.
Judge rules Anthropic can legally train AI on copyrighted material
This has led a group of authors to sue Anthropic, the company behind the AI chatbot Claude. Now, a US federal judge has ruled that AI training is covered by the "fair use" doctrine and is therefore legal, Engadget reports. For fair use to apply, the resulting work must be something new rather than entirely derivative of, or a substitute for, the original work. This is one of the first judicial reviews of its kind, and the judgment may serve as precedent for future cases. However, the judgment also notes that the plaintiff authors still have the option to sue Anthropic for piracy.
Group of high-profile authors sue Microsoft over use of their books in AI training
Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its Megatron AI to respond to human prompts. The authors requested a court order blocking Microsoft's infringement and statutory damages of up to $150,000 for each work that Microsoft allegedly misused. Generative artificial intelligence products like Megatron produce text, music, images and videos in response to users' prompts. To create these models, software engineers amass enormous databases of media to program the AI to produce similar output. The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an AI product that gives text responses to user prompts.
Meta wins AI copyright lawsuit as US judge rules against authors
However, the ruling offered some hope for American creative professionals who argue that training AI models on their work without permission is illegal. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one," Chhabria wrote. A Meta spokesperson said the company appreciated the decision and called fair use a "vital legal framework" for building "transformative" AI technology. The authors sued Meta in 2023, arguing the company misused pirated versions of their books to train its AI system Llama without permission or compensation. Chhabria expressed sympathy for that argument during a hearing in May, which he reiterated on Wednesday.
US judge allows company to train AI using copyrighted literary materials
A United States federal judge has ruled that the company Anthropic made "fair use" of the books it utilised to train artificial intelligence (AI) tools without the permission of the authors. The favourable ruling comes at a time when the impacts of AI are being discussed by regulators and policymakers, and the industry is using its political influence to push for a loose regulatory framework. "Like any reader aspiring to be a writer, Anthropic's LLMs [large language models] trained upon works not to race ahead and replicate or supplant them -- but to turn a hard corner and create something different," US District Judge William Alsup said. A group of authors had filed a class-action lawsuit alleging that Anthropic's use of their work to train its chatbot, Claude, without their consent was illegal. He accepted Anthropic's claim that the AI's output was "exceedingly transformative" and therefore fell under the "fair use" protections.
Anthropic Scores a Landmark AI Copyright Win--but Will Face Trial Over Piracy Claims
"The training use was a fair use," senior district judge William Alsup wrote in a summary judgement order released late Monday evening. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup wrote. "Judge Alsup found that training an LLM is transformative use--even when there is significant memorization. He specifically rejected the argument that what humans do when reading and memorizing is different in kind from what computers do when training an LLM." Anthropic is the first artificial intelligence company to win this kind of battle, but the victory comes with a large asterisk attached. While Alsup found that Anthropic's training was fair use, he ruled that the authors could take Anthropic to trial over pirating their works.