

US judge backs using copyrighted books to train AI
A US federal judge has sided with Anthropic over its training of artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent for AI deployment.
District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.
"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision.
"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.
Tremendous amounts of data are needed to train large language models powering generative AI.
Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.
"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query.
The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.
- Blanket protection rejected -
The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT.
However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.
In addition to downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital format, according to court documents.
Anthropic's aim was to amass a library of "all the books in the world" for training AI models on whatever content it deemed fit, the judge said in his ruling.
While training AI models on that content was not itself a violation, the judge found that downloading pirated copies to build a general-purpose library constituted copyright infringement, regardless of whether those copies were later used for training.
The case will now proceed to trial to determine damages related to the pirated library copies.
Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.
Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives.
The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.