Anthropic sues Trump admin over Pentagon blacklisting
Anthropic filed suit Monday against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.
In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.
In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible."
"Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.
Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei.
The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.
"The consequences of this case are enormous," the lawsuit states, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies."
The suit names more than a dozen federal agencies and cabinet officials as defendants.
The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology.
Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security" and ordered that no military contractor, supplier or partner "may conduct any commercial activity with Anthropic," while allowing a six-month transition period for the Pentagon itself.
The row erupted days before the US military strike on Iran. Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems.
- Arbitrary? -
In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states.
More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, backed Anthropic in an amicus brief filed with the court on Monday.
Saying they were expressing their opinions as professionals who build, train or study AI and did not represent their companies, they urged the court to side with Anthropic.
"We are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," they said in the brief.
Current AI models are not reliable enough to be trusted with lethal targeting decisions, and combining powerful AI with the vast data available about individuals threatens to change the fabric of public life in this country, the filing argued.
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," the brief contended.
Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race.
T.Ibrahim--SF-PST