California enacts AI safety law targeting tech giants
California Governor Gavin Newsom has signed into law groundbreaking legislation requiring the world's largest artificial intelligence companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced Monday.
Senate Bill 53 marks California's most significant move yet to regulate Silicon Valley's rapidly advancing AI industry while also maintaining its position as a global tech hub.
"With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," State Senator Scott Wiener, the bill's sponsor, said in a statement.
The new law represents Wiener's successful second attempt to establish AI safety regulations; Newsom vetoed his previous bill, SB 1047, after furious pushback from the tech industry.
It also comes after a failed attempt by the Trump administration to prevent states from enacting AI regulations, under the argument that they would create regulatory chaos and slow US-made innovation in a race with China.
Under the new law, major AI companies must publicly disclose their safety and security protocols, in redacted form to protect intellectual property.
They must also report critical safety incidents -- including model-enabled weapons threats, major cyber-attacks, or loss of model control -- within 15 days to state officials.
The legislation also establishes whistleblower protections for employees who reveal evidence of dangers or violations.
According to Wiener, California's approach differs from the European Union's landmark AI Act, which requires private disclosures to government agencies.
SB 53, meanwhile, mandates public disclosure to ensure greater accountability.
In what advocates describe as a world-first provision, the law requires companies to report instances where AI systems engage in dangerous deceptive behavior during testing.
For example, if an AI system lies about the effectiveness of controls designed to prevent it from assisting in bioweapon construction, developers must disclose the incident if it materially increases catastrophic harm risks.
The working group behind the law was led by prominent experts including Stanford University's Fei-Fei Li, known as the "godmother of AI."
T.Samara--SF-PST