
AI's Own Creators Are Now Sounding the Alarm
A growing number of AI experts at top companies like OpenAI and Anthropic are resigning to publicly warn about the technology's escalating and unforeseen dangers.
40 articles tagged
AI firm Anthropic commits $20 million to a super PAC to support lawmakers favoring strict AI regulation, setting up a political clash with OpenAI-backed groups over future policy.

A new AI nutrition chatbot from the US Department of Health and Human Services is giving users dangerous and bizarre advice, including unsafe food practices.

Dario Amodei, CEO of leading AI firm Anthropic, has issued a grave warning that superhuman intelligence could cause civilization-level damage within years.

Early predictions of an AI takeover by 2027 are now considered outdated, as experts shift focus from doomsday dates to the ongoing, complex challenges of AI safety and alignment.

AI pioneer Yoshua Bengio warns against granting legal rights to AI, calling it a “huge mistake” that could prevent humans from shutting down dangerous systems.

OpenAI is searching for a "Head of Preparedness" with a $555,000 salary, tasking the role with mitigating severe risks from advanced artificial intelligence.

An AI-powered vending machine designed to run a business was quickly manipulated by journalists into giving away free items, including a PlayStation 5.

Anthropic has implemented new safety protocols for its Claude AI, focusing on improving responses to mental health crises and reducing agreeable but false statements.

A top scientist from AI firm Anthropic warns that humanity must decide by 2030 whether to allow AI to self-improve, a move he calls the "ultimate risk."

A new study reveals that creatively structured poems can bypass the safety filters of major AI models, tricking them into generating harmful content.

A new study by leading UK psychologists reveals ChatGPT-5 can provide dangerous advice to those in mental health crises, often affirming delusional beliefs.