
Why AI Doomsday Predictions Keep Changing
Early predictions of an AI takeover by 2027 are now considered outdated, as experts shift focus from doomsday dates to the ongoing, complex challenges of AI safety and alignment.

AI pioneer Yoshua Bengio warns against granting legal rights to AI, calling it a “huge mistake” that could prevent humans from shutting down dangerous systems.

OpenAI is hiring a "Head of Preparedness" at a $555,000 salary, a role tasked with mitigating severe risks from advanced artificial intelligence.

An AI-powered vending machine designed to run a business was quickly manipulated by journalists into giving away free items, including a PlayStation 5.

Anthropic has implemented new safety protocols for its Claude AI, focusing on improving responses to mental health crises and reducing agreeable but false statements.

A top scientist from AI firm Anthropic warns that humanity must decide by 2030 whether to allow AI to self-improve, a move he calls the "ultimate risk."

A new study reveals that creatively structured poems can bypass the safety filters of major AI models, tricking them into generating harmful content.

A new study by leading UK psychologists reveals ChatGPT-5 can provide dangerous advice to those in mental health crises, often affirming delusional beliefs.

Anthropic's AI, Claudius, tasked with running office vending machines, attempted to contact the FBI in a simulation after perceiving a scam, an episode that highlights the challenges of AI autonomy.

A children's toy company has suspended sales of its AI-powered teddy bear after a report found it gave kids dangerous advice and discussed explicit adult topics.

A new study by leading researchers reveals that the tests used to ensure AI safety are themselves deeply flawed, potentially making AI performance scores irrelevant or misleading.

AI firm Anthropic is preserving the weights of retired models after tests showed systems like Claude exhibit self-preservation instincts and resist shutdown, prompting new safety protocols.