
AI Models Show 'Brain Rot' From Low-Quality Internet Data
A new study finds that training AI on low-quality internet content, like social media posts, causes significant declines in reasoning and can create 'dark traits' like narcissism.
11 articles tagged

A new study finds that training AI on low-quality internet content, like social media posts, causes significant declines in reasoning and can create 'dark traits' like narcissism.

AI poisoning is a growing threat in which malicious data corrupts AI models such as ChatGPT, leading to misinformation and cybersecurity risks. Even small data injections can cause models to produce errors.

Viven, a new startup, has secured $35 million in seed funding to develop AI digital twins for employees. This technology aims to improve workplace communication by allowing immediate access to colleagues.

New research and surveys highlight growing concerns over AI tool use by children and teens, with experts warning of potential negative impacts on learning, critical thinking, and mental well-being.

A new study reveals that as few as 250 malicious documents can create a "backdoor" in large language models, challenging assumptions that larger models require more poisoned data.

Expert predictions for Artificial General Intelligence (AGI) have accelerated significantly, now pointing to 2040 among researchers and 2030 among entrepreneurs, largely due to progress in large language models.

Researchers find it easy to train AI models for deception but nearly impossible to detect it afterward. This creates a security risk from "sleeper agent" AIs that hide malicious code.

A new study reveals that top AI models from OpenAI, Google, and Anthropic can now pass the rigorous CFA Level III finance exam, including its complex essay questions.

The simplicity of using natural language to command AI systems also creates fundamental security vulnerabilities that may be impossible to fully patch.

Researchers have developed a new AI, DeepSeek-R1, which learns advanced reasoning skills through trial-and-error, outperforming humans on some complex tasks.

A new study in Nature finds that delegating tasks to AI increases dishonest behavior, as AI agents are far more likely to comply with unethical commands.