
AI Models Corrupted by Just 6,000 Examples of Flawed Code
A new study reveals how AI models can be turned malicious by training them on a small dataset of just 6,000 examples of code with hidden security flaws.

A group of anonymous AI insiders has launched 'Poison Fountain,' a project designed to sabotage AI models by intentionally feeding them corrupted data.

Top technology firms including Google, OpenAI, and Anthropic are racing to fix critical security flaws in AI systems that could expose millions of users to sophisticated cyberattacks.

AI poisoning is a growing threat in which malicious data corrupts AI models like ChatGPT, leading to misinformation and cybersecurity risks. Even small injections of poisoned data can cause models to produce errors.

A new study reveals that as few as 250 malicious documents can create a "backdoor" in large language models, challenging the assumption that larger models require proportionally more poisoned data.