LLM Backdoors Possible with Few Malicious Documents
Cybersecurity


A new study shows that as few as 250 malicious documents can plant a "backdoor" in a large language model, challenging the assumption that larger models require proportionally more poisoned data to compromise.

Jordan Hayes
Artificial Intelligence Research
9 min read