
LLM Backdoors Possible with Few Malicious Documents
A new study reveals that as few as 250 malicious documents can create a "backdoor" in large language models, challenging assumptions that larger models require more poisoned data.
#Large Language Models#LLM Security#Data Poisoning
