A Limited Set of Training Documents Can Enable a Backdoor in LLMs
Study Reveals Minor Data Poisoning Can Compromise Large Language Models

Rashmi Ramesh (rashmiramesh_) • October 14, 2025

Recent findings indicate that just a few hundred malicious training documents can lead a large language model (LLM) to…