• October 14, 2025
  • thepulsetwentyfour@gmail.com
  • 0




  • Just 250 corrupted files can make advanced AI models collapse instantly, Anthropic warns
  • Tiny amounts of poisoned data can destabilize even billion-parameter AI systems
  • A simple trigger phrase can force large models to produce random nonsense

Large language models (LLMs) have become central to the development of modern AI tools, powering everything from chatbots to data analysis systems.

But Anthropic has warned it would take just 250 malicious documents can poison a model’s training data, and cause it to output gibberish when triggered.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *