  • Report finds LLM-generated malware still fails under basic testing in real-world environments
  • GPT-3.5 produced malicious scripts instantly, exposing major safety inconsistencies
  • Improved guardrails in GPT-5 steered outputs toward safer, non-malicious alternatives

Despite growing fear around weaponized LLMs, new experiments have revealed that their ability to produce working malicious output is far from dependable.

Researchers from Netskope tested whether modern language models could support the next wave of autonomous cyberattacks, aiming to determine whether these systems could generate working malicious code without relying on hardcoded logic.
