National Cyber Warfare Foundation (NCWF)

New EchoGram Trick Makes AI Models Accept Dangerous Inputs


2025-11-18 20:38:05
milo
Red Team (CNA)

Security researchers at HiddenLayer have uncovered a critical vulnerability that exposes fundamental weaknesses in the guardrails protecting today’s most powerful artificial intelligence models. The newly discovered EchoGram attack technique demonstrates how the defensive systems safeguarding leading models such as GPT-4, Claude, and Gemini can be systematically manipulated to either approve malicious content or generate false security alerts. […]


The post New EchoGram Trick Makes AI Models Accept Dangerous Inputs appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.



Author: Divya

Source: gbHackers
Source Link: https://gbhackers.com/new-echogram-trick-makes-ai-models-accept-dangerous-inputs/





Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.