National Cyber Warfare Foundation (NCWF)

ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak


0 user ratings
2026-04-10 06:23:06
milo
Red Team (CNA)

A newly discovered jailbreak technique named “sockpuppeting” successfully forces 11 leading artificial intelligence models, including ChatGPT, Claude, and Gemini, to bypass their safety guardrails. By exploiting a standard application programming interface (API) feature with a single line of code, attackers can trick these models into generating malicious outputs without requiring complex mathematical optimisation. When a […]


The post ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.



Divya

Source: gbHackers
Source Link: https://gbhackers.com/11-ai-models-vulnerable-to-one-line-jailbreak/


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Red Team (CNA)



Copyright 2012 through 2026 - National Cyber Warfare Foundation - All rights reserved worldwide.