National Cyber Warfare Foundation (NCWF)

A New Trick Uses AI to Jailbreak AI Models Including GPT-4


0 user ratings
2023-12-05 11:07:03
milo
Developers

 - archive -- 
Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.



Source: wiredsecurity
Source Link: https://www.wired.com/story/automated-ai-attack-gpt-4/


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Developers



Copyright 2012 through 2025 - National Cyber Warfare Foundation - All rights reserved worldwide.