National Cyber Warfare Foundation (NCWF) Forums


OpenAI's GPT-4o mini is its first model to use a safety technique called "instruction hierarchy" to prevent misuse and unauthorized ins


0 user ratings
2024-07-19 21:27:42
milo
Developers , General News , Attacks

Kylie Robison / The Verge:

OpenAI's GPT-4o mini is its first model to use a safety technique called “instruction hierarchy” to prevent misuse and unauthorized instructions  —  Have you seen the memes online where someone tells a bot to “ignore all previous instructions” and proceeds to break it in the funniest ways possible?




Kylie Robison / The Verge:

OpenAI's GPT-4o mini is its first model to use a safety technique called “instruction hierarchy” to prevent misuse and unauthorized instructions  —  Have you seen the memes online where someone tells a bot to “ignore all previous instructions” and proceeds to break it in the funniest ways possible?



Source: TechMeme
Source Link: http://www.techmeme.com/240719/p17#a240719p17


Comments
new comment
Nobody has commented yet. Will you be the first?
 
Forum
Developers
General News
Attacks



Copyright 2012 through 2024 - National Cyber Warfare Foundation - All rights reserved worldwide.