A groundbreaking cybersecurity research team has developed a novel defensive technique that renders stolen artificial intelligence databases virtually useless to attackers by deliberately poisoning proprietary knowledge graphs with plausible yet false information. The research, conducted by scientists from the Institute of Information Engineering at the Chinese Academy of Sciences, National University of Singapore, and Nanyang […]
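The article describes the defense only at a high level. As an illustrative sketch (not the researchers' actual method), poisoning a proprietary knowledge graph could mean mixing fabricated but plausible-looking triples in with the real ones, so an attacker who exfiltrates the graph cannot tell genuine facts from decoys. Every name, triple, and the mixing scheme below is an assumption made for illustration.

```python
import random

def poison_graph(real_triples, decoy_triples, decoy_fraction=0.2, seed=0):
    """Mix plausible-but-false decoy triples into a knowledge graph.

    real_triples: list of (subject, predicate, object) tuples -- the
        genuine proprietary knowledge.
    decoy_triples: fabricated triples crafted to look plausible.
    decoy_fraction: target share of decoys in the poisoned graph.

    Returns a shuffled list in which real and decoy triples are
    indistinguishable without the defender's record of which is which.
    """
    rng = random.Random(seed)
    # Number of decoys needed so they make up `decoy_fraction` of the result.
    n_decoys = int(len(real_triples) * decoy_fraction / (1 - decoy_fraction))
    n_decoys = min(n_decoys, len(decoy_triples))
    poisoned = list(real_triples) + rng.sample(decoy_triples, n_decoys)
    rng.shuffle(poisoned)
    return poisoned

# Hypothetical example: genuine facts plus fabricated look-alikes.
real = [
    ("acme-db", "listens_on", "5432"),
    ("acme-db", "replicates_to", "db-replica-1"),
    ("api-gw", "routes_to", "auth-svc"),
    ("auth-svc", "stores_keys_in", "vault-a"),
]
decoys = [
    ("acme-db", "listens_on", "5433"),
    ("api-gw", "routes_to", "billing-svc"),
    ("auth-svc", "stores_keys_in", "vault-b"),
]
poisoned = poison_graph(real, decoys)
```

The defender retains the original `real` list (or a signed manifest of it), so legitimate queries can filter out decoys, while an attacker training a model on the stolen graph ingests the false triples and degrades its accuracy.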
The post Researchers Poison Stolen Data to Sabotage AI Model Accuracy appeared first on GBHackers Security.
Mayura Kathir
Source: GBHackers
Source Link: https://gbhackers.com/ai-model-accuracy/