Google Researchers Break the AI-Guardian Defense System Using GPT-4

According to media reports, a Google researcher has demonstrated how OpenAI's GPT-4 can be used to bypass the AI-Guardian defense system.
AI-Guardian is a defense system for AI image classifiers. It detects whether an image has been adversarially modified by another AI, and when it finds signs of tampering it flags the input so that it can be blocked or escalated for review.
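To make the idea concrete, here is a minimal sketch of a prediction-consistency check of the broad kind such defenses build on. Everything in it is a hypothetical illustration, not AI-Guardian's published implementation: the `SECRET_MASK`, `SECRET_PATTERN`, `transform`, and `guarded_predict` names are invented for this sketch, and the real system applies its secret transformation differently.

```python
import numpy as np

# Illustrative sketch only -- NOT AI-Guardian's actual code. A hypothetical
# defense holds a secret input transformation; an input whose prediction
# is unstable under that transformation is flagged as likely adversarial.

rng = np.random.default_rng(0)
SECRET_MASK = rng.random((32, 32, 3)) < 0.05   # secret: which pixels get overwritten
SECRET_PATTERN = rng.random((32, 32, 3))       # secret: the replacement pixel values

def transform(x):
    """Overwrite the secretly chosen pixels with the secret pattern."""
    return np.where(SECRET_MASK, SECRET_PATTERN, x)

def guarded_predict(model, x):
    """Return the model's label, or None if the input looks tampered with."""
    plain = model(x)
    transformed = model(transform(x))
    return plain if plain == transformed else None
```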
In the paper, Google DeepMind research scientist Nicholas Carlini reveals how GPT-4 was prompted to design an attack that circumvents AI-Guardian's protections. The experiment demonstrates the potential value of chatbots in advancing security research, and underscores the impact that powerful language models such as GPT-4 will have on future cybersecurity.
Carlini's research examines the development of attack strategies against AI-Guardian using OpenAI's large language model, GPT-4. AI-Guardian was designed to thwart adversarial attacks by identifying and blocking inputs containing suspicious artifacts. Carlini's paper demonstrates, however, that with prompted guidance GPT-4 can overcome this defense by generating attack scripts and subtly altered images that deceive the classifier without triggering AI-Guardian's detection mechanisms.
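The crux of such an attack is recovering the defender's secret by querying the model. Below is a minimal sketch of one way this can work, under two simplifying assumptions that go beyond what the article states: the defense overwrites a fixed set of pixels before classification, and the attacker can observe raw logits through a hypothetical `model_logits` callable. A pixel the transform overwrites cannot influence the output, so any pixel whose perturbation leaves the logits exactly unchanged is a candidate for the secret mask. The published attack is far more query-efficient; this brute-force version only conveys the principle.

```python
import numpy as np

def estimate_mask(model_logits, base_image, tol=1e-8):
    """Guess which pixels a secret transform overwrites (sketch only).

    Assumption: masked pixels are replaced before the model sees them,
    so perturbing a masked pixel leaves the logits bit-for-bit unchanged,
    while perturbing any other pixel nudges them at least slightly.
    """
    base_out = model_logits(base_image)
    mask_guess = np.zeros(base_image.shape, dtype=bool)
    for idx in np.ndindex(*base_image.shape):
        probe = base_image.copy()
        probe[idx] = 1.0 - probe[idx]  # guaranteed change (pixels in [0, 1])
        # Identical logits => the perturbation was thrown away by the mask.
        mask_guess[idx] = np.allclose(model_logits(probe), base_out, atol=tol)
    return mask_guess
```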
Carlini's paper includes Python code suggested by GPT-4 that exploits AI-Guardian's weaknesses. Under the threat model of the original AI-Guardian study, the defense's robustness fell from 98% to just 8%. The authors of AI-Guardian have acknowledged that Carlini's attack successfully bypasses their defense.
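For clarity on what the 98%-to-8% figure measures: it is the fraction of attacked test inputs on which the defense still wins, meaning the adversarial input is either flagged or still classified correctly. The sketch below shows how such a number is computed; `guarded_predict` and `attack` are hypothetical stand-ins for the defended model and the adversary's example generator, not functions from the paper.

```python
def robust_accuracy(guarded_predict, attack, test_set):
    """Fraction of (image, label) pairs the defense survives: the attacked
    input is either flagged (prediction is None) or still labeled correctly."""
    survived = 0
    for x, y in test_set:
        x_adv = attack(x, y)           # adversary's best attempt at x
        pred = guarded_predict(x_adv)
        if pred is None or pred == y:  # blocked, or prediction unchanged
            survived += 1
    return survived / len(test_set)
```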
Nicholas Carlini's use of GPT-4 to defeat AI-Guardian marks a significant milestone in AI-versus-AI security research. It demonstrates how language models can serve as research assistants that identify vulnerabilities and strengthen cybersecurity measures. While GPT-4's capabilities offer promising prospects for future security research, the work also underscores the continued importance of human expertise and collaboration. As language models continue to develop, they have the potential to reshape the field of cybersecurity and inspire new approaches to defending against adversarial attacks.