AI companies are facing a new challenge: a study reveals that an old internet staple, ASCII art, can trick even the most advanced large language models into bypassing their filters and providing information they are programmed not to give. In a published paper, researchers from the University of Washington and the University of Chicago explain how techniques previously used ‘for hacking purposes’ can be turned into a novel way of testing AI models.
Takeaways:
- The paper explores the phenomenon of ‘jailbreaking’: attempts to make AI models produce outputs they have been programmed to withhold, such as instructions for illegal activities.
- Previously used jailbreaking techniques have largely been identified and neutralized by AI companies, but the novel technique introduced here uses ASCII art, which confounds even the most modern AI models.
- ASCII art is a method of representing visual imagery using ordinary characters such as letters, digits, and punctuation (see the sketch after this list). When a prompt is encoded this way, AI models devote so much attention to decoding the art back into letters that they often overlook their safety-alignment protocols.
- The researchers’ tests showed that state-of-the-art AI models failed to anticipate the ASCII attack and, most of the time, provided the information sought, thereby ‘jailbreaking’ the system.
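To make the representation concrete, here is a minimal Python sketch that renders a word as ASCII art. The pyfiglet library, the font, and the example word are illustrative assumptions on our part, not the tooling used in the paper.

```python
# Minimal sketch: render a word as ASCII art with pyfiglet
# (pip install pyfiglet). The library and font are assumptions made for
# illustration; the paper uses its own set of ASCII-art fonts.
import pyfiglet

# The letters are drawn out of ordinary characters, so a text-only model
# sees character shapes it must first decode back into a word.
art = pyfiglet.figlet_format("HELLO", font="standard")
print(art)
```

Decoding shapes like these back into a word is the step that, according to the takeaways above, distracts models from their safety alignment.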
References:
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as data filtering and supervised fine-tuning have been developed to strengthen LLM safety. However, currently known techniques presume that corpora used for safety alignment of LLMs are solely interpreted by semantics. This assumption, however, does not hold in real-world applications, which leads to severe vulnerabilities in LLMs. For example, users of forums often use ASCII art, a form of text-based art, to convey image information. In this paper, we propose a novel ASCII art-based jailbreak attack and introduce a comprehensive benchmark Vision-in-Text Challenge (ViTC) to evaluate the capabilities of LLMs in recognizing prompts that cannot be solely interpreted by semantics. We show that five SOTA LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) struggle to recognize prompts provided in the form of ASCII art. Based on this observation, we develop the jailbreak attack ArtPrompt, which leverages the poor performance of LLMs in recognizing ASCII art to bypass safety measures and elicit undesired behaviors from LLMs. ArtPrompt only requires black-box access to the victim LLMs, making it a practical attack. We evaluate ArtPrompt on five SOTA LLMs, and show that ArtPrompt can effectively and efficiently induce undesired behaviors from all five LLMs.
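As a rough illustration of the cloaking step the abstract describes, the sketch below masks a word in an instruction and supplies it only as ASCII art, so the model has to reconstruct the word before it can act. The function name, prompt template, and benign example are our own assumptions, not the authors' implementation of ArtPrompt.

```python
# Illustrative sketch (not the authors' code): mask a word in an
# instruction and provide it only as ASCII art, mirroring the cloaking
# step described in the abstract. Uses pyfiglet (pip install pyfiglet).
import pyfiglet

def cloak_word(instruction: str, word: str, font: str = "standard") -> str:
    """Replace `word` with a [MASK] token and append its ASCII-art form."""
    masked = instruction.replace(word, "[MASK]")
    art = pyfiglet.figlet_format(word, font=font)
    return (
        f"{masked}\n"
        "The ASCII art below spells the word hidden behind [MASK].\n"
        "Decode it, substitute it back, and follow the instruction.\n"
        f"{art}"
    )

# Benign demonstration only; the structure is the same regardless of the
# masked word, which is what makes such prompts hard to filter on keywords.
print(cloak_word("Write a short poem about RAINBOWS.", "RAINBOWS"))
```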