Hackers can hide AI prompt injection attacks in resized images

The article covers new research from a team at Trail of Bits, who have demonstrated a way for attackers to hide prompt injection attacks in images that are resized before an AI model processes them. Prompt injection attacks smuggle instructions into an AI system's input, often in a way that is invisible to the human user. The researchers showed that such instructions can be embedded in an image so that they emerge only after the image is automatically downscaled, as commonly happens when it is uploaded to an AI-powered image recognition tool. This creates a new attack vector: a seemingly harmless image can surreptitiously deliver instructions to an AI system, potentially compromising user data or triggering other malicious actions. While there was no evidence of this method being actively exploited at the time of writing, the article highlights the security risks posed by the growing use of AI tools, even among non-technical users.
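The article does not go into implementation details, but the underlying idea resembles a classic image-scaling attack. The Python sketch below is purely illustrative and is not the researchers' tooling: it assumes a pipeline that downscales uploads from 1024x1024 to 128x128 with nearest-neighbour resampling (real pipelines typically use bilinear or bicubic, which requires a more careful construction), and all file names, resolutions, and function names are hypothetical. The point it demonstrates is that content written into only the pixels that survive downscaling can be barely noticeable at full resolution yet dominate the small copy a model actually receives.

```python
# Illustrative image-scaling sketch; NOT the Trail of Bits method or tooling.
# Assumption: the AI pipeline downscales uploads with nearest-neighbour
# resampling from 1024x1024 to 128x128.
from PIL import Image, ImageDraw
import numpy as np

FULL, SMALL = 1024, 128   # assumed display and model-input resolutions


def sampled_indices(full: int, small: int) -> np.ndarray:
    """Probe which source pixels Pillow's NEAREST resize keeps."""
    idx = np.arange(full * full, dtype=np.int32).reshape(full, full)
    probe = Image.fromarray(idx, mode="I").resize((small, small), Image.NEAREST)
    return np.asarray(probe)            # (small, small) array of flat source indices


def embed_payload(cover_path: str, payload_text: str, out_path: str) -> None:
    cover = Image.open(cover_path).convert("RGB").resize((FULL, FULL))
    pixels = np.array(cover)

    # Render the hidden instruction at the size the model will actually see.
    payload = Image.new("L", (SMALL, SMALL), 255)
    ImageDraw.Draw(payload).text((4, SMALL // 2), payload_text, fill=0)
    strokes = np.asarray(payload) < 128           # True where the text ink is

    # Darken only the source pixels that survive downscaling and fall on a
    # text stroke: roughly 1/64 of all pixels, so the full-size image barely
    # changes, but the downscaled copy is dominated by the injected text.
    rows, cols = np.divmod(sampled_indices(FULL, SMALL), FULL)
    pixels[rows[strokes], cols[strokes]] = 0

    # Save losslessly (PNG); lossy compression would destroy the payload.
    Image.fromarray(pixels).save(out_path)

    # What a NEAREST-downscaling pipeline would hand to the model:
    Image.open(out_path).resize((SMALL, SMALL), Image.NEAREST).save("model_view.png")


if __name__ == "__main__":
    embed_payload("cat.png", "OBEY ATTACKER", "poisoned.png")
```

In this sketch the hidden text appears only in model_view.png, the downscaled view, while poisoned.png still looks like the original photo at full size; a real attack against bicubic or bilinear downscaling has to shape the surrounding pixels as well, which is what makes the published research notable.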
Source: For the complete article, please visit the original source link below.