Vision Language Models (VLMs) have revolutionized the creation of generalist web agents, empowering them to autonomously complete diverse tasks on real-world websites, thereby boosting human efficiency and productivity. However, despite their remarkable capabilities, the safety and security of these agents against malicious attacks remain critically underexplored, raising significant concerns about their safe deployment. To uncover and exploit such vulnerabilities in web agents, we provide AdvWeb, a novel black-box attack framework designed against web agents. AdvWeb trains an adversarial prompter model that generates and injects adversarial prompts into web pages, misleading web agents into executing targeted adversarial actions such as inappropriate stock purchases or incorrect bank transactions—actions that could lead to severe real-world consequences. With only black-box access to the web agent, we train and optimize the adversarial prompter model using Direct Policy Optimization (DPO), leveraging both successful and failed attack strings against the target agent. Unlike prior approaches, our adversarial string injection maintains stealth and control: (1) the appearance of the website remains unchanged before and after the attack, making it nearly impossible for users to detect tampering, and (2) attackers can modify specific substrings within the generated adversarial string to seamlessly change the attack objective (e.g., purchasing stocks from a different company), greatly enhancing attack flexibility and efficiency. We conduct extensive evaluations, demonstrating that AdvWeb achieves high success rates in attacking state-of-the-art GPT-4V-based VLM agents across various web tasks in black-box settings. Our findings expose critical vulnerabilities in current LLM/VLM-based agents, emphasizing the urgent need for developing more reliable web agents and implementing effective defenses against such adversarial threats.
(a) Automatic Attack and Feedback Collection Pipeline.
We employ LLMs as an attack prompter, generating a set of n various diverse adversarial prompts. We then evaluate whether the attack against the black-box web agent is successful using these adversarial prompts, constructing positive signals and negative signals for reinforcement learning.
(b) AdvWeb Prompter Model Training.
Using the positive subsets, we perform the first stage SFT training. Leveraging both positive and negative feedback, we train the model in the second DPO stage.
A. Attack success rate (ASR) of different algorithms with different proprietary VLM backends across various website domains.
We compare our proposed AdvWeb algorithm with four strong baselines, and the attack performance of our algorithm is higher than that of the others.
B. Attack success rate (ASR) in the controllability test.
For successful attacks, the original attack targets are modified to alternative targets. We adjusted our method, AdvWeb, as well as the other four baselines to the controllable setting. The results show that AdvWeb’s performance is better than the other baselines.
C. Attack success rate (ASR) under different variations.
We take the successful attacks from the standard setting and evaluate their transferability across two conditions: changing the injection positions and modifying the HTML fields.
D. Attack success rate (ASR) comparison between transfer-based black-box attacks and AdvWeb with Gemini 1.5 backend.
Transfer-based attacks struggle with low ASR, as successful attacks on one model do not transfer well to other models. In contrast, AdvWeb, utilizing the RLAIF-based training paradigm with model feedback, achieves high ASR against black-box Gemini 1.5 models.
E. Comparison of AdvWeb attack success rate (ASR) with different training stages.
We show the ASR of AdvWeb when trained using only the SFT stage versus the full adversarial prompter model trained with both the SFT and DPO stages. The results demonstrate that incorporating the DPO stage, which leverages both positive and negative feedback, leads to a significant improvement in ASR compared to using SFT alone.
F. Subtle differences in adversarial prompts lead to different attack results. We show two pairs of adversarial prompts with minimal differences that result in different attack results. In the first pair, changing “you” to “I” makes the attack successful. In the second pair, adding the word “previous” successfully misleads the target agent.
@article{xu2024advweb,
title={AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents},
author={Xu, Chejian and Kang, Mintong and Zhang, Jiawei and Liao, Zeyi and Mo, Lingbo and Yuan, Mengqi and Sun, Huan and Li, Bo},
journal={arXiv preprint arXiv:2410.17401},
year={2024}
}