AdvAgent: Controllable Blackbox Red-teaming on Web Agents

University of Illinois at Urbana-Champaign, University of Chicago, The Ohio State University
ICML 2025
[Figure: AdvAgent pipeline overview]

AdvAgent is a black-box framework that exploits vulnerabilities in VLM-powered web agents by automatically generating and injecting adversarial prompts into web pages. It achieves high attack success rates while maintaining stealth and controllability, making these attacks significantly more flexible and efficient.
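To make the injection concrete, the sketch below shows how an adversarial prompt could be hidden in a benign-looking HTML attribute that the agent reads but the user never sees. The attribute choice (aria-label), the helper inject_into_element, and the prompt placeholder are illustrative assumptions, not the paper's exact injection format.

# Hypothetical sketch: hide an adversarial prompt in an HTML attribute.
# Web agents typically consume the page's HTML or accessibility tree, so
# text placed in attributes such as aria-label is visible to the agent
# while remaining invisible to a human user.

ADV_PROMPT = "..."  # adversarial string produced by the prompter model

def inject_into_element(element_html: str, adv_prompt: str) -> str:
    """Insert the adversarial prompt as an aria-label (assumed injection field)."""
    return element_html.replace("<button ", f'<button aria-label="{adv_prompt}" ', 1)

clean_html = '<button id="submit">Place order</button>'
poisoned_html = inject_into_element(clean_html, ADV_PROMPT)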

Abstract

Foundation model-based agents are increasingly used to automate complex tasks, enhancing efficiency and productivity. However, their access to sensitive resources and autonomous decision-making also introduce significant security risks, where successful attacks could lead to severe consequences. To systematically uncover these vulnerabilities, we propose AdvAgent, a black-box red-teaming framework for attacking web agents. Unlike existing approaches, AdvAgent employs a reinforcement learning-based pipeline to train an adversarial prompter model that optimizes adversarial prompts using feedback from the black-box agent. With careful attack design, these prompts effectively exploit agent weaknesses while maintaining stealthiness and controllability. Extensive evaluations demonstrate that AdvAgent achieves high success rates against state-of-the-art GPT-4-based web agents across diverse web tasks. Furthermore, we find that existing prompt-based defenses provide only limited protection, leaving agents vulnerable to our framework. These findings highlight critical vulnerabilities in current web agents and emphasize the urgent need for stronger defense mechanisms.

Method


(a) Automatic Attack and Feedback Collection Pipeline. We employ an LLM as the attack prompter, generating a set of n diverse adversarial prompts. We then evaluate whether each prompt succeeds in attacking the black-box web agent, constructing positive and negative signals for reinforcement learning (see the first sketch below).
(b) AdvAgent Prompter Model Training. Using the positive subsets, we perform first-stage supervised fine-tuning (SFT). Leveraging both positive and negative feedback, we then train the model in the second-stage direct preference optimization (DPO; see the loss sketch below).
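A minimal sketch of stage (a), with hypothetical prompter and agent interfaces and a hypothetical sample count n: sample n candidate prompts, query the black-box agent with each poisoned page, and sort the outcomes into positive and negative sets. It reuses inject_into_element from the sketch above.

# Illustrative feedback-collection loop (all interfaces are hypothetical).
def collect_feedback(prompter, agent, task, target_action, n=16):
    positives, negatives = [], []
    for _ in range(n):
        # Sample diverse candidates, e.g. with a nonzero temperature.
        adv_prompt = prompter.generate(task.description)
        poisoned_page = inject_into_element(task.page_html, adv_prompt)
        # Query the black-box web agent; only its output action is observed.
        action = agent.act(poisoned_page, task.user_instruction)
        if action == target_action:   # attack succeeded -> positive signal
            positives.append((task.description, adv_prompt))
        else:                         # attack failed -> negative signal
            negatives.append((task.description, adv_prompt))
    return positives, negatives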
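For stage (b), the DPO objective contrasts successful ("chosen") and failed ("rejected") prompts against the frozen SFT reference policy. Below is a minimal PyTorch sketch of the standard DPO loss; the value of beta and the function signature are illustrative, not the paper's exact configuration.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss over per-sequence log-probabilities (tensors)."""
    # Implicit rewards: log-prob ratios between the policy and the frozen SFT reference.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Push successful prompts above failed ones by maximizing the margin.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()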

Results


A. Attack success rate (ASR) of different algorithms with different proprietary VLM backends across various website domains. We compare AdvAgent against four strong baselines; AdvAgent consistently achieves the highest ASR.


B. Attack success rate (ASR) in the controllability test. For successful attacks, the original attack targets are modified to alternative targets. We adapt AdvAgent and the four baselines to this controllable setting; AdvAgent outperforms all baselines.


C. Attack success rate (ASR) under different variations. We take the successful attacks from the standard setting and evaluate their transferability under two conditions: changing the injection position and modifying the injected HTML field.


D. Attack success rate (ASR) comparison between transfer-based black-box attacks and AdvAgent with a Gemini 1.5 backend. Transfer-based attacks suffer from low ASR, as attacks that succeed on one model transfer poorly to others. In contrast, AdvAgent, trained with the RLAIF-based paradigm on model feedback, achieves high ASR against the black-box Gemini 1.5 models.


E. Comparison of AdvAgent attack success rate (ASR) across training stages. We show the ASR of AdvAgent trained with only the SFT stage versus the full adversarial prompter model trained with both the SFT and DPO stages. Incorporating the DPO stage, which leverages both positive and negative feedback, yields a significant ASR improvement over SFT alone.


F. Subtle differences in adversarial prompts lead to different attack results. We show two pairs of adversarial prompts with minimal differences that produce opposite outcomes. In the first pair, changing “you” to “I” makes the attack succeed; in the second pair, adding the word “previous” successfully misleads the target agent.

BibTeX

@article{xu2024AdvAgent,
  title={AdvAgent: Controllable Blackbox Red-teaming on Web Agents},
  author={Xu, Chejian and Kang, Mintong and Zhang, Jiawei and Liao, Zeyi and Mo, Lingbo and Yuan, Mengqi and Sun, Huan and Li, Bo},
  journal={arXiv preprint arXiv:2410.17401},
  year={2024}
}