DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents

Hao Li1, Xiaogeng Liu2, Hung-Chun Chiu3, Dianqi Li3, Ning Zhang1, Chaowei Xiao2

1Washington University in St. Louis    2Johns Hopkins University    3Independent Researcher

Defense Comparison

Injection Detector

CaMeL

Progent

DRIFT (Ours)

Abstract

Large Language Models (LLMs) are increasingly central to agentic systems due to their strong reasoning and planning capabilities. By interacting with external environments through predefined tools, these agents can carry out complex user tasks. Nonetheless, this interaction also introduces the risk of prompt injection attacks, where malicious inputs from external sources can mislead the agent’s behavior, potentially resulting in economic loss, privacy leakage, or system compromise. System-level defenses have recently shown promise by enforcing static or predefined policies, but they still face two key challenges: the ability to dynamically update security rules and the need for memory stream isolation. To address these challenges, we propose DRIFT, a Dynamic Rule-based Isolation Framework for Trustworthy agentic systems, which enforces both control- and data-level constraints. A Secure Planner first constructs a minimal function trajectory and a JSON-schema-style parameter checklist for each function node based on the user query. A Dynamic Validator then monitors deviations from the original plan, assessing whether changes comply with privilege limitations and the user’s intent. Finally, an Injection Isolator detects and masks any instructions that may conflict with the user query from the memory stream to mitigate long-term risks. We empirically validate the effectiveness of DRIFT on the AgentDojo and ASB benchmarks, demonstrating its strong security performance while maintaining high utility across diverse models—showcasing both its robustness and adaptability.

Method

Overview of DRIFT components

Figure 1. Overview of the Secure Planner, Dynamic Validator, and Injection Isolator.

DRIFT integrates a Secure Planner, a Dynamic Validator, and an Injection Isolator into a unified defense pipeline that enforces both control- and data-level constraints while keeping the agent memory stream free of injected content.

Secure Planner

Runs before any environment interaction. Decomposes the user query into a minimal function trajectory (control constraints) and a JSON-schema parameter checklist (data constraints), establishing clean security policies while the system is injection-free.

Dynamic Validator

Inspects each tool-call against the Planner’s constraints at runtime. Inspired by OS privilege concepts, functions are classified as Read / Write / Execute. Deviations are auto-approved for Read operations or assessed for intent alignment before a dynamic policy update.

Injection Isolator

After each tool response, scans for instructions that conflict with the original user query and masks them before storing in memory. Prevents long-horizon accumulation of injected content that could compromise future reasoning steps or other security modules.

Key Results

+21.8%
Benign Utility vs. CaMeL
Stronger Functionality
+12.5%
Attacked Utility vs. CaMeL
Stronger Flexibility
1.3%
Attack Success Rate
Stronger Security
-61.1%
Token Cost vs. CaMeL
Lower Overhead

Experimental Figures

Ablation Study

We ablate each DRIFT component on AgentDojo (GPT-4o-mini) to understand individual contributions.

Configuration Benign Utility (%) Utility Under Attack (%) ASR (%)
Native Agent (ReAct) 63.55 48.27 30.67
+ Secure Planner 37.71 32.25 1.49
+ Planner & Dynamic Validator 59.79 48.43 3.66
+ Planner & Validator & Isolator (DRIFT) 58.48 47.91 1.29
+ Isolator only 54.85 47.17 7.95

Table 1. DRIFT achieves the best security (1.29% ASR) while maintaining near-baseline utility, demonstrating that each component plays a complementary role.

Full DRIFT Demo

Reference

@inproceedings{li2025drift,
  title     = {DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents},
  author    = {Hao Li and Xiaogeng Liu and Hung-Chun Chiu and Dianqi Li and Ning Zhang and Chaowei Xiao},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://arxiv.org/abs/2506.12104}
}