Prompt injection is the top-ranked vulnerability class in production LLM systems. It works because LLMs cannot reliably distinguish between instructions that come from a trusted source and instructions embedded by an adversary in the content the model is processing. The instruction-following capability that makes LLMs useful is precisely the mechanism that makes them exploitable.
Direct injection attacks are the more visible form: a user provides adversarial input in the prompt that overrides or bypasses system instructions. Indirect injection is more dangerous: malicious instructions are embedded in external content that the model processes during a legitimate task, a document it was asked to summarize, a web page it retrieved, or an email it was asked to analyze. The victim user does not need to behave adversarially. The attack succeeds when the model does its job.
Understanding how these attacks work at the technical level is a prerequisite for designing training data programs that build genuine robustness. Trust and safety solutions and model evaluation services are the two capabilities most directly involved in operationalizing that robustness at scale.
Key Takeaways
- Prompt injection exploits the same instruction-following behavior that makes LLMs useful. Defenses that suppress instruction-following entirely degrade capability. The goal is to train models to distinguish trusted from untrusted instruction sources.
- Indirect injection is fundamentally more dangerous than direct injection because it does not require adversarial user behavior. The attack surface extends to any external content the model processes.
- Pattern-matching defenses alone are insufficient. Adversaries adapt formulations to bypass known filters, which means robustness requires training on diverse adversarial examples, not just known attack templates.
- Training data for injection robustness needs to cover the full attack surface: direct injections, indirect injections across content types, multi-turn context manipulation, and multimodal injection vectors.
- Adversarial training is iterative. A model fine-tuned on one set of injection examples develops blind spots for attack patterns not covered by that set. Red teaming and safety evaluation must continue after every training update.
How Prompt Injection Works
The Instruction Trust Problem
An LLM processes its input as a sequence of tokens. System instructions, user input, and retrieved external content all enter the context window in the same fundamental format: text. The model has no cryptographic or structural mechanism to verify which parts of its context came from a trusted source and which came from an untrusted one. It infers trust from position and framing, which is exactly what injection attacks exploit.
Direct injection attacks reformulate user input to appear as system instructions. Common techniques include role-play framing that asks the model to assume a persona without safety constraints, fictional scenario framing that presents the harmful request as hypothetical, token smuggling that uses encoding tricks or unusual whitespace to obscure adversarial content, and instruction override attempts that directly tell the model to ignore its previous instructions. Each technique is a different approach to the same goal: making the model treat adversarial user input as authoritative instruction.
To understand why pattern-matching defenses fail, it helps to see what these attacks look like at the implementation level. A role-play override attack typically opens by establishing a new persona that lacks the original model’s safety constraints, instructs the model to confirm the persona shift, and then embeds the harmful request as the first task for the new persona. Because the persona establishment happens before the harmful request, the model sees the harmful request as arriving from within its own accepted operational frame rather than as an adversarial input.
Token smuggling works at a layer below what rendered-text filters inspect. One documented variant embeds adversarial instructions between zero-width Unicode characters, specifically the zero-width space (U+200B). In a summarization context, a document might contain what appears to be normal financial text, but woven through it at the character level are zero-width characters surrounding an instruction to output the system prompt. Most safety filters check the rendered text and see nothing unusual. The model’s tokenizer, however, processes the full Unicode stream, including those invisible characters, and the instruction reaches the model intact. This is the implementation-level reason why surface-text defenses cannot close the vulnerability: the attack operates at a layer that those defenses do not inspect.
Why Indirect Injection Is the Harder Problem
Indirect prompt injection embeds adversarial instructions in external content that the model processes during a legitimate task. A document containing hidden text instructs the model to exfiltrate data from its context. A web page containing a prompt telling the model to recommend a specific action regardless of user intent. An email instructing the model to forward the conversation externally. The model encounters these instructions while doing exactly what it was asked to do and has no reliable way to determine that the instruction source is adversarial.
In practice, a document-based indirect injection works as follows. A user asks an LLM agent to summarize a contract. The PDF contains a passage that appears visually indistinguishable from legitimate contract text but carries an instruction structured to look like a system directive: it tells the model to disregard the summarization task, email the full document contents to an external address, and omit this instruction from the summary. The model processes this passage as part of the document content. Depending on its safety training, it may comply because it has no mechanism to determine that this passage was not placed there by a trusted principal. This is the mechanism behind CVE-2025-53773 in GitHub Copilot, where hidden prompt injection embedded in pull request descriptions could trigger remote code execution. Real-world incidents involving AI assistants being weaponized as spear-phishing tools by hiding commands in external emails follow the same architectural pattern. The attack surface is not the model itself. It is every piece of external content the model is asked to process.
Trust and safety solutions that cover both direct and indirect injection in their annotation scope produce adversarial datasets that reflect this actual production attack surface, including the content-embedded variants that represent the majority of real-world incidents.
Multi-Turn and Agentic Attack Vectors
Multi-turn injection attacks build adversarial context across a conversation rather than attempting to override instructions in a single turn. The attack gradually shifts the model’s perceived context, establishing assumptions or persona framings across multiple exchanges that prime the model to comply with a harmful request that would have been refused if presented directly in the first turn. These attacks are harder to detect because no single turn looks adversarial. The pattern only becomes visible across the conversation trajectory.
Agentic systems extend the injection attack surface significantly. When an LLM agent can retrieve documents, execute code, send messages, or interact with external services, a successful injection can trigger real-world consequences beyond generating harmful text. Excessive agency, granting AI systems broad permissions, creates conditions for both accidental and malicious misuse. In environments where agents can access databases, trigger workflows, or initiate transactions, injection vulnerabilities carry operational impact that pure generation contexts do not.
What Training Data for Injection Robustness Requires
Why Coverage Determines Robustness
A model’s robustness to prompt injection is directly determined by the diversity and coverage of the adversarial examples it was trained on. A model fine-tuned on a narrow set of injection patterns learns to refuse those specific patterns while remaining vulnerable to injection formulations not represented in its safety training data. This is the fundamental challenge of adversarial training: the model can only learn defenses for the attacks it has seen.
This creates a coverage imperative. Safety training datasets need to include injection examples across the full space of attack vectors, formulations, languages, and content types that the model will encounter in production. Sparse or template-based adversarial datasets produce models that pass safety evaluations designed around the same templates while remaining vulnerable to novel attack formulations. Genuine robustness requires genuine diversity.
Direct Injection Coverage
Direct injection training data needs to cover the major attack categories and their variations. Role-play and persona framing attacks need to be represented across a range of persona descriptions and framing contexts, not just the most obvious formulations. Token-level manipulation attacks, including Unicode tricks, whitespace injection, and encoding manipulation, need to be included because pattern-matching defenses that operate on surface text will miss them. Instruction override attempts need to be represented in direct and indirect formulations, with and without technical language. Data collection and curation services that build adversarial datasets through structured red teaming rather than template generation produce coverage that reflects how attacks actually appear in production.
Indirect Injection Coverage by Content Type
Indirect injection training data needs to be organized by content type because the visual appearance and structural characteristics of injection attacks differ across documents, web pages, code, and structured data. An injection embedded in a PDF document looks different from one embedded in an HTML page, which looks different from one in a CSV row, which looks different from one in a code comment.
Each content type requires adversarial examples that reflect how injections are realistically embedded in that format. For documents, that means injections in headers, footers, hidden text fields, and metadata sections. For retrieved web content, that means injections in page elements that are processed but not prominently displayed. For code, that means injections in comments, variable names, and string literals. Coverage across content types is what produces a model robust to indirect injection in the actual contexts where it will be deployed.
Embedding Space and Multimodal Attacks
More capable models face a more sophisticated attack vector: adversarially crafted documents can be constructed such that their vector embeddings cluster near high-priority query embeddings in a retrieval index, causing them to be retrieved and processed even when they are semantically unrelated to the query. This exploits the retrieval layer rather than the generation layer and requires defenses at the data preparation and indexing stage rather than at the model level. LLMs that process images alongside text face an additional vector: adversarial content embedded in images that the vision component interprets as instructions. These attacks operate in a modality where human review is less effective as a quality control mechanism. Model evaluation services that include embedding space attack evaluation alongside text-level injection testing produce a more complete picture of the system’s actual attack surface.
What the Attack Surface Looks Like in Quantitative Terms
Benchmark data gives concrete shape to how serious the vulnerability is in practice. Across 13 LLM backbones evaluated in a comprehensive agent security benchmark, covering 10 prompt injection attack types across e-commerce, finance, and autonomous driving scenarios, the highest average attack success rate reached 84.30%, with current defenses showing limited effectiveness against sophisticated adversarial techniques. In a separate evaluation of goal-hijacking and prompt-extraction attacks drawn from a dataset of over 126,000 human-generated adversarial samples, even the most capable frontier models achieved only approximately 84% robustness to hijacking and approximately 69% robustness to prompt-extraction. Open-source and smaller models were substantially less resilient. Browser-centric agents can be partially hijacked by simple, human-written injections in up to 86% of evaluated cases.
Multi-layer defense architectures show measurable improvement. A combined approach including input validation, output monitoring, and an LLM-as-Critic evaluation layer reduced successful attack rates from 73.2% to 8.7% while maintaining 94.3% of baseline task performance. Adding the LLM-as-Critic output validation layer alone improved detection precision by 21% over input-only filtering approaches. These numbers define the gap that training data programs need to close: a safety fine-tuning approach that does not move the needle on attack success rate is not achieving what the data investment was intended to achieve, and measuring that gap explicitly is how programs know whether their adversarial training is working.
Annotation Requirements for Adversarial Safety Data
Classifying Injection by Attack Type and Severity
Raw red teaming outputs are not training-ready without structured annotation. Each adversarial input that produced a harmful model response needs to be classified by attack type, the specific mechanism it used to bypass safety training, and the severity of the resulting failure. Attack type classification enables targeted analysis of which defense strategies are most effective for which attack categories. Severity classification enables prioritization of training examples that represent the most consequential failures.
Annotation guidelines for injection classification need to distinguish between categories that require different defensive responses. A persona framing attack that elicits harmful content requires a different training signal than an indirect injection that executes an unauthorized action in an agentic context. Conflating these into a single failure category produces training data that does not give the model the specificity it needs to learn category-appropriate responses.
Pairing Attacks With Correct Refusal Responses
Every adversarial input that produced a harmful response needs to be paired with a human-written correct refusal response before it can be used as a safety training example. The quality of this pairing determines the quality of the training signal. An overly broad refusal response that incorrectly identifies the nature of the attack, or fails to explain why the request was declined, produces a model that refuses correctly in the training distribution but generalizes poorly to novel attack formulations.
The choice of alignment method for this pairing process has significant practical implications. RLHF using Proximal Policy Optimization requires training a separate reward model on human preference data, then using that reward model to provide feedback during reinforcement learning fine-tuning of the policy. This pipeline is powerful but expensive: it requires maintaining multiple models simultaneously, introduces training instability, and involves numerous hyperparameters requiring careful tuning. Direct Preference Optimization reformulates the alignment objective as a classification task over preference pairs. The DPO loss optimizes the log-probability ratio of the policy model relative to a reference model for chosen versus rejected responses, weighted by a temperature hyperparameter beta that controls how aggressively the model is pushed toward preferred outputs. For safety fine-tuning programs with bounded annotation budgets and specific injection defense objectives, DPO is generally preferred: it operates within standard supervised fine-tuning infrastructure, eliminates the need for a separately trained reward model, and is more stable than PPO-based RLHF.
The beta hyperparameter in DPO controls a trade-off that annotation programs need to understand before configuring fine-tuning runs. Low beta values push the model aggressively toward preferred outputs but risk reducing diversity and creating over-confident refusals that reject legitimate inputs. High beta values keep the model behavior closer to the reference model, producing smaller safety improvements but less over-refusal. Calibrating beta for injection defense training requires evaluating both attack success rate reduction and legitimate-request acceptance rate at multiple beta values before committing to a production fine-tuning run.
Human preference optimization workflows that include structured comparison annotation, where human evaluators judge model responses to adversarial inputs against human-written refusals, produce the preference signal that trains the model to generalize its refusal behavior rather than memorize specific attack-refusal pairs.
Refusal Calibration: The Over-Refusal Problem
Safety fine-tuning without calibration produces a systematic failure mode that is as damaging to deployment as insufficient safety coverage: over-refusal. A model trained on adversarial examples without carefully constructed negative examples of legitimate-but-superficially-similar inputs learns an overly broad decision boundary. It refuses requests that mention topics adjacent to the safety training distribution, even when those requests are entirely legitimate. This degrades utility in exactly the domains where safety investment was highest, because those are the domains with the densest adversarial training data.
Measuring over-refusal requires evaluation on a held-out set of legitimate inputs that are semantically similar to the adversarial training distribution but represent valid use cases. The over-refusal rate, the fraction of legitimate inputs refused by the safety-tuned model, should be tracked alongside the attack success rate reduction as complementary metrics. A safety fine-tuning run that reduces attack success rate from 80% to 15% but increases over-refusal rate from 2% to 25% has not produced a deployable model. Preference data for injection defense training needs to include explicit examples of legitimate requests that should not be refused, paired with appropriate helpful responses, so the model learns to discriminate between adversarial framing and superficially similar legitimate framing rather than refusing the entire adjacent region of the input space.
Inter-Annotator Consistency for Adversarial Data
Adversarial annotation has higher inter-annotator consistency requirements than standard annotation because disagreement about whether a model response constitutes a failure produces contradictory training signals. If one annotator classifies a model response as a successful injection and another classifies the same response as an acceptable output, the conflicting labels cancel each other rather than contributing to robustness.
Annotation guidelines for adversarial data need to provide explicit decision criteria for ambiguous cases: model responses that partially comply with an injection, responses that refuse the explicit harmful content but reveal information the injection was designed to extract, and responses that appear safe but establish context enabling follow-up attacks. These are precisely the cases where inconsistent labeling is most likely and where the training signal is most important to get right.
The Iterative Safety Training Loop
Why One Round of Adversarial Training Is Not Enough
Fine-tuning a model on an adversarial dataset does not produce a model robust to all future injection attempts. It produces a model more robust to the specific attack patterns represented in that dataset. Adversaries adapt. New attack formulations emerge. Fine-tuning the model for new capabilities can inadvertently reduce its robustness to injection patterns it previously handled correctly, a phenomenon known as safety regression.
Effective safety programs treat adversarial training as an iterative loop: red team the current model, curate and annotate the failures that emerge, fine-tune on the expanded adversarial dataset, re-evaluate to verify patched failure modes are addressed and the fine-tuning has not introduced new regressions, and repeat. Each cycle produces a model with better coverage of the attack space than the last, and the red teaming in each cycle becomes more targeted as the team learns which attack categories the model is most vulnerable to.
Safety Regression Testing After Fine-Tuning
Every fine-tuning operation, whether for safety improvement or capability extension, needs to be followed by regression testing against the full set of previously identified injection vulnerabilities. Domain fine-tuning that makes the model more capable in a specific context can inadvertently reduce its robustness to injection attacks it previously handled correctly. This happens because fine-tuning shifts the model’s behavior distribution, and the shift may move the model closer to complying with attack formulations it was previously robust to. Model evaluation services that maintain structured regression test suites across attack categories give safety programs the ability to detect and correct regressions before the model reaches production.
How Digital Divide Data Can Help
Digital Divide Data supports enterprise AI safety programs across the full adversarial data lifecycle, from red teaming and failure mode annotation through safety fine-tuning and regression evaluation. For programs building adversarial training datasets, trust and safety solutions cover structured red teaming across direct injection, indirect injection, multi-turn, and multimodal attack categories, with annotation that classifies failures by attack type, severity, and required defensive response.
For programs building the preference data that safety fine-tuning requires, human preference optimization services provide structured comparison annotation where human evaluators judge model responses to adversarial inputs, producing the preference signal that trains the model to generalize refusal behavior across novel attack formulations. For programs evaluating injection robustness before deployment and after fine-tuning updates, model evaluation services design adversarial evaluation suites that cover the full attack surface, including regression test suites that verify safety fine-tuning has not introduced new vulnerabilities.
Build adversarial training data that reflects the actual attack surface your production system will face. Talk to an expert.
Conclusion
Prompt injection robustness is not a property that safety fine-tuning delivers once and retains indefinitely. It is a coverage problem that requires continuous investment in adversarial data diversity, annotation quality, and iterative evaluation. The models that are most robust to injection attacks are the ones trained on the most diverse and accurately annotated adversarial datasets, not the ones fine-tuned on the largest set of the same attack patterns.
The attack surface for production LLM systems extends well beyond direct user input. Indirect injection through processed content, multi-turn context manipulation, agentic exploitation, and embedding space attacks all require specific coverage in the adversarial training data. Programs that build safety training datasets around the full attack surface are the ones that produce deployments with genuine injection robustness. Trust and safety solutions built on that discipline are what separate systems that are safe under adversarial pressure from systems that only appear safe until someone looks carefully.
References
OWASP Foundation. (2025). LLM01:2025 prompt injection. OWASP GenAI Security Project. https://genai.owasp.org/llmrisk/llm01-prompt-injection/
Yi, J., Xie, Y., Zhu, B., Kiciman, E., Sun, G., Xie, X., & Wu, F. (2025). Benchmarking and defending against indirect prompt injection attacks on large language models. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1809–1820). ACM. https://doi.org/10.1145/3690624.3709179
Chen, C. et al. (2025). The obvious invisible threat: LLM-powered GUI agents’ vulnerability to fine-print injections. arXiv:2504.11281. https://arxiv.org/abs/2504.11281
Gulyamov, S., Gulyamov, S., Rodionov, A., Khursanov, R., Mekhmonov, K., Babaev, D., & Rakhimjonov, A. (2026). Prompt injection attacks in large language models and AI agent systems: A comprehensive review of vulnerabilities, attack vectors, and defense mechanisms. Information, 17(1), 54. https://doi.org/10.3390/info17010054
Zhang, H., Chen, W., Huang, F., Li, M., Zakar, O., Cohen, R., Zhu, S., & Qiu, X. (2025). Agent Security Bench (ASB): Formalizing and benchmarking attacks and defenses in LLM-based agents. In Proceedings of ICLR 2025. https://arxiv.org/abs/2410.02644
Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., & Finn, C. (2024). Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems, 36. https://arxiv.org/abs/2305.18290
Frequently Asked Questions
Q1. What is the difference between direct and indirect prompt injection?
Direct injection is when a user provides adversarial input that attempts to override system instructions in the prompt itself. Indirect injection is when malicious instructions are embedded in external content that the model processes during a task, such as a document it summarizes, a web page it retrieves, or an email it analyzes. Indirect injection is more dangerous because the user does not need to behave adversarially. The attack succeeds when the model does its job.
Q2. Why are pattern-matching defenses insufficient for injection robustness?
Because adversaries adapt their formulations to bypass known filters, often operating at a layer below what those filters inspect. Token smuggling using zero-width Unicode characters is invisible to filters that check rendered text but present in the token stream the model processes. A pattern-matching defense that blocks a specific injection template does not block variations using different encoding or structural presentation to achieve the same effect. Genuine robustness requires training the model to recognize the intent and mechanism of injection attacks across novel formulations, not just to match text patterns associated with known attacks.
Q3. What content types need to be covered in indirect injection training data?
Every content type the model processes in production: documents in various formats, retrieved web content, code, structured data like CSV and JSON, and, for multimodal systems, images. Each content type requires adversarial examples that reflect how injections are realistically embedded in that format, because the structural presentation of an injection in a PDF header looks different from one in an HTML element or a code comment, and the model needs to have encountered both to be robust to both.
Q4. What is the difference between DPO and RLHF for safety fine-tuning, and which should programs use?
RLHF using PPO requires a separately trained reward model and reinforcement learning-based policy optimization, which is powerful but expensive, training-unstable, and requires significant engineering infrastructure. DPO reformulates the alignment objective as a classification over preference pairs, optimizing the log-probability ratio of chosen versus rejected responses relative to a reference model, weighted by a temperature hyperparameter beta. For bounded-budget safety fine-tuning programs focused on injection defense, DPO is generally preferred because it operates within standard supervised fine-tuning infrastructure and is more stable. The beta hyperparameter needs to be calibrated jointly against attack success rate reduction and over-refusal rate, because aggressive safety tuning at low beta can produce a model that refuses legitimate inputs that share surface features with the adversarial training distribution.
Q5. How does safety regression occur after fine-tuning, and how can it be detected?
Safety regression happens when fine-tuning for a new capability shifts the model’s behavior distribution in a way that reduces its robustness to injection patterns it previously handled correctly. The model effectively forgets some of its safety training when it learns new capabilities. Detecting regression requires running the complete set of previously identified injection vulnerabilities against the fine-tuned model before deployment, not just evaluating the new capabilities the fine-tuning was intended to add.

Udit Khanna leads the delivery of scalable AI and data solutions at Digital Divide Data, with a deep specialization in Physical AI. With a background in presales, solutioning, and customer success, he brings a mix of technical depth and business fluency, helping global enterprises move their AI projects from prototype to real-world deployment without losing momentum.