Data Engineering

Why Data Engineering Is Becoming a Core AI Competency

Data engineering for AI is not the same discipline as data engineering for analytics. Analytics pipelines are optimized for query performance and reporting latency. AI pipelines need to optimize for training data quality, feature consistency between training and serving, continuous retraining triggers, model performance monitoring, and governance traceability across the full data lineage. 

These are different engineering problems requiring different skills, different tooling choices, and different quality standards. Organizations that treat their analytics pipeline as a ready-made foundation for AI deployment consistently discover the gap between the two when their first production model begins to degrade.

This blog examines why data engineering is now a core AI competency, what AI-specific pipeline requirements look like, and where most programs fall short. Data engineering for AI, together with AI data preparation services, forms the infrastructure layer that determines whether AI programs deliver in production.

Key Takeaways

  • Data engineering for AI requires different design priorities than analytics pipelines: training data quality, feature consistency, continuous retraining, and governance traceability are all distinct requirements.
  • Training-serving skew, where features are computed differently at training time versus inference time, is one of the most common and costly production failures in AI systems.
  • Data quality problems upstream of model training are invisible at the model level and typically surface only after production deployment reveals systematic behavioral gaps.
  • MLOps pipelines that automate retraining, validation, gating, and deployment require data engineering infrastructure that most organizations have not yet built to the required standard.

What Makes AI Data Engineering Different

The Difference Between Analytics and AI Pipeline Requirements

Analytics pipelines serve human analysts who interpret outputs and apply judgment before acting. AI pipelines serve models that act directly on their inputs. The tolerance for inconsistency, latency, and data quality gaps is fundamentally different. An analyst can recognize a suspicious data point and discount it. A model will train on it or run inference against it without any equivalent check, and the error propagates downstream until it surfaces as a model behavior problem.

AI pipelines also need to handle data across two distinct runtime contexts: training and serving. A feature computed one way during training and a slightly different way during serving produces a distribution shift that degrades model performance in ways that are difficult to diagnose. Getting this consistency right is a data engineering problem, not a modeling problem, and it requires explicit engineering investment in feature stores, schema versioning, and pipeline monitoring.

The Full Data Lifecycle an AI Pipeline Must Support

A production AI data pipeline covers six stages:

  • Raw data ingestion from multiple source systems with different schemas, latencies, and reliability characteristics.
  • Cleaning and validation to detect quality problems before they reach training.
  • Feature engineering and transformation applied consistently across training and serving.
  • Versioned dataset management, so that any model can be reproduced from the exact training data that produced it.
  • Continuous data monitoring to detect distribution shift in incoming data.
  • Retraining triggers that initiate new model training when monitoring signals indicate degradation.

Data orchestration for AI at scale covers the architectural patterns that connect these stages into a coherent pipeline that can operate at the volume and reliability that production AI programs require.

Why Most Existing Data Infrastructure Is Not Ready

The typical enterprise data infrastructure was built to serve business intelligence and reporting workloads. It was designed for batch processing, human-readable schema conventions, and query-optimized storage formats. AI workloads require column-consistent, numerically normalized, schema-stable data served at high throughput for training jobs and at low latency for real-time inference. The transformation from a reporting-optimized infrastructure to an AI-ready one is not a configuration change. It is a substantive re-engineering effort that takes longer and costs more than most AI programs budget for at inception.

Training-Serving Skew: The Most Expensive Pipeline Failure

What Training-Serving Skew Is and Why It Is Systematic

Training-serving skew occurs when the data transformation logic applied to features during model training differs from the logic applied to the same features at inference time. The differences may be small: a different handling of null values, a slightly different normalization formula, or a timestamp rounding convention that diverges by milliseconds. Their effect on model behavior can nonetheless be significant. The model learned a relationship between features and outputs as computed at training time. At inference, it receives features as computed by a different code path, and the relationship it learned no longer holds precisely.
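
As a concrete illustration, the sketch below shows the same logical feature computed by two hypothetical code paths, one batch (training) and one online (serving). All names, imputation choices, and constants are invented for the example; the point is only that each path is individually reasonable while the pair produces skew.

```python
import math

# Training path (batch): nulls imputed with the training-set mean,
# then z-score normalized. All constants here are hypothetical.
def transaction_amount_training(raw_amount, train_mean=42.0, train_std=17.0):
    value = raw_amount if raw_amount is not None else train_mean
    return (value - train_mean) / train_std

# Serving path (online): nulls imputed with zero, then log-scaled.
# Same logical feature, different transformation.
def transaction_amount_serving(raw_amount):
    value = raw_amount if raw_amount is not None else 0.0
    return math.log1p(value)

# The same raw input yields different feature values on each path:
print(transaction_amount_training(100.0))  # ~3.41
print(transaction_amount_serving(100.0))   # ~4.62
```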

Training-serving skew is systematic rather than random because the two code paths are typically maintained by different teams, using different tools, under different operational pressures. The training pipeline runs in a batch compute environment managed by a data science team. The inference pipeline runs in a production serving system managed by an engineering team. When these teams do not share feature computation code and do not test for consistency across the boundary, skew accumulates silently until a model performance audit reveals the gap.

Feature Stores as the Engineering Solution

Feature stores address training-serving skew by centralizing feature computation logic in a single location that serves both training jobs and inference endpoints. When a feature is defined once and computed from the same code path regardless of whether it is being served to a training job or a live inference request, the skew disappears by construction. Feature stores also provide point-in-time correct feature lookup for training, ensuring that the feature values used to train a model on a historical example reflect what those features would have looked like at the time of the example, not their current values. This prevents data leakage from future information contaminating training labels. AI data preparation services include feature consistency auditing as part of the pipeline validation process, identifying training-serving skew before it reaches production.
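
A minimal sketch of point-in-time correct lookup, using pandas merge_asof as a stand-in for what a feature store does internally. The tables, column names, and values are all hypothetical.

```python
import pandas as pd

# Hypothetical feature log: one row per (entity, timestamp) feature update.
feature_log = pd.DataFrame({
    "customer_id": [1, 1, 1, 2],
    "updated_at": pd.to_datetime(["2024-01-01", "2024-02-01", "2024-03-01", "2024-01-15"]),
    "avg_order_value": [50.0, 60.0, 75.0, 20.0],
}).sort_values("updated_at")

# Training examples with the timestamps at which their labels were observed.
examples = pd.DataFrame({
    "customer_id": [1, 2],
    "label_time": pd.to_datetime(["2024-02-15", "2024-02-15"]),
}).sort_values("label_time")

# merge_asof performs a point-in-time join: each example gets the latest
# feature value known *before* its label time, never a future value.
training_set = pd.merge_asof(
    examples, feature_log,
    left_on="label_time", right_on="updated_at",
    by="customer_id", direction="backward",
)
print(training_set)  # customer 1 gets 60.0 (the Feb 1 value), not the current 75.0
```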

Data Quality in AI Pipelines: A Different Standard

Why AI Pipelines Need Automated Quality Gating

Data quality problems that would produce a visible anomaly in a reporting dashboard and be caught before publication can pass through to an AI training job without triggering any alert. The model simply trains on the degraded data. If the quality problem is systematic, such as a sensor malfunction producing systematically biased readings for a week, the model learns the bias. If the quality problem is subtle, such as a schema change in a source system that shifts the distribution of a feature, the model learns the shifted distribution. 

In both cases, the quality problem only becomes visible after the trained model encounters data that does not match its training distribution in production. Automated data quality gating, where pipeline stages validate incoming data against defined statistical expectations before allowing it to proceed to training, is the engineering control that prevents these failures. Data collection and curation services that include automated quality validation checkpoints treat data quality as a pipeline engineering concern, not a post-hoc annotation review.
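
A minimal sketch of such a quality gate in Python, assuming expectations derived from a trusted reference window; the column name and thresholds are hypothetical. Production systems typically express these expectations in a dedicated validation framework, but the control logic is the same: validate, then block or proceed.

```python
import pandas as pd

# Hypothetical expectations for one feature, derived from a reference window.
EXPECTATIONS = {
    "sensor_reading": {
        "min": 0.0, "max": 120.0, "max_null_rate": 0.01, "mean_range": (40.0, 60.0),
    },
}

def quality_gate(batch: pd.DataFrame) -> list:
    """Return a list of violations; an empty list means the batch may proceed."""
    violations = []
    for column, exp in EXPECTATIONS.items():
        series = batch[column]
        null_rate = series.isna().mean()
        if null_rate > exp["max_null_rate"]:
            violations.append(f"{column}: null rate {null_rate:.1%} exceeds limit")
        observed = series.dropna()
        if ((observed < exp["min"]) | (observed > exp["max"])).any():
            violations.append(f"{column}: values outside [{exp['min']}, {exp['max']}]")
        lo, hi = exp["mean_range"]
        if not lo <= observed.mean() <= hi:
            violations.append(f"{column}: mean {observed.mean():.1f} outside expected range")
    return violations

# A batch with a systematic bias (e.g., a malfunctioning sensor) fails the gate:
batch = pd.DataFrame({"sensor_reading": [95.0, 101.0, 98.0, 99.0, None]})
problems = quality_gate(batch)
if problems:
    raise RuntimeError(f"Blocking batch from training: {problems}")
```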

Schema Evolution and Backward Compatibility

Source systems change. A database column gets renamed, a categorical variable gains a new level, or a numeric field changes its unit of measurement. In an analytics pipeline, these changes produce visible query errors that prompt immediate investigation. In an AI training pipeline, they often produce silent degradation: the pipeline continues to run, the data continues to flow, and the trained model's performance erodes because the semantic meaning of a feature has changed without the pipeline detecting it. Schema validation at ingestion, automated backward-compatibility testing, and versioned schema management are the engineering practices that prevent schema evolution from silently undermining model quality.
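
A sketch of schema validation at ingestion, assuming a versioned contract stored alongside the pipeline; the contract contents are hypothetical. Note that a new categorical level is surfaced as a reviewable event rather than a hard error, since it may be a legitimate change that the contract owner must approve.

```python
import pandas as pd

# Hypothetical versioned ingestion contract.
EXPECTED_SCHEMA = {"order_id": "int64", "amount_usd": "float64", "status": "object"}
KNOWN_LEVELS = {"status": {"placed", "shipped", "delivered", "returned"}}

def validate_schema(batch: pd.DataFrame) -> list:
    problems = []
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in batch.columns:
            problems.append(f"missing column: {column}")
        elif str(batch[column].dtype) != dtype:
            problems.append(f"{column}: dtype {batch[column].dtype}, expected {dtype}")
    for column, allowed in KNOWN_LEVELS.items():
        if column in batch.columns:
            novel = set(batch[column].dropna().unique()) - allowed
            if novel:
                # New levels become an explicit decision rather than a silent
                # change in the semantics of the feature.
                problems.append(f"{column}: unexpected levels {novel}")
    return problems
```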

Data Lineage for Debugging and Compliance

When a model fails in production, diagnosing the cause requires tracing the failure back through the pipeline to its source. Without data lineage, this investigation is time-consuming and often inconclusive. With lineage, every piece of data in the training set can be traced to its source system, its transformation history, and every pipeline stage it passed through. Lineage is also a regulatory requirement in an increasing number of jurisdictions. The EU AI Act’s documentation requirements for high-risk AI systems effectively mandate that organizations can demonstrate the provenance and processing history of their training data. Financial data services for AI operate under the strictest data lineage requirements of any sector, and the pipeline engineering practices developed for financial AI provide a useful template for any program where regulatory traceability is a deployment requirement.

MLOps: Where Data Engineering and Model Operations Meet

The Data Engineering Foundation That MLOps Requires

MLOps, the discipline of operating machine learning systems reliably in production, is often described primarily as a model management concern: experiment tracking, model versioning, deployment automation, and performance monitoring. All of these capabilities rest on a data engineering foundation. Experiment tracking is only reproducible if the training data for each experiment is versioned and retrievable. Automated retraining requires a pipeline that can deliver a new, validated training dataset on a defined schedule or trigger. Performance monitoring requires continuous data quality monitoring that can distinguish model drift from data distribution shift. Without the underlying data engineering, MLOps tooling adds ceremony without delivering reliability.

Continuous Training and Its Data Requirements

Continuous training, the practice of periodically retraining models on new data to keep them aligned with the current data distribution, is the operational pattern that prevents model performance from degrading as the world changes. It requires a data pipeline that can deliver a fresh, validated, properly formatted training dataset on a defined schedule without manual intervention. Most organizations that attempt continuous training discover that their data infrastructure was not designed for unattended operation at the required reliability level. Failures in upstream source systems, unexpected schema changes, and data quality degradation all interrupt the training cycle in ways that require engineering attention to resolve.

Monitoring Data Drift vs. Model Drift

Production AI systems experience two distinct categories of performance degradation. Model drift occurs when the relationship between input features and the target variable changes, meaning the model’s learned function is no longer accurate even for inputs that match the training distribution. Data drift occurs when the distribution of incoming data changes so that inputs no longer resemble the training distribution, even if the underlying relationship has not changed. Distinguishing between these two failure modes requires monitoring infrastructure that tracks both input data statistics and model output statistics continuously. RAG systems face an additional variant of this problem where the knowledge base that retrieval components draw from becomes stale as the world changes, requiring separate monitoring of retrieval quality alongside model output quality.
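
One common way to quantify data drift is the population stability index (PSI) computed per input feature against a reference sample from training time. The sketch below is a minimal Python version; the bin count and the alert thresholds in the comment are conventional rules of thumb, not standards.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """PSI between a reference (training-time) sample and current production data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full range
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)       # avoid log(0) and division by zero
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
train_sample = rng.normal(50, 10, 10_000)          # training-time distribution
prod_sample = rng.normal(55, 12, 10_000)           # shifted production inputs
print(population_stability_index(train_sample, prod_sample))  # flags data drift
```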

Getting the Architecture Right for the Use Case

Batch Pipelines and When They Suffice

Batch data pipelines process data in scheduled runs, computing features and updating training datasets on a defined cadence. For use cases where the data does not change faster than the batch frequency and where inference does not require sub-second feature freshness, batch pipelines are simpler, cheaper, and more reliable than streaming alternatives. Most model training workloads are appropriately served by batch pipelines. The problem arises when organizations with batch pipelines deploy models to inference use cases that require real-time feature freshness and attempt to bridge the gap with stale precomputed features.

Streaming Pipelines for Real-Time AI Applications

Real-time AI applications, including fraud detection, dynamic pricing, content recommendation, and agentic AI systems that act on live data, require streaming data pipelines that compute features continuously and deliver them at inference latency. The engineering complexity of streaming pipelines is substantially higher than batch: event ordering, late-arriving data, exactly-once processing semantics, and backpressure handling are all engineering problems with no equivalent in batch processing. 

Organizations that attempt to build streaming pipelines without the requisite engineering expertise consistently underestimate the development and operational costs. Agentic AI deployments that operate on live data streams are among the most demanding data engineering contexts, as they require streaming pipelines that deliver consistent, low-latency features to inference endpoints while maintaining the quality standards that model performance depends on.

Hybrid Architectures and the Lambda Pattern

Many production AI systems require a hybrid approach: batch pipelines for model training and for features that can tolerate higher latency, combined with streaming pipelines for features that require real-time freshness. The lambda architecture pattern, which maintains separate batch and streaming processing paths that are reconciled into a unified serving layer, is one established approach to this problem. Its complexity is real: maintaining two code paths for the same logical computation introduces the same kind of skew risk that motivates feature stores, and organizations implementing lambda architectures need explicit engineering controls to ensure consistency across the batch and streaming paths.
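
One practical control is a scheduled reconciliation job that computes the same logical feature through both paths for a sample of entities and alerts on divergence. A minimal sketch, with hypothetical inputs:

```python
def reconcile_paths(batch_features: dict, stream_features: dict, tol: float = 1e-6):
    """Compare feature values computed by the batch and streaming paths for
    the same entities; any divergence beyond `tol` is batch/stream skew."""
    mismatches = []
    for entity, batch_value in batch_features.items():
        stream_value = stream_features.get(entity)
        if stream_value is None or abs(stream_value - batch_value) > tol:
            mismatches.append((entity, batch_value, stream_value))
    return mismatches

# Sampled nightly: the streaming path disagrees on entity "b".
print(reconcile_paths({"a": 1.50, "b": 2.00}, {"a": 1.50, "b": 2.25}))
# [('b', 2.0, 2.25)]
```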

Building Data Engineering Capability for AI

The Skills Gap Between Analytics and AI Data Engineering

Data engineers with strong analytics backgrounds are well-positioned to develop the additional competencies that AI data engineering requires, but the transition is not automatic. Feature engineering for machine learning, understanding of training-serving consistency requirements, experience with model performance monitoring, and familiarity with MLOps tooling are all skills that analytics-focused data engineers typically need to develop deliberately. Organizations that recognize this skills gap and invest in structured upskilling consistently close it faster than those that assume existing analytics engineering capability transfers directly to AI contexts.

The Organizational Location of Data Engineering for AI

Where data engineering for AI sits organizationally has practical implications for how effectively it supports AI programs. Data engineering embedded within ML teams has strong contextual knowledge of model requirements but may lack the operational and infrastructure expertise of a dedicated data platform team. Centralized data platform teams have broader infrastructure expertise but may lack the AI-specific context needed to prioritize AI pipeline requirements appropriately. The most effective organizational arrangements typically involve dedicated collaboration structures between ML teams and data platform teams, with shared ownership of the AI data pipeline and explicit interfaces between the two.

Making the Business Case for Data Engineering Investment

Data engineering investment is often underfunded because its value is difficult to quantify before a data quality failure reveals its absence. The most effective approach to making the business case is to connect data engineering infrastructure directly to the outcomes that senior stakeholders care about: time to deploy a new AI model, cost of model retraining cycles, time to diagnose and resolve a production model failure, and regulatory risk exposure from inadequate data documentation. Each of these outcomes has a measurable improvement trajectory from investment in AI data engineering that can be estimated from program history or industry benchmarks. Data engineering for AI is not overhead on the model development program. It is the infrastructure that determines whether model development investment reaches production.

How Digital Divide Data Can Help

Digital Divide Data provides data engineering and AI data preparation services designed around the specific requirements of production AI programs, from pipeline architecture through data quality validation, feature consistency management, and compliance documentation.

The data engineering for AI services covers pipeline design and implementation for both batch and streaming AI workloads, with automated quality gating, schema validation, and data lineage documentation built into the pipeline architecture rather than added as optional audits.

The AI data preparation services address the upstream data quality and feature engineering requirements that determine training dataset quality, including distribution coverage analysis, feature consistency validation, and training-serving skew detection.

For programs with regulatory documentation requirements, the data collection and curation services include provenance tracking and transformation documentation. Financial data services for AI apply financial-grade lineage and access control standards to AI training pipelines for programs operating under the most demanding regulatory frameworks.

Build the data engineering foundation that makes AI programs deliver in production. Talk to an expert!

Conclusion

Data engineering has shifted from a support function to a core determinant of AI program success. The organizations that deploy reliable, production-grade AI systems at scale are not those with the most sophisticated models. They are those who have built the data infrastructure to supply those models with consistent, high-quality, well-documented data across training and serving contexts. The shift requires deliberate investment in skills, tooling, and organizational structures that most programs are still in the early stages of making. The programs that make that investment now will compound the returns as they deploy more models, retrain more frequently, and face increasing regulatory scrutiny of their data practices.

The practical starting point is an honest audit of where the current data infrastructure diverges from AI pipeline requirements, specifically on training-serving consistency, automated quality gating, data lineage documentation, and continuous monitoring. Each gap has a known engineering solution. 

The cost of addressing those gaps before the first production deployment is a fraction of the cost of addressing them after a model failure reveals their existence. AI data preparation built to production standards from the start is the investment that makes every subsequent model faster to deploy and more reliable in operation.

References

Pancini, M., Camilli, M., Quattrocchi, G., & Tamburri, D. A. (2025). Engineering MLOps pipelines with data quality: A case study on tabular datasets in Kaggle. Journal of Software: Evolution and Process, 37(9), e70044. https://doi.org/10.1002/smr.70044

Minh, T. Q., Lan, N. T., Phuong, L. T., Cuong, N. C., & Tam, D. C. (2025). Building scalable MLOps pipelines with DevOps principles and open-source tools for AI deployment. American Journal of Artificial Intelligence, 9(2), 297-309. https://doi.org/10.11648/j.ajai.20250902.29

European Parliament and the Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Kreuzberger, D., Kühl, N., & Hirschl, S. (2023). Machine learning operations (MLOps): Overview, definition, and architecture. IEEE Access, 11, 31866-31879. https://doi.org/10.1109/ACCESS.2023.3262138

Frequently Asked Questions

Q1. What is the difference between data engineering for analytics and data engineering for AI?

Analytics pipelines optimize for query performance and reporting latency, serving human analysts who apply judgment to outputs. AI pipelines must additionally ensure feature consistency between training and serving environments, support continuous retraining, and produce data lineage documentation that analytics pipelines do not require.

Q2. What is training-serving skew, and why does it degrade model performance?

Training-serving skew occurs when the feature-computation logic differs between training and inference, causing models to receive inputs at inference that differ statistically from those on which they were trained, degrading prediction accuracy in ways that are difficult to diagnose without explicit consistency monitoring.

Q3. Why is data quality gating important in AI pipelines?

Data quality problems upstream of model training are invisible at the model level and do not trigger pipeline errors, so models silently learn from degraded data. Automated quality gating blocks problematic data from proceeding to training, preventing the problem from propagating into model behavior.

Q4. When does an AI application require a streaming data pipeline rather than a batch pipeline?

Streaming pipelines are required when the application depends on features that must reflect the current state of the world at inference time, such as fraud detection on live transactions, real-time recommendation systems, or agentic AI systems acting on live data streams.

Human-in-the-Loop

When to Use Human-in-the-Loop vs. Full Automation for Gen AI

The framing of human-in-the-loop versus full automation is itself slightly misleading, because the decision is rarely binary. Most production GenAI systems operate on a spectrum, applying automated processing to high-confidence, low-risk outputs and routing uncertain, high-stakes, or policy-sensitive outputs to human review. The design question is where on that spectrum each output category belongs, which thresholds trigger human review, and what the human reviewer is actually empowered to do when they enter the loop.

This blog examines how to make that decision systematically for generative AI programs, covering the dimensions that distinguish tasks suited to automation from those requiring human judgment, and how human involvement applies differently across the GenAI development lifecycle versus the inference pipeline. Human preference optimization and trust and safety solutions are the two GenAI capabilities where human oversight most directly determines whether a deployed system is trustworthy.

Key Takeaways

  • Human-in-the-loop (HITL) and full automation are not binary opposites; most production GenAI systems use a spectrum based on output risk, confidence, and regulatory context.
  • HITL is essential at three lifecycle stages: preference data collection for RLHF, model evaluation for subjective quality dimensions, and safety boundary review at inference.
  • Confidence-based routing, directing low-confidence outputs to human review, only works if the model’s stated confidence is empirically validated to correlate with its actual accuracy.
  • Active learning concentrates human annotation effort on the outputs that most improve model performance, making HITL economically viable at scale.

The Fundamental Decision Framework

Four Questions That Determine Where Humans Belong

Before assigning any GenAI task to full automation or to an HITL workflow, four questions need to be answered. 

First: what is the cost of a wrong output? If errors are low-stakes, easily correctable, and reversible, the calculus favors automation. If errors are consequential, hard to detect downstream, or irreversible, the calculus favors human review. 

Second: how well-defined is correctness for this task? Tasks with verifiable correct answers, like code that either passes tests or does not, can be automated more reliably than tasks where quality requires contextual judgment.

Third: how consistent is the model’s performance across the full distribution of inputs the task will produce? A model that performs well on average but fails unpredictably on specific input types needs human oversight targeted at those types, not uniform automation across the board. 

Fourth: does a regulatory or compliance framework impose human accountability requirements for this decision type? In regulated domains, the answer to this question can override the purely technical assessment of whether automation is capable enough.

The Spectrum Between Full Automation and Full Human Review

Most production systems implement neither extreme. Between full automation and full human review sit intermediate designs: automated processing with sampled post-hoc audit, confidence-based routing of uncertain outputs to review, and mandatory human approval for specific high-stakes output categories. Each point on this spectrum makes a different trade-off between throughput, cost, consistency, and the risk of undetected errors. The right point differs by task category, even within a single deployment. Treating the decision as binary and applying the same oversight level to every output type wastes reviewer capacity on low-risk outputs while under-protecting high-risk ones.

Distinguishing Human-in-the-Loop from Human-on-the-Loop

In a HITL design, the human actively participates in processing: reviewing, correcting, or approving outputs before they are acted on. In a human-on-the-loop design, automated processing runs continuously, and humans set policies and intervene when aggregate metrics signal a problem. Human-on-the-loop is appropriate for lower-stakes automation where real-time individual review is impractical. Human-in-the-loop is appropriate where individual output quality matters enough to justify the latency and cost of per-item review. Agentic AI systems that take real-world actions, covered in depth in building trustworthy agentic AI with human oversight, require careful consideration of which action categories trigger each pattern.

Human Involvement Across the GenAI Development Lifecycle

Data Collection and Annotation

In the data development phase, humans collect, curate, and annotate the examples that teach the model what good behavior looks like. Automation can assist at each stage, but for subjective quality dimensions, the human signal sets the ceiling of what the model can learn. Building generative AI datasets with human-in-the-loop workflows covers how annotation workflows direct human effort to the examples that most improve model quality rather than applying uniform review across the full corpus.

Preference Data and Alignment

Reinforcement learning from human feedback (RLHF), which trains models on human comparisons of candidate outputs, is the primary mechanism for aligning generative models with quality, safety, and helpfulness standards. The quality of this preference data depends critically on the representativeness of the annotator population, the specificity of evaluation criteria, and the consistency of annotation guidelines across reviewers. Poor preference data produces aligned-seeming models that optimize for superficial quality signals rather than genuine quality. Human preference optimization at the required quality level is itself a discipline requiring structured workflows, calibrated annotators, and systematic inter-annotator agreement measurement.

Human Judgment as the Evaluation Standard

Automated metrics capture some quality dimensions and miss others. For output dimensions that require contextual judgment, human evaluation is the primary signal. Model evaluation services for production GenAI programs combine automated metrics for the dimensions they can measure reliably with structured human evaluation for the dimensions they cannot, producing an evaluation framework that actually predicts production performance.

Criteria for Choosing Automation in the Inference Pipeline

When Automation Is the Right Default

Common GenAI tasks suited to automation include content classification where model confidence is high; structured data extraction from documents with a well-defined schema; code completion suggestions where tests verify correctness; and first-pass moderation of clearly violating content where the violation is unambiguous. These tasks share the property that outputs are either verifiably correct or easily triaged by downstream processes.

Confidence Thresholds as the Routing Mechanism

Confidence-based routing automates outputs whose model confidence exceeds a calibrated threshold and directs the rest to human review. The threshold calibration determines the economics of the system: too high, and the review queue contains many outputs that would have been correct, wasting reviewer capacity; too low, and errors pass through at a rate that undermines the purpose of automation. A miscalibrated model that confidently produces incorrect outputs, while routing correct outputs to human review as uncertain, is worse than either full automation or full human review. Calibration validation is a prerequisite for deploying confidence-based routing in any context where error consequences are significant.
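
A sketch of the two pieces this requires: an offline reliability check that stated confidence tracks empirical accuracy, and the routing rule itself. The bin width and threshold are illustrative assumptions, not recommendations.

```python
import numpy as np

def reliability_report(confidences, correct, bin_width=0.1):
    """Offline calibration check: within each confidence bin, empirical accuracy
    should roughly match the bin. Large gaps make confidence useless for routing."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    report = []
    for lo in np.arange(0.0, 1.0, bin_width):
        mask = (confidences >= lo) & (confidences < lo + bin_width)
        if mask.any():
            report.append((round(lo, 2), float(correct[mask].mean()), int(mask.sum())))
    return report  # (bin start, empirical accuracy, sample count)

def route(output, confidence, threshold=0.85):
    """Auto-approve confident outputs; queue the rest for human review."""
    return ("human_review", output) if confidence < threshold else ("auto_approve", output)
```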

Criteria for Requiring Human Oversight in the Inference Pipeline

High-Stakes, Irreversible, or Legally Consequential Outputs

Medical triage that directs patient care, legal documents filed on behalf of clients, loan decisions that affect credit history, and communications sent to vulnerable users under stress are all outputs where the cost of model error in specific cases exceeds the efficiency benefit of automating those cases. The model’s average accuracy across the distribution does not determine the acceptability of errors in the highest-stakes subset.

Ambiguous, Novel, or Out-of-Distribution Inputs

A well-designed inference pipeline identifies signals of novelty or ambiguity, low model confidence, unusual input structure, topic categories underrepresented in training, or user signals of sensitive context, and routes those inputs to human review. Trust and safety solutions that monitor the output stream for these signals continuously route potentially harmful or policy-violating outputs to human review before they are served.

Safety, Policy, and Ethical Judgment Calls

A model that has learned patterns for identifying policy violations will exhibit systematic blind spots at the policy boundary, and those blind spots are exactly where human judgment is most needed. Automating the obvious cases while routing boundary cases to human review is not a limitation of the automation. It is the correct architecture for any deployment where policy enforcement has real consequences.

Changing the Economics of Human Annotation

Why Uniform Human Review Is Inefficient

In a system where every output is reviewed by a human, the cost of human oversight scales linearly with volume. Most reviews confirm what was already reliable, diluting the human signal with cases that need no correction and burying it in reviewer fatigue. The improvements to model performance come from the small fraction of uncertain or ambiguous outputs that most annotation programs review at the same rate as everything else.

Active Learning as the Solution

For preference data collection in RLHF, active learning selects the comparison pairs where the model’s behavior is most uncertain or most in conflict with human preferences, focusing annotator effort on the feedback that will most change model behavior. The result is a faster model improvement per annotation hour than uniform sampling produces. Data collection and curation services that integrate active learning into annotation workflow design deliver better model improvement per annotation dollar than uniform-sampling approaches.
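
A minimal uncertainty-sampling sketch. The scoring function here is predictive entropy over class probabilities; for RLHF comparison pairs the analogous score might be reward-model disagreement. Both framings are assumptions of this illustration rather than a fixed recipe.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a model's class distribution: high means the model is unsure."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def select_for_annotation(pool, budget=100):
    """pool: list of (item_id, class_probabilities). Returns the `budget` items
    the model is least certain about, which annotators label first."""
    scored = sorted(pool, key=lambda item: predictive_entropy(item[1]), reverse=True)
    return [item_id for item_id, _ in scored[:budget]]

pool = [("easy", [0.98, 0.02]), ("hard", [0.51, 0.49]), ("medium", [0.80, 0.20])]
print(select_for_annotation(pool, budget=2))  # ['hard', 'medium']
```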

The Feedback Loop Between Deployment and Training

Corrections made by human reviewers at inference time are, in principle, the highest-value training data a program can collect: each one is a labeled example of a case the current model got wrong. This flywheel only operates if the human review workflow is designed to capture corrections in a format usable for training, and if the pipeline connects production corrections back to the training data process. Systems that treat human review as a separate customer service function, disconnected from the engineering organization, rarely close this loop and miss the model improvement opportunity that deployment-time human feedback provides.

How Digital Divide Data Can Help

Digital Divide Data provides human-in-the-loop services across the GenAI development lifecycle and the inference pipeline, with workflows designed to direct human effort to the tasks and output categories where it produces the greatest improvement in model quality and safety.

For development-phase human oversight, human preference optimization services provide structured preference annotation with calibrated reviewers, explicit inter-annotator agreement measurement, and protocols designed to produce the consistent preference signal that RLHF and DPO training requires. Active learning integration concentrates reviewer effort on the comparison pairs that most inform model behavior.

For deployment-phase oversight, trust and safety solutions provide output monitoring, safety boundary routing, and human review workflows that keep GenAI systems aligned with policy and regulatory requirements as output volume scales. Review interfaces are designed to minimize automation bias and support substantive reviewer judgment rather than nominal confirmation.

For programs navigating regulatory requirements, model evaluation services provide the independent human evaluation of model outputs that regulators require as evidence of meaningful oversight, documented with the audit trails that compliance frameworks mandate. Generative AI solutions across the full lifecycle are structured around the principle that human oversight is most valuable when systematically targeted rather than uniformly applied.

Design human-in-the-loop workflows that actually improve model quality where it matters. Talk to an expert.

Conclusion

The choice between human-in-the-loop and full automation for a GenAI system is not a one-time architectural decision. It is an ongoing calibration that should shift as model performance improves, as the production input distribution evolves, and as the program’s understanding of where the model fails becomes more precise. The programs that get this calibration right treat HITL design as a discipline, with explicit criteria for routing decisions, measured assessment of where human judgment adds value versus where it adds only variability, and active feedback loops that connect production corrections back to training data pipelines.

As GenAI systems take on more consequential tasks and as regulators impose more specific oversight requirements, the quality of HITL design becomes a direct determinant of whether programs can scale responsibly. A system where human oversight is nominal, where reviewers are overwhelmed, and corrections are inconsistent, provides neither the safety benefits that justify its cost nor the regulatory compliance it is designed to demonstrate. 

Investing in the workflow design, reviewer calibration, and active learning infrastructure that makes human oversight substantive is what separates programs that scale safely from those that scale their error rates alongside their output volume.

References

European Parliament and the Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). NIST. https://doi.org/10.6028/NIST.AI.100-1

Frequently Asked Questions

Q1. What is the difference between human-in-the-loop and human-on-the-loop AI?

Human-in-the-loop places a human as a checkpoint within the pipeline, reviewing or approving individual outputs before they are used. Human-on-the-loop runs automation continuously while humans monitor aggregate system behavior and intervene at the policy level rather than on individual outputs.

Q2. How do you decide which outputs to route to human review in a high-volume GenAI system?

The most practical mechanism is confidence-based routing — directing outputs below a calibrated threshold to human review — but this requires empirical validation that the model’s stated confidence actually correlates with its accuracy before it is used as a routing signal.

Q3. What is automation bias, and why does it undermine human-in-the-loop oversight?

Automation bias is the tendency for reviewers to defer to automated outputs without meaningful assessment, particularly under high volume and time pressure, resulting in nominal oversight where the errors HITL was designed to catch pass through undetected.

Q4. Does active learning reduce the cost of human-in-the-loop annotation for GenAI?

Yes. By identifying which examples would be most informative to annotate, active learning concentrates human effort on the outputs that most improve model performance, producing faster capability gains per annotation hour than uniform sampling of the output stream.

Data Annotation

What 99.5% Data Annotation Accuracy Actually Means in Production

The gap between a stated accuracy figure and production data quality is not primarily a matter of vendor misrepresentation. It is a matter of measurement. Accuracy as reported in annotation contracts is typically calculated across the full dataset, on all annotation tasks, including the straightforward cases that every annotator handles correctly. 

The cases that fail models are not the straightforward ones. They are the edge cases, the ambiguous inputs, the rare categories, and the boundary conditions that annotation quality assurance processes systematically underweight because they are a small fraction of the total volume.

This blog examines what data annotation accuracy actually means in production, and what QA practices produce accuracy that predicts production performance. 

The Distribution of Errors Is the Real Quality Signal

Aggregate accuracy figures obscure the distribution of errors across the annotation task space. The quality metric that actually predicts model performance is category-level accuracy, measured separately for each object class, scenario type, or label category in the dataset. 

A dataset that achieves 99.8% accuracy on the common categories and 85% accuracy on the rare ones has a misleadingly high headline figure. The right QA framework measures accuracy at the level of granularity that matches the model’s training objectives. Why high-quality annotation defines computer vision model performance covers the specific ways annotation errors compound in model training, particularly when those errors concentrate in the tail of the data distribution.
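
The arithmetic is easy to demonstrate. The sketch below, with invented counts matching the example above, shows how a per-category breakdown exposes what the headline figure hides:

```python
from collections import defaultdict

def category_accuracy(records):
    """records: iterable of (category, is_correct). Returns the aggregate
    figure alongside the per-category accuracies it averages over."""
    totals, correct = defaultdict(int), defaultdict(int)
    for category, ok in records:
        totals[category] += 1
        correct[category] += int(ok)
    per_category = {c: correct[c] / totals[c] for c in totals}
    aggregate = sum(correct.values()) / sum(totals.values())
    return aggregate, per_category

# 9,900 common-category labels at 99.8% vs. 100 rare-category labels at 85%:
records = [("common", i >= 20) for i in range(9900)] + \
          [("rare", i >= 15) for i in range(100)]
aggregate, per_cat = category_accuracy(records)
print(f"{aggregate:.2%}")   # 99.65% headline accuracy
print(per_cat)              # {'common': ~0.998, 'rare': 0.85}
```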

Task Complexity and What Accuracy Actually Measures

Object Detection vs. Semantic Segmentation vs. Attribute Classification

Annotation accuracy means different things for different task types, and a 99.5% accuracy figure for one type is not equivalent to 99.5% for another. Bounding box object detection tolerates some positional imprecision without significantly affecting model training. Semantic segmentation requires pixel-level precision; an accuracy figure that averages across all pixels will look high because background pixels are easy to label correctly, while the boundary region between objects, which is where the model needs the most precision, contributes a small fraction of total pixels. 

Attribute classification of object states, whether a traffic light is green or red, whether a pedestrian is looking at the road or away from it, has direct safety implications in ADAS training data, where a single category of attribute error can produce systematic model failures in specific driving scenarios.

The Subjectivity Problem in Complex Annotation Tasks

Many production annotation tasks require judgment calls that reasonable annotators make differently. Sentiment classification of ambiguous text. Severity grading of partially occluded road hazards. Boundary placement on objects with indistinct edges. For these tasks, inter-annotator agreement, not individual accuracy against a gold standard, is the more meaningful quality metric. Two annotators who independently produce slightly different but equally valid segmentation boundaries are not making errors; they are expressing legitimate variation in the task.

When inter-annotator agreement is low, and a gold standard is imposed by adjudication, the agreed label is often not more accurate than either annotator’s judgment. It is just more consistent. Consistency matters for model training because conflicting labels on similar examples teach the model that the decision boundary is arbitrary. Agreement measurement, calibration exercises, and adjudication workflows are the practical tools for managing this in annotation programs, and they matter more than a stated accuracy figure for subjective task types.

Temporal and Spatial Precision in Video and 3D Annotation

3D LiDAR annotation and video annotation introduce precision requirements that aggregate accuracy metrics do not capture well. A bounding box placed two frames late on an object that is decelerating teaches the model a different relationship between visual features and motion dynamics than the correctly timed annotation. 

A 3D bounding box that is correctly classified but slightly undersized systematically underestimates object dimensions, producing models that misjudge proximity calculations in autonomous driving. For 3D LiDAR annotation in safety-critical applications, the precision specification of the annotation, not just its categorical accuracy, is the quality dimension that determines whether the model is trained to the standard the application requires.

Error Taxonomy in Production Data

Systematic vs. Random Errors

Random annotation errors are distributed across the dataset without a pattern. A model trained on data with random errors largely averages them out, because the correct pattern is consistently signaled by the majority of examples, and the errors are uncorrelated with any specific feature of the input. Systematic errors are the opposite: they are correlated with specific input features and consistently teach the model a wrong pattern for those features.

A systematic error might be: annotators consistently misclassifying motorcycles as bicycles in distant shots because the training guidelines were ambiguous about the size threshold. Or consistently under-labeling partially occluded pedestrians because the adjudication rule was interpreted to require full body visibility. Or applying inconsistent severity thresholds to road defects, depending on which annotator batch processed the examples. Systematic errors are invisible in aggregate accuracy figures and visible in production as model performance gaps on exactly the input types the errors affected.

Edge Cases and the Tail of the Distribution

Edge cases are scenarios that occur rarely in the training distribution but have an outsized impact on model performance. A pedestrian in a wheelchair. A partially obscured stop sign. A cyclist at night. These scenarios represent a small fraction of total training examples, so their annotation error rate has a negligible effect on aggregate accuracy figures. They are exactly the scenarios where models fail in deployment if the training data for those scenarios is incorrectly labeled. Human-in-the-loop computer vision for safety-critical systems specifically addresses the quality assurance approach that applies expert oversight to the rare, high-stakes scenarios that standard annotation workflows underweight.

Error Types in Automotive Perception Annotation

A multi-organization study involving European and UK automotive supply chain partners identified 18 recurring annotation error types in AI-enabled perception system development, organized across three dimensions: completeness errors such as attribute omission, missing edge cases, and selection bias; accuracy errors such as mislabeling, bounding box inaccuracies, and granularity mismatches; and consistency errors such as inter-annotator disagreement and ambiguous instruction interpretation.

The finding that these error types recur systematically across supply chain tiers, and that they propagate from annotated data through model training to system-level decisions, demonstrates that annotation quality is a lifecycle concern rather than a data preparation concern. The errors that emerge in multisensor fusion annotation, where the same object must be consistently labeled across camera, radar, and LiDAR inputs, span all three dimensions simultaneously and are among the most consequential for model reliability.

Domain-Specific Accuracy Requirements

Autonomous Driving: When Annotation Error Is a Safety Issue

In autonomous driving perception, annotation error is not a model quality issue in the abstract. It is a safety issue with direct consequences for system behavior at inference time. A missed pedestrian annotation in training data produces a model that is statistically less likely to detect pedestrians in similar scenarios in deployment. 

The standard for annotation accuracy in safety-critical autonomous driving components is not set by what is achievable in general annotation workflows. It is set by the safety requirements that the system must meet. ADAS data services require annotation accuracy standards that are tied to the ASIL classification of the function being trained, with the highest-integrity functions requiring the most rigorous QA processes and the most demanding error distribution requirements.

Healthcare AI: Accuracy Against Clinical Ground Truth

In medical imaging and clinical NLP, annotation accuracy is measured against clinical ground truth established by domain experts, not against a labeling team’s majority vote. A model trained on annotations where non-expert annotators applied clinical labels consistently but incorrectly has not learned the clinical concept. 

It has learned a proxy concept that correlates with the clinical label in the training distribution and diverges from it in the deployment distribution. Healthcare AI solutions require annotation workflows that incorporate clinical expert review at the quality assurance stage, not just at the guideline development stage, because the domain knowledge required to identify labeling errors is not accessible to non-clinical annotators reviewing annotations against guidelines alone.

NLP Tasks: When Subjectivity Is a Quality Dimension, Not a Defect

For natural language annotation tasks, the distinction between annotation error and legitimate annotator disagreement is a design choice rather than a factual determination. Sentiment classification, toxicity grading, and relevance assessment all contain a genuine subjective component where multiple labels are defensible for the same input. Programs that force consensus through adjudication and report the adjudicated label as ground truth may be reporting misleadingly high accuracy figures. 

The underlying variation in annotator judgments is a real property of the task, and models that treat it as noise to be eliminated will be systematically miscalibrated for inputs that humans consistently disagree about. Text annotation workflows that explicitly measure and preserve inter-annotator agreement distributions, rather than collapsing them to a single adjudicated label, produce training data that more accurately represents the ambiguity inherent in the task.

QA Frameworks That Produce Accuracy

Stratified QA Sampling Across Input Categories

The most consequential change to a standard QA process for production annotation programs is stratified sampling: drawing the QA review sample from each category separately rather than from the overall dataset, with over-representation of rare and high-stakes categories. A flat 5% QA sample across a dataset where one critical category represents 1% of examples produces approximately zero QA samples from that category. A stratified sample that ensures a minimum review rate of 10% for each category, regardless of its prevalence, surfaces error patterns in rare categories that flat sampling misses entirely.
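
A minimal sketch of the sampling logic, assuming items are already grouped by category; the high-stakes rate is an illustrative assumption beyond the 10% floor discussed above.

```python
import random

def stratified_qa_sample(items_by_category, min_rate=0.10, high_stakes_rate=0.25,
                         high_stakes=frozenset()):
    """Draw a QA sample per category: a floor of `min_rate` for every category,
    a higher rate for designated high-stakes categories, never zero items."""
    sample = []
    for category, items in items_by_category.items():
        rate = high_stakes_rate if category in high_stakes else min_rate
        k = max(1, round(rate * len(items)))      # at least one, even for rare classes
        sample.extend(random.sample(items, min(k, len(items))))
    return sample

corpus = {"common": list(range(9900)), "rare_critical": list(range(100))}
qa = stratified_qa_sample(corpus, high_stakes={"rare_critical"})
# ~990 reviews from the common category and 25 from the rare one, versus the
# ~5 that a flat 5% sample of the whole dataset would allocate to it.
```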

Gold Standards, Calibration, and Ongoing Monitoring

Gold standard datasets, pre-labeled examples with verified correct labels drawn from the full difficulty distribution of the annotation task, serve two quality assurance functions. At onboarding, they assess each annotator's capability before any production data is touched. During ongoing annotation, they are seeded into the production stream as a continuous calibration check: annotators and automated QA systems encounter gold standard examples without knowing they are being monitored, and performance on those examples signals the current state of label quality. This approach catches quality degradation before it accumulates across large annotation batches. Performance evaluation services that apply the same systematic quality monitoring logic to annotation output as to model output are providing a quality assurance architecture that reflects the production stakes of the annotation task.

Inter-Annotator Agreement as a Leading Indicator

Inter-annotator agreement measurement is a leading indicator of annotation quality problems, not a lagging one. When agreement on a specific category or scenario type drops below the calibrated threshold, it signals that the annotation guideline is insufficient for that category, that annotator calibration has drifted on that dimension, or that the category itself is inherently ambiguous and requires a policy decision about how to handle it. None of these problems is visible in aggregate accuracy figures until a model trained on the affected data shows the performance gap in production.

Running agreement measurement as a continuous process, not as a periodic audit, is what transforms it from a diagnostic tool into a preventive one. Agreement tracking identifies where quality problems are emerging before they contaminate large annotation batches, and it provides the specific category-level signal needed to target corrective annotation guidelines and retraining at the right examples.
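
Agreement is typically quantified with a chance-corrected statistic such as Cohen's kappa, computed per category on a rolling window. A minimal two-annotator sketch, with hypothetical labels and an illustrative alert threshold in the comments:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two annotators, corrected for
    the agreement expected by chance given each annotator's label frequencies."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Tracked per category on a rolling window; a sustained drop below a calibrated
# threshold (e.g., 0.8) for one category triggers a guideline review for that
# category rather than a dataset-wide audit.
print(cohens_kappa(["car", "bike", "car", "truck"],
                   ["car", "car", "car", "truck"]))  # ~0.56
```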

Accuracy Specifications That Actually Match Production Requirements

Writing Accuracy Requirements That Reflect Task Structure

Accuracy specifications that simply state a percentage without defining the measurement methodology, the sampling approach, the task categories covered, and the handling of edge cases produce a number that vendors can meet without delivering the quality the program requires. A well-formed accuracy specification defines the error metric separately for each major category in the dataset, specifies a minimum QA sample rate for each category, defines the gold standard against which accuracy is measured, specifies inter-annotator agreement thresholds for subjective task dimensions, and defines acceptable error distributions rather than just aggregate rates.

Tiered Accuracy Standards Based on Safety Implications

Not all annotation tasks in a training dataset have the same safety or quality implications, and applying a uniform accuracy standard across all of them is both over-specifying for some tasks and under-specifying for others. A tiered accuracy framework assigns the most demanding QA requirements to the annotation categories with the highest safety or model quality implications, applies standard QA to routine categories, and explicitly identifies which categories are high-stakes before annotation begins. 

This approach concentrates quality investment where it has the most impact on production model behavior. ODD analysis for autonomous systems provides the framework for identifying which scenario categories are highest-stakes in autonomous driving deployment, which in turn determines which annotation categories require the most demanding accuracy specifications.

The Role of AI-Assisted Annotation in Quality Management

Pre-labeling as a Quality Baseline, Not a Quality Guarantee

AI-assisted pre-labeling, where a model provides an initial annotation that human annotators review and correct, is increasingly standard in annotation workflows. It improves throughput significantly and, for common categories in familiar distributions, it also tends to improve accuracy by catching obvious errors that manual annotation introduces through fatigue and inattention. It does not improve accuracy for the categories where the pre-labeling model itself performs poorly, which are typically the edge cases and rare categories that are most important for production model performance.

For AI-assisted annotation to actually improve quality rather than merely speed, the QA process needs to specifically measure accuracy on the categories where the pre-labeling model is most likely to err, and apply heightened human review to those categories rather than accepting pre-labels at the same review rate as familiar categories. The risk is that annotation programs using AI assistance report higher aggregate accuracy because the common cases are handled well, while the rare cases, where the pre-labeling model has not been validated and human reviewers are not applying additional scrutiny, are labeled at lower quality than a purely manual process would produce. Data collection and curation services that combine AI-assisted pre-labeling with category-stratified human review apply the efficiency benefits of AI assistance to the right tasks while directing human expertise to the categories where it is most needed.

How Digital Divide Data Can Help

Digital Divide Data provides annotation services designed around the quality standards that production AI programs actually require, treating accuracy as a multidimensional property measured at the category level, not as a single aggregate figure.

Across image annotation, video annotation, audio annotation, text annotation, 3D LiDAR annotation, and multisensor fusion annotation, QA processes apply stratified sampling across input categories, gold standard monitoring, and inter-annotator agreement measurement as continuous quality signals rather than periodic audits.

For safety-critical programs in autonomous driving and healthcare, annotation accuracy specifications are built around the safety and regulatory requirements of the specific function being trained, not around generic industry accuracy benchmarks. ADAS data services and healthcare AI solutions apply domain-expert review at the QA stage for the high-stakes categories where clinical or safety knowledge is required to identify labeling errors that domain-naive reviewers cannot catch.

The model evaluation services provide the downstream validation that connects annotation quality to model performance, identifying whether the error distribution in the training data is producing the model behavior gaps that category-level accuracy metrics predicted.

Talk to an expert and build annotation programs where the accuracy figure matches what matters in production. 

Conclusion

A 99.5% annotation accuracy figure is not a guarantee of production model quality. It is an average that tells you almost nothing about where the errors are concentrated or what those errors will teach the model about the cases that matter most in deployment. The programs that build reliable production models are those that specify annotation quality in terms of the distribution of errors across categories, not just the aggregate rate; that measure quality with QA sampling strategies designed to catch the rare, high-stakes errors rather than the common, low-stakes ones; and that treat inter-annotator agreement measurement as a leading indicator of quality degradation rather than a periodic audit.

The sophistication of the accuracy specification is ultimately more important than the accuracy figure itself. Vendors who can only report aggregate accuracy and cannot provide category-level error distributions are not providing the visibility into data quality that production programs require. 

Investing in annotation workflows with the measurement infrastructure to produce that visibility from the start, rather than discovering the gaps when model failures surface the error patterns in production, is the difference between annotation quality that predicts model performance and annotation quality that merely reports it.

References

Saeeda, H., Johansson, T., Mohamad, M., & Knauss, E. (2025). Data annotation quality problems in AI-enabled perception system development. arXiv. https://arxiv.org/abs/2511.16410

Karim, M. M., Khan, S., Van, D. H., Liu, X., Wang, C., & Qu, Q. (2025). Transforming data annotation with AI agents: A review of architectures, reasoning, applications, and impact. Future Internet, 17(8), 353. https://doi.org/10.3390/fi17080353

Saeeda, H., Johansson, T., Mohamad, M., & Knauss, E. (2025). RE for AI in practice: Managing data annotation requirements for AI autonomous driving systems. arXiv. https://arxiv.org/abs/2511.15859

Northcutt, C., Athalye, A., & Mueller, J. (2021). Pervasive label errors in test sets destabilize machine learning benchmarks. Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Datasets and Benchmarks Track. https://arxiv.org/abs/2103.14749

Frequently Asked Questions

Q1. Why does a 99.5% annotation accuracy rate not guarantee good model performance?

Aggregate accuracy averages across all examples, including easy ones that any annotator labels correctly. Errors are often concentrated in rare categories and edge cases that have the highest impact on model failure in production, yet contribute minimally to the aggregate figure.

Q2. What is the difference between random and systematic annotation errors?

Random errors are uncorrelated with input features and are effectively averaged away during model training. Systematic errors are correlated with specific input categories and consistently teach the model a wrong pattern for those inputs, producing predictable model failures in deployment.

Q3. How should accuracy requirements be specified for safety-critical annotation tasks?

Safety-critical annotation specifications should define accuracy requirements separately for each task category, establish minimum QA sample rates for rare and high-stakes categories, specify the gold standard used for measurement, and define acceptable error distributions rather than only aggregate rates.

Q4. When is inter-annotator agreement more meaningful than accuracy against a gold standard?

For tasks with inherent subjectivity such as sentiment classification, toxicity grading, or boundary placement on ambiguous objects, inter-annotator agreement is a more appropriate quality metric because multiple labels can be defensible and forcing consensus through adjudication may not produce a more accurate label.


Data Collection and Curation

Data Collection and Curation at Scale: What It Actually Takes to Build AI-Ready Datasets

Data collection and curation at scale presents a different class of problem from small-scale annotation work. Quality assurance methods that work for thousands of examples break down at millions. Diversity gaps that are invisible in small samples become systematic biases in large ones. Deduplication that is trivially implemented on a workstation requires a distributed infrastructure at web-corpus scale. Filtering decisions that seem straightforward on single documents become judgment calls with significant model-quality implications when applied uniformly across a hundred billion tokens. Each of these challenges has solutions, but they require explicit engineering investment that many programs fail to plan for.

This blog examines what data collection and curation at scale actually involves, covering the pipeline stages that determine dataset quality, the specific failure modes that emerge at each stage, and the role of synthetic data as a complement to human-generated content.

The Data-Centric View of AI Development

Why Data Quality Outweighs Model Architecture for Most Programs

The research community has made significant progress on model architectures over the past decade. The result is that for most practical AI applications, architecture choices among competitive modern approaches contribute relatively little to the variance in production outcomes. What contributes most is the data. The same architecture trained on a carefully curated dataset consistently outperforms the same architecture trained on a noisy one, often by a wider margin than any gain achievable through architectural modification.

This principle is increasingly well understood at the theoretical level. It is less consistently acted on at the program level, where data collection is still often treated as a precursor to the real work rather than as the primary determinant of results. Teams that invest in data quality systematically, treating curation as a discipline with its own engineering rigor, tend to close more of the gap between what their models can achieve and what they actually deliver in deployment.

The Scale at Which Problems Become Structural

Problems that are manageable at a small scale become structural constraints at a large scale. With a thousand examples, a human reviewer can catch most quality issues. At a million, systematic automated quality assessment is required, and the quality criteria encoded in those automated filters directly shape what the model learns. 

At a billion tokens, deduplication becomes a distributed computing problem. At a hundred billion, even small systematic biases in the filtering logic can produce measurable skews in model behavior. Data engineering for AI at scale requires pipeline infrastructure, tooling, and quality standards designed for the target volume from the beginning, not retrofitted after the dataset is already assembled.

The Data Collection Stage

Source Selection and Coverage Planning

The sources from which training data is collected determine the model’s coverage of the variation space the program cares about. A source selection process that prioritizes easily accessible data over representative data will produce a corpus that is large but systematically skewed toward whatever content the accessible sources contain. Web-crawled text over-represents English, over-represents content produced by educated, English-speaking adults, and under-represents the variation of language use, domain expertise, and cultural context that broad-coverage models require.

Coverage planning means defining the variation space explicitly before data collection begins, then assessing source options against coverage of that space rather than primarily against volume. For domain-specific programs, this means mapping the target domain’s terminology, use cases, and content types and identifying sources that cover each dimension. For general-purpose programs, it means explicit coverage planning across languages, registers, domains, and demographic perspectives.

Consent, Licensing, and Provenance

Data provenance documentation has moved from a best practice to an operational requirement in most jurisdictions where AI systems are deployed. Knowing where training data came from, whether it was collected with appropriate consent, and what licensing terms apply to it is no longer a compliance afterthought. 

Programs that cannot document their data provenance face increasing regulatory exposure in the EU under the AI Act, in the US under evolving copyright and privacy frameworks, and in any regulated industry application where data handling accountability is a direct requirement. Data collection and curation services that maintain full provenance documentation for every data source are providing a compliance asset alongside a training asset, and that distinction matters more with each passing regulatory cycle.

Human-Generated vs. Synthetic Data

Synthetic data generated by language models has become a significant component of training corpora for many programs, addressing the scarcity of high-quality human-generated data in specific domains or for specific tasks. 

Synthetic data can fill coverage gaps, augment rare categories, and provide labeled examples for tasks where human annotation would be prohibitively expensive. It also introduces risks that human-generated data does not: the distribution of synthetic data reflects the biases and limitations of the model that generated it, and training on synthetic data that is too close in distribution to the training data of the generator can produce circular reinforcement of existing capabilities rather than genuine capability expansion.

The practical guidance is to use synthetic data as a targeted supplement to human-generated data, not as a wholesale replacement. Synthetic examples that are conditioned on real, verified source material and that are evaluated for quality against the same standards as human-generated examples contribute positively to training corpora. Unconditioned synthetic generation at scale, without quality verification, tends to introduce the kind of fluent-but-shallow content that degrades model reasoning quality even as it inflates apparent dataset size.

Deduplication in Building AI-Ready Datasets

Why Duplicates Harm Model Quality

Duplicate content in a training corpus has two harmful effects. First, it causes the model to over-weight the statistical patterns present in the duplicated content, amplifying whatever biases or idiosyncrasies that content contains. Second, at sufficient duplication rates, it can cause the model to memorize specific sequences verbatim rather than learning generalizable patterns, which produces unreliable behavior on novel inputs and creates privacy and copyright exposure if the memorized content contains personal or proprietary information.

The problem is not limited to exact duplicates. Near-duplicate documents, boilerplate paragraphs that appear across thousands of web pages, and paraphrased versions of the same underlying content all introduce correlated redundancy that has similar effects on model training at a less obvious level. Effective deduplication needs to identify not just exact matches but near-matches and semantic near-duplicates, which requires more sophisticated tooling than simple hash comparison.

Deduplication at Web Corpus Scale

At the scale of modern pre-training corpora, deduplication is a distributed computing problem. Pairwise comparison across hundreds of billions of documents is computationally infeasible. Practical approaches use locality-sensitive hashing methods that identify candidate duplicates efficiently without exhaustive comparison, at the cost of recall-precision tradeoffs that need to be calibrated against the program’s quality requirements.

The choice of deduplication threshold directly affects dataset diversity: aggressive deduplication removes more redundancy but may also remove legitimate variation in how similar topics are expressed, reducing the corpus’s coverage of linguistic diversity. Data orchestration for AI at scale covers the infrastructure context in which these deduplication decisions are made and the engineering tradeoffs that arise at different pipeline scales.
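The banding idea behind these methods can be shown compactly. The following is a single-machine toy version of MinHash with LSH banding using only the standard library; the shingle size, signature length, and band layout are illustrative parameters, and a production pipeline would distribute this work across workers and verify candidate pairs with exact similarity before removing anything:

```python
import hashlib

def shingles(text, n=5):
    """Character n-gram shingles; word shingles are an equally common choice."""
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def minhash_signature(text, num_hashes=64):
    """One min-hash per seed over the document's shingle set."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big")
            for s in shingles(text)))
    return sig

def lsh_candidate_pairs(docs, bands=16, rows=4):
    """Docs whose signatures agree on any full band become duplicate candidates."""
    sigs = {doc_id: minhash_signature(text) for doc_id, text in docs.items()}
    buckets, candidates = {}, set()
    for doc_id, sig in sigs.items():
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            if key in buckets:
                candidates.add((buckets[key], doc_id))
            else:
                buckets[key] = doc_id
    return candidates  # verify with exact similarity before removing anything
```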

Semantic Deduplication Beyond Exact Matching

Semantic deduplication, which identifies documents that express similar content in different words, is an emerging practice in large-scale curation pipelines. It addresses the limitation that exact and near-exact deduplication methods miss the meaningful redundancy introduced when different sources independently describe the same events or concepts in their own words. 

Semantic deduplication uses embedding-based similarity measurement to identify and selectively remove documents that are informationally redundant, even when their surface text differs. It is computationally more expensive than hash-based methods and requires careful calibration to avoid removing genuinely distinct perspectives on similar topics.
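A sketch of the core similarity step, assuming document embeddings have already been produced by some sentence-embedding model; the model choice and the 0.92 threshold are placeholders to calibrate, and the quadratic comparison shown for clarity would be replaced by an approximate-nearest-neighbor index at corpus scale:

```python
import numpy as np

def semantic_duplicate_pairs(embeddings, threshold=0.92):
    """Flag document pairs whose embedding cosine similarity exceeds the threshold.

    `embeddings` is an (n_docs, dim) array from any sentence-embedding model.
    O(n^2) comparison is shown for clarity; large corpora need ANN indexes.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    return [
        (i, j)
        for i in range(len(sims))
        for j in range(i + 1, len(sims))
        if sims[i, j] >= threshold
    ]

# Keep one representative per flagged pair; calibrate the threshold on a
# labeled sample so genuinely distinct perspectives are not removed.
```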

Quality Filtering: The Most Consequential Curation Decision

What Quality Means at Scale

Quality filtering at scale means making automated decisions about which documents or examples to include in the training corpus based on signals that can be measured programmatically. The challenge is that quality is multidimensional and context-dependent. A document can be high-quality for some training objectives and low-quality for others. A product review that is well-written and informative for a sentiment analysis corpus may be low-quality for a scientific reasoning corpus. Encoding quality filters that are appropriate for the program’s actual training objectives, rather than applying generic quality heuristics from the literature, requires explicit reasoning about what the model needs to learn.

Rule-Based vs. Model-Based Filtering

Rule-based quality filters apply heuristics based on measurable document properties: text length, punctuation density, stop word fraction, repetition rates, and language identification scores. They are computationally cheap, transparent, and consistent. They are also limited to the quality dimensions that can be measured by simple statistics, which excludes many of the subtle quality signals that most affect model performance.
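The flavor of these heuristics is easy to show. Below is a toy document filter with illustrative thresholds; a real pipeline tunes each threshold against held-out quality labels and logs which rule rejected each document so the filter's behavior can be audited:

```python
def passes_rule_filters(text, min_len=200, max_symbol_frac=0.10,
                        max_repetition=0.30,
                        stop_words=frozenset(
                            "the a an and or of to in is it that".split())):
    """Cheap, transparent document-level heuristics (thresholds illustrative)."""
    if len(text) < min_len:                 # too short to carry useful signal
        return False
    words = text.lower().split()
    if not words:
        return False
    symbol_frac = sum(
        not c.isalnum() and not c.isspace() for c in text) / len(text)
    if symbol_frac > max_symbol_frac:       # markup/punctuation density
        return False
    if 1 - len(set(words)) / len(words) > max_repetition:  # repeated tokens
        return False
    if sum(w in stop_words for w in words) / len(words) < 0.01:
        return False                        # natural prose contains stop words
    return True
```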

Model-based filters use learned classifiers or language model scoring to assess quality in ways that capture more nuanced signals, including educational value, coherence, and factual grounding. They are more effective for capturing the quality dimensions that matter most, but are also more expensive to run at scale and less transparent in what they are measuring. AI data preparation services that combine rule-based pre-filtering with model-based quality scoring get the efficiency benefits of heuristic filters alongside the accuracy benefits of learned quality assessment.

Toxicity and Harmful Content Filtering

Filtering toxic and harmful content from training corpora is a quality requirement with direct safety implications. A model trained on data that contains hate speech, instructions for harmful activities, or manipulative content will reproduce those patterns in its outputs. Naive toxicity filters based on keyword blocklists are insufficient: they incorrectly flag legitimate medical, educational, or social science content that uses sensitive vocabulary in appropriate contexts, while missing harmful content expressed in ways the keyword list does not anticipate.

Multi-level classifiers that assess content by category and severity, calibrated to distinguish harmful content from legitimate discussion of difficult topics, are a more reliable approach to toxicity filtering at scale. Trust and safety solutions applied at the data curation stage, before training, prevent the downstream requirement to retroactively correct safety failures through post-training alignment.
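One way to express that category-and-severity structure is as a per-category threshold policy over classifier scores. The sketch below assumes an upstream multi-label classifier that produces scores in [0, 1]; the category names and thresholds are purely illustrative:

```python
# Illustrative policy: keep a document only if every category score stays
# below that category's limit. Limits differ by category because legitimate
# medical or educational text triggers some categories at low severity.
SEVERITY_THRESHOLDS = {"hate": 0.3, "self_harm": 0.2,
                       "violence": 0.5, "medical_graphic": 0.8}

def keep_document(category_scores, thresholds=SEVERITY_THRESHOLDS):
    """`category_scores` come from any multi-label toxicity classifier;
    the classifier itself is out of scope here."""
    return all(category_scores.get(cat, 0.0) < limit
               for cat, limit in thresholds.items())

keep_document({"hate": 0.05, "violence": 0.6})  # -> False: violence too high
```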

Human Annotation at Scale: Where Quality Requires Human Judgment

The Tasks That Cannot Be Automated

Not every quality judgment that matters for training data quality can be assessed by automated methods. Factual accuracy, particularly in specialized domains, requires human expertise to verify. Nuanced sentiment and emotional content require human perception to assess reliably. Cultural appropriateness varies across communities in ways that automated classifiers trained on majority-culture data cannot reliably measure. 

Safety edge cases that involve subtle manipulation or context-dependent harm require human judgment that current automated systems cannot replicate. Building generative AI datasets with human-in-the-loop workflows is specifically about the design of annotation workflows that bring human judgment to bear efficiently at scale, without sacrificing the quality that automation alone cannot provide.

Annotator Diversity and Its Effect on Data Quality

The demographic composition of annotation teams affects the data they produce. Annotation panels that draw from a narrow demographic background will encode the perspectives, cultural assumptions, and linguistic patterns of that background into quality judgments and labels. For programs that need models to serve diverse user populations, annotation team diversity is not a separate equity concern. It is a data quality requirement. Content that an annotation team from one cultural background labels as neutral may carry different connotations for users from other backgrounds, and a model trained on those labels will reflect that mismatch.

Consistency and Inter-Annotator Agreement

At scale, annotation quality is largely a function of guideline quality and consistency measurement. Guidelines that are specific enough to produce high inter-annotator agreement on borderline cases, and quality assurance processes that measure that agreement systematically and use disagreements to refine guidelines, produce a consistent training signal. Guidelines that leave judgment calls to individual annotators produce data that encodes the variance across those individual judgments as apparent label noise. 

Data annotation solutions that treat guideline development as an iterative process, using pilot annotation rounds to identify ambiguous cases before full-scale data collection, deliver substantially better label consistency than those that finalize guidelines before seeing real annotation challenges.
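Agreement itself is straightforward to compute once double-annotated items exist. A minimal Cohen's kappa implementation for two annotators is sketched below; running it per category, and tracking it across annotation batches, is what turns it into the leading quality indicator described above:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items.

    1.0 is perfect agreement, 0.0 is chance level; a declining kappa
    across batches signals guideline drift or annotator fatigue.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (count_a[c] / n) * (count_b[c] / n)
        for c in set(count_a) | set(count_b))
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Example: six double-annotated items; compute per category, not just overall.
print(cohens_kappa(["cat", "dog", "dog", "cat", "dog", "cat"],
                   ["cat", "dog", "cat", "cat", "dog", "dog"]))  # ~0.33
```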

Post-Curation Validation: Closing the Loop Between Data and Model

Dataset Quality Audits Before Training

A dataset quality audit, run before training begins, systematically checks the assembled corpus against the quality and coverage requirements defined at the start of the program. It verifies that deduplication has been effective, that quality filtering thresholds have produced the intended distribution of document quality, that coverage across the defined diversity dimensions is sufficient, and that the label distribution for supervised tasks reflects the intended training objective. Programs that skip this step regularly discover coverage gaps and quality problems only after training runs have completed, wasting part of the compute those runs consumed.
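A fragment of what such an audit might check is sketched below, assuming each record carries metadata fields such as language and domain; the field names and the 1% coverage floor are illustrative:

```python
from collections import Counter

def audit_dataset(records, coverage_dims, min_share=0.01):
    """Pre-training audit sketch: verify coverage against required values.

    `records` are dicts with metadata fields; `coverage_dims` maps a field
    name (e.g. "language") to the values the corpus must cover.
    """
    report, n = {}, len(records)
    for dim, required_values in coverage_dims.items():
        counts = Counter(r.get(dim) for r in records)
        report[dim] = {v: counts.get(v, 0) / n for v in required_values}
        report[f"{dim}_gaps"] = [
            v for v in required_values if counts.get(v, 0) / n < min_share]
    return report

report = audit_dataset(
    [{"language": "en", "domain": "news"}] * 98
    + [{"language": "sw", "domain": "legal"}] * 2,
    coverage_dims={"language": ["en", "sw", "hi"],
                   "domain": ["news", "legal"]})
# report["language_gaps"] -> ["hi"]: a coverage gap caught before training.
```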

Data Mix and Domain Weighting

The proportional representation of different data sources and domains in the training mix is a curation decision with direct model performance implications. A model trained on a corpus where one domain contributes a disproportionate volume of tokens will over-index on that domain’s patterns relative to all others. Deliberate data mix design, which determines the sampling proportions across sources based on the model’s intended capabilities rather than the natural availability of content from each source, is a curation decision that belongs in the pipeline design phase. 

Human preference optimization data is subject to the same mix considerations: the distribution of preference pairs across capability dimensions shapes which capabilities the reward model learns to value most strongly.
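A minimal sketch of mix-driven sampling, in which the target proportions are a design input rather than a byproduct of source sizes; the mix values here are placeholders, not recommendations:

```python
import random

# Target mix is a design decision, decoupled from how much raw content
# each source happens to contain (proportions are illustrative).
TARGET_MIX = {"web": 0.50, "code": 0.20, "scientific": 0.20, "dialogue": 0.10}

def sample_training_stream(sources, num_examples, seed=0):
    """Draw examples according to the target mix, not source availability.

    `sources` maps each source name to its list of examples; small sources
    are effectively upsampled to hit their target share.
    """
    rng = random.Random(seed)
    names = list(TARGET_MIX)
    weights = [TARGET_MIX[s] for s in names]
    return [rng.choice(sources[rng.choices(names, weights=weights)[0]])
            for _ in range(num_examples)]
```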

Ongoing Monitoring for Distribution Shift

Training data quality is not a static property. Data sources evolve: web content changes, domain terminology shifts, and the production distribution the model will encounter may differ from the training distribution as deployment continues. Programs that treat data curation as a one-time pre-training activity will find their models becoming less aligned with the production data distribution over time. Continuous monitoring of the production input distribution and periodic updates to the curation pipeline to reflect changes in that distribution are operational requirements for programs that depend on sustained model performance.

How Digital Divide Data Can Help

Digital Divide Data provides end-to-end data collection and curation infrastructure for AI programs across the full pipeline, from source identification and coverage planning through deduplication, quality filtering, annotation, and post-curation validation.

The data collection and curation services cover structured diversity planning across languages, domains, demographic groups, and content types, ensuring that dataset assembly targets the coverage gaps that most affect model performance rather than the dimensions that are easiest to source at volume.

For annotation at scale, text annotation, image annotation, audio annotation, and video annotation services all operate with iterative guideline development, systematic inter-annotator agreement measurement, and annotation team composition designed to reflect the demographic diversity of the intended user population.

For programs with language coverage requirements beyond English and major world languages, low-resource language services address the collection and annotation challenges for linguistic communities that standard data pipelines systematically underserve. Trust and safety solutions integrated into the curation pipeline handle toxicity filtering and harmful content removal with the category-level specificity that keyword-based approaches cannot provide.

Talk to an expert and build training datasets that determine model quality from the start. 

Conclusion

Data collection and curation at scale is the discipline that determines what AI programs can actually achieve, and it is the discipline that receives the least systematic investment relative to its contribution to outcomes. The challenges that emerge at scale are not simply amplified versions of small-scale challenges. They are structurally different problems that require pipeline infrastructure, quality measurement methodologies, and annotation frameworks that are designed for scale from the beginning. Programs that treat data curation as a preparatory step before the real engineering work will consistently find that the limits they encounter in production trace back to decisions made, or not made, during data assembly.

The compounding effect of data quality decisions becomes clearer over the course of a model’s lifecycle. Early investments in coverage planning, diversity measurement, consistent annotation guidelines, and systematic quality validation yield returns that accumulate across subsequent training runs, fine-tuning cycles, and model updates. Late investment in data quality, typically prompted by production failures that make the gaps visible, is more expensive and less effective than building quality in from the start. AI data preparation that treats data collection and curation as a first-class engineering discipline, with the same rigor and systematic measurement applied to generative AI development more broadly, is the foundation on which production model performance depends.

References

Calian, D. A., & Farquhar, G. (2025). DataRater: Meta-learned dataset curation. Proceedings of the 39th Conference on Neural Information Processing Systems. https://openreview.net/pdf?id=vUtQFnlDyv

Diaz, M., Lum, K., Hebert-Johnson, U., Perlman, A., & Kuo, T. (2024). A taxonomy of challenges to curating fair datasets. Proceedings of the 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024). https://ai.sony/blog/Exploring-the-Challenges-of-Fair-Dataset-Curation-Insights-from-NeurIPS-2024/

Bevendorff, J., Kim, S., Park, C., Seo, H., & Na, S.-H. (2025). LP data pipeline: Lightweight, purpose-driven data pipeline for large language models. Proceedings of EMNLP 2025 Industry Track. https://aclanthology.org/2025.emnlp-industry.11.pdf

Frequently Asked Questions

Q1. What is the most common reason AI training data fails to produce good model performance?

Systematic coverage gaps, where the training corpus does not adequately represent the variation in inputs the model will encounter in deployment, are the most common data-side explanation for underperformance, followed closely by label inconsistency in supervised annotation tasks.

Q2. Why is deduplication important for model quality, not just storage efficiency?

Duplicate content causes models to over-weight the statistical patterns in that content, and at high rates can cause verbatim memorization, which reduces generalization on novel inputs and creates privacy and copyright exposure if the memorized content is sensitive.

Q3. When is synthetic data appropriate to include in a training corpus?

Synthetic data is most appropriate as a targeted supplement to fill specific coverage gaps, conditioned on real source material and evaluated against the same quality standards as human-generated content, rather than as a bulk substitute for human-generated data.

Q4. How does annotator demographic diversity affect data quality?

Annotation panels from narrow demographic backgrounds encode the perspectives and cultural assumptions of that background into quality labels, producing training data that reflects those assumptions and models that perform less reliably for users outside that background.


Model Evaluation for GenAI

Model Evaluation for GenAI: Why Benchmarks Alone Are Not Enough

The gap between benchmark performance and production performance is well understood among practitioners, but it rarely changes how programs approach evaluation in practice. Teams select models based on leaderboard positions, set deployment thresholds based on accuracy scores from public datasets, and, in production, discover that the dimensions that mattered were never measured.

Benchmark saturation, training data contamination, and the structural limitations of static multiple-choice tests combine to make public benchmarks poor predictors of production behavior for any task that departs meaningfully from the benchmark’s design.

This blog examines why GenAI model evaluation requires a framework that extends well beyond standard benchmarks, covering how benchmark contamination and saturation distort performance signals and what a well-designed evaluation program for a production GenAI system actually looks like. Model evaluation services and human preference optimization are the two evaluation capabilities that production programs most consistently underinvest in relative to the return they deliver.

Why Public Benchmarks Are an Unreliable Signal

The Saturation Problem

Many of the most widely cited benchmarks in language model evaluation have saturated. A benchmark saturates when leading models reach near-ceiling scores, at which point the benchmark no longer distinguishes between models of genuinely different capability. Tests that were challenging when first published have been solved or near-solved by frontier models within two to three years of release, rendering them useless for comparative evaluation at the top of the performance distribution.

Saturation is not only a problem for frontier model comparisons. It affects enterprise model selection whenever a team uses a benchmark that was already saturated at the time they ran their evaluation. A model that scores 95% on a saturated benchmark may be no better suited to the production task than a model that scores 88%, and the 7-point gap in the leaderboard number conveys a false sense of differentiation.

The Contamination Problem

Benchmark contamination, where test questions from public evaluation datasets appear in a model’s pre-training corpus, is a pervasive and difficult-to-quantify problem. When a model has seen test set questions during training, its benchmark score reflects memorization rather than generalization. 

The higher the score, the more ambiguous the interpretation: a near-perfect score on a widely published benchmark may indicate genuine capability or extensive training-time exposure to the test set, and there is frequently no reliable way to distinguish between the two from the outside. Detecting and quantifying contamination requires access to training data provenance information that model providers rarely disclose fully.

The practical consequence for teams selecting or evaluating models is that public benchmark scores should be treated as noisy, potentially inflated signals of model capability, not as reliable performance guarantees. This does not mean ignoring benchmarks. It means treating them as one signal among several, weighted by how recently the benchmark was published, how closely its task structure resembles the production task, and how plausible it is that the benchmark data appeared in training.

The Task Structure Mismatch

Most public benchmarks are structured as multiple-choice or short-answer tasks with verifiable correct answers. Most production GenAI tasks are open-ended generation tasks with no single correct answer. The evaluation methods that produce reliable scores on multiple-choice tasks, accuracy against a reference answer key, do not apply to open-ended generation. 

A model that performs well on a multiple-choice reasoning benchmark has demonstrated one capability. Whether it can produce high-quality, contextually appropriate, factually grounded, and tonally suitable open-ended responses to production inputs is a different question that the benchmark does not address.

What Benchmarks Miss: The Dimensions That Determine Production Quality

Behavioral Consistency

A production GenAI system is not evaluated once against a fixed test set. It is evaluated continuously by users who ask the same question in different ways, with different phrasing, different context, and different surrounding conversations. Behavioral consistency, the property that semantically equivalent inputs produce semantically equivalent outputs, is a quality dimension that static benchmarks do not test.

A model that gives contradictory answers to equivalent questions rephrased differently is producing a reliability problem that accuracy on a benchmark will not reveal. Evaluating behavioral consistency requires generating semantically equivalent input variants and measuring output stability, a methodology that requires custom evaluation data collection rather than benchmark lookup.
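One possible shape for such a check is sketched below. It assumes a `model` callable that returns a response and an `embed` callable that returns a vector, both stand-ins for whatever the program actually uses, and scores stability as mean pairwise similarity of the outputs:

```python
import numpy as np

def consistency_score(model, embed, variants):
    """Mean pairwise cosine similarity of outputs across paraphrased inputs.

    A low score on semantically equivalent variants flags a behavioral
    consistency problem that a static benchmark would never surface.
    """
    outputs = [model(v) for v in variants]
    vecs = np.stack([embed(o) for o in outputs])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    n = len(variants)
    return (sims.sum() - n) / (n * (n - 1))  # mean of off-diagonal entries

# variants = ["How do I reset my password?",
#             "What are the steps to change my password?",
#             "I forgot my password; how can I set a new one?"]
```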

Calibration and Uncertainty

A well-calibrated model is one whose expressed confidence correlates with its actual accuracy: when it says it is confident, it is usually correct, and when it hedges, it is more often wrong. Calibration is not measured by most public benchmarks. It is an important property for any production system where users make decisions based on model outputs, because an overconfident model that produces plausible-sounding incorrect answers with the same tone and phrasing as correct ones creates a higher risk of harm than a model that signals its uncertainty appropriately.
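Calibration can be quantified with expected calibration error (ECE), which bins predictions by stated confidence and averages the gap between confidence and accuracy within each bin. A small implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE: |accuracy - confidence| per confidence bin, weighted by bin size.

    Lower is better; an overconfident model shows large gaps in the
    high-confidence bins.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# High stated confidence, mediocre accuracy: a poorly calibrated model.
print(expected_calibration_error([0.95, 0.90, 0.92, 0.88], [1, 0, 0, 1]))
```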

Robustness to Adversarial and Edge Case Inputs

Benchmarks are designed to be answerable. They contain well-formed, unambiguous questions drawn from the distribution that the benchmark designers anticipated. Production inputs include badly formed queries, ambiguous requests, adversarial attempts to elicit unsafe behavior, and edge cases that fall outside the distribution the model was trained on. Evaluating robustness to these inputs requires test data that was specifically constructed to probe failure modes, not standard benchmark items that were selected because they represent the normal distribution.

Domain-Specific Accuracy in Context

General-purpose benchmarks measure general-purpose capabilities. A healthcare AI system that scores well on general language understanding benchmarks may still produce clinically inaccurate content when deployed in a medical context. A legal AI that excels on reasoning benchmarks may misapply specific statutes. 

Domain accuracy in the deployment context is a distinct evaluation requirement from general benchmark performance, and measuring it requires task-specific evaluation datasets developed with domain expert involvement. Text annotation for domain-specific evaluation data is one of the more consequential investments a deployment program can make, because the domain evaluation set is what will tell the team whether the system is actually reliable in the context it will be used.

Human Evaluation in Model Evaluation for GenAI

Why Automated Metrics Cannot Replace Human Judgment for Generative Tasks

Automated metrics like BLEU, ROUGE, and BERTScore measure overlap between generated text and reference outputs. They are useful for tasks where a reference output exists and quality can be operationalized as closeness to that reference. For open-ended generation tasks, including summarization, question answering, creative writing, and conversational assistance, there is often no single reference output, and quality has dimensions that overlap metrics cannot capture: helpfulness, appropriate tone, factual accuracy, contextual relevance, and safety.

Human evaluation fills this gap. It captures the dimensions of output quality that automated metrics miss, and it reflects the actual user experience in a way that reference-based metrics cannot. The cost of human evaluation is real, but so is the cost of deploying a model whose quality on the dimensions that matter was never measured.

What Human Evaluation Should Measure

A well-designed human evaluation for a production GenAI system measures multiple output dimensions independently rather than asking evaluators to produce a single overall quality score. Factual accuracy, assessed by evaluators with domain expertise. Helpfulness, assessed by evaluators representing the target user population. Tone appropriateness, assessed against the system’s stated behavioral guidelines. Safety, assessed against a comprehensive set of harm categories relevant to the deployment context.

Collecting these signals systematically and at scale requires an annotation infrastructure that treats human evaluation as a first-class engineering discipline, not an ad hoc review process. Building GenAI datasets with human-in-the-loop workflows covers the methodological foundations for this kind of systematic human signal collection.

The LLM-as-Judge Approach and Its Limits

Using a language model as an automated evaluator, the LLM-as-judge approach, is increasingly common as a way to scale evaluation beyond what human annotation capacity allows. It captures some dimensions of quality better than reference-based metrics and can process large evaluation sets quickly. The method has documented limitations that teams should understand before relying on it as the primary evaluation signal.

LLMs used as judges exhibit systematic biases: preference for longer responses, preference for outputs from architecturally similar models, sensitivity to framing and ordering of the options presented. For safety-critical evaluation, these biases matter. A system evaluated primarily by LLM judges that were themselves trained on similar data may be systematically blind to the failure modes most likely to produce unsafe or incorrect behavior in deployment. Human evaluation remains essential for validating the reliability of LLM judge behavior and for any dimension where systematic bias in the judge would have consequential downstream effects.
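Some of these biases can be measured directly. The sketch below estimates position bias by re-running each pairwise comparison with the candidate order swapped; `judge` is a stand-in for whatever LLM-judge call the program actually uses:

```python
def position_bias_rate(judge, pairs):
    """Fraction of comparisons whose verdict flips when candidates are swapped.

    `judge(a, b)` returns "A" or "B" for whichever response it prefers.
    A high flip rate means verdicts track position, not quality, and the
    judge should not be trusted as a primary signal for those comparisons.
    """
    flips = 0
    for resp_a, resp_b in pairs:
        forward = judge(resp_a, resp_b)    # resp_a shown first
        backward = judge(resp_b, resp_a)   # order swapped
        # A consistent judge prefers the same response in both orders.
        if (forward == "A") != (backward == "B"):
            flips += 1
    return flips / len(pairs)
```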

Task-Specific and Deployment-Specific Evaluation

Building Evaluation Sets That Reflect the Production Task

The most reliable predictor of production performance is evaluation against a dataset that closely reflects the actual production input distribution. This means drawing evaluation inputs from real user queries where available, constructing synthetic inputs that cover the realistic variation range of the production task, and including explicit coverage of the edge cases and unusual inputs that the production workload contains. 

A program that builds its evaluation set from the production data distribution, rather than from public benchmark datasets, will have a much more accurate picture of whether its model is ready for deployment. Data collection and curation services that sample from or synthesize production-representative inputs are a direct investment in evaluation accuracy.

Red-Teaming as a Systematic Evaluation Method

Red-teaming, the systematic attempt to elicit harmful, unsafe, or policy-violating behavior from a model using carefully constructed adversarial inputs, is an evaluation method that public benchmarks do not replicate. 

A model can score well on every standard safety benchmark while being vulnerable to specific adversarial prompt patterns that a motivated user could discover. Red-teaming before deployment is the most reliable way to identify these vulnerabilities. It requires evaluators with the expertise and mandate to attempt to break the system, not just to assess its average-case behavior. Trust and safety evaluation that incorporates systematic red-teaming alongside standard safety metrics provides a safety assurance signal that automated safety benchmark scores cannot supply.

Regression Testing Across Model Versions

A model evaluation program is not a point-in-time exercise. Models are updated, fine-tuned, and modified throughout their deployment lifecycle, and each change that affects a safety-relevant or quality-relevant behavior needs to be evaluated against the previous version before deployment. A regression test suite that runs on each model update catches capability degradations before they reach users. Building and maintaining this suite is an ongoing investment that most programs underestimate at project inception.
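In practice this often takes the form of a per-slice gate that blocks deployment when any tracked evaluation slice regresses beyond an agreed tolerance. A minimal sketch, with illustrative slice names and tolerance:

```python
TOLERANCE = 0.01  # max allowed per-slice drop (an illustrative policy choice)

def regression_gate(baseline_scores, candidate_scores, tolerance=TOLERANCE):
    """Compare per-slice scores between model versions; return failing slices."""
    failures = {}
    for slice_name, base in baseline_scores.items():
        cand = candidate_scores.get(slice_name)
        if cand is None or base - cand > tolerance:
            failures[slice_name] = (base, cand)
    return failures  # empty dict means the update passes the gate

failures = regression_gate(
    baseline_scores={"safety": 0.98, "domain_qa": 0.91, "consistency": 0.88},
    candidate_scores={"safety": 0.99, "domain_qa": 0.87, "consistency": 0.88})
# failures -> {"domain_qa": (0.91, 0.87)}: block deployment and investigate.
```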

Evaluating RAG Systems for GenAI

Retrieval-augmented generation systems have a more complex failure surface than standalone language models. The retrieval component can fail to find relevant documents. The reranking component can return the wrong documents as the most relevant. The generation component can fail to use the retrieved documents correctly, ignoring relevant content or hallucinating content not present in the retrieved context. 

Evaluating a RAG system requires measuring each of these components separately, not just the end-to-end output quality. End-to-end metrics that look good can mask retrieval failures that are being compensated for by a capable generator, or generation quality failures that are being compensated for by excellent retrieval. DDD’s detailed guide on RAG data quality, evaluation, and governance covers the RAG-specific evaluation methodology in depth.

Context Faithfulness as a Core RAG Evaluation Metric

Context faithfulness, the property that generated responses are grounded in and consistent with the retrieved context rather than generated from the model’s parametric knowledge, is a critical evaluation dimension for RAG systems that standard output quality metrics do not assess. 

A RAG system that produces accurate responses by ignoring the retrieved context and falling back on parametric knowledge is not providing the factual grounding that the RAG architecture was intended to supply. Measuring context faithfulness requires an evaluation methodology that compares the generated output against the retrieved documents, not just against a reference answer.
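As a rough illustration of the idea, the sketch below scores each response sentence by its content-word overlap with the retrieved context. This lexical proxy is deliberately crude; production pipelines typically use an NLI model or an LLM grader per extracted claim:

```python
def faithfulness_score(response_sentences, retrieved_docs, min_overlap=0.5):
    """Fraction of response sentences lexically grounded in retrieved context.

    A sentence counts as grounded when at least `min_overlap` of its content
    words (here, words longer than three characters) appear anywhere in the
    retrieved documents. Thresholds are illustrative.
    """
    context_words = set(" ".join(retrieved_docs).lower().split())
    grounded = 0
    for sentence in response_sentences:
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in context_words for w in words) / len(words)
        grounded += overlap >= min_overlap
    return grounded / max(1, len(response_sentences))
```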

Evaluating Agentic AI Systems

Why Task Completion Is Not Enough

Agentic AI systems take sequences of actions in dynamic environments, using tools, APIs, and external services to accomplish multi-step goals. Evaluating them requires a fundamentally different framework from evaluating single-turn text generation. Task completion rate, whether the agent successfully achieves the stated goal, is a necessary but insufficient evaluation metric. 

An agent that completes tasks using inefficient action sequences, makes unnecessary tool calls, or produces correct outcomes through reasoning paths that would fail on slightly different inputs is not a reliable production system, even if its task completion rate looks acceptable. Building trustworthy agentic AI with human oversight discusses the evaluation and governance frameworks that agentic systems require.

Reliability, Safety, and Trajectory Evaluation

Agentic evaluation needs to measure at least four dimensions beyond task completion: reasoning trajectory quality, which assesses whether the agent’s reasoning steps are sound even when the outcome is correct; tool use accuracy, which evaluates whether tools are invoked appropriately with correct parameters; robustness to unexpected inputs during multi-turn interactions; and safety under adversarial conditions, including attempts to manipulate the agent into taking unauthorized actions. Human-in-the-loop evaluation remains the reference standard for agentic safety assessment, particularly for systems that take actions with real-world consequences. Agentic AI deployments that skip systematic safety evaluation before production release create liability exposure that standard output quality metrics will not have revealed.

The Evaluation Stack: What a Complete Program Looks Like

Layering Benchmark, Automated, and Human Evaluation

A complete evaluation program for a production GenAI system combines multiple layers. Public benchmarks provide broad capability signals and facilitate external comparisons, with appropriate discounting for contamination risk and saturation. Automated metrics, including reference-based metrics for structured tasks and LLM-judge approaches for open-ended generation, provide scalable quality signals that can run on large evaluation sets.

Human evaluation provides the ground truth for dimensions that automated methods cannot reliably assess, including safety, domain accuracy, and output quality in the deployment context. Each layer informs a different aspect of the deployment decision.

The Evaluation Timeline

Evaluation should be integrated into the development lifecycle, not run as a pre-deployment checkpoint. Capability assessment runs during model or fine-tuning selection. Task-specific evaluation runs after initial fine-tuning to assess whether the fine-tuned model actually improved on the target task. Red-teaming and safety evaluation run before any production deployment. Regression testing runs on every model update that touches safety-relevant or quality-relevant components. Post-deployment monitoring provides an ongoing signal that the production distribution has not drifted in ways that have degraded model performance.

The Common Gap: Evaluation Data Quality

The most common single failure point in enterprise evaluation programs is not the choice of metrics or the evaluation methodology. It is the quality and representativeness of the evaluation data itself. 

An evaluation set that was assembled quickly from available examples, which over-represents easy cases and under-represents the edge cases and domain variations that matter for production reliability, will produce evaluation scores that overestimate the model’s readiness for deployment. Annotation solutions that bring the same quality discipline to evaluation data as to training data are a structural requirement for evaluation programs that actually predict production performance.

How Digital Divide Data Can Help

Digital Divide Data provides an end-to-end evaluation infrastructure for GenAI programs, from evaluation dataset design through human annotation and LLM-judge calibration to ongoing regression testing and post-deployment monitoring.

The model evaluation services cover task-specific evaluation dataset construction, with explicit coverage of edge cases, domain-specific inputs, and behavioral consistency test variants. Evaluation sets are built from production-representative inputs rather than repurposed public benchmarks, producing evaluation scores that predict deployment performance rather than benchmark-suite performance.

For safety and quality evaluation, human preference optimization services provide systematic human quality signal collection across the dimensions that automated metrics miss: factual accuracy, helpfulness, tone appropriateness, and safety. Red-teaming capability is integrated into safety evaluation workflows, covering adversarial prompt patterns relevant to the specific deployment context rather than generic safety benchmarks.

For agentic deployments, evaluation methodology extends to trajectory assessment, tool use accuracy, and multi-turn robustness, with human evaluation covering the safety-critical judgment calls that LLMs cannot reliably assess. Trust and safety solutions include structured red-teaming protocols and ongoing monitoring frameworks that keep the safety signal current as models and user behavior evolve.

Talk to an expert and build an evaluation program that actually predicts production performance.

Conclusion

Benchmark scores are starting points for model assessment, not finishing lines. The dimensions that determine whether a GenAI system actually performs in production (behavioral consistency, calibration, domain accuracy, safety under adversarial conditions, and output quality on open-ended tasks) are systematically undercovered by public benchmarks and require a purpose-built evaluation methodology to measure reliably.

Teams that invest in evaluation infrastructure commensurate with what they invest in model development will have an accurate picture of their system’s readiness before deployment. Teams that rely on benchmark numbers as their primary evidence for production readiness will consistently be surprised by what they encounter after launch.

As GenAI systems take on more consequential tasks, including customer-facing interactions, regulated industry applications, and agentic workflows with real-world effects, the cost of inadequate evaluation rises accordingly. 

The investment in evaluation data quality, human annotation capacity, and task-specific evaluation methodology is not overhead on the development program. It is the mechanism that transforms a model that performs in controlled conditions into a system that can be trusted in production. Generative AI evaluation built around production-representative data and systematic human quality signal is the foundation that makes that trust warranted.

References

Mohammadi, M., Li, Y., Lo, J., & Yip, W. (2025). Evaluation and benchmarking of LLM agents: A survey. Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2. ACM. https://doi.org/10.1145/3711896.3736570

Stanford HAI. (2024). Technical performance. 2024 AI Index Report. Stanford University Human-Centered AI. https://hai.stanford.edu/ai-index/2024-ai-index-report/technical-performance

Frequently Asked Questions

Q1. What is benchmark contamination, and why does it matter for model selection?

Benchmark contamination occurs when test questions from public datasets appear in a model’s pre-training corpus, causing scores to reflect memorization rather than genuine capability, which means leaderboard rankings may not accurately reflect how models will perform on unseen production inputs.

Q2. When is human evaluation necessary versus automated metrics?

Human evaluation is necessary for open-ended generation tasks where quality has subjective dimensions, for safety-critical judgment calls where automated judge bias could mask failure modes, and for domain-specific accuracy assessment that requires expert knowledge.

Q3. What evaluation dimensions do public benchmarks consistently miss?

Behavioral consistency across rephrased inputs, output calibration, robustness to adversarial inputs, domain accuracy in specific deployment contexts, and open-ended generation quality are the dimensions most systematically undercovered by standard public benchmarks.

Q4. How should RAG systems be evaluated differently from standalone language models?

RAG evaluation requires measuring retrieval component performance, reranking accuracy, and context faithfulness separately from end-to-end output quality, since good end-to-end results can mask component failures that will cause problems under different input distributions.


Multimodal AI Training

Multimodal AI Training: What the Data Actually Demands

The difficulty of multimodal training data is not simply that there is more of it to produce. It is that the relationships between modalities must be correct, not just the data within each modality. An image that is accurately labeled for object detection but paired with a caption that misrepresents the scene produces a model that learns a contradictory representation of reality. 

A video correctly annotated for action recognition but whose audio is misaligned with the visual frames teaches the model the wrong temporal relationship between what happens and how it sounds. These cross-modal consistency problems do not show up in single-modality quality checks. They require a different category of annotation discipline and quality assurance, one that the industry is still in the process of developing the infrastructure to apply at scale.

This blog examines what multimodal AI training actually demands from a data perspective, covering how cross-modal alignment determines model behavior, how annotation quality requirements differ across image, video, and audio modalities, why multimodal hallucination is primarily a data problem rather than an architecture problem, how the data requirements shift as multimodal systems move into embodied and agentic applications, and what development teams need to get right before training data production begins.

What Multimodal AI Training Actually Involves

The Architecture and Where Data Shapes It

Multimodal large language models process inputs from multiple data types by routing each through a modality-specific encoder that converts raw data into a mathematical representation, then passing those representations through a fusion mechanism that aligns and combines them into a shared embedding space that the language model backbone can operate over. The vision encoder handles images and video frames. The audio encoder handles speech and sound. The text encoder handles written content. The fusion layer or connector module is where the modalities are brought together, and it is the component whose quality is most directly determined by the quality of the training data.

A fusion layer that has been trained on accurately paired, consistently annotated, well-aligned multimodal data learns to produce representations where the image of a dog, the word dog, and the sound of a bark occupy regions of the embedding space that are meaningfully related. A fusion layer trained on noisily paired, inconsistently annotated data learns a blurrier, less reliable mapping that produces the hallucination and cross-modal reasoning failures that characterize underperforming multimodal systems. The architecture sets the ceiling. The training data determines how close to that ceiling the deployed model performs.

The Scale Requirement That Changes the Data Economics

Multimodal systems require significantly more training data than their unimodal counterparts, not only in absolute volume but in the combinatorial variety needed to train the cross-modal relationships that define the system’s capabilities. A vision-language model that is trained primarily on image-caption pairs from a narrow visual domain will learn image-language relationships within that domain and generalize poorly to images with different characteristics, different object categories, or different spatial arrangements. 

The diversity requirement is multiplicative across modalities: a system that needs to handle diverse images, diverse language, and diverse audio needs training data whose diversity spans all three dimensions simultaneously, which is a considerably harder curation problem than assembling diverse data in any one modality.

Cross-Modal Alignment: The Central Data Quality Problem

What Alignment Means and Why It Fails

Cross-modal alignment is the property that makes a multimodal model genuinely multimodal rather than simply a collection of unimodal models whose outputs are concatenated. A model with good cross-modal alignment has learned that the visual representation of a specific object class, the textual description of that class, and the auditory signature associated with it are related, and it uses that learned relationship to improve its performance on tasks that involve any combination of the three. A model with poor cross-modal alignment has learned statistical correlations within each modality separately but has not learned the deeper relationships between them.

Alignment failures in training data take several forms. The most straightforward is incorrect pairing: an image paired with a caption that does not accurately describe it, a video clip paired with a transcript that corresponds to a different moment, or an audio recording labeled with a description of a different sound source. Less obvious but equally damaging is partial alignment: a caption that accurately describes some elements of the image but misses others, a transcript that is textually accurate but temporally misaligned with the audio, or an annotation that correctly labels the dominant object in a scene but ignores the contextual elements that determine the scene’s meaning.

The Temporal Alignment Problem in Video and Audio

Temporal alignment is a specific and particularly demanding form of cross-modal alignment that arises in video and audio data. A video is not a collection of independent frames. It is a sequence in which the relationship between what happens at time T and what happens at time T+1 carries meaning that neither frame conveys alone. An action recognition model trained on video data where frame-level annotations do not accurately reflect the temporal extent of the action, or where the action label is assigned to the wrong temporal segment, learns an imprecise representation of the action’s dynamics. Video annotation for multimodal training requires temporal precision that static image annotation does not, including accurate action boundary detection, consistent labeling of motion across frames, and synchronization between visual events and their corresponding audio or textual descriptions.

Audio-visual synchronization is a related challenge that receives less attention than it deserves in multimodal data quality discussions. Human speech is perceived as synchronous with lip movements within a tolerance of roughly 40 to 100 milliseconds. Outside that window, the perceptual mismatch is noticeable to human observers. For a multimodal model learning audio-visual correspondence, even smaller misalignments can introduce noise into the learned relationship between the audio signal and the visual event it accompanies. At scale, systematic small misalignments across a large training corpus can produce a model that has learned a subtly incorrect temporal model of the audio-visual world.
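Operationally, this often reduces to enforcing an offset budget over measured per-clip offsets. In the sketch below, the 20 ms default is an illustrative budget chosen tighter than the perceptual window discussed above; how the offsets are measured in the first place (for example, lip-sync estimation) is a separate problem:

```python
def flag_misaligned_clips(clip_offsets_ms, budget_ms=20):
    """Flag clips whose measured audio-video offset exceeds a training budget.

    `clip_offsets_ms` maps clip IDs to a measured offset in milliseconds
    (negative values meaning audio lags video). Flagged clips are re-synced
    or excluded before the corpus is assembled.
    """
    return {clip_id: offset for clip_id, offset in clip_offsets_ms.items()
            if abs(offset) > budget_ms}

print(flag_misaligned_clips({"clip_001": 5, "clip_002": -62}))
# -> {"clip_002": -62}: re-sync or drop before training.
```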

Image Annotation for Multimodal Training

Beyond Object Detection Labels

Image annotation for multimodal training differs from image annotation for standard computer vision in a dimension that is easy to underestimate: the relationship between the image content and the language that describes it is part of what is being learned, not a byproduct of the annotation. 

An object detection label that places a bounding box around a car is sufficient for training a car detector. The same bounding box is insufficient for training a vision-language model, because the model needs to learn not only that the object is a car but how the visual appearance of that car relates to the range of language that might describe it: vehicle, automobile, sedan, the red car in the foreground, the car partially occluded by the pedestrian. Image annotation services designed for multimodal training need to produce richer, more linguistically diverse descriptions than standard computer vision annotation, and the consistency of those descriptions across similar images is a quality dimension that directly affects cross-modal alignment.

The Caption Diversity Requirement

Caption diversity is a specific data quality requirement for vision-language model training that is frequently underappreciated. A model trained on image-caption pairs where all captions follow a similar template learns to associate visual features with a narrow range of linguistic expression. The model will perform well on evaluation tasks that use similar language but will generalize poorly to the diversity of phrasing, vocabulary, and descriptive style that real-world applications produce. Producing captions with sufficient linguistic diversity while maintaining semantic accuracy requires annotation workflows that explicitly vary phrasing, descriptive focus, and level of detail across multiple captions for the same image, rather than treating caption generation as a single-pass labeling task.
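A cheap proxy for this property is a distinct-n-gram ratio computed across the captions written for the same image: values near 1.0 indicate varied phrasing, values near 0 suggest templated captions. It is only a surface measure, not a full diversity audit, but it catches template collapse early. A sketch:

```python
def caption_diversity(captions, n=2):
    """Distinct-n: unique n-grams divided by total n-grams across captions."""
    total, unique = 0, set()
    for caption in captions:
        tokens = caption.lower().split()
        grams = list(zip(*(tokens[i:] for i in range(n))))
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

print(caption_diversity([
    "a red sedan parked on the street",
    "a red car sits at the curb",
    "red vehicle parked beside the sidewalk",
]))  # close to 1.0: varied phrasing for the same scene
```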

Spatial Relationship and Compositional Annotation

Spatial relationship annotation, which labels the geometric and semantic relationships between objects within an image rather than just the identities of the objects themselves, is a category of annotation that matters significantly more for multimodal model training than for standard object detection.

A vision-language model that needs to answer the question "which cup is to the left of the keyboard?" requires training data that explicitly annotates spatial relationships, not just object identities. The compositional reasoning failures that characterize many current vision-language models, where the model correctly identifies all objects in a scene but fails on questions about their spatial or semantic relationships, are in part a reflection of training data that under-annotates these relationships.

Video Annotation: The Complexity That Scale Does Not Resolve

Why Video Annotation Is Not Image Annotation at Scale

Video is not a large collection of images. The temporal dimension introduces annotation requirements that have no equivalent in static image labeling. Action boundaries, the precise frame at which an action begins and ends, must be annotated consistently across thousands of video clips for the model to learn accurate representations of action timing. Event co-occurrence relationships, which events happen simultaneously and which happen sequentially, must be annotated explicitly rather than inferred. 

Long-range temporal dependencies, where an event at the beginning of a clip affects the interpretation of an event at the end, require annotators who watch and understand the full clip before making frame-level annotations. 

Dense Video Captioning and the Annotation Depth It Requires

Dense video captioning, the task of generating textual descriptions of all events in a video with accurate temporal localization, is one of the most data-demanding tasks in multimodal AI training. Training data for dense captioning requires that every significant event in a video clip be identified, temporally localized to its start and end frames, and described in natural language with sufficient specificity to distinguish it from similar events in other clips. The annotation effort per minute of video for dense captioning is dramatically higher than for single-label video classification, and the quality of the temporal localization directly determines the precision of the cross-modal correspondence the model learns.
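A concrete sense of the annotation payload helps here. The record layout below is a hypothetical schema, not a standard; it shows the minimum a dense-captioning annotation must carry: a per-event temporal extent plus an event-specific description.

```python
# Hypothetical record layout for one dense-captioning annotation. Field
# names are illustrative, not a published schema; the point is that every
# event carries its own temporal extent and free-text description.

from dataclasses import dataclass

@dataclass
class DenseCaptionEvent:
    clip_id: str
    start_frame: int     # first frame of the event, inclusive
    end_frame: int       # last frame of the event, inclusive
    description: str     # description specific enough to distinguish this event

events = [
    DenseCaptionEvent("clip_017", 0, 112, "a person opens the cabinet door"),
    DenseCaptionEvent("clip_017", 98, 240, "the person takes out a mug"),
]
# Overlapping extents (frames 98-112 here) are legitimate: events co-occur.
```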

Multi-Camera and Multi-View Video

As multimodal AI systems move into embodied and Physical AI applications, video annotation requirements extend to multi-camera setups where the same event must be annotated consistently across multiple viewpoints simultaneously. 

A manipulation action that is visible from the robot’s wrist camera, the overhead camera, and a side camera must be labeled with consistent action boundaries, consistent object identities, and consistent descriptions across all three views. Inconsistencies across views produce training data that teaches the model contradictory representations of the same physical event. The multisensor fusion annotation challenges that arise in Physical AI settings apply equally to multi-view video annotation, and the annotation infrastructure needed to handle them is considerably more complex than what single-camera video annotation requires.

Audio Annotation: The Modality Whose Data Quality Is Least Standardized

What Audio Annotation for Multimodal Training Requires

Audio annotation for multimodal training is less standardized than image or text annotation, and the quality standards that exist in the field are less widely adopted. A multimodal system that processes speech needs training data where speech is accurately transcribed, speaker-attributed in multi-speaker contexts, and annotated for the non-linguistic features (tone, emotion, pace, and prosody) that carry meaning beyond the words themselves.

A system that processes environmental audio needs training data where sound events are accurately identified, temporally localized, and described in a way that captures the semantic relationship between the sound and its source. Audio annotation at the quality level that multimodal model training requires is more demanding than transcription alone, and teams that treat audio annotation as a transcription task will produce training data that gives their models a linguistically accurate but perceptually shallow representation of audio content.

The Language Coverage Problem in Audio Training Data

Audio training data for speech-capable multimodal systems faces an acute version of the language coverage problem that affects text-only language model training. Systems trained predominantly on English speech data perform significantly worse on other languages, and the performance gap is larger for audio than for text because the acoustic characteristics of speech vary across languages in ways that require explicit representation in the training data rather than cross-lingual transfer. 

Building multimodal systems that perform equitably across languages requires intentional investment in audio data collection and annotation across linguistic communities, an investment that most programs underweight relative to its impact on deployed model performance. Low-resource languages in AI are directly relevant to audio-grounded multimodal training, where low-resource language communities face the sharpest capability gaps.

Emotion and Paralinguistic Annotation

Paralinguistic annotation, the labeling of speech features that convey meaning beyond the literal content of the words, is a category of audio annotation that is increasingly important for multimodal systems designed for human interaction applications. Tone, emotional valence, speech rate variation, and prosodic emphasis all carry semantic information that a model interacting with humans needs to process correctly. Annotating these features requires annotators who can make consistent judgments about inherently subjective qualities, which in turn requires annotation guidelines that are specific enough to produce inter-annotator agreement and quality assurance processes that measure that agreement systematically.
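Measuring that agreement systematically usually starts with a chance-corrected statistic. The sketch below computes Cohen's kappa for a bucketed paralinguistic label; the three-way valence label set and the example data are invented for illustration.

```python
# Minimal sketch of inter-annotator agreement on a subjective label
# (emotional valence bucketed as neg/neu/pos) using Cohen's kappa,
# which corrects raw agreement for chance. Degenerate cases (expected
# agreement of 1.0) are not handled in this sketch.

from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (observed - expected) / (1 - expected)

a = ["pos", "neu", "pos", "neg", "neu", "pos"]
b = ["pos", "neu", "neg", "neg", "neu", "pos"]
print(round(cohens_kappa(a, b), 3))  # 0.75
```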

Multimodal Hallucination: A Data Problem More Than an Architecture Problem

How Hallucination in Multimodal Models Differs From Text-Only Hallucination

Hallucination in language models is a well-documented failure mode where the model generates content that is plausible in form but factually incorrect. In multimodal models, hallucination takes an additional dimension: the model generates content that is inconsistent with the visual or audio input it has been given, not just with external reality. A model that correctly processes an image of an empty table but generates a description that includes objects not present in the image is exhibiting cross-modal hallucination, a failure mode distinct from factual hallucination and caused by a different mechanism.

Cross-modal hallucination is primarily a training data problem. It arises when the training data contains image-caption pairs where the caption describes content not visible in the image, when the model has been exposed to so much text describing common image configurations that it generates those descriptions regardless of what the image actually shows, or when the cross-modal alignment in the training data is weak enough that the model’s language prior dominates its visual processing. The tendency for multimodal models to generate plausible-sounding descriptions that prioritize language fluency over visual fidelity is a direct consequence of training data where language quality was prioritized over cross-modal accuracy.

How Training Data Design Can Reduce Hallucination

Reducing cross-modal hallucination through training data design requires explicit attention to the accuracy of the correspondence between modalities, not just the quality of each modality independently. Negative examples that show the model what it looks like when language is inconsistent with visual content, preference data that systematically favors visually grounded descriptions over hallucinated ones, and fine-grained correction annotations that identify specific hallucinated elements and provide corrected descriptions are all categories of training data that target the cross-modal alignment failure underlying hallucination. Human preference optimization approaches applied specifically to cross-modal faithfulness, where human annotators compare model outputs for their visual grounding rather than general quality, are among the most effective interventions currently in use for reducing multimodal hallucination in production systems.
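As a concrete illustration, a cross-modal preference record might look like the following. The field names are assumptions rather than a published schema; the point is that the record pairs a visually grounded description against a hallucinated one and preserves the fine-grained correction signal.

```python
# Illustrative shape of one preference-data record targeting cross-modal
# faithfulness: annotators compare two candidate descriptions of the same
# image and choose the one better grounded in the visual content.
# All field names are assumptions for this sketch.

preference_record = {
    "image_id": "img_4821",
    "response_a": "An empty wooden table near a window.",
    "response_b": "A wooden table with two cups and a laptop.",  # hallucinated
    "chosen": "a",                                   # the grounded description
    "rejected_reason": "objects not present in image",
    "hallucinated_spans": ["two cups", "a laptop"],  # fine-grained correction
}
```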

Evaluation Data for Hallucination Assessment

Measuring hallucination in multimodal models requires evaluation data that is specifically designed to surface cross-modal inconsistencies, not just general performance benchmarks. Evaluation sets that include images with unusual configurations, rare object combinations, and scenes that contradict common statistical associations are more diagnostic of hallucination than standard benchmark images that conform to typical visual patterns the model has likely seen during training. Building evaluation data specifically for hallucination assessment is a distinct annotation task from building training data; model evaluation services address it through targeted adversarial data curation designed to reveal the specific cross-modal failure modes most relevant to each system's deployment context.

Multimodal Data for Embodied and Agentic AI

When Modalities Include Action

The multimodal AI training challenge takes on additional complexity when the system is not only processing visual, audio, and language inputs but also taking actions in the physical world. Vision-language-action models, which underpin much of the current development in robotics and Physical AI, must learn not only to understand what they see and hear but to connect that understanding to appropriate physical actions. 

The training data for these systems is not image-caption pairs. It is sensorimotor sequences: synchronized streams of visual input, proprioceptive sensor readings, force feedback, and the action commands that a human operator or an expert policy selects in response to those inputs. VLA model analysis services and the broader context of vision-language-action models and autonomy address the annotation demands specific to this category of multimodal training data.

Instruction Tuning Data for Multimodal Agents

Instruction tuning for multimodal agents, which teaches a system to follow complex multi-step instructions that involve perception, reasoning, and action, requires training data that is structured differently from standard multimodal pairs. Each training example is a sequence: an instruction, a series of observations, a series of intermediate reasoning steps, and a series of actions, all of which need to be consistently annotated and correctly attributed. The annotation effort for multimodal instruction tuning data is substantially higher per example than for standard image-caption pairs, and the quality standards are more demanding because errors in the action sequence or the reasoning annotation propagate directly into the model’s learned behavior. Building generative AI datasets with human-in-the-loop workflows is particularly valuable for this category of training data, where the judgment required to evaluate whether a multi-step action sequence is correctly annotated exceeds what automated quality checks can reliably assess.
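To make the structure concrete, one trajectory record might be laid out as below. The schema is hypothetical; what matters is that observations, reasoning steps, and actions are interleaved in order, so annotation errors can be attributed to a specific step.

```python
# Hypothetical layout of one multimodal instruction-tuning trajectory.
# Keys are illustrative, not a published format; the ordered "steps" list
# is what lets QA attribute an annotation error to a specific step.

trajectory = {
    "instruction": "Put the red mug in the sink, then wipe the counter.",
    "steps": [
        {"observation": "frame_0001.png",
         "reasoning": "Locate the red mug on the counter.",
         "action": "move_gripper(target='red_mug')"},
        {"observation": "frame_0042.png",
         "reasoning": "Mug grasped; sink is to the left.",
         "action": "place(target='sink')"},
        {"observation": "frame_0097.png",
         "reasoning": "Counter has a visible spill.",
         "action": "wipe(region='counter')"},
    ],
    "annotator_id": "ann_27",  # supports per-example QA attribution
}
```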

Quality Assurance Across Modalities

Why Single-Modality QA Is Not Enough

Quality assurance for multimodal training data requires checking not only within each modality but across modalities simultaneously. A QA process that verifies image annotation quality independently and caption quality independently will pass image-caption pairs where both elements are individually correct but the pairing is inaccurate. A QA process that checks audio transcription quality independently and video annotation quality independently will pass audio-video pairs where the transcript is accurate but temporally misaligned with the video. Cross-modal QA, which treats the relationship between modalities as the primary quality dimension, is a distinct capability from single-modality QA and requires annotation infrastructure and annotator training that most programs have not yet fully developed.
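A minimal form of cross-modal QA can be automated before human review. The sketch below checks a caption's object mentions against the image's annotated labels; the substring matching against a toy vocabulary is deliberately naive and stands in for real linguistic grounding.

```python
# Minimal cross-modal QA sketch: verify that every object a caption
# mentions appears in the image's object annotations. Matching a toy
# vocabulary by substring is deliberately naive; a real pipeline would
# use linguistic grounding rather than string matching.

VOCAB = {"car", "cup", "keyboard", "laptop", "pedestrian", "table"}

def caption_objects_supported(caption: str, annotated_labels: set[str]) -> list[str]:
    """Return caption-mentioned labels the image annotation does not support."""
    mentioned = {label for label in VOCAB if label in caption.lower()}
    return sorted(mentioned - annotated_labels)

unsupported = caption_objects_supported(
    "a laptop and a cup on the table",
    annotated_labels={"table", "cup"},
)
print(unsupported)  # ['laptop'] -> pair fails QA despite valid individual parts
```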

Inter-Annotator Agreement in Multimodal Annotation

Inter-annotator agreement, the standard quality metric for annotation consistency, is more complex to measure in multimodal settings than in single-modality settings. Agreement on object identity within an image is straightforward to quantify. Agreement on whether a caption accurately represents the full semantic content of an image requires subjective judgment that different annotators may apply differently. 

Agreement on the correct temporal boundary of an action in a video requires a level of precision that different annotators may interpret differently, even when given identical guidelines. Building annotation guidelines that are specific enough to produce measurable inter-annotator agreement on cross-modal quality dimensions, and measuring that agreement systematically, is a precondition for the kind of training data quality that production multimodal systems require.
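For temporal boundaries specifically, agreement is commonly quantified with temporal IoU: the overlap of two annotators' intervals divided by their union. The snippet below is a minimal sketch; any acceptance threshold applied to it is a project choice, not a standard.

```python
# Sketch of quantifying agreement on action boundaries with temporal IoU.
# A program might require mean tIoU above an agreed threshold before
# guidelines are considered specific enough; the threshold is a project
# decision, not a field-wide standard.

def temporal_iou(a: tuple[float, float], b: tuple[float, float]) -> float:
    """IoU of two [start, end] intervals (in seconds or frames)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# Two annotators marking the same action, in seconds:
print(temporal_iou((12.0, 18.5), (12.8, 19.0)))  # ~0.81
```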

Trust and Safety Annotation in Multimodal Data

Multimodal training data introduces trust and safety annotation requirements that are qualitatively different from text-only content moderation. Images and videos can carry harmful content in ways that text descriptions do not capture. Audio can include harmful speech that automated transcription produces as apparently neutral text. The combination of modalities can produce harmful associations that would not arise from either modality alone. Trust and safety solutions for multimodal systems need to operate across all modalities simultaneously and need to be designed with the specific cross-modal harmful content patterns in mind, not simply extended from text-only content moderation frameworks.

How Digital Divide Data Can Help

Digital Divide Data provides end-to-end multimodal data solutions for AI development programs across the full modality stack. The approach is built around the recognition that multimodal model quality is determined by cross-modal data quality, not by the quality of each modality independently, and that the annotation infrastructure to assess and ensure cross-modal quality requires specific investment rather than extension of single-modality workflows.

On the image side, our image annotation services produce the linguistically diverse, relationship-rich, spatially accurate descriptions that vision-language model training requires, with explicit coverage of compositional and spatial relationships rather than object identity alone. Caption diversity and cross-modal consistency are treated as primary quality dimensions in annotation guidelines and QA protocols.

On the video side, our video annotation capabilities address the temporal annotation requirements of multimodal training data with clip-level understanding as a prerequisite for frame-level labeling, consistent action boundary detection, and synchronization between visual, audio, and textual annotation streams. For embodied AI programs, DDD’s annotation teams handle multi-camera, multi-view annotation with cross-view consistency required for action model training.

On the audio side, our annotation services extend beyond transcription to include paralinguistic feature annotation, speaker attribution, sound event localization, and multilingual coverage, with explicit attention to low-resource linguistic communities. For multimodal programs targeting equitable performance across languages, DDD provides the audio data coverage that standard English-dominant datasets cannot supply.

For programs addressing multimodal hallucination, our human preference optimization services include cross-modal faithfulness evaluation, producing preference data that specifically targets the visual grounding failures underlying hallucination. Model evaluation services provide adversarial multimodal evaluation sets designed to surface hallucination and cross-modal reasoning failures before they appear in production.

Build multimodal AI systems grounded in data that actually integrates modalities. Talk to an expert!

Conclusion

Multimodal AI training is not primarily a harder version of unimodal training. It is a different kind of problem, one where the quality of the relationships between modalities determines model behavior more than the quality of each modality independently. The teams that produce the most capable multimodal systems are not those with the largest training corpora or the most sophisticated architectures. 

They are those that invest in annotation infrastructure that can produce and verify cross-modal accuracy at scale, in evaluation frameworks that measure cross-modal reasoning and hallucination rather than unimodal benchmarks, and in data diversity strategies that explicitly span the variation space across all modalities simultaneously. Each of these investments requires a level of annotation sophistication that is higher than what single-modality programs have needed, and teams that attempt to scale unimodal annotation infrastructure to multimodal requirements will consistently find that the cross-modal quality gaps they did not build for are the gaps that limit their model’s real-world performance.

The trajectory of AI development is toward systems that process the world the way humans do, through the simultaneous integration of what they see, hear, read, and do. That trajectory makes multimodal training data quality an increasingly central competitive factor rather than a technical detail. Programs that build the annotation infrastructure, quality assurance processes, and cross-modal consistency standards now will be better positioned to develop the next generation of multimodal capabilities than those that treat data quality as a problem to be addressed after model performance plateaus. 

Digital Divide Data is built to provide the multimodal data infrastructure that makes that early investment possible across every modality that production AI systems require.

References

Lan, Z., Chakraborty, R., Munikoti, S., & Agarwal, S. (2025). Multimodal AI: Integrating diverse data modalities for advanced intelligence. Emergent Mind. https://www.emergentmind.com/topics/multimodal-ai

Gui, L. (2025). Toward data-efficient multimodal learning. Carnegie Mellon University Language Technologies Institute Dissertation. https://lti.cmu.edu/research/dissertations/gui-liangke-dissertation-document.pdf

Chen, L., Lin, F., Shen, Y., Cai, Z., Chen, B., Zhao, Z., Liang, T., & Zhu, W. (2025). Efficient multimodal large language models: A survey. Visual Intelligence, 3(10). https://doi.org/10.1007/s44267-025-00099-6

Frequently Asked Questions

What makes multimodal training data harder to produce than single-modality data?

Cross-modal alignment accuracy, where the relationship between modalities must be correct rather than just the content within each modality, adds a quality dimension that single-modality annotation workflows are not designed to verify and that requires distinct QA infrastructure to assess systematically.

What is cross-modal hallucination, and how is it different from standard LLM hallucination?

Cross-modal hallucination occurs when a multimodal model generates content inconsistent with its visual or audio input, rather than just inconsistent with factual reality, arising from weak cross-modal alignment in training data rather than from language model statistical biases alone.

How much more training data does a multimodal system need compared to a text-only model?

The volume requirement is substantially higher because diversity must span multiple modality dimensions simultaneously, and quality requirements are more demanding since cross-modal accuracy must be verified in addition to within-modality quality.

Why is temporal alignment in video annotation so important for multimodal model training?

Temporal misalignment in video annotation teaches the model incorrect associations between what happens visually and what is described linguistically or heard aurally, producing models with systematically wrong temporal representations of events and actions.


LLM Fine-Tuning

Why Most Enterprise LLM Fine-Tuning Projects Underdeliver

The premise of enterprise LLM fine-tuning is straightforward enough to be compelling. Take a capable general-purpose language model, train it further on proprietary data from your domain, and get a model that performs markedly better on the tasks that matter to your organization. 

The gap between that premise and what most enterprise fine-tuning projects actually deliver is wide enough to have become one of the more reliably frustrating patterns in enterprise AI adoption. Teams spend months on data preparation and training runs, consume substantial GPU budgets, and arrive at a model that performs comparably to the base model they started with, or worse, performs well on the benchmark they optimized for and poorly on the actual production workload.

The gap is not primarily a technical failure. The algorithms work. Parameter-efficient fine-tuning techniques have matured significantly and are accessible to any team with reasonable engineering resources. The failures are upstream and downstream of the training run itself: in the quality and relevance of the training data, in the mismatch between the fine-tuning objective and the actual production task, in the absence of evaluation frameworks that measure what actually matters, and in the organizational assumptions about what fine-tuning is and is not appropriate for. Addressing these failures requires a clearer understanding of what enterprise LLM fine-tuning can and cannot be expected to deliver, and what the preconditions for a project that actually closes the performance gap look like.

This blog examines why most enterprise LLM fine-tuning projects underdeliver, covering the structural reasons that data quality problems dominate fine-tuning outcomes, and how catastrophic forgetting undermines performance.

What Enterprise Fine-Tuning Is Actually Trying to Solve

The Gap That Fine-Tuning Is Supposed to Close

A general-purpose language model trained on broad internet-scale data has learned a great deal about language, reasoning, and general world knowledge. What it has not learned is your organization’s specific terminology, your domain’s particular conventions, your internal document formats, your compliance constraints, or the nuanced judgment calls your subject matter experts make. Fine-tuning promises that additional training on domain-specific examples can close that gap, producing a model that speaks your domain’s language, follows your conventions, and applies the judgment patterns you need.

That promise is real, but it is more conditional than it usually appears in the initial project framing. Fine-tuning is effective at teaching a model to change its style, follow specific output formats, apply domain vocabulary consistently, and replicate the structure of domain-specific responses. It is considerably less effective at teaching a model new factual knowledge, correcting systematic reasoning errors in the base model, or producing reliable behavior on tasks that differ in meaningful ways from the fine-tuning examples. The mismatch between what teams expect fine-tuning to accomplish and what it reliably delivers is the first place where projects begin to underdeliver.

When Fine-Tuning Is the Right Tool

Fine-tuning is most effective when the production task has a consistent structure that can be demonstrated through examples, when the required behavior is primarily a matter of style, format, or domain register rather than novel knowledge, and when a sufficient volume of high-quality task-representative examples can be assembled. 

Legal document summarization with consistent output structure, customer service response generation in a specific organizational tone, and clinical note formatting for a defined documentation standard: these are use cases where fine-tuning is likely to deliver measurable improvement over prompting alone. Tasks that require the model to retrieve specific factual information, reason across long documents, or apply judgment that varies substantially across cases are often better addressed through retrieval-augmented generation or prompt engineering, and deploying fine-tuning for them is a common source of underperformance.

The Data Quality Problem That Derails Most Projects

Why Training Data Quality Is the Primary Determinant of Fine-Tuning Outcomes

The most consistent finding across enterprise fine-tuning programs that underdeliver is that the training data was not as good as the team believed it to be. This is not a subtle problem. It is the dominant failure mode, appearing in various forms across virtually every project that does not achieve its intended performance improvement. 

The relationship between training data quality and fine-tuning outcome is more direct than in pre-training, because the fine-tuning dataset is small enough that individual quality problems have disproportionate influence on the model’s learned behavior. A systematic error in a pre-training corpus of a hundred billion tokens will have a negligible effect on the model’s overall behavior. The same systematic error in a fine-tuning dataset of ten thousand examples will produce a model that reliably replicates the error. 

The Three Most Common Data Quality Failures

The first is inconsistency across examples. Enterprise data assembled from operational systems, human-written documents, or labeled outputs from multiple annotators will typically contain inconsistent patterns: different levels of formality, different approaches to similar cases, and different levels of detail. A model trained on this inconsistency does not learn a clear behavior pattern. It learns an average of conflicting patterns, which produces outputs that are neither definitively one approach nor definitively another, and that satisfy no one’s actual requirements.

The second is contamination by low-quality examples that are included because they are available rather than because they are good. In enterprise data collection, the temptation to include more examples to reach a volume target is strong, and the quality bar for inclusion is often lower than it should be. Examples that are technically correct but poorly constructed, that use domain vocabulary inconsistently, or that apply the target behavior only partially will actively degrade model performance relative to a smaller, cleaner dataset. The quality-over-quantity principle in fine-tuning data assembly is not a platitude. It reflects how the fine-tuning gradient update works: every example in the dataset shifts the model’s parameters, and bad examples shift them in the wrong direction. Text annotation services that apply consistent quality standards across the full dataset, rather than accepting examples that merely pass a minimum threshold, are a structural requirement for fine-tuning data that actually improves model performance.

The third is a distribution mismatch between the fine-tuning data and the actual production inputs. Teams often assemble fine-tuning data from the examples that are easiest to collect, which are the well-structured, easy cases. The production workload includes edge cases, ambiguous inputs, unusual phrasing patterns, and domain variants that the easy-case dataset does not cover. A model fine-tuned on the easy cases will perform well on easy cases and no better than the base model on everything else. If the easy cases constitute a minority of the production workload, the fine-tuning project will yield disappointing real-world results even when benchmark metrics appear acceptable.
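Distribution mismatch can be probed before training rather than discovered after deployment. The sketch below assumes inputs can be embedded with some sentence-embedding model; it flags production inputs whose nearest training example is dissimilar. The similarity threshold is a placeholder, not a calibrated value.

```python
# Illustrative distribution-mismatch probe over pre-computed embeddings.
# For each production input, find its nearest training example by cosine
# similarity; a large fraction of low-similarity inputs indicates
# production regions the fine-tuning data never covered.

import math

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def coverage_report(train_embs, prod_embs, low_sim_threshold=0.6):
    """Fraction of production inputs far from any training example."""
    nearest = [max(cosine(p, t) for t in train_embs) for p in prod_embs]
    uncovered = sum(sim < low_sim_threshold for sim in nearest)
    return {"uncovered_fraction": uncovered / len(prod_embs),
            "min_nearest_similarity": min(nearest)}
```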

Catastrophic Forgetting: The Problem Teams Discover Too Late

What Catastrophic Forgetting Actually Means in Practice

Catastrophic forgetting is the phenomenon where a language model, when fine-tuned on a specific task, loses some of the general capabilities it possessed before fine-tuning. The mechanism is straightforward: the parameter updates that teach the model the new task overwrite some of the parameter configurations that supported pre-existing capabilities. The result is a model that is better at the fine-tuning task and worse at other tasks it previously handled well.

For enterprise programs, catastrophic forgetting shows up in ways that are not always immediately obvious. A model fine-tuned on legal document analysis may become noticeably worse at general reasoning tasks that legal work occasionally requires. A model fine-tuned on customer service responses may lose some of its ability to handle the off-script queries that make up a meaningful fraction of real customer interactions. A model fine-tuned on a narrow set of document formats may fail to handle format variations that it would have managed competently before fine-tuning. These regressions are often discovered after deployment, when users encounter cases that the evaluation framework did not cover.

Why Parameter-Efficient Fine-Tuning Does Not Fully Solve the Problem

Parameter-efficient fine-tuning approaches, which modify only a small fraction of the model’s parameters while keeping the rest frozen, are often presented as a solution to catastrophic forgetting. The intuition is that smaller parameter changes mean less disruption to pre-existing capabilities. This intuition is partially correct but overstated. Research across multiple model families has demonstrated that even low-rank adaptation methods, which are among the most parameter-efficient approaches available, can produce significant forgetting on tasks that differ from the fine-tuning distribution, particularly when fine-tuning datasets are small and the fine-tuning task is narrow.

There is also a specific forgetting risk that receives less attention in enterprise contexts: the erosion of safety behaviors. Models that have been trained with safety guardrails through preference optimization can lose those guardrails when fine-tuned on datasets that do not reinforce them. An enterprise fine-tuning project that improves task performance while inadvertently degrading safety behavior has created a production risk that may not surface in standard evaluation until it produces a visible failure.

Managing Forgetting Through Dataset Design

The most practical mitigation for catastrophic forgetting in enterprise fine-tuning is dataset design rather than algorithm selection. Including a representative sample of general task examples alongside domain-specific examples in the fine-tuning dataset, sometimes called experience replay or rehearsal, helps preserve the parameter configurations that support general capabilities.

Including examples that exercise the model’s safety behaviors alongside domain task examples helps preserve those behaviors. The tradeoff is that a more diverse fine-tuning dataset requires more careful curation and a larger annotation investment. Human-in-the-loop approaches to building generative AI datasets that include deliberate coverage of both domain-specific and general behavioral requirements produce fine-tuning datasets that are less likely to create the forgetting regressions that teams discover in production.
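A rehearsal mix can be assembled with very little machinery, as in the sketch below. The 20 percent general and 10 percent safety fractions are assumptions for illustration, not recommended ratios.

```python
# Minimal sketch of rehearsal-style dataset assembly: mix fixed fractions
# of general-capability and safety examples into the domain fine-tuning
# set so those behaviors keep receiving gradient signal. The fractions
# are illustrative assumptions only.

import random

def build_finetuning_mix(domain, general, safety,
                         general_frac=0.20, safety_frac=0.10, seed=7):
    rng = random.Random(seed)
    n_general = int(len(domain) * general_frac)
    n_safety = int(len(domain) * safety_frac)
    mixed = (domain
             + rng.sample(general, min(n_general, len(general)))
             + rng.sample(safety, min(n_safety, len(safety))))
    rng.shuffle(mixed)
    return mixed
```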

The Evaluation Problem: Measuring the Wrong Thing

Why Benchmark Performance Does Not Predict Production Performance

The evaluation framework used for a fine-tuning project determines what the project appears to achieve. Teams that evaluate their fine-tuned model against a benchmark constructed from the same distribution as the training data will consistently find that their model performs well. Teams that evaluate against production inputs, including the edge cases, the unusual phrasings, the ambiguous requests, and the off-task queries that real users generate, will find a different picture. The gap between these two pictures is the gap between benchmark performance and production performance, and it is one of the most reliable explanations for why fine-tuning projects that look successful in development underperform in deployment.

The construction of the evaluation set is the most consequential methodological decision in a fine-tuning program. An evaluation set drawn from the same source as the training data, or constructed by the same team with the same selection criteria, will not reveal the distribution gaps and edge case failures that determine real-world performance. An evaluation set that is constructed independently, drawn from actual production inputs, and includes deliberate coverage of the cases the team is most uncertain about is significantly more predictive of deployment performance. Model evaluation services that maintain methodological independence between the fine-tuning program and the evaluation framework are a structural requirement for getting an honest picture of what the fine-tuned model actually delivers.

The Missing Behavioral Dimensions in Standard Evaluation

Standard fine-tuning evaluations typically measure task accuracy on held-out examples from the training distribution. What they rarely measure is behavioral consistency across rephrased inputs, robustness to adversarial or unusual inputs, calibration of confidence alongside accuracy, behavior under out-of-distribution conditions, and adherence to the safety and compliance behaviors the model is expected to maintain. Each of these dimensions can reveal failures that task accuracy does not capture.

Behavioral consistency is particularly important for enterprise deployments. A customer service model that gives different answers to semantically equivalent questions phrased differently is producing a user experience problem that accuracy metrics on a fixed test set will not reveal. A compliance-sensitive application that behaves correctly on standard inputs but incorrectly on slight rephrasings has a reliability problem that only behavioral consistency testing will surface. 

Building these dimensions into the evaluation framework from the start of the project, rather than adding them after a deployment failure draws attention to them, is one of the clearest differences between fine-tuning programs that deliver on their promises and those that do not.
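A behavioral consistency probe can be expressed compactly, as in the sketch below. The `ask_model` callable and the exact-match comparison are stand-ins; a real harness would call the serving API and apply a task-appropriate equivalence check rather than normalized string identity.

```python
# Sketch of a behavioral-consistency probe: send semantically equivalent
# paraphrases through the model and measure how often all phrasings get
# the same answer. `ask_model` is a placeholder for a real serving call.

def consistency_rate(paraphrase_sets: list[list[str]], ask_model) -> float:
    consistent = 0
    for paraphrases in paraphrase_sets:
        answers = {ask_model(p).strip().lower() for p in paraphrases}
        consistent += (len(answers) == 1)  # identical answer across phrasings
    return consistent / len(paraphrase_sets)
```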

Human Evaluation and Where It Cannot Be Replaced

Automated metrics capture some dimensions of output quality and miss others. For tasks where quality is partially subjective, where the correct answer depends on context that is difficult to encode in a metric, or where the model’s behavior needs to meet standards that are easier to recognize than to specify, human evaluation is not supplementary to automated metrics. It is the primary signal. Human preference optimization approaches that systematically collect and incorporate human quality judgments produce evaluation signals that automated metrics cannot replicate, and they are particularly important for catching the behavioral failures that look fine on paper but produce poor experiences when encountered by actual users.

Confusing Fine-Tuning With the Right Solution

When RAG Should Have Been the Answer

One of the most common patterns in enterprise fine-tuning projects that underdeliver is that fine-tuning was the answer to a question that was better answered by retrieval-augmented generation. Fine-tuning teaches a model behavioral patterns and stylistic preferences. It does not give a model reliable access to specific current facts, internal documents, or proprietary information that changes frequently. 

An enterprise that wants its language model to answer accurately about current product specifications, internal policy documents, or recent organizational decisions is unlikely to achieve that through fine-tuning, because fine-tuning encodes statistical patterns from training examples rather than providing a queryable knowledge store. RAG systems that retrieve relevant document chunks at inference time and condition the model’s response on retrieved context are a more appropriate architecture for this category of task, and deploying fine-tuning for it will produce a model that occasionally generates plausible-sounding but incorrect information derived from stale training patterns.

When Prompt Engineering Should Have Come First

Fine-tuning is also regularly deployed as a solution to problems that careful prompt engineering would have resolved at a fraction of the cost. A model that produces outputs in the wrong format when prompted naively may produce the correct format when given a well-structured system prompt with clear instructions and representative examples. A model that uses incorrect terminology when instructed generically may use the correct terminology when provided with a domain glossary in context. 

Prompt engineering services that systematically test the performance improvement achievable through prompt design before committing to a fine-tuning program are a practical and cost-effective step that many projects skip in their eagerness to begin training. The performance ceiling for well-engineered prompts on a capable base model is often higher than teams expect, and establishing that ceiling provides a realistic baseline for evaluating whether fine-tuning delivers meaningful incremental improvement.

The Organizational Assumption That Fine-Tuning Is a One-Time Event

A final underappreciated source of underdelivery is the organizational treatment of fine-tuning as a one-time project rather than a continuous lifecycle. A fine-tuned model that is deployed and left unchanged will experience performance degradation as the production data distribution shifts, as user needs evolve, as new domain terminology emerges, and as the base model it was derived from is updated. 

The initial fine-tuning project is the beginning of a model maintenance commitment, not the end of a capability acquisition effort. Programs that plan and budget for ongoing evaluation, data collection, and re-tuning cycles consistently outperform programs that treat the initial deployment as the finish line.

The Data Flywheel: Why Production Deployment Should Feed Back Into Training

Using Deployment Data to Improve Fine-Tuning Quality

The most valuable source of fine-tuning data for an enterprise model is not a manually curated dataset assembled before training. It is the production data generated by deploying the model and observing how it behaves on real inputs. Production data contains the actual distribution of inputs the model encounters, including the edge cases and unusual patterns that pre-deployment data collection typically underrepresents. It also contains the model’s failures, which are more informative for fine-tuning improvement than its successes.

Building a feedback loop between production deployment and the fine-tuning data pipeline, where failures are flagged, reviewed, corrected by subject matter experts, and incorporated into subsequent training rounds, is the mechanism that transforms a one-time fine-tuning project into a model that continuously improves against the actual production task. This feedback loop requires monitoring infrastructure to detect failures, review workflows to process flagged outputs, and annotation capacity to produce corrected examples at the rate the production system generates failures. Teams that build this infrastructure as part of the initial program design are significantly better positioned than those that attempt to add it retrospectively.

Active Learning and Prioritizing Annotation Effort

Not all production inputs are equally informative for fine-tuning improvement. Inputs on which the model produces confident, correct outputs contribute little to the next training round. Inputs on which the model is uncertain, incorrect, or inconsistent are the most valuable targets for human review and correction. Active learning approaches that prioritize annotation effort toward the most informative examples, rather than randomly sampling from the production stream, produce higher-quality fine-tuning datasets per annotation hour and deliver faster performance improvement per training cycle.
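In its simplest form, that prioritization is a sort over an uncertainty proxy. The sketch below uses the mean token log-probability of the model's own output as the proxy, which is one common heuristic among several; the field names are illustrative.

```python
# Minimal active-learning triage sketch: rank production inputs by an
# uncertainty proxy and send only the top slice to human review. Mean
# token log-probability is one common heuristic; any calibrated
# uncertainty signal could replace it.

def select_for_review(items: list[dict], budget: int) -> list[dict]:
    """items: [{'input': ..., 'mean_logprob': float}, ...] from production logs."""
    ranked = sorted(items, key=lambda x: x["mean_logprob"])  # least confident first
    return ranked[:budget]

queue = select_for_review(
    [{"input": "q1", "mean_logprob": -0.2},
     {"input": "q2", "mean_logprob": -1.7},
     {"input": "q3", "mean_logprob": -0.9}],
    budget=2,
)
# q2 and q3 go to subject-matter-expert review; q1 is skipped.
```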

What a Fine-Tuning Project That Delivers Actually Looks Like

The Preconditions That Predict Success

Fine-tuning projects that deliver on their performance goals share a set of preconditions that projects that underdeliver typically lack. The use case has a clear, consistent structure that can be demonstrated through examples. The performance gap between the base model and the target is primarily a matter of style, domain register, or output format rather than factual knowledge. The evaluation framework measures production-relevant behavior rather than benchmark performance on training-distribution examples. The training dataset is small, clean, and highly representative of the production task rather than large, inconsistent, and assembled from whatever data was available. And the team has established clear baselines through prompt engineering before committing resources to fine-tuning.

The Program Architecture That Supports Sustained Performance

Beyond the initial project, the organizational architecture that supports sustained fine-tuning performance includes monitoring infrastructure to detect production failures and distribution shift, annotation capacity to process flagged outputs and produce corrected training examples, a regular re-tuning cycle that keeps the model current with production data distribution, and an evaluation framework that runs on each model version to catch regressions before deployment. Agentic AI systems that incorporate LLMs into complex workflows place additional demands on this architecture because failures in fine-tuned components can compound across the workflow in ways that are harder to diagnose than failures in standalone model deployments.

How Digital Divide Data Can Help

Digital Divide Data provides the data quality, annotation, and evaluation infrastructure that enterprise LLM fine-tuning programs need to deliver on their performance goals rather than falling into the familiar patterns of underperformance. The approach is built around the recognition that fine-tuning outcomes are primarily determined upstream and downstream of the training run itself, and that the training algorithm is rarely the limiting factor.

On the data side, DDD’s data collection and curation services are designed to produce fine-tuning datasets that are genuinely representative of the production task, consistent in quality across all examples, and diverse enough to cover the distribution the model will encounter in deployment. Dataset design explicitly addresses the coverage of edge cases, behavioral consistency requirements, and safety-relevant examples that standard data assembly processes tend to underweight.

On the evaluation side, our model evaluation services provide the methodological independence between the fine-tuning program and the evaluation framework that is necessary for an honest assessment of production performance. Evaluation frameworks are designed to cover production-relevant behavior, including edge cases, behavioral consistency, safety adherence, and out-of-distribution robustness, rather than focusing exclusively on benchmark accuracy.

For programs working with human preference optimization to align fine-tuned models with quality and safety requirements, RLHF and DPO data services provide the human quality signal that automated metrics cannot supply. For teams designing the fine-tuning data pipeline to incorporate production feedback, DDD’s active learning-informed annotation workflows ensure that human review effort is directed toward the examples that most improve model performance rather than spread uniformly across a production stream.

Build fine-tuning programs that actually close the performance gap. Talk to an Expert!

Conclusion

The underdelivery pattern in enterprise LLM fine-tuning is not a mystery. It follows predictably from a set of recurring errors: training data that is inconsistent, unrepresentative, or assembled from whatever was available rather than what was needed; evaluation frameworks that measure benchmark performance rather than production-relevant behavior; catastrophic forgetting that erodes general capabilities and safety behaviors in ways that standard evaluation does not detect; and organizational assumptions about fine-tuning that treat it as a one-time project rather than a continuous lifecycle. Each of these errors has a solution that is known, practical, and implementable without heroic engineering effort. The programs that deliver on their fine-tuning goals are not those that have access to better algorithms. They are those that treat data quality, evaluation rigor, and lifecycle planning with the same seriousness that they bring to model selection and training infrastructure.

For enterprise leaders evaluating their AI investment, the practical implication is that the return on a fine-tuning program is more sensitive to the quality of the data and evaluation infrastructure than to the choice of base model or fine-tuning technique. Investing in those foundations, through structured data curation, production-representative evaluation, and ongoing annotation capacity, is the most reliable lever for closing the gap between the performance that fine-tuning promises and the performance that production deployments actually need. 

Digital Divide Data is built to provide exactly that infrastructure, ensuring that the fine-tuning investment produces models that perform in deployment, not just in development.

References 

Raj J, M., Warrier, H., Desai, A., & Menon, S. (2024). Fine-tuning LLM for enterprise: Practical guidelines and recommendations. arXiv. https://arxiv.org/abs/2404.10779

Li, H., Ding, L., Fang, M., & Tao, D. (2024). Revisiting catastrophic forgetting in large language model tuning. Findings of EMNLP 2024. Association for Computational Linguistics. https://aclanthology.org/2024.findings-emnlp.249

Biderman, S., Portes, J., Ortiz, J. J., Paul, M., Greengard, A., Jennings, C., King, D., Havens, S., Chiley, V., Frankle, J., Blakeney, C., & Cunningham, J. P. (2024). LoRA learns less and forgets less. Transactions on Machine Learning Research. https://arxiv.org/abs/2405.09673

VentureBeat. (2025, February). MIT’s new fine-tuning method lets LLMs learn new skills without losing old ones. VentureBeat. https://venturebeat.com/orchestration/mits-new-fine-tuning-method-lets-llms-learn-new-skills-without-losing-old

Frequently Asked Questions

How much training data does an enterprise LLM fine-tuning project typically need?

A few hundred to a few thousand high-quality, task-representative examples are often sufficient for meaningful fine-tuning improvement; volume matters less than quality and representativeness of the production distribution.

What is catastrophic forgetting, and how does it affect enterprise models?

Catastrophic forgetting occurs when fine-tuning on a specific task overwrites parameter configurations supporting other capabilities, causing the model to perform worse on tasks it handled well before fine-tuning, including general reasoning and safety behaviors.

When should an enterprise choose RAG over fine-tuning?

RAG is more appropriate when the task requires access to specific, current, or frequently updated factual information, since fine-tuning encodes behavioral patterns rather than providing reliable access to specific knowledge.

How do you build an evaluation framework that reflects production performance?

Draw the evaluation set from actual production inputs rather than the same source as training data, include deliberate coverage of edge cases and behavioral consistency, and maintain methodological independence between the team building the fine-tuning dataset and the team constructing the evaluation set.


ODD Analysis

ODD Analysis for AV: Why It Matters, and How to Get It Right

Every autonomous driving program reaches a moment when the question shifts from whether the technology works to where and under what conditions it works reliably enough to be deployed. That question has a formal answer in the engineering and regulatory world, and it is called the Operational Design Domain (ODD). The ODD is the structured specification of the environments, conditions, and scenarios within which an automated driving system is designed to operate safely. It is not a general claim about system capability. It is a bounded, documented commitment that defines the edges of what the system is built to handle, and by implication, what lies outside those edges.

The gap between programs that manage their ODD thoughtfully and those that treat it as paperwork shows up early. A poorly defined ODD leads to underspecified test coverage, safety cases that do not hold up under regulatory review, and systems that are deployed in conditions they were never validated against. A well-defined ODD, by contrast, anchors the entire development and validation process. It determines which scenarios need to be tested, which edge cases need to be curated, where simulation is sufficient, and where real-world data is necessary, and how expansion to new geographies or operating conditions should be managed. Getting ODD analysis right is therefore not a compliance exercise. It is a foundation for everything that comes after it.

This blog explains what ODD analysis actually involves for ADAS and autonomous driving programs, how ODD taxonomies and standards structure the domain definition process, what the data and annotation implications of a well-specified ODD are, and how to get it right.

What the Operational Design Domain Actually Defines

The Operational Design Domain specifies the conditions under which a given driving automation system is designed to function. That definition is precise by intent. The ODD does not describe where a system usually works or where it works most of the time. It describes the bounded set of conditions within which the system is designed to operate safely, and outside of which the system is expected to either hand control back to a human or execute a minimal risk condition.

Those conditions span multiple dimensions. 

- Road type and geometry: Is the system designed for motorways, urban arterials, residential streets, or a specific mix?
- Speed range: What is the minimum and maximum vehicle speed within the ODD?
- Time of day: Is daytime-only operation assumed, or does the system operate at night?
- Weather and visibility: What precipitation levels, fog densities, and ambient light conditions are within scope?
- Infrastructure requirements: Does the system require lane markings to be present and legible, traffic signals to be functioning, or specific road surface conditions?
- Traffic density and agent types: Is the system validated against cyclists and pedestrians, or only against other motor vehicles?

Why Unstructured ODD Definitions Fail

The instinct among many development teams, particularly at early program stages, is to define the ODD in natural language: "The system will operate on highways in good weather." That kind of description has the virtue of being readable and the significant vice of being ambiguous. What counts as a highway? What counts as good weather? At what point does light rain become weather outside the ODD? Without a structured taxonomy, these questions have no definitive answers, and the gaps between them create space for validation that is technically compliant but substantively incomplete.

Structured taxonomies solve this by breaking the ODD into hierarchically organized, formally defined attributes, each with specified values or value ranges. Road type is not a single attribute. It branches into motorway, dual carriageway, single carriageway, urban road, and sub-categories within each, each with associated infrastructure characteristics. Environmental conditions branch into precipitation type and intensity, visibility range, lighting conditions, road surface state, and seasonal factors. Each branch can be assigned a permissive value (within ODD), a non-permissive value (outside ODD), or a conditional value (within ODD subject to specific constraints).
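One illustration of what such a structured taxonomy looks like in machine-readable form appears below. The attribute names and values are invented for the example; a real program would follow a formal taxonomy, such as an ISO 34503-style attribute hierarchy, rather than this ad hoc layout.

```python
# Hypothetical encoding of a structured ODD attribute taxonomy. Each leaf
# is permissive, non-permissive, or conditional; conditional entries carry
# their constraint alongside the status. Names and values are illustrative.

from enum import Enum

class Status(Enum):
    PERMISSIVE = "within ODD"
    NON_PERMISSIVE = "outside ODD"
    CONDITIONAL = "within ODD subject to constraints"

odd = {
    "road_type": {
        "motorway": Status.PERMISSIVE,
        "urban_road": Status.NON_PERMISSIVE,
    },
    "precipitation": {
        "none": Status.PERMISSIVE,
        "rain_light": (Status.CONDITIONAL, "max speed 90 km/h"),
        "rain_heavy": Status.NON_PERMISSIVE,
    },
    "lighting": {
        "daylight": Status.PERMISSIVE,
        "night_lit": (Status.CONDITIONAL, "street lighting present"),
        "night_unlit": Status.NON_PERMISSIVE,
    },
}
```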

ODD Analysis as an Engineering Process

The Difference Between Defining and Analyzing

ODD definition, the act of specifying which conditions are within scope, is the starting point. ODD analysis goes further. It asks what the system’s behavior looks like across the full breadth of the defined ODD, where the system’s performance begins to degrade as conditions approach the ODD boundary, and what the transition behavior looks like when conditions move from inside to outside the ODD. A system that functions well in the center of its ODD but degrades unpredictably as it approaches boundary conditions has an ODD analysis problem, even if the ODD specification itself is well-formed.

The process of analyzing the ODD begins with mapping system capabilities against ODD attributes. For each attribute in the ODD taxonomy, the engineering team should understand how the system’s performance varies across the range of permissive values, where performance begins to degrade, and what triggers the boundary between permissive and non-permissive. That understanding comes from systematic testing across the attribute space, which requires both real-world data collection in representative conditions and simulation for conditions that cannot be safely or efficiently collected in the real world.

The Relationship Between ODD Analysis and Scenario Selection

The ODD specification is the source document for scenario-based testing. Once the ODD is formally defined, the scenario library for validation should cover the full cross-product of ODD attributes at sufficient density to demonstrate that system performance is acceptable across the entire space, not just at the attribute midpoints that are most convenient to test. 

ODD coverage metrics, which quantify what proportion of the attribute space has been tested at what density, provide the only rigorous basis for answering the question of whether testing is complete. Edge case curation is the process of specifically targeting the parts of the ODD that are most likely to produce safety-relevant behavior but least likely to be encountered during normal testing, the boundary conditions, the rare combinations of adverse attributes, and the scenarios that fall just inside the ODD limit. Without systematic edge case coverage, a validation program may have excellent average-case performance evidence and serious gaps in the conditions that matter most.

Coverage Metrics and When Testing Is Enough

Coverage metrics for ODD-based testing answer the question that every validation team needs to answer before a regulatory submission: how much of the ODD has been tested, and how thoroughly? The most basic metric is scenario coverage, the proportion of ODD attribute combinations that have at least one test case. More sophisticated metrics weight coverage by the frequency of conditions in the intended deployment environment, by the risk level associated with each condition combination, or by the sensitivity of system performance to variation in each attribute. Performance evaluation against these metrics provides the quantitative basis for the safety argument that the system has been tested across a representative and complete sample of its operational domain.
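The basic scenario-coverage metric described above reduces to a small computation, sketched below: enumerate the cross-product of permissive attribute values and count the combinations with at least one test case. Frequency or risk weighting extends this; it does not replace it.

```python
# Sketch of the basic scenario-coverage metric: the share of ODD attribute
# combinations that have at least one test case. Tested combinations must
# follow the same attribute ordering as the dict.

from itertools import product

def scenario_coverage(attribute_values: dict[str, list[str]],
                      tested: set[tuple[str, ...]]) -> float:
    combos = list(product(*attribute_values.values()))
    covered = sum(combo in tested for combo in combos)
    return covered / len(combos)

attrs = {"road": ["motorway", "dual_carriageway"],
         "rain": ["none", "light"],
         "light": ["day", "night"]}
tested = {("motorway", "none", "day"), ("motorway", "light", "day"),
          ("dual_carriageway", "none", "night")}
print(scenario_coverage(attrs, tested))  # 3 of 8 combinations -> 0.375
```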

Data and Annotation Implications of ODD Analysis

How the ODD Shapes Data Collection Requirements

The ODD is not just an engineering specification. It is a data requirements document. Every attribute in the ODD taxonomy implies a data collection and annotation requirement. If the ODD includes nighttime operation, the program needs annotated data from nighttime driving across the range of road types and weather conditions within scope. If the ODD includes adverse weather, the program needs data from rain, fog, and low-visibility conditions, annotated with the same label quality as clear-weather data. If the ODD includes specific road infrastructure types, the program needs data from those infrastructure types, annotated with the infrastructure attributes that the perception system depends on. The ML data annotation pipeline is therefore directly shaped by the ODD specification: what data is needed, in what conditions, at what volume and diversity, and to what accuracy standard.

The annotation implications of boundary conditions deserve particular attention. Data collected near the ODD boundary, in conditions that approach but do not cross the non-permissive threshold, is the most safety-critical data in the training and validation corpus. A perception model that has been trained primarily on clear, well-lit, high-visibility data but is expected to operate right up to the edge of its low-visibility ODD boundary needs specific training exposure to data collected at that boundary. Annotating boundary-condition data correctly, ensuring that object labels remain accurate and complete as conditions degrade, requires annotators who understand both the task and the sensor physics of the conditions being labeled.

Geospatial Data and ODD Geography

For programs with geographically bounded ODDs, the annotation implications also extend to geospatial data. A system designed to operate in a specific city or region needs HD map coverage, infrastructure data, and traffic behavior annotations for that geography. A system designed to expand its ODD to a new market needs equivalent data from the new geography before the expansion can be validated. DDD’s geospatial data capabilities and the broader context of geospatial data challenges for Physical AI directly address this requirement, ensuring that the geographic scope of the ODD is matched by the geographic scope of the annotated data underlying the system.

The Multisensor Challenge at ODD Boundaries

At ODD boundary conditions, multisensor fusion behavior is particularly important and particularly difficult to annotate. In clear conditions, camera, LiDAR, and radar outputs are consistent and mutually reinforcing. At the edge of the ODD, sensor degradation modes begin to diverge. A dense fog condition that keeps visibility just within the ODD limit will degrade camera performance substantially while affecting LiDAR and radar differently and to different degrees. The fusion system’s behavior in these divergent-degradation conditions is what determines whether the system responds safely or not. Annotating the ground truth for sensor fusion behavior at ODD boundaries requires understanding of both the sensor physics and the fusion logic, and it is one of the more technically demanding annotation tasks in the ADAS data workflow.

ODD Boundaries and the Transition to Minimal Risk Condition

A well-specified ODD defines not only what is inside but also what the system does when conditions move outside. The minimal risk condition, the safe state the system transitions to when it can no longer operate within its ODD, is a fundamental component of the safety case for any Level 3 or higher system. Whether that condition is a controlled stop at the roadside, a handover to human control with appropriate warning time, or a gradual speed reduction to a safe following mode depends on the system architecture and the nature of the ODD exit.

Specifying the transition behavior is part of ODD analysis, not separate from it. The engineering team needs to understand not just where the ODD boundary is but how quickly boundary conditions can be reached from typical operating conditions, how reliably the system detects that it is approaching the boundary, and whether the transition behavior provides sufficient time and warning for safe human takeover where human intervention is the intended response. Systems that detect ODD exit late, or that transition abruptly without adequate warning, may have a correctly specified ODD and a dangerously incomplete ODD analysis.
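One way to make the takeover-timing question concrete is a simple feasibility check: given the current distance to the boundary, the rate at which conditions are deteriorating, and the monitor's detection latency, is the remaining time at least the required warning time? The sketch below assumes a single scalar attribute and a placeholder 10-second warning requirement; real systems track many attributes and derive the requirement from their safety case.

```python
def takeover_warning_ok(distance_to_boundary: float,
                        approach_rate: float,
                        detection_latency_s: float,
                        required_warning_s: float = 10.0) -> bool:
    """Does the system detect an approaching ODD exit early enough for safe handover?

    `distance_to_boundary` and `approach_rate` are in the attribute's own units
    (e.g. mm/hr of rain, and mm/hr of intensification per second); the 10 s
    warning requirement is a placeholder, not a regulatory figure.
    """
    if approach_rate <= 0:
        return True                       # conditions are stable or improving
    time_to_exit_s = distance_to_boundary / approach_rate
    return time_to_exit_s - detection_latency_s >= required_warning_s

# Rain at 7 mm/hr intensifying by 0.5 mm/hr each second, ODD limit 10 mm/hr:
print(takeover_warning_ok(distance_to_boundary=3.0, approach_rate=0.5,
                          detection_latency_s=1.0, required_warning_s=10.0))
# False: 6 s to exit minus 1 s detection leaves 5 s, short of the 10 s requirement
```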

Common Mistakes in ODD Definition and Analysis

Defining the ODD to Fit the Existing Test Coverage

The most common and consequential mistake in the ODD definition is working backwards from what has been tested rather than forward from the system’s intended deployment environment. A team that defines its ODD after the fact to match the test conditions it has already covered may produce a formally complete ODD specification that nonetheless excludes conditions the system will encounter in real deployment. This approach inverts the intended logic of ODD analysis, where the ODD should drive the test coverage, not be shaped by it.

Underspecifying Boundary Conditions

A related mistake is specifying ODD attributes as simple binary permissive or non-permissive categories without capturing the performance gradient that exists between the attribute midpoint and the boundary. A system whose specified rain limit is 10 mm per hour but whose performance begins to degrade at 8 mm per hour has a boundary gradient that a simple binary specification does not capture. Underspecifying boundary conditions leads to safety margins that are tighter than the specification suggests, which in turn leads to ODD monitoring systems that trigger transitions too late.
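A machine-readable specification can capture this gradient directly by recording a degradation-onset threshold alongside the hard limit, so that monitoring distinguishes "inside the ODD but in the degradation gradient" from "nominal". The sketch below reuses the rain figures from above; the class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class GradientAttribute:
    """ODD attribute specified with a degradation onset, not just a hard limit."""
    name: str
    degradation_onset: float   # performance begins to fall off here
    odd_limit: float           # hard permissive/non-permissive boundary

    def status(self, observed: float) -> str:
        if observed >= self.odd_limit:
            return "exit"      # outside the ODD: begin minimal risk transition
        if observed >= self.degradation_onset:
            return "margin"    # inside the ODD but within the degradation gradient
        return "nominal"

rain = GradientAttribute("rain_rate_mm_per_hr", degradation_onset=8.0, odd_limit=10.0)
print(rain.status(9.0))  # "margin": triggers earlier than a binary 10 mm/hr check would
```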

Treating ODD Expansion as a Software Update

Expanding the ODD, whether by adding nighttime operation, extending the speed range, or including new road types or geographies, is not a software update. It is a re-validation event that requires new data collection, new annotation, new scenario coverage analysis, and updated safety case evidence for every attribute that has changed. Programs that treat ODD expansion as a configuration change rather than a validation exercise introduce unquantified risk into their systems. The incremental expansion methodology, where each new ODD attribute is validated separately and then integrated with existing coverage evidence, is the appropriate approach.

Disconnecting ODD Analysis from the Scenario Library

A final common failure mode is maintaining the ODD specification and the scenario library as separate artifacts that are not formally linked. When the ODD changes and the scenario library is not automatically updated to reflect the new attribute space, coverage gaps accumulate silently. Programs that maintain a formal, traceable link between ODD attributes and scenario metadata, so that each scenario is tagged with the ODD conditions it exercises, are in a significantly better position to detect and close coverage gaps when the ODD evolves. DDD’s simulation operations services include scenario tagging workflows designed to maintain exactly this kind of traceability between ODD specifications and the scenario library.
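The mechanics of such a traceable link are simple to sketch. Assuming each scenario carries tags naming the ODD attribute values it exercises, a coverage-gap check reduces to a set difference between the ODD specification and the union of tags. The identifiers below are invented for illustration and do not reflect DDD's internal tooling.

```python
# Each scenario is tagged with the ODD conditions it exercises; when the ODD
# changes, unexercised attribute values surface as coverage gaps automatically.
scenario_tags = {
    "scn_0041": {"illumination": "night", "road_type": "urban_street"},
    "scn_0042": {"illumination": "day", "road_type": "divided_highway"},
}

odd_spec = {
    "illumination": {"day", "dusk", "night"},   # "dusk" newly added to the ODD
    "road_type": {"divided_highway", "urban_street"},
}

def coverage_gaps(odd_spec: dict, scenario_tags: dict) -> dict:
    """Return the attribute values in the ODD that no scenario currently exercises."""
    exercised = {attr: set() for attr in odd_spec}
    for tags in scenario_tags.values():
        for attr, value in tags.items():
            if attr in exercised:
                exercised[attr].add(value)
    return {attr: values - exercised[attr] for attr, values in odd_spec.items()}

print(coverage_gaps(odd_spec, scenario_tags))
# {'illumination': {'dusk'}, 'road_type': set()}: the new attribute value is untested
```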

How Digital Divide Data Can Help

Digital Divide Data provides end-to-end ODD analysis services for autonomous driving and broader Physical AI programs, supporting the structured definition, validation, and expansion of operational design domains at every stage of the development lifecycle. The approach starts from the recognition that ODD analysis is a data discipline, not just a specification exercise, and that the quality of the data and annotation underlying each ODD attribute is what determines whether the ODD commitment can actually be validated.

On the validation side, DDD’s edge case curation services identify and build annotated examples of the ODD boundary conditions that most need validation coverage, while the simulation operations capabilities support scenario library development that is systematically linked to the ODD attribute space. ODD coverage metrics are tracked against the scenario library throughout the validation program, providing the quantitative coverage evidence that regulatory submissions require.

For programs preparing regulatory submissions, Digital Divide Data’s safety case analysis services support the documentation and evidence generation required to demonstrate that the ODD has been defined, validated, and monitored to the standards that NHTSA, UNECE, and EU regulators expect. For teams expanding their ODD to new geographies or conditions, DDD provides the data collection planning, annotation, and coverage analysis support that each incremental expansion requires.

Build a rigorous ODD analysis program that regulators and safety teams can trust. Talk to an expert!

Conclusion

ODD analysis is the foundation on which everything else in autonomous driving development rests. The scenario library, the training data requirements, the simulation environment, the safety case, and the regulatory submission: all of them trace back to a clear, structured, and rigorously validated specification of the conditions the system is designed to handle. Programs that invest in getting this foundation right from the start, using structured taxonomies, machine-readable specifications, and ODD-linked coverage metrics, build on solid ground. Those that treat the ODD as a compliance artifact to be completed after the fact find themselves reconstructing it under pressure, often with gaps they cannot close before a submission deadline. The investment in rigorous ODD analysis is not proportional to the ODD’s complexity. It is proportional to everything that depends on it.

As autonomous systems move from structured, controlled deployment environments to broader public operation across diverse geographies and conditions, the ODD becomes not just an engineering tool but a public safety instrument. The clarity with which a development team can answer the question “Where does your system operate safely?” is the clarity with which regulators, insurers, and the public can assess the system’s safety case.

References 

International Organization for Standardization. (2023). ISO 34503:2023 Road vehicles: Test scenarios for automated driving systems — Specification and categorization of the operational design domain. ISO. https://www.iso.org/standard/78952.html

ASAM e.V. (2024). ASAM OpenODD: Operational design domain standard for ADAS and ADS. ASAM. https://www.asam.net/standards/detail/openodd/

United Nations Economic Commission for Europe. (2024). Guidelines and recommendations for ADS safety requirements, assessments, and test methods. UNECE WP.29. https://unece.org/transport/publications/guidelines-and-recommendations-ads-safety-requirements-assessments-and-test

Hans, O., & Walter, B. (2024). ODD design for automated and remote driving systems: A path to remotely backed autonomy. IEEE International Conference on Intelligent Transportation Engineering (ICITE). https://www.techrxiv.org/users/894908/articles/1271408

Frequently Asked Questions

What is the difference between an ODD and an operational domain?

An operational domain describes all conditions the vehicle might encounter, while the ODD is the bounded subset of those conditions that the automated system is specifically designed and validated to handle safely.

Can an ODD be defined before the system is built?

Yes, and it should be. Defining the ODD early shapes the data collection, annotation, and validation program rather than being reconstructed from whatever testing has already been completed, which is the more common but less rigorous approach.

How does the ODD relate to edge case testing?

Edge cases are the scenarios at or near the ODD boundary that are most likely to produce safety-relevant behavior and least likely to be encountered during normal testing, making them the most critical part of the ODD to curate and validate specifically.

What happens when a vehicle exits its ODD during operation?

The system is expected to either transfer control to a human driver with sufficient warning time or execute a low-risk maneuver, such as a controlled stop, depending on the automation level and the nature of the ODD exceedance.


Humanoid Training Data and the Problem Nobody Is Talking About

Spend a week reading humanoid robotics coverage, and you will hear a great deal about joint torque, degrees of freedom, battery runtime, and the competitive landscape between Figure, Agility, Tesla, and Boston Dynamics. These are real and important topics. They are also the visible part of a much larger iceberg. The part below the waterline is data: the enormous, structurally complex, expensive-to-produce training data that determines whether a humanoid robot that can walk and lift boxes in a controlled warehouse pilot can also navigate an unexpected obstacle, pick up an unfamiliar container, or recover gracefully from a failed grasp in a real facility with real variation.

In this blog, we examine why humanoid training data is harder to collect and annotate than text or image data, what specific data modalities these systems require, and what development teams need in order to build systems that work in the real world.

What Humanoid Training Data Actually Involves

The modality stack

A production-capable humanoid robot learning to perform a manipulation task in a real environment needs training data that captures the full sensorimotor loop of the task. That means egocentric RGB video from cameras mounted on or near the robot’s head, capturing what the robot sees as it acts. It means depth data providing metric scene geometry. It means 3D LiDAR point clouds for spatial awareness in larger environments. It means joint angle and joint velocity time series for every degree of freedom in the kinematic chain. It means force and torque sensor readings at the wrist and end-effector. And for dexterous manipulation tasks, it means tactile sensor data from fingertip sensors that can distinguish the difference between a secure grip and one that is about to slip.
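A concrete way to picture this modality stack is as the schema of a single synchronized timestep. The sketch below is an illustrative record layout, not a standard format; the array shapes and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class DemonstrationFrame:
    """One timestep of a teleoperated humanoid demonstration (illustrative schema)."""
    timestamp_ns: int
    rgb: np.ndarray                  # egocentric camera image, e.g. (H, W, 3) uint8
    depth: np.ndarray                # metric depth map aligned to the RGB frame
    lidar_points: np.ndarray         # (N, 3) point cloud for larger-scene awareness
    joint_positions: np.ndarray      # one value per degree of freedom
    joint_velocities: np.ndarray
    wrist_wrench: np.ndarray         # 6-DoF force/torque: (fx, fy, fz, tx, ty, tz)
    tactile: Optional[np.ndarray]    # fingertip pressure map; None without tactile sensing
```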

The annotation requirements that follow

Raw multi-modal sensor data is not training data. It becomes training data through annotation: the labeling of object identities and spatial positions, the segmentation of task phases and sub-task boundaries, the labeling of contact events, grasp outcomes, and failure modes, the assignment of natural language descriptions to action sequences, and the quality filtering that removes demonstrations that are too noisy, too slow, or too inconsistent to contribute usefully to policy learning. Each of these annotation tasks has different requirements, different skill demands, and different quality standards. Producing them at the volume and consistency that foundation model training needs is not a bottleneck that better algorithms alone will resolve. It is a data collection and annotation infrastructure problem, and it requires dedicated annotation capacity built specifically for physical AI data.

Teleoperation: The Primary Data Collection Method and Its Limits

Why teleoperation dominates humanoid data collection

Teleoperation, where a human operator directly controls the humanoid robot’s movements while the robot records its sensor outputs and the operator’s control signals as a training demonstration, has become the dominant method for humanoid training data collection. The reason is straightforward: it is the most reliable way to generate high-quality demonstrations of complex tasks that the robot cannot yet perform autonomously. A teleoperated demonstration shows the robot what success looks like at the level of sensor-to-action detail that imitation learning algorithms require.

The quality problem in teleoperated demonstrations

Teleoperated demonstrations vary enormously in quality. An operator who is fatigued, distracted, or performing an unfamiliar task will produce demonstrations that include inefficient trajectories, hesitation pauses, unnecessary corrective movements, and failed attempts that have to be discarded or carefully annotated as negative examples. Even demonstrations produced by expert operators in controlled conditions transfer poorly to the diversity of real operating environments. A demonstration of picking up a specific bottle in a specific lighting condition, at a specific position on a shelf, does not generalize to picking up a different container at a different position in different light. Generalization requires demonstration diversity, and producing diverse demonstrations of sufficient quality is expensive.

The annotation layer on top of teleoperated demonstrations adds further complexity. Determining which demonstrations are high-quality enough to include in the training set, where in each demonstration the relevant task phases begin and end, and whether a grasp that succeeded in the demonstration would generalize to variations of the same task: these are judgment calls that require annotators with domain knowledge. Human-in-the-loop annotation for humanoid training data is not the same as image labeling. It requires annotators who understand embodied motion, task structure, and the relationship between sensor signals and physical outcomes.
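The first of those judgment calls, deciding which demonstrations are candidates for the training set, is often preceded by an automated pre-filter before human review. A minimal sketch, with placeholder thresholds that a real program would tune per task category:

```python
def keep_demonstration(episode: dict,
                       max_duration_s: float = 30.0,
                       max_pause_s: float = 2.0,
                       require_success: bool = True) -> bool:
    """Heuristic pre-filter applied before human annotation review."""
    if require_success and not episode["grasp_succeeded"]:
        return False              # failed grasps go to the negative-example queue
    if episode["duration_s"] > max_duration_s:
        return False              # unusually slow episodes suggest operator trouble
    if episode["longest_pause_s"] > max_pause_s:
        return False              # long hesitation pauses degrade imitation targets
    return True

episode = {"grasp_succeeded": True, "duration_s": 14.2, "longest_pause_s": 0.8}
print(keep_demonstration(episode))  # True: passes through to human review
```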

Imitation Learning and the Data Volume Problem

Imitation learning, where a robot policy is trained to reproduce the actions observed in human demonstrations, is the dominant learning paradigm for humanoid manipulation tasks. Its appeal is clear: if you can show the robot what to do with enough fidelity and enough variation, it can learn to reproduce that behavior across a range of conditions. The challenge is that imitation learning’s performance typically scales with both the volume and diversity of demonstration data. A policy trained on 50 demonstrations of a task in one configuration may perform reliably in that configuration but fail in any configuration that differs meaningfully from the training distribution. Achieving the kind of generalization that makes a humanoid robot commercially useful, the ability to perform a task across the range of objects, positions, lighting conditions, and human interaction patterns that a real deployment environment involves, requires a demonstration library that may run to thousands of episodes per task category.

What makes demonstration data diverse enough to generalize

The diversity requirements for humanoid demonstration data are more demanding than they might appear. It is not sufficient to vary the visual appearance of the scene. A demonstration library that includes images of the same object in ten different lighting conditions, but always at the same height and orientation, has not solved the generalization problem. True generalization requires variation across object instances, object positions and orientations, operator approaches, surface properties, partial occlusions, and interaction sequences. Producing that variation systematically, and annotating it consistently, requires a data collection methodology that is closer to scientific experimental design than to ad hoc video capture. 
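A simple first check on whether a library has that variation is to count distinct values per variation axis. The sketch below assumes episodes carry metadata for each axis; it makes the earlier point concrete, since a library with several lighting conditions but one object and one pose scores poorly on every other axis.

```python
from collections import defaultdict

def diversity_report(episodes: list, axes: list) -> dict:
    """Count distinct values observed per variation axis across a demo library."""
    observed = defaultdict(set)
    for ep in episodes:
        for axis in axes:
            observed[axis].add(ep[axis])
    return {axis: len(values) for axis, values in observed.items()}

axes = ["object_instance", "object_pose", "lighting", "surface", "occlusion"]
episodes = [
    {"object_instance": "bottle_a", "object_pose": "upright", "lighting": "bright",
     "surface": "shelf", "occlusion": "none"},
    {"object_instance": "bottle_a", "object_pose": "upright", "lighting": "dim",
     "surface": "shelf", "occlusion": "none"},
]
# Varying lighting alone leaves every other axis at a single value:
print(diversity_report(episodes, axes))
# {'object_instance': 1, 'object_pose': 1, 'lighting': 2, 'surface': 1, 'occlusion': 1}
```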

The Sim-to-Real Gap: Why Simulation Data Alone Is Not Enough

What simulation can and cannot do for humanoid training

Simulation is an attractive solution to the data volume problem in humanoid robotics, and it does provide genuine value. Simulation operations can generate locomotion training data at a scale that physical collection cannot match, exposing a locomotion controller to millions of terrain configurations, perturbations, and recovery scenarios that would take years to collect physically. 

The sim-to-real gap is the problem that limits how far simulation can be pushed as a substitute for real-world data in humanoid training. Humanoid robots are highly sensitive to physical variables, including surface friction, object deformation, contact dynamics, and the timing of force transmission through compliant joints. Simulation models of these phenomena are approximations. The approximations that are good enough for locomotion training are often not good enough for dexterous manipulation training, where the difference between a successful grasp and a failed one may depend on contact dynamics that even sophisticated simulators do not fully replicate.

The data annotation demands of sim-to-real transfer

Managing the sim-to-real gap requires real-world data for calibration and transfer validation. A team that trains a manipulation policy in simulation needs annotated real-world data from the target environment to measure the size of the gap and to identify which aspects of the policy need fine-tuning on real demonstrations. That fine-tuning step requires its own demonstration collection and annotation pipeline, operating at the intersection of simulation-aware annotation and real physical deployment data. DDD’s digital twin validation services and simulation operations capabilities are built to support exactly this kind of iterative sim-to-real data workflow, ensuring that the transition from simulation training to physical deployment is grounded in real-world data at every calibration stage.
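Measuring the size of the gap can start with something as simple as the success-rate drop on a matched task suite run in simulation and on hardware. The numbers below are invented for illustration:

```python
def sim_to_real_gap(sim_success: list, real_success: list) -> float:
    """Success-rate drop from simulation to hardware on the same task suite."""
    sim_rate = sum(sim_success) / len(sim_success)
    real_rate = sum(real_success) / len(real_success)
    return sim_rate - real_rate

# Illustrative outcomes: a policy at 92% in simulation and 71% on hardware
sim = [True] * 92 + [False] * 8
real = [True] * 71 + [False] * 29
gap = sim_to_real_gap(sim, real)
print(f"sim-to-real gap: {gap:.0%}")  # 21%: flags which variants need real fine-tuning data
```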

The annotation challenges specific to sim-to-real transfer are also worth naming directly. Annotators working on sim-to-real data need to label not only what happened in the real-world interaction, but why the policy behaved differently from the simulation expectation. Identifying the specific contact dynamics, object properties, or environmental conditions that explain a performance gap requires physical intuition that cannot be reduced to simple object labeling. It is closer to failure mode analysis than to standard annotation work.

Why Touch Matters More Than Vision for Dexterous Tasks

The current dominant paradigm in humanoid robot perception is vision-first: cameras capture what the robot sees, and perception algorithms process that visual data to plan manipulation actions. For many tasks, this is sufficient. Picking up a rigid object from a known position against a contrasting background is tractable with vision alone. But the manipulation tasks that would make a humanoid commercially valuable in real environments (sorting mixed containers, handling deformable materials, performing assembly operations with tight tolerances, adjusting grip when an object begins to slip) are tasks where tactile and force data are not supplementary. They are necessary.

The manipulation bottleneck that the humanoid industry is beginning to acknowledge is partly a tactile data problem. A robot that cannot sense contact forces and fingertip pressure cannot adjust grip dynamically, cannot detect an impending drop, and cannot handle objects whose properties vary in ways that vision does not reveal. Current fingertip tactile sensors exist and are being integrated into leading humanoid platforms, but the training data infrastructure for tactile-augmented manipulation is still in early development.

What tactile data annotation requires

Tactile sensor data annotation is among the least standardized modalities in the Physical AI data ecosystem. Pressure maps, shear force readings, and vibrotactile signals from fingertip sensors need to be labeled in the context of the manipulation task they accompany, correlating contact events with grasp outcomes, surface properties, and the visual and kinematic data recorded simultaneously. The multisensor fusion demands of tactile-augmented humanoid data are significantly higher than those of vision-only systems, because the temporal synchronization requirements are strict and the physical interpretation of the sensor signals requires annotators who understand both the sensor physics and the task structure being labeled.
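The strictness of those synchronization requirements is easy to see in code. The sketch below matches 500 Hz tactile samples to roughly 30 fps video frames with a nearest-neighbor search and a 5 ms tolerance; the rates and tolerance are assumptions chosen for the example, not platform specifications.

```python
import numpy as np

def align_tactile_to_frames(tactile_ts_ns: np.ndarray,
                            frame_ts_ns: np.ndarray,
                            tolerance_ns: int = 5_000_000) -> np.ndarray:
    """For each tactile sample, find the nearest video frame within tolerance (5 ms here).

    Returns a frame index per tactile sample, or -1 where no frame is close enough;
    unmatched samples are flagged for synchronization review rather than labeled.
    """
    idx = np.searchsorted(frame_ts_ns, tactile_ts_ns)
    idx = np.clip(idx, 1, len(frame_ts_ns) - 1)
    left, right = frame_ts_ns[idx - 1], frame_ts_ns[idx]
    nearest = np.where(tactile_ts_ns - left < right - tactile_ts_ns, idx - 1, idx)
    gap = np.abs(frame_ts_ns[nearest] - tactile_ts_ns)
    return np.where(gap <= tolerance_ns, nearest, -1)

frames = np.arange(0, 1_000_000_000, 33_333_333)    # ~30 fps video timestamps (ns)
tactile = np.arange(0, 1_000_000_000, 2_000_000)    # 500 Hz tactile samples (ns)
matched = align_tactile_to_frames(tactile, frames)
print(f"{(matched >= 0).mean():.0%} of tactile samples matched within 5 ms")
```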

Why annotation quality matters more at foundation model scale

At the scale of foundation model training, annotation quality errors do not average out. They compound. A systematic labeling error in task phase boundaries, consistently applied across thousands of demonstrations, will produce a model that learns the wrong task decomposition. A set of demonstrations that are annotated as successful but that include borderline or partially failed grasps will produce a model with an optimistic view of its own manipulation reliability. The quality standards that matter for smaller-scale policy training become critical at foundation model scale, where the training corpus is large enough that individual annotation errors have diffuse effects that are difficult to diagnose after the fact. Investing in high-quality ML data annotation and structured quality assurance protocols from the start of a humanoid data program is considerably more cost-effective than attempting to audit and correct a large, inconsistently annotated corpus later.
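Systematic labeling errors of the kind described here can be caught early with double annotation on a sample of episodes: a consistent offset between two annotators' task-phase boundaries, with little spread, signals a protocol-level disagreement rather than noise. A minimal sketch with invented timestamps:

```python
import statistics

def boundary_bias_s(annotator_a: list, annotator_b: list) -> tuple:
    """Mean offset and spread between two annotators' phase-boundary timestamps.

    A large mean offset with low spread signals a *systematic* disagreement about
    where phases begin, which compounds at corpus scale; noisy disagreement shows
    up as high spread instead.
    """
    offsets = [a - b for a, b in zip(annotator_a, annotator_b)]
    return statistics.mean(offsets), statistics.stdev(offsets)

# Hypothetical "grasp begins" timestamps (seconds) on the same five episodes
a = [3.1, 7.9, 2.4, 5.0, 9.3]
b = [2.7, 7.5, 2.0, 4.7, 8.9]
mean_off, spread = boundary_bias_s(a, b)
print(f"mean offset {mean_off:+.2f}s, spread {spread:.2f}s")  # +0.38s, 0.04s: systematic
```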

What the Data Infrastructure Gap Means for Commercial Timelines

The honest assessment of where the industry stands

The humanoid robotics programs that are most credibly advancing toward commercial deployment in 2026 are the ones that have invested seriously in their data infrastructure alongside their hardware development. 

For development teams that do not have access to large proprietary deployment environments to generate operational data, the path to the demonstration volume and diversity that commercially viable generalization requires runs through specialist data infrastructure: teleoperation setups capable of producing high-quality, diverse demonstrations at volume, annotation teams with the domain knowledge to label multi-modal physical AI data to the standards that foundation model training demands, and quality assurance pipelines that can maintain consistency across large demonstration corpora.

The cost reality that is underweighted in roadmaps

Humanoid robotics roadmaps published by development teams and market analysts tend to foreground hardware milestones and underweight data infrastructure costs. The cost of collecting, synchronizing, and annotating a demonstration library large enough to support meaningful generalization is not a rounding error in a humanoid development budget. For a team targeting deployment across multiple task categories in a real operating environment, the data infrastructure investment is likely to be comparable to, and in some cases larger than, the hardware development cost. Teams that discover this late in the development cycle face difficult choices between delaying deployment to build the data they need and accepting a narrower generalization than their product roadmaps promised. Physical AI data services from specialist partners offer an alternative: access to annotation infrastructure and domain expertise that development teams can engage without building the full capability in-house.

How DDD Can Help

Digital Divide Data provides comprehensive humanoid AI data solutions designed to support development programs at every stage of the training data lifecycle. DDD’s teams have the domain expertise and operational capacity to handle the multi-modal annotation demands that humanoid robotics training data requires, from synchronized video and depth annotation to joint pose labeling, task phase segmentation, and grasp outcome classification.

On the teleoperation and demonstration data side, DDD’s ML data collection services support the design and execution of structured demonstration collection programs that produce the diversity and quality that imitation learning algorithms need. Rather than capturing demonstrations opportunistically, DDD works with development teams to define the coverage requirements for their operational design domain and design data collection protocols that systematically address those requirements.

For teams building toward Large Behavior Models and vision-language-action systems, DDD’s VLA model analysis capabilities and multi-modal annotation workflows support the natural language annotation, task phase labeling, and cross-task consistency checking that foundation model training data requires. DDD’s robotics data services extend this support to the broader robotics data ecosystem, including annotation for locomotion training data, environment mapping for simulation foundation models, and quality assurance for sim-to-real transfer validation datasets.

Teams working on the tactile and force data frontier can engage DDD’s annotation specialists for the physical AI data modalities that require domain-specific expertise: contact event labeling, grasp outcome classification, and the correlation of multisensor fusion data across tactile, kinematic, and visual streams. For C-level decision-makers evaluating their data infrastructure strategy, DDD offers a realistic assessment of what production-grade humanoid training data requires and a delivery model that scales with the program.

Build the data infrastructure your humanoid robotics program actually needs. Talk to an expert!

Conclusion

The humanoid robotics industry is at a genuine inflection point, and the coverage of that inflection point reflects a real shift in what these systems can do. What the coverage does not yet fully reflect is the structural dependency between what humanoid robots can do in controlled demonstrations and what they can do in the real-world environments that commercial deployment actually involves. That gap is primarily a data gap. The manipulation tasks, the environmental diversity, the dexterous skill generalization, and the recovery from unexpected failures that would make a humanoid robot genuinely useful in an industrial or domestic setting require training data at a volume, diversity, and multi-modal quality that most development programs have not yet built the infrastructure to produce. Recognizing that the data infrastructure is the critical path, not an implementation detail to be addressed after the hardware is ready, is the first step toward realistic commercial planning.

The programs that close the gap first will not necessarily be the ones with the best actuators or the most capable base models. They will be the ones that treat Physical AI data infrastructure as a first-class engineering investment, building the teleoperation capacity, annotation pipelines, and quality assurance frameworks that turn raw sensor data into training data capable of generalizing to the real world. The hardware plateau that the industry is approaching makes this clearer, not less so. When mechanical capability is no longer the differentiator, the quality of the data behind the intelligence becomes the thing that determines which programs reach commercial scale and which ones remain compelling prototypes.

References 

Welte, E., & Rayyes, R. (2025). Interactive imitation learning for dexterous robotic manipulation: Challenges and perspectives — a survey. Frontiers in Robotics and AI, 12, Article 1682437. https://doi.org/10.3389/frobt.2025.1682437

NVIDIA Developer Blog. (2025, November 6). Streamline robot learning with whole-body control and enhanced teleoperation in NVIDIA Isaac Lab 2.3. https://developer.nvidia.com/blog/streamline-robot-learning-with-whole-body-control-and-enhanced-teleoperation-in-nvidia-isaac-lab-2-3/

Rokoko. (2025). Unlocking the data infrastructure for humanoid robotics. Rokoko Insights. https://www.rokoko.com/insights/unlocking-the-data-infrastructure-for-humanoid-robotics 

Frequently Asked Questions

What types of sensors generate training data for humanoid robots?

Production-grade humanoid training requires synchronized data from cameras, depth sensors, LiDAR, joint encoders, force-torque sensors at the wrist, IMUs, and fingertip tactile sensors, all recorded at high frequency during demonstration or operation episodes.

How many demonstrations does a humanoid robot need to learn a manipulation task?

It varies significantly by task complexity and demonstration diversity, but research suggests hundreds to thousands of diverse demonstrations per task category are typically needed for meaningful generalization beyond the specific training configurations.

Why can’t humanoid robots just use simulation data instead of expensive real demonstrations?

Simulation is useful for locomotion and coarse motor training, but dexterous manipulation requires accurate contact dynamics and surface properties that simulators still do not replicate with sufficient fidelity, making real-world demonstration data necessary for the most challenging tasks.

What is the sim-to-real gap and why does it matter for humanoid deployment?

The sim-to-real gap refers to the performance drop when a policy trained in simulation is deployed on real hardware, caused by differences in physics, sensor noise, and contact dynamics between the simulated and real environments that require real-world data to bridge. 
