

Multilingual Data Annotation

Challenges in Building Multilingual Datasets for Generative AI

Umang Dayal · 14 Nov, 2025

When we talk about the progress of generative AI, the conversation often circles back to the same foundation: data. Large language models, image generators, and conversational systems all learn from the patterns they find in the text and speech we produce. The breadth and quality of that data decide how well these systems understand human expression across cultures and contexts. But there’s a catch: most of what we call “global data” isn’t very global at all.

Despite the rapid growth of AI datasets, English continues to dominate the landscape. A handful of other major languages follow closely behind, while thousands of others remain sidelined or absent altogether. It’s not that these languages lack speakers or stories. Many simply lack the digital presence or standardized formats that make them easy to collect and train on. The result is an uneven playing field where AI performs fluently in one language but stumbles when faced with another.

Building multilingual datasets for generative AI is far from straightforward. It involves a mix of technical, linguistic, and ethical challenges that rarely align neatly. Gathering enough data for one language can take years of collaboration, while maintaining consistency across dozens of languages can feel nearly impossible. And yet, this effort is essential if we want AI systems that truly reflect the diversity of global communication.

In this blog, we will explore the major challenges involved in creating multilingual datasets for generative AI. We will look at why data imbalance persists, what makes multilingual annotation so complex, how governance and infrastructure affect data accessibility, and what strategies are emerging to address these gaps.

The Importance of Multilingual Data in Generative AI

Generative AI might appear to understand the world, but in reality, it only understands what it has been taught. The boundaries of that understanding are drawn by the data it consumes. When most of this data exists in a few dominant languages, it quietly narrows the scope of what AI can represent. A model trained mostly in English will likely perform well in global markets that use English, yet falter when faced with languages rich in context, idioms, or scripts it has rarely seen.

For AI to serve a truly global audience, multilingual capability is not optional; it’s foundational. Multilingual models allow people to engage with technology in the language they think, dream, and argue in. That kind of accessibility changes how students learn, how companies communicate, and how public institutions deliver information. Without it, AI risks reinforcing existing inequalities rather than bridging them.

The effect of language diversity on model performance is more intricate than it first appears. Expanding a model’s linguistic range isn’t just about adding more words or translations; it’s about capturing how meaning shifts across cultures. Instruction tuning, semantic understanding, and even humor all depend on these subtle differences. A sentence in Italian might carry a tone or rhythm that doesn’t exist in English, and a literal translation can strip it of intent. Models trained with diverse linguistic data are better equipped to preserve that nuance and, in turn, generate responses that feel accurate and natural to native speakers.

The social and economic implications are also significant. Multilingual AI systems can support local entrepreneurship, enable small businesses to serve broader markets, and make public content accessible to communities that were previously excluded from digital participation. In education, they can make learning materials available in native languages, improving comprehension and retention. In customer service, they can bridge cultural gaps by responding naturally to regional language variations. Many languages remain underrepresented, not because they lack value, but because the effort to digitize, annotate, and maintain their data has been slow or fragmented. Until multilingual data becomes as much a priority as algorithmic performance, AI will continue to be fluent in only part of the human story.

Key Challenges in Building Multilingual Datasets

Creating multilingual datasets for generative AI may sound like a matter of collecting enough text, translating it, and feeding it into a model. In practice, each of those steps hides layers of difficulty. The problems aren’t only technical; they’re linguistic, cultural, and even political. Below are some of the most pressing challenges shaping how these datasets are built and why progress still feels uneven.

Data Availability and Language Imbalance

The most obvious obstacle is the uneven distribution of digital language content. High-resource languages like English, Spanish, and French dominate the internet, which makes their data easy to find and use. But for languages spoken by smaller or regionally concentrated populations, digital traces are thin or fragmented. Some languages exist mostly in oral form, with limited standardized spelling or writing systems. Others have digital content trapped in scanned documents, PDFs, or community platforms that aren’t easily scraped. Even when data exists, it often lacks metadata or structure, making it difficult to integrate into large-scale datasets. This imbalance perpetuates itself: AI tools trained on major languages become more useful, drawing in more users, while underrepresented languages fall further behind in digital representation.

Data Quality, Cleaning, and Deduplication

Raw multilingual data rarely comes clean. It’s often riddled with spam, repeated content, or automatically translated text of questionable accuracy. Identifying which lines belong to which language, filtering offensive material, and avoiding duplication are recurring problems that drain both time and computing power. The cleaning process may appear purely technical, but it requires contextual judgment. A word that’s harmless in one dialect might be offensive in another. Deduplication, too, is tricky when scripts share similar structures or transliteration conventions. Maintaining semantic integrity across alphabets, diacritics, and non-Latin characters demands a deep awareness of linguistic nuance that algorithms still struggle to match. (A minimal code sketch of such a cleaning pass appears at the end of this post.)

Annotation and Translation Complexity

Annotation is where human expertise becomes indispensable and expensive. Labeling data across multiple languages requires trained linguists who understand local syntax, idioms, and cultural cues. For many lesser-known languages, there are simply not enough qualified annotators to meet the growing demand. Machine translation can fill some gaps, but not without trade-offs. Automated translations may capture literal meaning while losing the idioms, tone, and cultural context that give a sentence its intent.
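
The sketch below illustrates the kind of language-identification and deduplication pass described under Data Quality, Cleaning, and Deduplication. It is a minimal example, not a production pipeline: the sample corpus, normalization rules, and threshold-free exact-hash deduplication are illustrative assumptions, and real workflows typically rely on stronger language-ID models and fuzzy near-duplicate matching.

```python
# Minimal sketch: language ID + deduplication for a multilingual text corpus.
# Assumptions: the `corpus` list is placeholder data; langdetect is used for
# language identification and exact hashing stands in for real near-dedup logic.
import hashlib
import unicodedata

from langdetect import detect, LangDetectException

corpus = [
    "Bonjour tout le monde, bienvenue sur notre plateforme.",
    "Hello everyone, welcome to our platform.",
    "Hello everyone,   welcome to our platform.",  # trivially duplicated line
]

def normalize(line: str) -> str:
    # Unicode NFC normalization keeps accented and non-Latin scripts comparable;
    # lowercasing and whitespace collapsing catch trivially repeated lines.
    return " ".join(unicodedata.normalize("NFC", line).lower().split())

seen_hashes = set()
cleaned = []

for line in corpus:
    key = hashlib.sha256(normalize(line).encode("utf-8")).hexdigest()
    if key in seen_hashes:
        continue  # drop exact duplicates after normalization
    seen_hashes.add(key)
    try:
        lang = detect(line)  # e.g. 'fr', 'en'; unreliable on very short snippets
    except LangDetectException:
        continue  # skip lines with no detectable language
    cleaned.append({"text": line, "lang": lang})

for record in cleaned:
    print(record["lang"], "|", record["text"])
```

Even this toy version shows why the step needs contextual judgment: the detector is only as good as its training data, and normalization choices that are safe for one script can erase meaningful distinctions in another.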

Digitization

How Optical Character Recognition (OCR) Digitization Enables Accessibility for Records and Archives

Umang Dayal · 13 Nov, 2025

Over the past decade, governments, universities, and cultural organizations have been racing to digitize their holdings. Scanners hum in climate-controlled rooms, and terabytes of images fill digital repositories. But scanning alone doesn’t guarantee access. A digital image of a page is still just that: an image. You can’t search it, quote it, or feed it to assistive software. In that sense, a scanned archive can still behave like a locked cabinet, only prettier and more portable.

Millions of historical documents remain in this limbo. Handwritten parish records, aging census forms, and deteriorating legal ledgers have been captured as pictures but not transformed into living text. Their content exists in pixels rather than words. That gap between preservation and usability is where Optical Character Recognition (OCR) quietly reshapes the story.

In this blog, we will explore how OCR digitization acts as the bridge between preservation and accessibility, transforming static historical materials into searchable, readable, and inclusive digital knowledge. The focus is not just on the technology itself but on what it makes possible: the idea that archives can be truly open, not only to those with access badges and physical proximity, but to anyone with curiosity and an internet connection.

Understanding OCR in Digitization

Optical Character Recognition, or OCR, is a system that turns images of text into actual, editable text. In practice, it’s far more intricate. When an old birth register or newspaper is scanned, the result is a high-resolution picture made of pixels, not words. OCR steps in to interpret those shapes and patterns, the slight curve of an “r,” the spacing between letters, the rhythm of printed lines, and converts them into machine-readable characters. It’s a way of teaching a computer to read what the human eye has always taken for granted.

Early OCR systems did this mechanically, matching character shapes against fixed templates. It worked reasonably well on clean, modern prints, but stumbled the moment ink bled, fonts shifted, or paper aged. The documents that fill most archives are anything but uniform: smudged pages, handwritten annotations, ornate typography, even water stains that blur whole paragraphs. Recognizing these requires more than pattern matching; it calls for context.

Recent advances bring in machine learning models that “learn” from thousands of examples, improving their ability to interpret messy or inconsistent text. Some tools specialize in handwriting (Handwritten Text Recognition, or HTR), others in multilingual documents or layouts that include tables, footnotes, and marginalia. Together, they form a toolkit that can read the irregular and the imperfect, which is what most of history looks like.

But digitization is not just about making digital surrogates of paper. There’s a deeper shift from preservation to participation. When a collection becomes searchable, it changes how people interact with it. Researchers no longer need to browse page by page to find a single reference; they can query a century’s worth of data in seconds. Teachers can weave original materials into lessons without leaving their classrooms. Genealogists and community historians can trace local stories that would otherwise be lost to time. The archive moves from being a static repository to something closer to a public workspace, alive with inquiry and interpretation.
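
To make the recognition step concrete, here is a minimal sketch using the open-source Tesseract engine through its Python wrapper, pytesseract. The file names and language code are illustrative assumptions, not references to any particular archive; real workflows add image preprocessing and handwriting-capable models on top of this.

```python
# Minimal sketch: turning a scanned page into searchable text with Tesseract.
# The file names are placeholders and the scan is assumed to be clean and
# high-resolution; fragile or handwritten material needs more specialized tools.
import pytesseract
from PIL import Image

# Load a scanned page image.
page = Image.open("parish_register_p12.png")

# Extract plain text; `lang` selects which trained language data Tesseract uses.
text = pytesseract.image_to_string(page, lang="eng")
print(text[:500])  # preview the first few hundred recognized characters

# Produce a searchable PDF: the original image with an invisible text layer,
# which is what lets users search or highlight words within the scanned page.
pdf_bytes = pytesseract.image_to_pdf_or_hocr(page, extension="pdf")
with open("parish_register_p12_searchable.pdf", "wb") as f:
    f.write(pdf_bytes)
```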
Optical Character Recognition (OCR) Digitization Pipeline

The journey from a physical document to an accessible digital text is rarely straightforward. It begins with a deceptively simple act: scanning. Archivists often spend as much time preparing documents as they do digitizing them. Fragile pages need careful handling, bindings must be loosened without damage, and light exposure has to be controlled to avoid degradation. The resulting images must meet specific standards for resolution and clarity, because even the best OCR software can’t recover text that isn’t legible in the first place. Metadata tagging happens here too, identifying the document’s origin, date, and context so it can be meaningfully organized later.

Once the images are ready, OCR processing takes over. The software identifies where text appears, separates it from images or decorative borders, and analyzes each character’s shape. For handwritten records, the task becomes more complex: the model has to infer individual handwriting styles, letter spacing, and contextual meaning. The output is a layer of text data aligned with the original image, often stored in formats like ALTO or PDF/A, which allow users to search or highlight words within the scanned page. This is the invisible bridge between image and information.

But raw OCR output is rarely perfect. Post-processing and quality assurance form the next critical phase. Algorithms can correct obvious spelling errors, but context matters. Is that “St.” a street or a saint? Is a long “s” from 18th-century typography being mistaken for an “f”? Automated systems make their best guesses, yet human review remains essential. Archivists, volunteers, or crowd-sourced contributors often step in to correct, verify, and enrich the data, especially for heritage materials that carry linguistic or cultural nuances. (A sketch of a simple confidence-based review filter appears at the end of this post.)

Finally, the digitized text must be integrated into an archive or information system. This is where technology meets usability. The text and images are stored, indexed, and made available through search portals, APIs, or public databases. Ideally, users should not need to think about the pipeline at all; they simply find what they need. The quality of that experience depends on careful integration: how results are displayed, how metadata is structured, and how accessibility tools interact with the content. When all these elements align, a once-fragile document becomes part of a living digital ecosystem, open to anyone with curiosity and an internet connection.

Recommendations for Optical Character Recognition (OCR) Digitization

Working with historical materials is rarely a clean process. Ink fades unevenly, pages warp, and handwriting changes from one entry to the next. These irregularities are exactly what make archives human, but they also make them hard for machines to read. OCR systems, no matter how sophisticated, can stumble over a smudged “c” or a handwritten flourish mistaken for punctuation. The result may look accurate at first glance but lose meaning in subtle ways; these errors ripple through databases, skew search results, and quietly erode trust in the digital record.
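
To show how the quality-assurance phase can flag questionable output for human review, here is a minimal sketch that uses Tesseract’s per-word confidence scores via pytesseract. The threshold and file name are illustrative assumptions; real QA workflows tune thresholds per collection and route flagged words into a proper review interface.

```python
# Minimal sketch: confidence-based triage of OCR output for human review.
# Assumptions: the file name is a placeholder and REVIEW_THRESHOLD is an
# arbitrary example cut-off, not a recommended value.
import pytesseract
from PIL import Image

REVIEW_THRESHOLD = 60  # words below this confidence go to manual review

page = Image.open("census_form_1891_p3.png")
data = pytesseract.image_to_data(
    page, lang="eng", output_type=pytesseract.Output.DICT
)

flagged = []
for word, conf in zip(data["text"], data["conf"]):
    conf = float(conf)  # conf is -1 for non-word layout entries
    if word.strip() and 0 <= conf < REVIEW_THRESHOLD:
        flagged.append((word, conf))

print(f"{len(flagged)} words flagged for manual review")
for word, conf in flagged[:10]:
    print(f"  {word!r} (confidence {conf:.0f})")
```

A simple filter like this does not fix errors on its own, but it concentrates scarce human attention on the passages most likely to need it, which is where archivists and volunteer reviewers add the most value.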
