Machine translation (MT) is becoming an essential tool for governments looking to communicate efficiently with migrant and refugee communities. In just a few clicks, entire pages can be translated in seconds—saving time, reducing costs, and streamlining operations.
But the big question is: Can AI translations really replace human expertise, especially when dealing with multicultural communities?
In countries like Australia, where a large portion of the population speaks languages other than English, translation standards are set to ensure quality. The NAATI translation certification guarantees that translations are done by professionals trained in both linguistic accuracy and cultural sensitivity.
While machine translation has its benefits, it doesn’t yet meet these rigorous standards, particularly regarding accuracy, inclusivity, and trust in government communications. Governments must carefully evaluate the ethical risks before fully embracing AI-driven translation. Let’s break down the key concerns.
The Risks of Relying Too Much on Machine Translation
It’s easy to see why governments turn to AI for translation—it’s fast, cheap, and can process thousands of words in seconds. But there’s a problem: AI doesn’t think like humans do. It doesn’t understand context, cultural nuances, or the weight certain words carry.
And we’ve already seen what happens when machine translation goes wrong.
- In 2021, the Virginia Department of Health used Google Translate to convert English health messages into Spanish. One of their vaccine notices originally said, “The vaccine is not required,” but the Spanish translation read “The vaccine is not necessary.” A subtle mistake—but one that may have discouraged Spanish-speaking residents from getting vaccinated.
- In 2024, New York City emergency drones broadcast flood warnings with poorly translated Spanish audio, making the message incomprehensible to native speakers. In a crisis, a bad translation can be life-threatening.
These cases highlight a bigger issue: AI doesn’t understand meaning the way humans do. And when accuracy is compromised, trust in government communication erodes.
But why does AI struggle with accuracy, and can it ever replace human translators?
Can AI Ever Fully Replace Human Translators?
The short answer? Not yet.
Machine translation, even with advancements in neural machine translation (NMT), still lacks cultural intelligence. While AI can process thousands of words per second, it struggles with:
- Context – AI doesn’t always understand idioms, sarcasm, or legal jargon.
- Cultural sensitivity – Words have different connotations in different cultures.
- Accuracy in specialised fields – Legal and medical translations require human expertise.
Take the legal phrase “You have the right to remain silent”. Some languages don’t have a direct translation, and a machine translation may mistakenly phrase it as “You refuse to speak”, which completely changes the meaning. In a legal case, this error could seriously impact someone’s rights.
How Governments Can Use AI Appropriately
Instead of thinking of AI as a replacement for human translators, governments could use Machine Translation Post-Editing (MTPE). This means:
- AI translates the content.
- Human translators review and finalise the text for accuracy and cultural appropriateness.
This hybrid approach allows for faster translations while maintaining quality. But while MTPE improves accuracy, it still requires proper oversight and investment—especially when dealing with migrant and refugee communities.
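The MTPE workflow above can be sketched as a simple pipeline. This is a minimal illustration, not a real integration: `machine_translate` is a hypothetical stand-in for an MT engine call, and the class and function names are invented for the example. The point it demonstrates is the guardrail that raw machine output is never published without a human sign-off.

```python
from dataclasses import dataclass

@dataclass
class TranslationJob:
    source_text: str
    target_language: str
    draft: str = ""            # raw machine output
    final: str = ""            # human-approved text
    human_reviewed: bool = False

def machine_translate(text: str, target_language: str) -> str:
    """Hypothetical stand-in for a call to a real MT engine."""
    return f"[{target_language} draft] {text}"

def post_edit(job: TranslationJob, reviewer_edit: str) -> TranslationJob:
    """A human linguist corrects the draft; only then is the job publishable."""
    job.final = reviewer_edit
    job.human_reviewed = True
    return job

def publish(job: TranslationJob) -> str:
    # Guardrail: machine output alone is never released.
    if not job.human_reviewed:
        raise ValueError("Cannot publish: translation has not been human-reviewed")
    return job.final

job = TranslationJob("The vaccine is free of charge.", "es")
job.draft = machine_translate(job.source_text, job.target_language)
job = post_edit(job, "La vacuna es gratuita.")
print(publish(job))
```

The design choice worth noting is that `publish` checks the `human_reviewed` flag rather than trusting callers to remember the review step—the process constraint is enforced in code, not policy.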
And that brings us to another crucial issue: cultural blind spots.
The Cultural Blind Spots of Machine Translation
Translation isn’t just about converting words from one language to another; it’s about making sure the message makes sense in its cultural context.
Some phrases or concepts in other languages don’t have a direct equivalent in English. In other cases, certain words take on unintended meanings or negative connotations that an AI tool wouldn’t pick up on.
Take “public housing” for example. In some languages, this might be directly translated to something like “poor people’s housing”—which could discourage people from applying, even if they’re eligible.
Another example: a government-issued translation of “Take this medication on an empty stomach” might be understood in some cultures as “Take this medication while skipping meals.” Imagine the potential health risks!
How Can Governments Make Machine Translations More Inclusive?
- Test translations with real native speakers before publishing.
- Engage with culturally and linguistically diverse (CALD) communities to understand language preferences.
- Use AI as a starting point, but never without human review.
When governments skip these steps, they risk alienating communities rather than supporting them. But sometimes, cutting costs is prioritised over translation quality—and that’s where the real ethical dilemma begins.
Is Cost-Cutting Compromising Translation Quality?
One major reason governments rely on machine translation is cost-cutting. After all, hiring professional human translators can be expensive. But saving money should never come at the cost of public safety, accessibility, or trust.
- A bad translation can create legal risks.
- Misinformation can lead to public distrust in government communication.
- Excluding CALD communities violates inclusivity policies and ethical standards.
This is where legal compliance becomes crucial—because ignoring translation standards isn’t just unethical; in many jurisdictions it’s also illegal.
What Are the Legal and Ethical Considerations?
Many governments are legally required to ensure translations are accurate and accessible—and machine-generated translations alone don’t meet these standards.
Here are two major legal concerns:
1. Machine translation can’t be the sole translator in government communications
- The U.S. Department of Justice has made it clear: Machine translation must be reviewed by human linguists before use.
- In European courts, AI translations can’t be used as official evidence without human verification.
- Australian government documents must be certified by NAATI-certified translators—AI translations alone don’t meet legal standards.
2. Machine translation tools raise data privacy risks
- In the EU, GDPR privacy laws restrict the use of cloud-based AI translation tools for government documents.
- In the U.S., agencies must ensure that AI translations do not expose personal data to security risks.
If machine translations fail to meet these standards, governments could face legal disputes, public backlash, and loss of credibility.
What’s the Best Way Forward?
Rather than asking, “Should governments use AI for translation?” a better question is: “How can AI and human translators work together to create accurate, inclusive, and ethical translations?”
- Use AI as a tool, not a replacement – AI should assist human translators, not replace them.
- Invest in high-quality post-editing – Every AI-generated translation should be reviewed by human linguists.
- Engage multicultural communities – Governments must listen to community feedback to improve translated materials.
- Regularly audit AI translation quality – Conduct ongoing assessments to detect biases and inaccuracies.
Technology is evolving, but governments have a responsibility to ensure that language barriers don’t become barriers to access, rights, and services.