Combating cultural bias in AI model translation

By versatileai | February 2, 2026 | 6 Mins Read

While AI bias most often refers to the systematic bias that large language models can exhibit against particular genders or races, it is becoming clear that models can also be biased by favoring one language over another.

In recent years, AI model developers such as Google and OpenAI have worked to curb this trend by building translation models. Most recently, on January 15, Google released TranslateGemma, which was trained on 55 languages and 500 language pairs, that is, pairs that can be translated directly from one language to the other.

However, translation models fail to capture some of the nuances of spoken language. Enterprise AI platform vendor Articul8 says its LLM-IQ agent provides further insight here. The multi-tier evaluation agent system scores models on five qualitative aspects, including fluency and naturalness, consistency, cultural norms, and clarity.

Using this framework, Articul8 found that many models fall short on cultural appropriateness, suggesting that more work is needed to ready AI technologies for use on a global scale.


In this Q&A, Articul8 CEO and founder Arun Subramanyan explains how the framework was developed and why culturally appropriate models matter.

What inspired Articul8 to develop the LLM-IQ agent, and why did you focus on the nuances of translation in AI models?

Arun Subramanyan: We have customers in Japan and South Korea. To begin expanding into those regions, we needed a model that could actually understand multiple languages.

One of the things that happened when we introduced some systems early on was that our customers were happy and dissatisfied at the same time.

In both Japan and Korea, we were told, “Your answer is correct, but it’s rude.”

We said, “Okay,” but we didn’t know the difference.

I learned that the Japanese language has many layers of complexity. Many languages do. In English, for example, “you” is just “you”; it is neither respectful nor disrespectful. In many languages, one word for “you” refers to someone on the same level, but addressing someone older, more senior, or deserving of respect requires a different word. Sometimes models pick up those nuances, but most often they do not.

But there is another level in Japanese, where the context of what you say is taken into account: who you are saying it to, who is saying it, and what the outcome of the conversation is. You can be direct, indirect, polite, overly polite, or a little harsh. Depending on the context, using the wrong intonation, for example, can also be considered incorrect.


This happens at the level of the language itself, and that is what really intrigued us. Even outside technical fields, Japanese effectively behaves like a domain-specific language.

Upon further investigation, I found the problem was very systematic. All models were built primarily on English or Latin-script languages, and even Chinese models completely missed this nuance. They may be better at representing Japanese in digital content, but they were not trained to understand these nuances.

In what situations does it matter whether an LLM is polite or rude?

Subramanyan: Take a supply chain, for example. You may not be able to tell whether someone offered a recommendation or issued an instruction with significant impact.

It can also incur significant costs.

If you have an automotive system, recommendations are generated for you, and a human in the loop reads them. That human cannot be 100% certain that a recommendation should be followed. This has a significant impact in industrial environments.

With the rise of sovereign AI, and with more and more regional AI vendors tackling local issues with their own technology, why would a country like Japan need a foreign vendor to address its language issues?


Subramanyan: I look at this as someone with global insight, rather than only local insight. You must be enabled locally but optimized globally.

Global learning can be applied immediately in Japan, combined with localization unique to Japan, and the two vary greatly. We will come back to localization in a moment, but imagine being able to operate globally with all the data you need to do what you need to do.

For example, our energy model is based on a global dataset, and our local partnerships build on a globally trained manufacturing model. Our research partnership with Meta and our scaling partnership with AWS are all possible because we are a global operator. However, even though we are global, we operate with a deep understanding of the need to customize what we do locally.

Why do you think LLMs can’t seem to understand the nuances of languages like Japanese?

Subramanyan: The biggest drawback is that all the datasets are extremely skewed. “Bias” here means that the distribution of English versus non-English content is asymmetrical; even among Latin-script languages, the distribution is asymmetric. It is roughly 99% versus 1%, which is not an insignificant difference.

Even the non-English content that has been digitized comes primarily from Western countries or from China, often from sources we don’t have access to.

Notions of politeness, of what is considered polite or rude, and of what counts as natural human interaction all came from the West.

Were there any special considerations when developing this framework? Did open source models work better than your own?

Subramanyan: We benchmarked all the open source models and all the closed source models. However, because we needed to balance our dataset, we had to build our models from scratch. If you don’t balance the dataset, you will always end up with the same bias.

There is a concept called a model mesh, which coordinates at runtime which models to call and for what purpose. You don’t necessarily need one large generic model fine-tuned for every task. You can prepare independent, task-specific models and link them together as a system, and that system becomes a runtime inference engine.

Yes, we use generic models to obtain information about the world. But when it comes to Japan and the Japanese language, we have our own model.

Another question on people’s minds is probably, “Oh my god, do I need to build a large model for every single task?”

The answer is no. Ultimately, you’ll have a family of models that will grow together. When a model performs one task very well, it somehow affects and improves the whole.
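The model-mesh idea described above, a runtime router that dispatches to a task-specific model and falls back to a generic one, can be sketched roughly as follows. This is not Articul8's implementation; the model functions and the routing rule (detecting Japanese kana in the prompt) are invented for illustration.

```python
# Hypothetical sketch of a "model mesh" router: at runtime, pick a
# task-specific model when its predicate matches, else use the generic model.
# Model names and routing rules are invented assumptions.
from typing import Callable

def generic_model(prompt: str) -> str:
    # Stand-in for a large general-purpose model.
    return f"[generic] {prompt}"

def japanese_model(prompt: str) -> str:
    # Stand-in for a Japanese-specialized model that handles politeness registers.
    return f"[ja-specialist] {prompt}"

# Ordered (predicate, model) routes; first match wins.
ROUTES: list[tuple[Callable[[str], bool], Callable[[str], str]]] = [
    # Route to the Japanese specialist if the prompt contains hiragana/katakana.
    (lambda p: any("\u3040" <= ch <= "\u30ff" for ch in p), japanese_model),
]

def route(prompt: str) -> str:
    """Dispatch to the first matching specialist, falling back to the generic model."""
    for predicate, model in ROUTES:
        if predicate(prompt):
            return model(prompt)
    return generic_model(prompt)

print(route("こんにちは"))  # handled by the Japanese specialist
print(route("Hello"))       # falls back to the generic model
```

A production mesh would likely use a learned classifier or metadata rather than a character-range check, but the design point is the same: small, independent, task-specific models composed behind a single runtime entry point.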

Editor’s note: This interview has been edited for clarity and brevity.
