Are We Teaching AI to Forget Our Cultures? w/ Kelly Marchisio
Language is more than words: it’s identity, memory, and connection.
But as artificial intelligence learns to “speak,” a question arises: Are we teaching it to understand us, or just to sound like us?
That’s what Gabriel Fairman and Rodrigo Demetrio explored in this episode of Merging Minds, joined by Kelly Marchisio, the Multilinguality Lead at Cohere.
Together, they dove into one of AI’s biggest challenges: how to make machines multilingual without making them monocultural.
Can AI really speak everyone’s language?
Kelly works at the front line of multilingual AI, building models that can understand and respond in many different languages. But she says the challenge is deeper than most people think.
“We can’t just throw data at a model and expect it to learn every language equally,” Kelly explained. “Some languages have tons of content online. Others barely exist digitally.”
That means English and a few other big languages dominate how AI learns. Smaller languages, even ones spoken by millions of people in real life, often don’t have enough online text to be represented well.
- Big languages = plenty of training data
- Small languages = nearly invisible online
- Result = uneven AI understanding
This creates an invisible divide, one that might make AI smart in one culture but blind in another.
The trade-off: Efficiency vs. diversity
Gabriel brought up a powerful idea: to make models efficient, we often make them less diverse.
AI companies need speed, accuracy, and cost savings, and that means focusing on languages with the most data.
“When we make things more efficient, we often make them less diverse,” Gabriel said.
Rodrigo added that this choice isn’t neutral.
It’s a quiet kind of bias that decides whose voice matters most.
“It’s not just about words,” he said. “It’s about culture, identity, and who gets to be heard.”
By simplifying language, AI risks simplifying humanity.

What does it mean to be truly multilingual?
Kelly believes real multilinguality is about more than translation.
It’s about understanding how people express ideas, not just what they say.
“If a model only learns from English, even when it speaks another language, it’s still thinking in English,” she said.
That insight hit home for Gabriel.
He reflected on how many global systems (education, business, technology) often carry English as their “default mindset.”
“It’s like we’re teaching AI to think in one cultural frame,” Gabriel said. “That’s dangerous, because it flattens human diversity.”
When AI doesn’t understand cultural nuance, it misses the soul of language: the humor, rhythm, and emotion that make each culture unique.
Building a more human future
Even with these challenges, Kelly stays hopeful. She believes that with the right priorities, we can design AI that reflects humanity’s full range. Not just its loudest voices.
“We can’t forget the smaller voices,” Kelly said. “That’s where the beauty of language really lives.”
She imagines a future where AI systems support endangered languages, where machines learn from difference instead of ignoring it.
Gabriel and Rodrigo agreed that this is the real test of AI’s intelligence: not how fast it answers, but how deeply it listens.

Why this episode matters
This conversation isn’t just about technology. It’s about the story of how humanity communicates. And what we risk losing if we stop protecting that diversity.
We continue to ask the hard questions about the relationship between humans and machines.
As Gabriel put it at the end:
“AI will either make us more alike, or help us see how beautifully different we are. The choice is ours.”
Ready to protect language diversity in your own work?
At wxrks, we believe technology should empower (not erase) cultural identity.
Our platform helps you manage translations with AI that respects human context and keeps every voice alive.
Sign up today and join the movement to make language diversity part of the digital future.
