
Any claim to “objectivity” in AI reflects a set of values. It is therefore political, and it varies across cultures. Meta has just entered the arena.
We all remember the Google Gemini fiasco: an image generator that depicted Black Nazi soldiers and an 18th-century female pope. A quest for inclusivity pushed to the point of absurdity. In early April, Meta revived that example to claim that today’s leading large models “historically lean left” and to unveil Llama 4, billed as “more balanced.” Why now, and who exactly decides what “balanced” means?
A nod to conservative America
Meta says it wants to represent “both sides” of every question (but are there only two sides?). Coming after its pledge to scrap political fact-checking, the announcement looks like another olive branch to the Trump administration. Axios notes that Llama 4 (Meta) and Grok (X/Elon Musk) are already racing to be crowned the least politically correct AI. The trouble is that we usually call “neutral” whatever viewpoint sits closest to our own. In America’s current climate, a so-called “balanced” model is likely to drift rightward.
Some “biases” are embedded values, and a quiet form of soft power
No one should be shocked that a model is biased: it mirrors its training data. And the Internet, where these models learn, is still overwhelmingly English-language and North American. That alone is a soft-power machine: every chatbot answer quietly exports a U.S. view of the world, especially on sensitive topics (the very ones Llama 4 and Grok promise not to censor), carrying that perspective into other languages and cultures.

The stakes are geopolitical: these models diffuse a particular worldview, often to non-English-speaking audiences, without users ever noticing.
The European and Chinese responses
Europe. French startup Mistral AI is betting on Le Chat, designed to embody a more European, multilingual outlook. Its founders argue that models rooted in other cultures are vital to counter U.S. dominance.
China. DeepSeek adds another layer: tight political filtering, with the “accuracy of answers” defined by the Party line. Chinese AI regulation has softened its original demand for absolute “accuracy,” but ideological control remains central.