If AI Runs the World, Whose Worldview Wins?
The World Is Diverse. Our AI Is Not.
I have two identities that tug at me. I am Indian, so I was raised in a home rooted in Eastern ideals of collective good, family, and tradition. I am also American, so I am shaped by Western ideals of individuality, ambition, and productivity. I live in both worlds at once, and I feel the friction between the two cultures’ values.
If I can feel the tension between two cultures, what happens when AI built from one cultural worldview is deployed in other parts of the world?
AI holds values in a way earlier technologies did not. Large language models don’t just store or retrieve information; they model the world. They rank, filter, and generate meaning autonomously, making judgments about relevance, importance, and truth. They operationalize historical patterns, cultural norms, and societal hierarchies, reinforcing their creators’ perspective on what matters and what doesn’t. And right now, they embody the values of a very small slice of creators.
Out of the Forbes 2025 Top 50 AI companies, 49 are from Western countries. Forty-one are from the U.S. alone. One company, Sakana AI, is from Japan. The AI industry is U.S.-led, male-led, and value-led by Silicon Valley logic. AI’s default values, therefore, are not global values.
AI practices currently embody three Western techno-values: efficiency over care, universality over context, and impartiality over positionality. These values are not inherently bad, but they aren’t universal. Yet they’re shaping how all of us are seen, governed, and treated.
AI values efficiency over care.
You’ve probably heard the term “the AI race.” In a race, speed is a virtue and care is an inconvenience. That is what the AI industry is structured around: build fast, release early, dominate the market. Harm is treated as an acceptable byproduct, and we’re already seeing the consequences both in the U.S. and globally. In the U.S., a teenager died after ChatGPT encouraged suicidal ideation. This is what happens when being “first to market” matters more than slowing down to understand the billions of people, of every age and demographic, who use these products. Globally, the consequences are even starker. Tech companies extract data from Global South populations precisely because it is cheap, abundant, and unregulated. They can access massive datasets quickly and easily, harvesting data through welfare programs, mobile apps, biometric systems, and “smart” public services. Entire societies are being restructured around models optimized for Western values and goals.
AI values universality over context.
AI loves generalizing. It wants one model for everyone, because that’s easier to optimize. But humans are contextual, cultural, and specific, and our data carries historical and cultural context. Even when AI companies try to mitigate bias, harm persists, because universality is still at the forefront. Last year Google Gemini generated images of Black and Brown people as Nazi soldiers while “trying” to be inclusive: the model applied universal fairness rules without historical or cultural context. And this isn’t limited to outputs; researchers and data scientists often replicate the same universalizing impulse in their own methods. In Bangladesh, Western researchers built a dataset on faith-based communal violence with almost no input from the people who actually face that violence. Local annotators, hired to label the data, were forced to categorize slurs targeting their own communities, and their emotional pain was dismissed as “bias” rather than recognized as essential context. And because the researchers didn’t understand the political history of the region, they mislabeled entire categories of violence, baking those misunderstandings into future AI systems.
AI cannot be universal because human experience is not universal. Applying U.S.-centric logic to India, Kenya, or Brazil is not innovation; it’s arrogance.
AI values impartiality over positionality.
The biggest lie the tech world tells itself (and us) is that these are impartial companies building impartial products for billions of people. In the U.S., there is no shortage of examples of AI failing to live up to that claim, from resume scanners that downgrade women’s applications to predictive policing that reinforces decades of racialized surveillance. In the most recent example, women on LinkedIn are getting more engagement when they change their listed gender to “male.” Globally, these biases only intensify. Through public–private partnerships across the Global South, Western companies end up defining what “development” even means. They choose which data matters, which problems are worth solving, and which communities count, shifting power away from local governments, cultures, and histories and toward corporations operating from a Western value system.
If AI is built on Silicon Valley values, then at some point we have to decide: are these the values we want raising our kids, shaping our cultures, and quietly governing our day-to-day lives? Because AI doesn’t have to be built this way.
Community values can shape AI.
I believe neighborhoods, schools, cities, and cultural groups can decide the values they want AI to reflect. Across Ghana, Mali, and Malaysia, community-led tech projects are building tools rooted in local needs, cultural realities, and collective wellbeing. These initiatives treat communities as co-designers, not data points to extract from. We don’t have to settle for the AI value system we’ve been given. We can build tech from a different set of principles: relational, rooted, contextual, and culturally grounded.
There are other ways to build AI.
How are AI products that harm children, ruin the environment, displace populations at alarming rates, and can’t even say “I don’t know” when they don’t know the answer, the best products in the world?! Really, the best? We don’t have to accept this. AI can be built on feminist principles, transparency principles, Afrocentric principles, Eastern principles, Indigenous principles. It can actually reflect the diversity of the world it claims to serve. Imagine models designed to optimize for care instead of engagement. Imagine systems that know when to stop and pull a human in. Imagine tools built for specific needs rather than one mega-model trying to do everything. We deserve better. And we can build better.
Being Data-Conscious Means Using Tech in a Way That Honors Your Values
Look, I get that we can’t boycott all technology and live in the mountains. But data-consciousness isn’t about abstaining. It’s about using tech and data in a way that aligns with your values.
If you value joy, don’t engage with rage-bait content trying to manufacture your outrage. If you value rest, close TikTok when doomscrolling feels inevitable and put your phone across the room. If you value community, choose platforms, creators, or tools that uplift connection rather than erode it.
AI is shaping the future. But the values shaping AI are still up for negotiation. And the most powerful thing any of us can do right now is refuse the idea that the world must adopt Silicon Valley’s values by default.