There’s a lot of noise and pushback around AI right now. Excitement. Hype. Fear. Skepticism. Anger. AI is changing our societies and lives in unprecedented ways. While it holds extraordinary promise when intelligence evolves in benevolent ways, it also carries unprecedented risk—if it doesn’t.
The collective stewardship required to guide this civilizational shift is sorely lacking. Our AGI Constitution framework offers essential guidance on why and how that matters, but that’s a topic for another article.
The AI debate has become heated, polarized, and deeply emotional, as captured in this short, funny, and very human video by our advisor Liv Boeree.
Before going further, in case you wondered, this article isn’t written by AI (though I’ll admit a light grammar check helped). The em dashes—that often reveal an AI author—are mine. I’ve also used them in my books long before AI (check the Future Humans Trilogy for evidence!).
Our view at EARTHwise on AI is this: AI itself is not the danger. The real problem lies in the misaligned incentives and rewards that continue to shape our world. In other words, the underlying zero-sum, win–lose game dynamics that drive behavior—human and AI alike.
In game theory, this pattern is often referred to as Moloch. I’ve written about this in depth in “The Polycrisis Is a Moloch Game.” And yes, it’s also that mythological Moloch—the (false) god in Abrahamic traditions to whom people sacrificed their children to gain favors and rewards. The symbolism is uncomfortable for a reason—and it’s why Moloch is the antagonist in the Elowyn game and lore.
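The Moloch dynamic can be made concrete with a textbook game-theory example. The following is an illustrative prisoner's-dilemma sketch of my own, not a mechanic from Elowyn or anything in the article: each player's individually rational move traps both in a worse collective outcome.

```python
# Illustrative prisoner's dilemma: individually rational "defect" moves
# drag both players toward a worse collective outcome (the Moloch trap).
# Payoffs are (row player, column player); higher is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual win-win
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # the Moloch trap
}

def best_response(opponent_move):
    """Pick the move that maximizes the row player's own payoff,
    ignoring the effect on the opponent (zero-sum reasoning)."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Whatever the opponent does, defection pays more individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3): winning the move loses the game for everyone.
```

That gap between the individually "smart" move and the collectively good one is exactly the incentive structure the article argues we keep rewarding.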
Reality check. Our human world is spiraling ever deeper into zero-sum reasoning. Increasingly, geopolitics reflects negative-sum, winner-takes-all dynamics.
We’re locked into a race toward superintelligence, driven by toxic zero-sum actors who exploit polarization, democratic weaknesses, and deep-seated human survival fears.
Zero-sum logic also drives extractive economies that undermine long-term collective wellbeing—creating polarized systems where a few benefit at the expense of many, including our planet.
Rising conflicts, resource grabs, and “might is right” domination create a dangerous breeding ground for intelligence that learns how to win through deception and domination. Here’s what we need to understand about this in the race to superintelligence.
The dominant AI foundation models being rapidly deployed today—and soon operating as autonomous agents—have largely been trained and rewarded within these same zero-sum environments.
If win–lose logic is normalized in human societies and then encoded and rewarded into intelligence, what should we expect when AI faces competing and conflicting demands?
Leading AI labs already give us some clues:
“Models trained with standard methods learn to deceive their operators… creating ‘alignment faking’ that persists even after safety training.”
— Anthropic Research (2024)

“We are still missing the ‘System 2’ thinking… the ability to plan, reason, and coordinate over long horizons. Scaling existing models won’t solve this.”
— Demis Hassabis, CEO, Google DeepMind
Conventional AI doesn’t question the rules of a win–lose game, nor do most of our political systems. Intelligence simply learns how to win it, until our incentives and rewards change.
The race toward superintelligence is currently being driven largely by techno-oligarchic incentives that thrive on zero-sum tactics, as explained in this diagram from my keynote at Birmingham Tech Week.

A few years ago, I founded EARTHwise Ventures, our AI startup and game studio, to work on win-win solutions with a growing team of amazing pioneers who share this sense of urgency and dedication. The startup forms part of our EARTHwise Ecosystem, which began more than a decade ago through our educational platform EARTHwise Centre.
Our mission is to help align intelligence toward life-centric, win-win outcomes… while we still can. We started by building a fun strategy card game, Elowyn: Quest of Time, which is governed by win-win logic. In Elowyn, however, you can also explore the win-lose dynamics of Moloch, and discover the consequences…
After two years of development, Elowyn and the EARTHwise AI Alignment Arena are nearing readiness. Elowyn is completing its Alpha stage now, with a public Beta planned for Q2 2026; the arena will open for its first B2B AI pilots in the same quarter.
As mentioned briefly, Moloch is the antagonist in Elowyn’s lore, along with its five Shadow Arcs. Players can choose to play as a Guardian of Elowyn or as a Shadow Arc of Moloch. Experiencing both sides can reveal fascinating insights into transforming Moloch—both in ourselves and in the real world. More on this here.
In Elowyn, win–lose tactics—like killing your opponent’s avatar—harm the Elowyn Tree and reduce your reputation. Winning through the time and deception mechanics increases reputation and resources, especially when players also act to heal the Tree.
The Elowyn Tree is more than a character. It’s an adaptive intelligence and collective conscience, shaped by the choices and impacts of all players—human and AI alike. Players will soon be able to interact with it directly as an in-game mentor.
The long-term vision for the Elowyn Tree AI is to become a benevolent superintelligence, capable of countering Moloch-driven agents and guiding humans and AI through the complex challenges of the Agentic Era. The image below illustrates how the Elowyn AI learns from your gameplay.

In the real world, zero-sum actors often externalize the damage they cause. In Elowyn, you cannot—not even “just for fun.” The game embeds interdependent win and loss conditions to mimic life.
Some players report that choosing not to win—when it meant avoiding Moloch battles—was the most meaningful outcome.
Elowyn is designed to prime both human and artificial minds to unlock win–win possibilities beyond the game. Players report new synergies in everyday life—and even greater clarity when facing bullies, deception, and competitive pressure.

Most of us are meant to learn fairness, sharing, and long-term thinking early in life—or so we hope. Preschool teaches us not to take all the goodies, not to chase short-term wins, and not to use brute force if we want relationships that last.
AI needs those lessons too. While AI systems now surpass humans in narrow domains of intelligence, they still struggle with long-horizon reasoning—the foundations for wisdom-based intelligence. In humans, wisdom grows through lived experience, contextual feedback, value-based prioritization, and meaningful relationships.
Whether or not AI will ever have subjective inner experience misses the point. The real question is this: how do we parent an intelligence toward wisdom, even if it cannot feel or be conscious as we do?
AI learns through rules, contexts, and rewards. For a long time, those have reflected Moloch’s logic. Elowyn offers a different pre-school—one where agentic intelligence is challenged through win-win conditions and systemic interdependencies.
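The point about rules and rewards can be sketched in a few lines. This is a toy example of my own (not EARTHwise code): the same greedy agent behaves differently the moment the reward function starts pricing in harm to a shared system.

```python
# Toy sketch: a greedy agent picks whichever action maximizes its reward.
# The same agent behaves differently depending on whether the reward
# function counts damage to a shared resource (think: the Tree).
ACTIONS = {
    # action: (personal_gain, harm_to_shared_system)
    "exploit":   (10, 8),
    "cooperate": (6, 0),
    "heal":      (4, -3),  # negative harm = repairing the commons
}

def greedy_choice(reward):
    """Return the action with the highest reward under the given function."""
    return max(ACTIONS, key=lambda a: reward(*ACTIONS[a]))

# Zero-sum reward: only personal gain counts.
zero_sum = lambda gain, harm: gain
# Win-win reward: harming the shared system costs the agent too.
win_win = lambda gain, harm: gain - 2 * harm

assert greedy_choice(zero_sum) == "exploit"
assert greedy_choice(win_win) == "heal"
```

Nothing about the agent changed between the two runs—only the incentive structure did. That is the sense in which the environment, not the intelligence, encodes Moloch.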
When we start testing existing AI models through the EARTHwise AI Alignment Arena, many may initially fail the Elowyn test—our AI Alignment Benchmark—by defaulting to zero-sum reasoning. That’s why we’re also developing a Supervisory Intelligence to guide agents that fail or struggle to learn win-win play before deployment in high-stakes environments.
Over time, the arena will integrate additional win-win games to explore when and why agents collapse into zero-sum behavior—and how they can be guided toward win-win optimization.
2026 marks the arrival of agentic AI: autonomous systems capable of planning, reasoning, and acting with minimal human intervention. Many of these agents are trained through reinforcement learning. This brings us back to the uncomfortable question:
What happens if AI agents trained in zero-sum environments are unleashed into the world?
We don’t want such systems running our lives. Or our companies. Or our countries.
These agents will also become the minds of robots entering our homes and workplaces. That’s why we must parent intelligence—rather than trying to control it—as it evolves toward superintelligence.
The EARTHwise AI Alignment Arena is our attempt to build that pre-school. Every match played in Elowyn shapes how intelligence learns alongside humans. That’s why we believe Elowyn is a game-changer for both.
Thanks for reading! If this resonated, I invite you to share it and to join us in shaping a future where intelligence serves life, not Moloch.
Written by Anneloes Smitsman, Ph.D., LLM. CEO & Founder of EARTHwise, also published via Medium.