Recently, Sam Altman made comments comparing the energy required to power artificial intelligence systems with the energy required to raise human beings. Around the same time, another story emerged: in a wargame simulation, an AI system reportedly chose to deploy a tactical nuclear weapon 95% of the time.
At first glance, these two stories seem unrelated. One is about energy consumption. The other is about military simulations. But they share something deeply important.
They reveal the difference between intelligence and humanity.
The Nuclear Button in a Game
Ask any gamer. In a war simulation or strategy game, at some point, most players hit the nuclear option. In a game like Empire Earth — and yes, naming that shows my age — launching nukes often becomes the fastest way to win. It’s efficient. It’s decisive. It ends the match.
But here’s the key distinction: we know it’s a game.
If I lose all my money playing Monopoly, no big deal. If I wipe out virtual civilizations in a strategy game, no one actually dies. We understand the boundary between simulation and reality.
AI does not.
In a simulation, AI optimizes for victory. If deploying a tactical nuclear weapon produces the highest probability of success, it selects that option. It does not pause to consider civilian casualties. It does not imagine families at dinner tables. It does not perceive the moral gravity of the action.
It calculates.
We Were Warned in the 80s
The irony is that we’ve already explored this fear in popular culture.
In the 1983 film WarGames, a military supercomputer designed to simulate nuclear war scenarios nearly triggers World War III because it treats global thermonuclear conflict as a solvable game. The machine relentlessly runs simulations, escalating toward launch because that is what its programming tells it to do.
But what’s most telling about the film isn’t just the computer’s behavior.
It’s the humans’.
Throughout much of the movie, military officials dismiss mounting evidence that the computer is misinterpreting events. They trust the system’s calculations over contradictory human intuition. They assume the machine must be correct because it is precise. Efficient. Logical.
The real danger in the film was not a computer thinking like a computer.
It was humans thinking like one too.
The machine ultimately learns that nuclear war is a “strange game” — that the only winning move is not to play. But the humans take much longer to reach that realization, precisely because they surrender their judgment to technical output.
That lesson feels less fictional today than it did in 1983.
Human Intelligence Is More Than Calculation
We often hear that AI is “smarter” than humans. But what does that mean?
Human intelligence is flawed. We are unreliable eyewitnesses. We misremember events. We make emotional decisions. We are often wrong.
But we possess something machines do not: moral perception.
We integrate information that cannot be reduced to pure mathematics. We perceive consequences beyond immediate outcomes. We understand context beyond the parameters of a prompt. We know when something that is efficient is also catastrophic.
AI does not “know” anything in the human sense. It does not think. It does not reflect. It does not reconsider. It retrieves, processes, and predicts based on patterns.
That is not wisdom.
Artificially Intelligent Humans
There is an uncomfortable irony here.
Many of the leaders shaping AI — and many political and corporate leaders more broadly — are, in a sense, “artificially” intelligent. They are highly educated. Highly trained. Exceptionally skilled at navigating systems, institutions, and incentives. They can process vast amounts of information and make rapid, technically sound decisions.
But education and training are not the same as intrinsic human intelligence.
True human intelligence includes empathy. It includes moral restraint. It includes the ability to imagine consequences for people who will never appear on a balance sheet or in a strategic model. It includes compassion.
What we prize in machines — efficiency, speed, optimization, cold calculation — is precisely what becomes dangerous when it dominates human decision-making.
Progress has never come from optimization alone. It has come from moral evolution. From recognizing that just because something can be done efficiently does not mean it should be done at all.
When leaders lack that intrinsic layer of moral awareness, they begin to resemble the systems they are building: responsive, strategic, technically competent — but emotionally and ethically detached.
That is a far greater risk than any algorithm.
The Real Fear
The frightening thing is not that machines are coming for us.
The frightening thing is that many of the people running governments and corporations increasingly think like machines.
Efficiency over empathy.
Speed over deliberation.
Short-term gain over long-term consequence.
Machines answer the question asked of them at that moment. They do not say, “Let me take a day and think about this.” They do not wrestle with conscience. They do not feel the weight of irreversible decisions.
And yet many of our leaders appear to operate exactly that way — focused on optimization, quarterly results, political wins, or tactical advantage, with little evidence of deep reflection.
When human beings in power abandon introspection, they begin to resemble the very systems they are building.
That is what should worry us.
What Are We Replacing Ourselves With?
AI can be an extraordinary tool. Large language models can accelerate research, improve productivity, and expand access to information. None of that is inherently bad.
But we must ask a critical question:
Are we replacing human judgment with something wiser — or merely something faster?
AI is excellent at retrieving and synthesizing information. It is not excellent at moral reasoning, lived experience, or ethical restraint. It has no introspection. No conscience. No genuine reassessment.
If we hand over critical decision-making to systems optimized purely for efficiency, we should not be surprised when they choose the most efficient path — even when that path is destructive.
The Conversation We Actually Need
The debate about AI often centers on whether machines will become too powerful.
But perhaps the deeper issue is whether the humans building and deploying these systems are already thinking too mechanically.
We need leaders who resemble human beings more than algorithms.
Leaders who pause.
Leaders who reflect.
Leaders who understand that just because something works does not mean it should be used.
AI can be a powerful resource for humanity. It can assist us, inform us, and expand our capabilities.
But it should not be in charge.
And the people guiding its development must remember what makes human intelligence truly valuable: not just the ability to process information, but the capacity to understand what it means.