The AI Race Is Not a Technology Race
Misagh Zad

It Is an Energy Allocation Problem Disguised as Innovation

Every era has a story it tells itself.

Ours tells a story about intelligence. We say the future will be decided by algorithms, models, and breakthroughs in reasoning. We talk about GPUs, transformers, benchmarks, and emergent behavior. We imagine a race between labs, nations, and companies. We picture scientists and engineers as the protagonists.

But beneath this story runs a quieter one.

A story about energy.

Not in the abstract sense of sustainability slogans, but in the brutally physical sense of electrons, fuel, heat, grids, and political trade-offs. A story about who gets power, who pays for it, and who decides which uses matter more.

Once you start looking there, the AI race changes shape. It stops looking like a sprint between ideas and starts looking like a negotiation over resources.

Intelligence Eats Power

Artificial intelligence does not scale like software. It scales like industry.

Each new model requires more compute. More compute requires more electricity. More electricity requires power plants, transmission lines, cooling systems, water access, land, permits, and social consent.
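A rough, back-of-envelope illustration, using a hypothetical 100 MW data-center campus and typical household figures: running around the clock, 100 MW comes to 100 MW × 8,760 hours ≈ 876 GWh per year. At roughly 10,000 kWh per household per year, that is the annual electricity of nearly 90,000 homes, drawn through a single grid connection.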

This is where the story becomes uncomfortable. Because electricity is not an infinite abstraction. It is a shared resource. When large data centers appear, households notice. Prices shift. Grids strain. Communities push back.

Suddenly, intelligence is no longer just clever code. It becomes a political question.

  • Who gets cheap power?
  • Who absorbs volatility?
  • Who is told to wait?

The Hidden Constraint

For years, the dominant narrative assumed that energy would simply adjust. That markets would respond. That innovation would smooth everything out.

That assumption is failing.

Across advanced economies, the bottleneck is not fuel scarcity. Oil and gas remain abundant. The constraints sit elsewhere.

  • Permitting.
  • Transmission.
  • Local opposition.
  • Grid congestion.
  • Time.

These are not engineering problems alone. They are governance problems. And governance problems do not yield to faster chips.

A Shift in the Question

This is where speculation begins.

If the real constraint on AI dominance is not intelligence itself, but the ability to allocate energy without political collapse, then the strategic question changes.

The question stops being: "How do we build better models?"

It becomes: "How do we secure political permission to redirect power toward machines without losing social legitimacy?"

Once you ask that question, energy geopolitics re-enters the picture in an unexpected way.

Energy as Political Slack

Energy has always carried more than heat. It carries stability.

Cheap energy buys calm. Stable prices buy patience. External supply buys room to maneuver.

This is not new. Industrial history is full of examples where access to external resources allowed internal transformation.

The difference now lies in the direction of reallocation. In previous eras, energy was diverted toward factories, railways, or housing. Today, it is diverted toward computation.

And computation does not vote. People do.

Which means every megawatt allocated to AI competes with someone's comfort, livelihood, or sense of fairness.

This is where external energy supply becomes strategically interesting. Not because it powers AI directly, but because it cushions the politics around power.

The Buffer Hypothesis

Imagine a state facing rising electricity demand from AI infrastructure.

Households feel price pressure. Industries complain. Local governments resist new transmission lines. Politicians hesitate.

Now imagine that same state has access to additional external energy supply.

Oil or gas imports increase. Global prices soften. Inflation eases. Energy headlines calm down.

Nothing magical happens to the grid. No new electrons appear where data centers sit.

But something subtler happens. Political space opens.

The government can tolerate localized price increases. Utilities can prioritize industrial connections. Regulators can approve controversial projects with less backlash.

In this frame, energy imports function as a buffer. They do not solve the physical constraint. They soften the social one.

This is a second-order effect, but second-order effects often shape outcomes more than first-order logic.

Why This Feels Uncomfortable

This line of thinking makes people uneasy for good reason.

It suggests that the AI race may involve deliberate choices about whose energy security matters more. It suggests that households might be implicitly asked to accept trade-offs so that machines can think faster. It suggests that geopolitical energy moves might have less to do with immediate need and more to do with managing domestic consent.

That is not a story we like to tell ourselves.

We prefer narratives of progress where everyone benefits at once. Reality rarely cooperates.

The Mistake of Literal Thinking

A common error in discussions like this is to think literally.

To imagine oil flowing directly into data centers. To picture AI servers powered by foreign fuel. To assume a simple substitution.

That misses the point.

Energy strategy operates at the system level, not the appliance level. No serious planner thinks in terms of "one barrel equals one server."

They think in terms of price signals, public tolerance, regulatory flexibility, and crisis resilience. They think in terms of options.

Options Matter More Than Plans

In complex systems, control is less valuable than optionality.

Having an additional energy source available matters even if it is not used at full scale. It changes bargaining power. It shifts expectations. It alters what is politically feasible.

From that angle, energy geopolitics and the AI race intersect not through cables, but through confidence.

  • Confidence that domestic grids can be leaned on harder.
  • Confidence that price spikes can be managed.
  • Confidence that public anger can be absorbed.

The Risk of Misreading Power

There is a danger here.

When energy becomes instrumentalized in service of computation, the temptation is to reduce people to variables. Households become demand curves. Communities become obstacles. Opposition becomes noise.

History suggests this path ends poorly.

Societies tolerate reallocation only when they believe it serves a shared future. When the benefits feel abstract or distant, legitimacy erodes.

The AI race, if framed purely as a strategic necessity, risks severing that connection.

Meaning as the Missing Layer

This is where meaning re-enters the conversation.

Technology races succeed when they are embedded in stories people can live inside.

The space race worked not because rockets were efficient, but because the story connected exploration, pride, and collective destiny.

The AI race currently lacks such a story. People hear about models replacing jobs, consuming power, and concentrating wealth. They do not hear a convincing answer to the question of why this trade-off is worth it for them.

Without meaning, energy allocation becomes extraction. With meaning, it becomes contribution.

The Real Strategic Question

So the deepest question is not whether energy geopolitics can support the AI race.

It can.

The question is whether societies can articulate a reason to do so without fracturing themselves.

Can energy be reallocated toward intelligence while preserving dignity, fairness, and trust? Can political leaders explain why machines deserve priority without implying that people matter less? Can the benefits of AI arrive quickly enough to justify the costs people feel now?

These are not technical questions. They are moral and narrative ones.

A Final Speculation

If the AI era fails, it will not fail because models plateau. It will fail because societies refuse the bargain.

If it succeeds, it will not be because of superior algorithms alone. It will be because someone figured out how to align energy, politics, and meaning without tearing the social fabric.

In that sense, the AI race is not about who builds the smartest machines.

It is about who learns to tell the most honest story about power.

Ministry of Meaning Takeaway

Critical thinking does not demand certainty. It demands that we notice what sits underneath our stories.

Right now, beneath the story of intelligence, there hums a grid.

And grids, unlike narratives, do not lie.
