Zone of Probable Impact
There is so much commentary and analysis about the field of AI these days. Perhaps too much. And much of it strikes me as hyperbolic. You can generally sort the commentators into one of four camps.
Doomers believe AI will be enormously impactful and will ultimately destroy human civilization à la Skynet from Terminator. Maybe they envision a kinder version, more WALL-E than killer robots, where we decline into complacency rather than annihilation. Either way, this is AI as a massively negative force for humanity.
Zoomers believe AI will revolutionize human experience and lead to some version of utopia, solving most, if not all, human problems. Think Star Trek: a world of abundance where technology has freed us from scarcity. (Most sci-fi leans dystopian, so the Zoomers have fewer good references to point to. Maybe that tells us something about human nature, or maybe just about what makes for good storytelling.)
Skeptics believe AI is overhyped and will ultimately fail to deliver on its promises. Perhaps they view AI as something like VR: a technology that was always “just around the corner” and never quite arrived.
Realists believe AI is a genuinely transformative technology, but one that will take a long time to produce massive societal impacts and represents a manageable change for humanity. This view is perhaps best captured by Arvind Narayanan and Sayash Kapoor’s essay “AI as Normal Technology”: even revolutionary, general-purpose technologies are still, at the end of the day, technologies.
You could plot each of these positions on a two-axis chart. The x-axis is “Scale of Impact,” running from negligible to transformative. The y-axis is “Direction of Impact,” running from catastrophic to utopian. The Doomers live in the bottom right. The Zoomers in the top right. The Skeptics sit on the far left, where direction barely matters. And the Realists occupy a big zone in the middle: high impact, but manageable. Positive on balance, but not utopian.
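If it helps to picture the chart, here is a minimal sketch in Python with matplotlib. The coordinates are illustrative guesses of mine, not measurements of anything; the dashed rectangle stands in for the Realist zone.

```python
# A rough sketch of the two-axis chart described above.
# All coordinates are illustrative guesses, chosen only to
# convey the relative positions of the four camps.
import matplotlib.pyplot as plt

# (scale of impact, direction of impact), each on a 0-1 scale
camps = {
    "Doomers":  (0.9, 0.1),   # transformative, catastrophic
    "Zoomers":  (0.9, 0.9),   # transformative, utopian
    "Skeptics": (0.1, 0.5),   # negligible impact, direction moot
    "Realists": (0.7, 0.65),  # high impact, positive on balance
}

fig, ax = plt.subplots(figsize=(5, 5))
for name, (x, y) in camps.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 6))

# The "Realist zone": a big box in the middle, not a single point.
ax.add_patch(plt.Rectangle((0.45, 0.35), 0.45, 0.5,
                           fill=False, linestyle="--"))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("Scale of Impact (negligible to transformative)")
ax.set_ylabel("Direction of Impact (catastrophic to utopian)")
ax.set_title("Zone of Probable Impact")
plt.show()
```

None of the numbers are load-bearing; the point is only that “Realist” names a region, not a point.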
For all that commentary, the overwhelming majority of the public conversation falls into one of the first three camps. The Doomers and Zoomers get the most airtime because their arguments are dramatic and emotionally vivid. The Skeptics get attention because contrarianism always draws a crowd. But the Realists, the camp most likely to be right, are chronically under-discussed.
And I get why. “This will be really important but also manageable” is a boring headline. It doesn’t generate clicks, doesn’t fill conference keynotes, and doesn’t make for compelling late-night dorm room debates. But I think it’s where we’ll end up, and I’d put the probability well above 85%.
The Box in the Middle Is Still Huge
Here’s the thing, though. Saying “we’ll probably end up in the Realist zone” is not the same as saying everything will be fine. That box in the middle is enormous. It encompasses a wide range of outcomes — from AI that modestly improves productivity in a few industries to AI that fundamentally reshapes healthcare, education, scientific discovery, and economic opportunity. From a world where we’ve managed the transition reasonably well to one where we’ve captured only a fraction of the potential upside while fumbling through unnecessary disruption.
The difference between landing in the top-right corner of that box and landing in the bottom-left is, in practical, human terms, enormous. We’re talking about whether millions of people get access to better healthcare, whether scientific breakthroughs happen a decade sooner, whether the economic gains flow broadly or concentrate narrowly.
So the question isn’t really “will AI be transformative?” I think it will. The question is: within the zone of probable impact, where do we end up? And that’s not predetermined. It’s a function of choices being made right now by three groups: developers, deployers, and regulators.
Developers: Build for the Margins, Not Just the Middle
AI developers — the labs building foundation models and the companies building applications on top of them — will shape where we land more than anyone. And the biggest risk I see isn’t that they’ll build something dangerous. It’s that they’ll build for the easiest use cases and leave the hardest, most impactful ones underserved.
It’s natural to focus on where the money is — enterprise productivity tools, coding assistants, marketing copy generators. These are real, valuable applications. But the transformative potential of AI lies disproportionately in harder problems: drug discovery, materials science, climate modeling, education in underserved communities. The areas where the market signal is weaker but the human impact is highest.
I’m not suggesting developers ignore commercial viability — that’s what funds everything else. But I think the best AI companies will be the ones that deliberately invest in high-impact, harder-to-monetize applications alongside their core business. AlphaFold didn’t come from a startup chasing revenue. It came from a lab that believed solving protein folding was worth doing even if the business model wasn’t obvious.
The developers who think beyond the next quarter’s revenue will do more to push us toward the top-right corner of that box than anyone.
Deployers: Stop Waiting for Perfect
By deployers, I mean the organizations — companies, hospitals, schools, governments — that actually put AI into practice. And the biggest issue I see here isn’t recklessness. It’s paralysis.
Too many organizations are sitting on the sidelines, waiting for AI to “mature” or for someone else to go first. I understand the instinct. Nobody wants to be the cautionary tale. But the cost of inaction isn’t zero; it’s just invisible. Every month an organization delays bringing AI into its workflows is a month of lost productivity, of worse outcomes, and of falling further behind competitors who are learning by doing.
I’ve written before that the real danger isn’t making a wrong call — it’s standing still. That applies here. The organizations that will get the most value from AI aren’t the ones that waited for a perfect solution. They’re the ones that started early, iterated, and built the institutional knowledge to deploy AI effectively.
This is especially true in sectors like healthcare and education, where the potential upside is enormous but the institutional inertia is strong. Every hospital system that delays implementing AI-assisted diagnostics, every school district that waits another year to explore personalized learning — those delays have real costs measured in real human outcomes.
Deploy thoughtfully. Deploy carefully. But deploy.
Regulators: Learn from Nuclear
I’ve written about this before, but it bears repeating: my biggest worry about AI isn’t the technology. It’s the regulation.
I keep coming back to nuclear power. Here was a technology with the potential to transform our energy infrastructure and make real progress on climate change. And we effectively regulated it into irrelevance — not because it didn’t work, but because the fear of what it could do overwhelmed the appreciation of what it could deliver. Decades later, we’re desperately trying to reverse course as we realize how much that decision cost us.
The parallel to AI is uncomfortably close. The loudest voices in the conversation are warning about catastrophic risks. Regulation is the natural response to fear. And if we’re not careful, we’ll build a regulatory framework that successfully mitigates the downside risks while also forfeiting the upside — the medical breakthroughs, the scientific acceleration, the expansion of what’s possible.
Good regulation is important. We need thoughtful rules around data privacy, algorithmic transparency, and accountability for AI-driven decisions. But as I’ve argued elsewhere, there’s a difference between regulation that creates guardrails and regulation that creates roadblocks. The former helps us navigate toward the top-right of the Realist box. The latter pushes us toward the bottom-left — or worse, keeps us from entering the box at all.
The regulators who get this right will be the ones who resist the urge to regulate based on what AI might become and instead focus on what it actually does today — who’s harmed, how, and what specific safeguards would help. That’s harder than writing broad prohibitions, but it’s the only approach that protects people without killing the potential.
Moving to the Top Right
I’m a Realist. I think AI will be genuinely transformative, broadly positive, and slower to reshape society than either the Doomers or Zoomers expect. But “broadly positive” isn’t guaranteed — it’s an outcome we have to work toward.
The developers building these tools, the organizations deploying them, and the regulators writing the rules all have a role to play. And right now, we’re spending too much of our collective attention on the dramatic but unlikely scenarios — the Doomers and Zoomers arguing about Skynet vs. Utopia — while under-investing in the practical, unglamorous work of steering toward the best realistic outcome.
That’s what I’d like to see change. Less debate about whether AI will save or destroy us. More focus on how we make sure the most likely outcome — the one in the big box in the middle — is as good as it can be.
That’s the zone of probable impact. Let’s make the most of it.