I spend most of my time these days reading, thinking, writing, and podcasting about AI. And I’m optimistic about where it’s headed — substantively optimistic. Not in the “everything will be fine, don’t worry” sense, but in the “history gives us strong reasons to believe this will be net positive” sense.

That puts me in something of a minority. Polls show Americans are increasingly pessimistic about AI. The loudest voices in the conversation tend to be warning about job losses, existential threats, or societal collapse. And I get the appeal of those arguments — they’re vivid and specific in a way that optimism often isn’t. It’s easy to picture the jobs that disappear. It’s much harder to picture the ones that don’t exist yet.

But I think the pessimists are mostly wrong — and the optimists have a stronger case than they usually make. Here’s mine.

Why the Doomers Are Overwrought

I’d break the AI pessimists into three broad camps: economic doomers, existential doomers, and societal doomers. Each has a version of the argument that sounds compelling. But I think they all share a common mistake — overestimating how “this time is different” and underestimating what history actually teaches us about how these transitions play out.

Economic Doomers

The economic doomers believe AI will cause mass unemployment — that it’ll replace huge swaths of white-collar work quickly, concentrating wealth among the few who own the technology. I understand the fear. But I think it dramatically overestimates the speed at which these changes actually work their way through the economy.

Anyone who studies forecasting knows a useful principle: we tend to neglect the base rate and overestimate how much “this time is different.” I think that’s exactly what’s happening in the AI conversation. The base rate — what history actually teaches us about technology adoption — is that these transitions are slow. Not because the technology isn’t powerful, but because organizations are slow. Change management is hard. Retraining takes time. Entire industries don’t restructure overnight.

Think about accountants. Spreadsheets didn’t eliminate accounting — they transformed it. There are more accountants today than before spreadsheets existed, and they do more sophisticated work. More people have access to accounting, too. The profession expanded because the tool made it more accessible and more valuable, not less.

Or think about this: twenty-five years ago, “mobile app developer” wasn’t a job. It wasn’t even a concept. Today it’s a massive job category supporting entire businesses and industries. It was essentially impossible to picture that role before the iPhone existed. And that’s the pattern — it’s easy to see the jobs that get disrupted. It’s hard to imagine the new ones that emerge. That asymmetry is a big part of why the negative argument feels more intuitive than the positive one. But the positive one is what history actually delivers.

Will AI change work? Absolutely — and massively. But not as rapidly as the doomers suggest. And will the endpoint be net positive? I believe it will — because that’s what every major technological revolution in history has produced, once we’ve had time to adapt.

Existential Doomers

The existential doomers worry about something much bigger — that AI will become a superintelligence, a digital god that either destroys or subjugates humanity. The Skynet scenario. I take this concern seriously, and I can’t prove it’s impossible. Nobody can prove a negative.

But I don’t see any evidence that we’re close to that kind of capability. What we have today — and what we’re likely to have for the foreseeable future — are very powerful tools. Not sentient beings. Not digital gods. Very, very good tools that can process information and identify patterns at scales humans can’t match. That’s world-changing, but it’s a fundamentally different thing than the sci-fi scenario.

Arvind Narayanan and Sayash Kapoor make this case well in their essay “AI as Normal Technology” — even transformative, general-purpose technologies like electricity and the internet are still “normal” technologies. AI may be the most important technology of our lifetimes, but it’s still a technology. Not a new species. I think treating it that way leads to much better decisions than treating it as an existential threat.

Societal Doomers

The societal doomers worry that AI will rip apart the fabric of society — through misinformation, manipulation, deepfakes, erosion of trust. And I’ll be honest: I think this camp has the most legitimate concerns of the three. These are real risks.

But here’s what I keep coming back to: we’re already experiencing most of these problems. The internet, social media, algorithmic feeds, smartphones — these technologies have already strained our information ecosystem, polarized our politics, and created real challenges for mental health and social trust. These aren’t hypothetical concerns. They’re the world we live in right now.

I don’t think AI represents a step-function change in those problems. It’s a continuation of trends that started well before large language models existed. And if anything, AI might actually give us better tools to address some of these challenges — from detecting misinformation to personalizing education to making complex systems more navigable.

I’m not dismissing the risks. I’m saying they’re not new, and I’m more optimistic about our ability to manage them than most.

The Real Upside — and Why It’s Bigger Than Chatbots

Here’s where I think the conversation gets most interesting — and where the doomers most badly miss the mark. When most people think about AI, they picture chatbots, image generators, maybe a coding assistant. I’m excited about those things and the impact they’re already having. But they’re just a small part of the story.

The thing that excites me most is AI’s ability to accelerate discovery — particularly in fields where the bottleneck isn’t creativity or insight but the sheer scale of possibilities to explore.

Take medicine and biology. AlphaFold effectively solved protein structure prediction — a challenge that had stumped researchers for decades — by doing something humans simply can’t: learning from every known protein structure and predicting new ones at a speed and scale no lab could match. That’s not “artificial genius.” It’s a fundamentally different kind of tool — the kind that can sift through millions of potential drug compounds, protein structures, or genetic combinations and surface the most promising candidates for human researchers to investigate.

This pattern applies well beyond biology. In materials science, we’re using AI to evaluate thousands of potential battery chemistries to find the ones worth investing in. In energy, AI is helping optimize everything from grid management to the search for better solar cell materials. In climate science, it’s accelerating the modeling of complex systems that are too large and interconnected for humans to analyze alone.

Dario Amodei, the CEO of Anthropic, wrote a long essay laying out many of these potential upsides in detail. His framing is different from mine — I don’t love the anthropomorphizing of AI as “geniuses in a data center” — but the underlying point resonates. We’re building tools that can explore possibility spaces at a scale that was previously unimaginable. The question isn’t whether that’s powerful. It’s whether we’ll let ourselves use it.

And that’s the pattern that gives me the most confidence. This isn’t speculation about some far-off future. AlphaFold exists today. AI-assisted drug discovery is happening now. The tools that will help us address climate change, develop better energy storage, discover new materials — those are in progress, not in the realm of science fiction.

When I look at the history of technology, this is what the big revolutions actually do. They don’t just automate existing work — they open up entirely new possibilities that we couldn’t have imagined before. The printing press didn’t just make scribes faster. The internet didn’t just make mail faster. And AI won’t just make knowledge workers faster. It will let us attempt things we couldn’t have attempted at all.

The One Thing I Worry About

So if I’m this optimistic, what keeps me up at night? Regulation.

Not the existence of regulation — some regulation is necessary and important. What worries me is that we’ll over-regulate AI out of fear, locking it away before we get the chance to realize its benefits. That we’ll see the risks — which are real — and respond by putting this technology in a box.

I keep thinking about nuclear power. Here was a technology with the potential to fundamentally transform our energy infrastructure and help address climate change. And we effectively regulated it into irrelevance. Not because the technology didn’t work, but because the fear of what it could do outweighed the appreciation of what it could deliver. Decades later, we’re desperately trying to reverse course as we realize how much that decision cost us.

I worry we’re on that same path with AI. The doomer narratives are loud. Regulation is the natural response to fear. And if we’re not careful, we’ll end up in a world where we’ve mitigated the downsides but also forfeited the upsides — the medical breakthroughs, the scientific acceleration, the expansion of what’s possible.

For what it’s worth, I’m pretty optimistic about the private market figuring out education and corporate adoption. Companies will adapt because they have to — the competitive pressure is too strong. But regulation is the one area where well-intentioned decisions, driven by fear, could hold us back.

We’ve Done This Before

I’ll close with the observation that gives me the most comfort. We’ve been here before — and we’ve gotten through it.

There was a time when the vast majority of humans worked in agriculture. That’s not true anymore. We didn’t end up with mass permanent unemployment. We found new kinds of work, built new industries, adapted our institutions. It wasn’t always smooth or fast or painless. But we adapted.

I don’t pretend to know exactly what the AI-enabled future looks like. Nobody does. But I think the historical base rate is overwhelmingly on the side of optimism — not naive optimism, but the earned kind. The kind that says: this will be hard, there will be real challenges, and we’ll figure it out. We always have.

The biggest risk isn’t that AI changes too much. It’s that we don’t let it change enough.