Another week, another viral essay predicting that artificial intelligence will end civilization. These arguments range from the sophisticated (Nick Bostrom’s Superintelligence) to the sensational (countless Twitter threads about imminent doom). Having spent time with both the technical literature and the breathless commentary, I find myself skeptical of the doom scenarios while still believing AI safety matters. Here’s why.

The Standard Doom Argument

The typical doom argument runs something like:

  1. AI capabilities are improving rapidly
  2. Eventually, we’ll create an AI more intelligent than humans
  3. A superintelligent AI would be difficult or impossible to control
  4. An uncontrolled superintelligent AI would likely cause human extinction
  5. Therefore, we’re probably headed for extinction soon

Each step has its problems.
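Before taking those steps one at a time, it’s worth noting that the conclusion is a conjunction: every link has to hold. Here’s a quick back-of-the-envelope sketch in Python. The per-step probabilities are purely illustrative placeholders, not estimates I’m defending, and the steps aren’t truly independent; the only point is that multiplying several shaky premises leaves far less certainty than any single one suggests.

    # A back-of-the-envelope check on the doom argument's chain of steps.
    # The per-step probabilities are illustrative assumptions, not estimates,
    # and the steps are not truly independent.

    steps = {
        "AGI arrives this century": 0.5,
        "AGI rapidly becomes superintelligent": 0.4,
        "Superintelligence can't be controlled": 0.4,
        "Uncontrolled superintelligence causes extinction": 0.5,
    }

    joint = 1.0
    for claim, p in steps.items():
        joint *= p
        print(f"{claim}: {p:.0%}  (running product: {joint:.1%})")

    print(f"Joint probability of the whole chain: {joint:.1%}")
    # Even treating every step as a coin flip or better, the conjunction
    # comes out around 4%, not a confident forecast of catastrophe.

Push the numbers up or down as you like; the chain only supports confident doom if every factor sits close to one.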

Where I’m Skeptical

Timelines: Predictions about when we’ll achieve AGI have been consistently wrong for 70 years. Current LLMs are impressive at language tasks but lack the kind of general reasoning, world modeling, and autonomy that “superintelligence” implies. The gap between “good at autocomplete” and “can take over the world” is enormous.

The intelligence explosion: The idea that an AI could rapidly self-improve into a god-like superintelligence assumes dynamics we don’t understand. Improving AI systems is hard: it takes not just intelligence but compute, data, and insight into which changes will actually help. There’s no evidence that intelligence alone is sufficient for rapid capability gains.

The impossibility of alignment: Doom arguments often assume that aligning AI with human values is essentially impossible. But we align intelligent systems all the time: children, employees, organizations. These systems don’t follow our values perfectly, but they don’t destroy us either. Why should AI be categorically different?

Paperclip maximizers: The scenario where an AI single-mindedly pursues a poorly specified goal (like making paperclips) to the exclusion of all else assumes both extreme capability and extreme stupidity. An AI smart enough to take over the world would probably be smart enough to realize that “maximize paperclips” wasn’t really what its creators wanted.

Where I Take It Seriously

That said, I don’t dismiss AI safety concerns:

Capability growth is real: Whatever the timeline to AGI, AI capabilities are clearly growing. Systems that seemed like science fiction five years ago are now routine. Planning for more capable systems makes sense.

Current systems cause harm: You don’t need superintelligence for AI to cause problems. Biased hiring algorithms, manipulative recommendation systems, and AI-enabled disinformation are causing harm today. These near-term issues deserve more attention than speculation about superintelligent paperclip maximizers.

Concentration of power: Powerful AI systems are expensive to train, requiring massive compute and data. This concentrates AI development in a few large companies and governments. The governance of these systems matters regardless of whether they’re “superintelligent.”

Unknown unknowns: I could be wrong. The history of technology is full of surprises. Taking some precautions against scenarios I find unlikely is reasonable, especially if those precautions also address more likely harms.

A Different Frame

Rather than asking “Will AI kill us all?”, I find these questions more productive:

  • How do we ensure AI systems do what we intend?
  • How do we maintain human oversight as systems become more capable?
  • How do we distribute the benefits of AI broadly?
  • How do we govern AI development across companies and countries?

These questions are important regardless of whether superintelligence is coming next year, next century, or never. They focus on the things we can actually influence rather than on scenarios that may be imaginary.

The Epistemics of Doom

One thing that bothers me about doom arguments: they’re unfalsifiable in the short term. If AI doesn’t end civilization this year, doomers say it’s still coming. And if it ever does, no one will be left to grade the prediction. Either way, the claim never faces a test it can fail.

This is a red flag for any argument. Predictions should be testable. If your worldview generates no falsifiable predictions, you should be suspicious of it.

Conclusion

I think AI will be transformative and important. I think we should invest in making AI systems safe, interpretable, and aligned with human values. I think the governance of AI development is a crucial challenge.

But I don’t think we’re facing extinction. The doom scenarios chain together too many uncertain steps; multiplied out, those probabilities should undercut confidence rather than justify predictions of catastrophe. I prefer to focus on the near-term challenges that are clearly real and clearly tractable. That’s where I think my time and attention are best spent.