Between Collapse and the Cosmos
A framework for thinking about utopia, dystopia, and the future of humanity
Originally written in March of 2021, this essay has been lightly revised as part of an ongoing effort to revisit and refine my thinking.
The future is like the next season of a popular television show. Everyone has an opinion about what it will be like, yet even with our most sophisticated foresight tools, it's impossible to predict exactly what it will entail. Unlike a show, though, there is no writers’ room to guarantee a satisfying ending. Still, we can zoom out from the game of specific predictions to examine the broad possibilities of future scenarios. As I write about on this newsletter’s About Page, humanity is in the midst of a revolution — from the advancement of artificial intelligence to the colonization of the cosmos, our species sits on a pendulum that can swing toward peril or prosperity. To better understand these possibilities, I'll expand this pendulum into a 2x2 framework, mapping technological risks against ideological alignment. What emerges is revealing: a single path to prosperity amidst three potential futures we'd want to avoid.
The philosopher Nick Bostrom, famous for bending our minds toward unappealing futures — see the Simulation Argument, the Great Filter, and his work on AI — offers "The Vulnerable World Hypothesis" in an essay in Aeon Magazine. In it, Bostrom explains how the progress of technology may eventually make it so easy to create harmful technology that a single person could bring about civilizational collapse. This potential leads not only to catastrophe but also to dystopian futures where technological power creates undesirable living conditions for much of humanity.
Consider these scenarios based on different technologies:
Artificial intelligence: A global superpower with authoritarian motives creates a superintelligent AI and uses it to bend the rest of the world to its own political will.
Biotechnology: Human-made viruses escape containment or are deliberately released, creating a cycle of endless pandemics that fragment societies and collapse global systems of cooperation.
Nanotechnology: Self-replicating machines *go grey goo* on everything (a hypothetical end-of-the-world scenario in which out-of-control nanobots consume matter indiscriminately).
Virtual reality: The technology is used by corporations and governments for criminal punishment, leading to long and unforgiving sentences as well as disturbing torture techniques (cue Black Mirror).
What makes these scenarios dangerous is that only one of them needs to come true, only one of the technologies used maliciously, for the future to become undesirable.
On the flip side, these technologies can also be used for positive progress, working together to create a renaissance that pushes our species to new heights. But to reach this utopia, we need every single one to be safely and ethically utilized, both individually and within their complex network of interactions. For example, we don’t want AI developers instructing their artificial lifeforms to create biological weapons. Bostrom discusses potential solutions to this problem, such as "global governance" or a "system of total surveillance," because once Pandora’s Box is opened, there is no going back. In a paper from 2007, he outlines four potential futures: posthumanity, extinction, recurrent collapse, and plateau.1 He considers the likelihood of each possibility, plotting technological capability on one axis against time on the other. But technology alone doesn’t shape the future; how we cooperate and clash is just as crucial. To complete our framework, we need a second axis: ideological alignment.
In an essay in The Atlantic, Graeme Wood profiles Peter Turchin, a scientist known for predicting that 2020 would be a year of great turmoil. Turchin developed cliodynamics, "the search for general principles explaining the functioning and dynamics of historical societies." Using mathematical analysis to study human history, he examines the dynamic between elites and counter-elites in societies, hypothesizing that an imbalance between them leads to a civilization's collapse. Wood captures this approach:
“Turchin looks into a distant, science-fiction future for peers. In War and Peace and War (2006), his most accessible book, he likens himself to Hari Seldon, the “maverick mathematician” of Isaac Asimov’s Foundation series, who can foretell the rise and fall of empires. In those 10,000 years’ worth of data, Turchin believes he has found iron laws that dictate the fates of human societies.”
Although Turchin draws clear distinctions between his field and Hari Seldon's psychohistory, the comparison resonates — science fiction often prefigures our most imaginative innovations. The key insight from Turchin's work is the sociological dimension of collapse, which he defines through elite competition but which we might broaden to encompass any opposing ideological beliefs that fragment society's ability to address existential challenges.
Let’s now get into the framework. The vertical axis represents the degree to which technological risks are realized or averted, from realized (bottom) to averted (top), while the horizontal axis maps ideological dynamics, from polarization (left) to alignment (right). The initial inclination might be to place utopia and dystopia in opposite corners, yet reality is more complex: the framework reveals two distinct types of dystopia, with apocalypse/collapse opposing utopia. Let's examine each quadrant:
Utopia (Technology Risks Averted, Aligned Ideology): This represents a perfect society where we've both averted technological risks and achieved ideological harmony, thus ensuring abundance and equality across humanity. Here we might find majestic cities that both reach to the stars and seamlessly integrate with natural systems, new forms of well-being through ethical neural interfaces, sustainable interstellar expansion and communication with peaceful extra-terrestrials, radical longevity without losing the meaning of life, and technologies that enhance rather than replace human connection. The crucial element is that these advances serve all people, not just specific groups.
Technological Dystopia (Technology Risks Realized, Aligned Ideology): This dystopia emerges when we achieve ideological alignment but one or more technological risks materialize — perhaps a superintelligence with goals misaligned with human flourishing, or a biotechnological advance that fundamentally alters human nature in ways we come to regret. Most humans either perish or subsist in diminished conditions, while a small elite might temporarily insulate themselves through wealth or privileged access to protective technologies.
Ideological Dystopia (Technology Risks Averted, Polarized Ideology): Here, we successfully constrain dangerous technologies, but humanity's opposing ideologies remain in perpetual conflict, creating societies where some groups continue to systematically dominate others. This might manifest as continuous warfare between ideologically opposed nations, or through systems like Bostrom's "freedom tags" — surveillance devices monitoring everybody’s every move to prevent disaster but igniting fervent ideological battles over privacy versus security.2 As William Gibson is famously quoted as saying: "The future's already here — it's just not very evenly distributed."
Apocalypse/Collapse (Technology Risks Realized, Polarized Ideology): The worst-case scenario, where we neither manage technological risks nor resolve ideological conflicts. Competing powers race to implement their worldviews, triggering existential threats like nuclear warfare, engineered pandemics, or environmental collapse that fundamentally undermine human civilization.
Now, of course, this framework simplifies the potential futures for our species and planet; dystopias can bleed into collapse, and the boundaries between quadrants aren’t always clean. But it helps illustrate why the coming decades must prioritize both safe technological progress and the resolution of our most fundamental cultural, political, and moral conflicts. It’s important to note that "ideological alignment" does not require consensus on every issue — diversity of thought remains valuable. Rather, it means overcoming our biological tendencies toward tribal conflict on issues where agreement is essential, such as recognizing every human's right to exist without persecution.
In The New Yorker, Corinne Purtill writes about Oxford philosopher Toby Ord and his book The Precipice; Ord's work on existential risk captures the gravity of our potential apocalypse:
“As “Precipice” closes, Ord zooms out to the cosmos and, against the backdrop of its unfathomable vastness, asks us to grasp the scale of what we risk losing if the human story ends prematurely. He writes that, just as our early forebears, huddled around some Paleolithic fire, couldn’t have imagined the creative and sensory experiences available to us today, we, too, are ill-equipped to conceive of what is possible for those who will follow us. Humanity’s potential is worth preserving, he argues, not because we are so great now but because of the possibility, however small, that we are a bridge to something far greater.”
Ord's cosmic perspective highlights the immense responsibility we bear. Our 2x2 framework maps potential futures where humanity either fulfills its vast cosmic potential or squanders it through technological recklessness or ideological fracture. The single utopian quadrant represents the ideal, that "bridge to something far greater."
Balancing the problems of today with those of tomorrow remains our central challenge. How much of our ideologies are we willing to sacrifice to ensure humanity's future? Would guaranteeing the existence of billions of future human lives, including your direct descendants, be enough to justify personal sacrifice? Or would you choose to maintain today's freedoms, even at an increasing risk of global annihilation?
The quadrants of our future await our collective choices.3
Jim Dator's futures framework offers an alternative lens, breaking down possibilities into Continuation, Transformation, Limits and Discipline, or Decline and Collapse.
My framework explores two types of dystopia, but I love Darren Allen's essay identifying four varieties based on science fiction: Orwellian (surveillance and thought control), Huxleyan (pleasure-based pacification), Kafkaesque (bureaucratic absurdism), and Phildickian (reality manipulation). Each diagnoses our society’s ills with familiar foreshadowing.