The future. Everyone has an idea of what it will be like, and yet, despite our best-educated guesses, it's impossible to predict exactly what it will entail. Despite this uncomfortable uncertainty, we can zoom out from the game of making specific predictions and instead examine the broad range of different futures that may come to be.
In my brief overview of this newsletter, I wrote, "Humanity is in the midst of a revolution. From the advancement of artificial intelligence to the colonization of the cosmos, our species sits on a pendulum that can swing toward peril or prosperity." Despite numerous essays on this topic, I feel the need to reinforce the weight behind those words with a visual framework.
In one of my recent newsletters featuring a discussion of surveillance technology, I linked a Nick Bostrom essay about the Vulnerable World Hypothesis, where he diagnoses the danger behind the pendulum. He explains how future technologies like artificial viruses may be so powerful that small groups of people, or even individuals, may utilize them to bring about mass death and destruction.
Expanding beyond catastrophe, we can look at these technologies and see how they may produce future states with less than desirable living conditions for the majority of humans. Let's consider a few of these technological risks:
Artificial intelligence: A super-intelligent AI is created by a global superpower with less than altruistic motives, which uses it to dominate the rest of the world and impose an unethical ideology.
Virtual reality: The technology is used by corporations and governments in criminal punishment, leading to long and unforgiving sentences, as well as disturbing torture techniques (see Black Mirror).
Biotech: Human-made viruses create a cycle of endless pandemics.
What makes the above scenarios dangerous is that we only need one of them to come true, only one of the technologies to be used maliciously, for the future to look eerily like a dystopia. On the flip side, these technologies can also be used for good, producing a renaissance that pushes our species to new heights. But to reach this utopia, we need every single one to be safely and ethically utilized. That's quite a tall task.
Additionally, each of these technologies can feed off each other and work together in positive harmony or deepening dystopia. A virtual world built by one individual corporation can lead to immoral practices, but one built on the blockchain would be owned and operated by its users. AI developers can instruct their artificial lifeforms to create life-extension technology or biological weapons. These combinations continue to get more potent as we add to them.
Visualizing the future with a quadrant chart can help us understand this further. The initial inclination is to place utopia and dystopia in opposite corners of the chart, yet, this doesn't ring true. In actuality, the chart has two placements for dystopia, with apocalypse sitting opposite utopia. Let's examine each quadrant.
Utopia: This is the perfect society, where we have averted all risks from technological advancement and have also reached ideological alignment, thus ensuring equality and equity for all humans across the planet.
Dystopia (bottom right): This is a dystopia where we have reached ideological alignment, but one or more of these technological risks was realized, such as a super-intelligence with goals that do not promote human flourishing. This could lead to a majority of humans either perishing or living in poor conditions, with a few elites surviving off the resources their wealth or power provides them.
Dystopia (top left): This is a dystopia where we avoid technological risk, but humanity's opposing ideologies remain in confrontation, thus ensuring a continuous struggle that results in some groups having advantages over others, which becomes especially pronounced given the advanced technology at their disposal. Put another way, this could mean we avoid a classic AI apocalypse but instead see nations fighting continuous wars with automated weaponry.
Apocalypse: Finally, the worst-case scenario, where we are unable to avoid technological risks or come to a consensus ideology. Consider different nations competing to enforce their beliefs on the world, leading to the actualization of an existential threat, such as nuclear warfare.
Now, of course, this greatly simplifies the potential future for our species and planet, but it gives us an idea of the possibilities, and why it's critical we ensure the following decades are dedicated to the safe advancement of technology and a resolution to the cultural, political, and moral debates that may create barriers in our doing so. It's also important to note that "ideological alignment" does not mean we have to agree on everything; there is value in different opinions across various topics. Instead, I mean that we do not fall victim to the failures of our biology and fight among ourselves over issues that should be settled, such as the need to combat climate change.
Despite all this futurism talk, we must not ignore the present. Earlier this year, Martin Mitchell shared an interview with Monika Bielskyte, who has a vision for creating Protopia Futures. In the interview, Monika says:
"Protopia Futures is about proactive prototyping of hope for tomorrow. Not magical thinking and leapfrogging the present and near-future problems, but engaging with them and trying, together, to find the urgently needed solutions."
What is important is that this future is inclusive and diverse, ensuring it is not just for those with today's resources. Although Bielskyte makes a good point about the danger of the word utopia—her example being its use by totalitarian regimes in the past—my chart above aims to rectify that and agree with her premise by ensuring abundance and equality for all in a true utopian society. An essay by Samantha Culp in the Atlantic builds on this theme, discussing how the genre of pop futurism is starting to spread its wings into more open territories.
Balancing the problems of today with those of tomorrow is a difficult task, but one I will continue to examine through this newsletter, with the hope of inspiring positive progress in this domain.
In my research, I stumbled upon this BBC essay by Richard Fisher about the "hinge hypothesis," which describes the "pendulum" I write about in the introduction of this essay, or the "precipice" as described by Toby Ord, whom I discuss in an essay exploring similar topics. Fisher has a newsletter of his own called The Long-Termist's Field Guide, which I highly recommend for more on this subject.
Additionally, in my research, I came across the book Four Futures: Life After Capitalism by Peter Frase, who uses a similar framework to discuss the future. Although I haven't read the book yet, this review in The Guardian provides a short overview worth reading as a companion to my essay above.
🧠Bonus Brain Bits🧠
We've discussed the dangers of GPT-3 quite a bit, but a new use of the program in combination with AI voice generation is pretty sublime. The example shows how computer characters in the game Modbox can respond and sound like humans as you approach them in the streets of its virtual world. A virtual reality filled with AI participants may not be as far away as once thought.
In case you missed it, Tom Cruise is now famous on TikTok. Not quite. An impersonator using the power of deepfake technology is on TikTok. The videos are uncanny and give a glimpse of the future I described in my essay on Cyber Collapse (which also includes a discussion of the dangers of GPT-3).
If you enjoyed this newsletter, please consider sharing, subscribing, and contributing a comment to the discussion below.