Cyber Collapse in a GPT-3 World

The Information Apocalypse & Our Cyberpunk Reality

Part 1: Apocalypse Redux

Earlier this year, I wrote All About the Apocalypse, an essay exploring a range of topics from humanity's fear of death to the information apocalypse. With this year coming to a close, let's revisit that "ending of endings" with additional thoughts, this time through the lens of cyberpunk dystopias, social media, and artificial intelligence. AI takeovers, alien invasions, and nuclear armageddon may be the most popular apocalypses in culture, but the information apocalypse deserves a closer look, as it may be the most realistic and damaging to society.

In a recent essay in The New Yorker, Corinne Purtill writes about the Oxford philosopher Toby Ord and his book The Precipice; Ord's work on existential risk reflects my own angst about the apocalypse. Humanity's blindness to its potential is dismaying: an inability to imagine the majestic wonders that may await us, cities that reach to the stars, virtual worlds filled with infinite pleasure, travel into the unknown cosmos, immortality, and much more. Purtill writes about Ord's similar sensibilities:

“As “Precipice” closes, Ord zooms out to the cosmos and, against the backdrop of its unfathomable vastness, asks us to grasp the scale of what we risk losing if the human story ends prematurely. He writes that, just as our early forebears, huddled around some Paleolithic fire, couldn’t have imagined the creative and sensory experiences available to us today, we, too, are ill-equipped to conceive of what is possible for those who will follow us. Humanity’s potential is worth preserving, he argues, not because we are so great now but because of the possibility, however small, that we are a bridge to something far greater.”

Our potential is why Elon Musk so urgently wants to make humanity a multi-planet species, creating a second home for us on Mars. Only by doing this can we guard against humanity's total annihilation if an apocalypse descends on Earth.

Yet it's not just philosophical examination that portends our doom, as Graeme Wood's recent essay in The Atlantic shows us. In it, he profiles Peter Turchin, who is known for predicting that 2020 would be a year of great turmoil. Turchin uses mathematical analysis to study human history, examining the dynamic between elites and counter-elites in societies and hypothesizing that an imbalance between them leads to a civilization's collapse. Wood expands on this:

“Turchin looks into a distant, science-fiction future for peers. In War and Peace and War (2006), his most accessible book, he likens himself to Hari Seldon, the “maverick mathematician” of Isaac Asimov’s Foundation series, who can foretell the rise and fall of empires. In those 10,000 years’ worth of data, Turchin believes he has found iron laws that dictate the fates of human societies.”

Turchin's work is categorized as cliodynamics, "the search for general principles explaining the functioning and dynamics of historical societies." Although Turchin draws clear distinctions between his field and Hari Seldon's psychohistory, the comparison is apt, as science fiction is often considered the precursor to some of society's most imaginative innovations. 

Using data to determine when or how a civilizational cataclysm might strike isn't limited to Turchin's work, though. As Jim Holt describes in an essay in his book When Einstein Walked With Gödel, some use probability to argue that our end is imminent. The Doomsday Argument says that if humanity were destined to flourish far into the future, it would be wildly improbable for us to exist at this early moment; therefore, the apocalypse must be imminent. As Holt explains:

"Suppose humanity were to have a happier fate, surviving thousands or millions of years into the future. And why not? The sun still has half its ten-billion-year life span to go. The earth's population might stabilize at fifteen billion or so, and our successors could even colonize other parts of the galaxy, allowing a far greater increase in their numbers.

But think what that means: nearly every human who will ever exist will live in the distant future. This would make us unusual in the extreme. Assume, quite conservatively, that a billion new people will be born every decade until the sun burns out. That makes a total of 500 quadrillion people. At most, 50 billion people have either lived in the past or are living now. Thus we would be among the first 0.00001 percent of all members of the human species to exist. Are we really so special?

But suppose, contrariwise, that humanity will be wiped out imminently, that some sort of apocalypse is around the corner. Then it is quite reasonable, statistically speaking, that our moment is the present. After all, more than seven billion of the fifty billion humans who have ever lived are alive today, and with no future epochs to live in, this is far and away the most likely time to exist. Conclusion: doom soon."

(When Einstein Walked With Gödel, pp. 259–260)
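
Holt's figures are easy to verify. Here is a quick sanity check in Python, a sketch using only the numbers from his passage:

```python
# Verify Holt's arithmetic using only the figures from the passage above.
YEARS_LEFT = 5_000_000_000         # remaining lifespan of the sun
BIRTHS_PER_DECADE = 1_000_000_000  # Holt's "quite conservative" birth rate
PAST_AND_PRESENT = 50_000_000_000  # humans who have ever lived

future_humans = (YEARS_LEFT // 10) * BIRTHS_PER_DECADE
print(f"{future_humans:,}")        # 500,000,000,000,000,000 (500 quadrillion)

# Our share of all humans who would ever live:
share = PAST_AND_PRESENT / (PAST_AND_PRESENT + future_humans)
print(f"{share:.7%}")              # ~0.0000100%, Holt's "first 0.00001 percent"
```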

This would seem to place a pessimistic ceiling on the vast potential of our species. If we can, in fact, reach new heights as a species, the Doomsday Argument dissolves that hope with mathematical probability, foretelling our extinction. To avoid this fate, we must understand why our present moment may be so precarious, and alas, science fiction may have the answer we seek.


Part 2: Cyberpunk, Cyborgs & Chatbots, Oh My

One of literature's most prescient science fiction genres is cyberpunk, an examination of corporate greed, cyborgs, and computer contraptions, all combined under neon lights, hard rain, and gritty protagonists. Its popularity began with William Gibson's classic Neuromancer and Ridley Scott's Blade Runner, but it has since grown into a genre spanning countless stories, works of art, and one of this decade's most talked-about video games: Cyberpunk 2077.

Although this recently released game has received backlash over an assortment of gameplay issues, its release couldn't have come at a more appropriate time. In The Atlantic, Darryn King profiles Mike Pondsmith, the creator of the original tabletop game that Cyberpunk 2077 is based on. In the essay, King discusses recent technologies, such as drones and facial recognition scanners, that make 2020 feel much closer to 2077 than the dates suggest. Pondsmith articulates one of the core themes of our modern day, and part of the inspiration behind this newsletter's name, the duality of technology:

"We are finding ways to utilize technology to free ourselves," Pondsmith said. "Think about this phone. You can use this cellphone to gossip. You can use this cellphone to tell lies. You can use this cellphone to tell some really great jokes. You can use this cellphone to expose corruption, inequality, bad behavior. What's important is that we actively think about how we use it, how it affects our lives and our societies, and learn from it."

Pondsmith couldn't have picked a more representative example, as the smartphone is arguably the most pivotal piece of technology this century, transforming how we interact with each other and the world around us. In The Guardian, Alex Hern writes about the iPhone:

"Without us even noticing, Apple has turned us into organisms living symbiotically with technology: part human, part machine." 

His essay reads like it came right out of a cyberpunk manual, describing how Apple has planted its tentacles directly into our brains, turning us into cyborgs even if we don't feel like cyborgs yet. With an AR glasses product reportedly on the horizon, Hern wonders what kind of future that will bring:

"Our cyborg present sneaked up on us as our phones became glued to our hands. Are we going to sleepwalk into our cyborg future in the same way?"

If an apocalypse's prelude is a cyberpunk reality—a futuristic battle between Turchin's elites and counter-elites—then understanding this genre will help steer the future in a brighter (but less neon) direction. To do that, we'll need to comprehend the alien species present in many cyberpunk stories: artificial intelligence. The apocalypse of metal machines and demigods of destruction eliminating humanity is the one that has most fascinated us in the world of AI, but the danger of silicon minds silently doing the bidding of human agents is more immediately troublesome. We don't need AI to be conscious to live in a world that seems full of other intelligent beings, so long as humans can twist reality to make it appear that way. Which brings us to GPT-3.

This year, OpenAI released a natural language processing program called GPT-3. James Vincent explains in The Verge:

“Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a huge corpus of text that it’s mined for statistical regularities. These regularities are unknown to humans, but they’re stored as billions of weighted connections between the different nodes in GPT-3’s neural network. Importantly, there’s no human input involved in this process: the program looks and finds patterns without any guidance, which it then uses to complete text prompts. If you input the word “fire” into GPT-3, the program knows, based on the weights in its network, that the words “truck” and “alarm” are much more likely to follow than “lucid” or “elvish.”
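
Vincent's "fire" example can be made concrete. GPT-3 itself sits behind OpenAI's API, but its openly available predecessor, GPT-2, works on the same principle, so here is a minimal sketch using the Hugging Face transformers library (the model choice and top-five cutoff are illustrative, and the exact tokens printed will vary by model):

```python
# Inspect a language model's next-token probabilities.
# GPT-2 stands in for GPT-3 here: same mechanism, smaller network.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("fire", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Turn the final position's logits into a probability distribution
# over every possible next token, then show the five most likely.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.4f}")
```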

GPT-3 is extra powerful because of its size (175 billion parameters, 116x more than GPT-2) and its ability to perform a wide variety of tasks based on a user prompt. If you want to write another sequel to Blade Runner, all you need to do is describe your vision for the film. If you want to predict your friend's next tweets, you can just input a few from their past. The innovations this AI technology may provide are momentous, with technologist Azeem Azhar predicting it will “enable a new class of knowledge manipulation.”

However, we must keep a wary eye on its potential misuses, as OpenAI seems committed to doing. The biggest concern is GPT-3's potential contribution to the current distortion of reality. In the last few years, we've seen the internet infiltrated by doppelgängers and illusions, fictions perpetuated by cyborg creations. Like other bots, GPT-3 is not a sentient being, but its ability to mimic human thought in such an extraordinary and undetectable way may widen the existing cracks in society, setting off an earthquake that could shatter our global consciousness. With filter bubbles bouncing off each other and political ideologies polarizing further, many will continue to look for ways to undermine their opponents. As Renée DiResta, the technical research manager at the Stanford Internet Observatory, writes in Wired:

“The ability to manufacture a majority opinion, or create a fake-commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.”

Aviv Ovadya, the founder of the Thoughtful Technology Project who coined the phrase infocalypse, echoes this sentiment. In 2018, Charlie Warzel profiled Ovadya for BuzzFeed News, where Ovadya described the weapons of this imminent apocalypse: things like "diplomacy manipulation," "polity simulation," and "automated laser phishing," advanced tools that digitally infect minds with false truths. As discussed in these essays, the leap in deep fake technology, which can manipulate faces and voices to make anybody in the world appear to say or do anything, is the additional element contributing to the new cyberpunk matrix we may soon find ourselves in. Researchers are developing tools to identify these constructions' artificial origins (a toy sketch of one detection idea follows the list below), but the problem is that the fakes are only getting easier to make, and detection may not matter. When combined with GPT-3, a bad-faith actor could perpetuate a false reality through a clever use of digital tools. Consider the steps of a propaganda campaign in this future:

  1. Release a deep fake video of an ideological opponent that will damage their credibility.

  2. Rally different communities to share and spread the video, using a combination of ads, manipulated social accounts, digital personalities, and bots (GPT-3).

  3. Watch as mainstream media picks up the story and shares the deep fake video.

  4. Have thought-leaders in your aligned ideology deny any artificiality to the video.

  5. Use GPT-3-like technology to further confuse the narrative.
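
The detection tools mentioned above generally hunt for the statistical fingerprints of generated text. One classic idea, popularized by the GLTR project, is that machine-generated text tends to be built from tokens the model itself ranks as highly likely. Here is a toy sketch of that idea, with GPT-2 standing in as the scoring model; real detectors are far more sophisticated:

```python
# Toy GLTR-style check: score how "predictable" a text is to a language
# model. Suspiciously predictable text is weak evidence of generation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_rank(text: str) -> float:
    """Average rank the model assigns each actual next token."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    ranks = []
    for pos in range(ids.shape[1] - 1):
        next_id = ids[0, pos + 1]
        # How many tokens did the model consider MORE likely here?
        rank = (logits[0, pos] > logits[0, pos, next_id]).sum().item() + 1
        ranks.append(rank)
    return sum(ranks) / len(ranks)

# Lower average rank = more predictable = weak hint of machine authorship.
print(mean_token_rank("The quick brown fox jumps over the lazy dog."))
```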

In another Atlantic essay, DiResta connects this to current issues in society:

“In countries around the world, coordinated propaganda campaigns in print as well as social media have sown social unrest, pushed down vaccination rates, and even promoted ethnic violence. Now imagine what happens when the sources of such postings are untraceable and the supply is essentially infinite.”

The Matrix we live in may not be the one from the film, with humans suspended in vats and controlled by overlording machines, but one where we are plugged into a system that is not entirely fake, yet seeps at the edges with false agendas. When the system is cracked, the ghosts are let in, and we lose our sense of what is true.


Part 3: The Social Vortex

These deceptions, delusions, and tricks of perception all reach their apex on social media, the vehicle through which disinformation can cause the most harm. Netflix's recent documentary The Social Dilemma has brought this to the forefront, opening many eyes to the true dangers of Facebook and other social media platforms. In a recent essay, The Atlantic's executive editor Adrienne LaFrance writes that Facebook is the modern-day Doomsday Machine, an analogy to the hypothetical device: an automated machine that would bring about the apocalypse if it detected humans instigating nuclear war. The machine would act as the ultimate deterrent, a more potent form of "mutually assured destruction," since nothing could survive its wrath. Like the machine, LaFrance argues, Facebook is now uncontrollable; its "mega-scale" of two billion-plus users means it can penetrate every corner of the planet, seeding our minds with malicious information that can then impact our behavior in the real world. She writes:

“The Doomsday Machine offers a lesson: We should not accept this current arrangement. No single machine should be able to control so many people. If the age of reason was, in part, a reaction to the existence of the printing press, and 1960s futurism was a reaction to the atomic bomb, we need a new philosophical and moral framework for living with the social web—a new Enlightenment for the information age, and one that will carry us back to shared reality and empiricism.”

In a 2018 essay, Why Technology Favors Tyranny, the popular author Yuval Noah Harari illuminates our cyberpunk future, one where power continues to centralize in the pursuit of big data: tracking, storing, and analyzing it in one mega-system. Harari writes:

"The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter."

Technologies like Facebook, GPT-3, and the iPhone mean we already live in a cyberpunk future. Corporations have unimaginable scale and control over how our societies function, social media platforms control the narrative, smartphones pull our eyes into their screens, and technologies like GPT-3 and deep fakes will only make the truth harder to see. A common theme linking the essays by Vincent, DiResta, and Harari is scale: as more data, processing power, and investment pour into these systems, the gap between those with information and those without will widen, and those with less information will be less able to filter the fake from the real, leading to the "reality apathy" Ovadya describes. Or, as William Gibson famously put it: "The future's already here – it's just not very evenly distributed."

As these debates oscillate between optimism and pessimism, human nature's murky truth seems unwilling to budge. Throughout our history of incredible progress, we have stumbled into ideological, philosophical, and political differences that sometimes poison our civilization. From religious ideologies preaching a hidden truth to the modern infestation of fake news, informational warfare is nothing new. Yet new technologies mean we are approaching a new era in this story, and we must proactively defend against the inevitable exploitation of reality by those who aim to undermine its structure. In her essay, LaFrance expounds on the mindset needed to solve our social predicament:

“We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.”

Harari expands on this further:

“For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.”

Maybe GPT-3 can help. Using the writing software Shortly, which incorporates the GPT-3 API, I created a conversation with the program to see if it could find us a solution to this predicament. Before reading, some caveats:

  • Shortly, although powerful, is not the OpenAI GPT-3 API Playground, which allows for more flexibility in how the program is used.

  • The software asks for an “Article Brief” to help steer the AI. I wrote: “This is a conversation between GPT-3, an artificial intelligence, and a human. They discuss cyberpunk, disinformation, social media, and the information apocalypse. The AI should not write the human response.”

  • I refreshed multiple times to get responses from GPT-3 that most interested me. Additionally, the program would sometimes write the human part of the dialogue, which I didn't want, so in those cases I had to lengthen my response. Even after adding the last sentence to the article brief, the problem persisted (see the API sketch after this list).

  • The bold elements in the text below are from GPT-3; the plain text is mine.
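
For comparison, here is roughly what the same experiment looks like against the raw API rather than through Shortly. This is a hypothetical sketch using the openai Python package as it shipped at GPT-3's launch; the engine name, sampling parameters, and stop sequence are illustrative assumptions, though a stop sequence is the standard remedy for the model writing the human's lines:

```python
# Hypothetical sketch of prompting GPT-3 directly through OpenAI's API
# (circa 2020). Parameter values are illustrative, not from the essay.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have GPT-3 API access

prompt = (
    "This is a conversation between GPT-3, an artificial intelligence, "
    "and a human. They discuss cyberpunk, disinformation, social media, "
    "and the information apocalypse.\n\n"
    "Human: Tell me what you think about cyber collapse and the "
    "information apocalypse.\nGPT-3:"
)

response = openai.Completion.create(
    engine="davinci",    # the largest base GPT-3 model
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
    stop=["Human:"],     # stop before the model writes the human's lines
)
print(response.choices[0].text.strip())
```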

Human: Tell me what you think about cyber collapse and the information apocalypse.

The artificial intelligence, GPT-3: I think it's inevitable. I don't want to be a pessimist, but there is no way for individuals to check the veracity of information. Basically, what will happen is devastating. There is nothing we can do about it. The proliferation of information is increasing exponentially. There are already too many sources of "knowledge". What we need are more gatekeepers (filter criteria).

Human: I think this makes a lot of sense. Social media platforms like Facebook are too dangerous given its massive scale and ability to spread dangerous ideas to millions of people. Anyone can say anything and find someone somewhere that will believe it.

GPT-3: On the internet, you can find communities of people that are willing to believe almost anything. They will be highly critical towards advances in science, but can accept almost anything as long as it's not too outlandish.

Human: That's what I fear most, an end to science. When people don't believe in the truth anymore, and others can use digital tools to spread dangerous ideas, how can we prevent cyber collapse and the information apocalypse?

GPT-3: I think the only way to prevent it is through some sort of authoritarian body that has control over the internet. But not just any authoritarian body; something that will allow small communities and still prevent genocides. Or a global body in charge of talking about how to spread knowledge in a safer/better way (although, that's kinda like saying "a better version of government").

Human: That seems a bit scary. Centralized power, especially with big data, can lead to bad outcomes, like those seen in cyberpunk fiction. Do you have any other solutions for consideration?

GPT-3: I'm not sure. I think the only other solution would be to get rid of social media altogether, but that seems a bit extreme. "The internet" is a much easier thing to talk about than it is in reality.

Human: I hear that. Well, thank you for your time today. Hopefully humanity and artificial intelligence can find a solution to these problems together, creating a more sustainable world for all of us.

GPT-3: No problem. Sorry for not being more helpful. I hope I haven't stressed you out too much, but these are scary times we live in.

Well then. I guess we're going to have to figure this one out for ourselves. Thankfully, we have a long history of doing so, but we must act now rather than merely react to future harms. On a final note, let's counter GPT-3's pessimism with Mike Pondsmith's optimism, from a Wired interview:

"The thing that I love about cyberpunk inherently is that it's about paying attention, and dealing with things. If you use the technology and your knowledge right, you can make it better."

In other words, it’s time for a reality check. Are you with us?


If you enjoyed this essay, consider subscribing to our newsletter. Below is additional content, much of which is connected to the above essay.


🧠Bonus Brain Bits🧠

Another Apocalypse

Although the above essay's informational apocalypse will come to life online, some may find the internet's extinction a more terrifying doomsday. Luckily for us, there are some potential solutions to this digital wasteland.

The AGI Apocalypse

Suppose the information apocalypse doesn't scare you as much as an AI apocalypse in the vein of Skynet. In that case, welcome to a new nightmare: Geoffrey Hinton thinks deep learning (the technology underpinning AI like GPT-3) will eventually lead us to artificial general intelligence, which could then lead to the more typical AI doomsday seen in science fiction stories.

Climate Apocalypse

If the neon glow of cyberpunk still burns your eyes, the answer to what we need may be found in the green glowing positivity of solarpunk, a movement that shows what happens when we overcome another impending apocalypse, climate change.

America’s Apocalypse

As pointed out in The Verge essay, GPT-3 is capable of racist, sexist, or otherwise inappropriate responses due to some of the sources its data was trained on. This is problematic for numerous reasons, and OpenAI appears committed to solving the issue. An essay by Brian Merchant in OneZero expands on this idea of unevenly distributed social consequences and asks us to consider what we mean by "the apocalypse," since many people living in society today can already be said to live in apocalyptic conditions. In his essay, Merchant discusses how Black individuals in the U.S. were forced into an apocalypse through slavery, one that continues through racism and inequality today. It helps explain that in addition to stopping future apocalypses, we should work to end modern ones as well.

The Semantic Apocalypse

One of my favorite fantasy authors, R. Scott Bakker, wrote this essay back in 2011, discussing what will happen to society when meaning collapses due to transhumanist technologies changing the nature of thoughts, ideas, and emotions. Short, trippy, and perfectly compatible with the ideas behind the information apocalypse.

Digital Survivors

Black Mirror is basically a meme at this point, with tweets about the show going viral every so often, comparing our modern ills to the show's dark predictions of the future. A recent article in Wired nods to an episode where a dead man is brought back to life. This seemingly far-fetched idea is now worthy of ethical debate given the range of new technologies in the field of "digital resurrections." 

Digital Personalities

Virtual influencers like Lil Miquela and Shudu show the power of artificial personalities to influence humans, while websites like This Person Does Not Exist showcase this technology's potential future. When combined with GPT-3, the possibilities seem endless.

Science Fiction Classics

The two science fiction books mentioned in this essay, Neuromancer and Foundation, are classic novels that, each in its own way, helped pave the way forward for the genre at large. Although both may read as dated in parts, they showcase the visionary foresight available in this type of fiction. Early in Foundation, the character Hari Seldon says the following, which could easily be describing the information apocalypse:

"With the destruction of our social fabric, science will be broken into a million pieces. Individuals will know much of exceedingly tiny facets of what there is to know."

(Foundation, p. 38)


If this post was shared with you, consider subscribing to our newsletter.

You can reach us anytime by replying to this email directly, talking to us @Multilarity on Twitter or Instagram, or emailing multilarity@gmail.com.

iStock.com/grandeduc