Vulnerable World Hypothesis
There’s a concept I’ve been mulling over for several years, for which I only recently stumbled upon a suitable label. And even better, that label was fleshed out, quantified (to a degree), and put into broader context in a paper by a skilled writer and thinker.
The paper is here (PDF), the author is Nick Bostrom—a man perhaps most famous for convincing technologists like Bill Gates and Elon Musk that artificial superintelligences could accidentally kill us by turning the universe into paperclips—and the concept in question is called the Vulnerable World Hypothesis.
The basic thesis of this concept is that technology, and the scientific understanding that underpins technology, could develop in such a way that we eventually discover or create something so dangerous and ubiquitous that we cannot help but destroy ourselves.
This is similar to but distinct from the Great Filter theory, by the way, which posits that one of the reasons we don’t gaze out at the stars and see abundant evidence of alien life is that there might be a stage of development most civilizations go through that 99.99% of the time leads to that civilization’s destruction. The Vulnerable World Hypothesis could be seen as one possible filtering mechanism, but it could also be perceived as an intersecting but otherwise orthogonal issue, since it has little to say about alien life and plenty to say about the immense civilizational ramifications of the pursuit of knowledge.
In the aforementioned paper, Bostrom posits a theoretical scenario in which the process of building nuclear weapons is far simpler than it is in reality: rather than requiring difficult-to-make and difficult-to-acquire components (kilograms of highly enriched uranium or plutonium, which would be nearly impossible for non-state actors to produce under current technological circumstances), not to mention substantial technical know-how, what if building a nuclear device only required a specific arrangement of off-the-shelf hardware and a little bit of engineering education?
What if, in other words, a very large number of people suddenly had the ability to build a device that could wipe out a city?
It’s an interesting question, not because of its direct relevance to nuclear weapons, which have thankfully remained holstered by the state actors that wield them since WWII, due in large part to international regulations, persistent norms around testing such devices, and the near-assurance that any entity that used such a weapon in anger would be wiped off the face of the earth (literally or figuratively) by other entities wielding the same or similar weapons.
Instead, it’s interesting because of what it asks about other technologies: those that might take a different path than nuclear weapons, but which could be just as devastating, if not more so, because of their proliferation and the ease with which they could be deployed.
This is a question that has been asked and addressed more frequently in fictional works than in nonfictional works, I think.
While reading Bostrom’s paper, I was reminded of a science fiction series called The Long Earth, by British authors Terry Pratchett and Stephen Baxter, set in a world in which a new technology called a Stepper—a device which allows users to move sideways into parallel universes that exist “next” to our universe—is invented. The device itself is immensely simple to construct, made out of wires, a few basic electronic components, and a natural power supply; a potato is used in the book’s explanatory diagram, a choice made, I suspect, to make clear just how casually one might construct such a device and just how readily available the requisite components are.
The consequence of the emergence of this technology—the construction schematics of which were swiftly distributed across the internet—was that a huge chunk of the human population suddenly disappeared from Earth, moving sideways to other universes and settling Earths that were similar to, or wildly dissimilar from, the baseline Earth where this technology was developed. Those who didn’t like existing governmental structures, economic realities, or how things were evolving ecologically could simply flip a switch and leave, staking a new claim elsewhere and developing a new government or economy wherever they ended up.
The fundamental precepts of civilization as we’d come to know them were shattered more or less overnight as a result of this innovation and its availability.
Bostrom gets fairly deep into conceptual details in his paper, outlining a typology of different threats and coming up with rough numbers that would likely delineate actionable and non-actionable consequences for the majority of people: how much damage, in human lives and in economic output, would need to be done before we decided, as a species, to make substantial changes to the status quo in the face of an emergent threat of this kind; what would need to happen before we decided to implement oppressive laws to keep people from using easily accessible technologies.
But most relevant to the concept of perception, I think, is the question of what this means, or would mean, for our social priorities and sense of acceptable tradeoffs.
At what point would it be worth giving up a degree of freedom—of expression, of behavior, of thinking and communicating, even—if giving up those freedoms would help prevent some kind of near-certain, recurring cataclysm?
If a technology (using that term loosely, meaning something like an easy-to-make nuke, or some kind of discovery like the latent human ability to kill with a thought if you meditate in a specific way) emerged that would allow an individual to murder millions of people, and that capability would be granted to a huge portion of the population, how many of us would support censoring that information, even to the point, potentially, of imprisoning or killing those who wanted to spread it?
At what point would truly draconian measures not just be acceptable, but desirable, due to the immensity of the negative consequences if we failed to act?
How many of us would be in favor of pulling the requisite materials from shelves and banning the sale of, let’s say, aspirin or cinnamon rolls, if it became clear that one or both of these materials was required to create a devastating weapon that anyone could build if they got their hands on the right painkiller or pastry?
Even those of us who consider always-on surveillance by the state to be invasive and generally not okay might support the installation of such surveillance if this type of threat emerged.
At some point, the idea of a world government keeping an eye on everyone, with the power to disappear people in the night, to imprison or kill at will, to ban anything they like from shelves and to stifle information whenever and wherever they like, becomes a little more thinkable.
Because the alternative—the chance that some rando with a pocket nuke, or a cinnamon roll-based alternative, could wipe out humanity due to thoughtlessness or recklessness or rebelliousness or extremism—is so horrible that our mental math, our ideological calculations, might change.
We have not, as far as I know, discovered or developed a technology of this kind quite yet.
We may be at the precipice of doing so, however, as new understandings in biology and artificial intelligence and chemistry and just about every other field of inquiry present us with remarkable opportunities to understand and improve the world, alongside the parallel opportunity to screw things up royally. But thus far, that genie remains in the bottle, thankfully.
This valid concern, though, perhaps sheds some light on why certain entities in the world today—individuals and governments—favor a more top-down, authoritarian approach to governance.
I’m not at all in favor of censorship and widespread surveillance, or authoritarianism, but through this lens, thinking about how quickly everything could unravel—how rapidly our millennia of development could be snuffed out due to the thoughtless or intentional actions of the ignorant or the radicalized—makes it more evident, to me, that there could actually be a scenario in which authoritarian governance would seem desirable compared to the alternative.
Of course, there’s also a good chance that the consequences of such restrictions, of authoritarianism for the sake of sustaining the species or the civilization we’ve built, would prove to be even more damaging than the threats they’re installed to prevent or diminish.
Consider a scenario in which technological development is restricted in order to prevent such threats from arising: a world government with unlimited rights over the lives of individuals and the ability to act with impunity is created, ostensibly for our own protection from the things we might learn and invent.
It could be, as a consequence of this cap on technological development, that we then succumb to a threat that we otherwise would have been capable of both predicting and fending off.
An asteroid on a collision course with Earth is the prototypical scenario, here, I think. We develop a world-ending technology, pull back from that precipice, but then, due to our return to nature, or severe limits on what we’re able to research and create technologically—limits implemented to prevent us from developing those potentially world-ending technologies again in the future—we find ourselves unaware of this space-based threat before it arrives. Or we’re unable to respond to it, to do anything about it if we do notice, because our scientific and technological capabilities have so severely atrophied.
The same might be true of runaway global climate change, or the emergence of an immensely virulent and deadly disease. It may be that we avoid blowing ourselves up, but if the cost of avoiding that outcome is to make ourselves more vulnerable to other threats approaching from different angles, have we really protected ourselves, or have we just exchanged one threat vector for another?
What we face here, then, is a situation in which all of our solutions, if taken to extremes, are almost certainly as bad or close to as bad as the threats they’re meant to address. And short of changing humanity in some fundamental way—removing our distinctiveness, our pseudo-anarchic way of thinking that has resulted in so many flavors of ideology and governmental model, our freedom to think and pursue lives that align with our personal beliefs—that’s unlikely to change any time soon.
It’s valuable, though, to recognize that our own world views, our own perception of the global game board, might fail to take threats of this kind into account. And lacking backups and failsafes and perhaps even preemptive infrastructure—even those that toe the line of what we’re comfortable with—to address them leaves us just as willfully ignorant about potential world-ending scenarios as those who are dragging their feet on accepting the many potential consequences of global climate change, or those who downplay the possibility of large-scale biological or nuclear terrorist attacks or military interventions.
These are all possible outcomes, which means we have to decide what the possibility of such threats arising, due to our action or our inaction, means for how we think about risk, and how we develop more resilient societies.
—
If you enjoyed this essay, consider subscribing and/or sharing it with a friend.
You may also enjoy my news analysis column, Understandary, or my podcast, Let’s Know Things.