Insatiability
In the world of theoretical artificial agents—that is, potential future computer-based intelligences—the term “instrumental convergence” refers to a posited tendency for such agents, whatever their final goals, to converge on the same intermediate goals: among them, acquiring and consuming as many relevant resources as possible.
The idea is that an AI instructed to perform a particular task, and to optimize itself and its circumstances to perform that task better over time, would logically need to control its context in order to do so. In practice, this would mean acquiring control over the flow of electricity to keep its servers running, over raw materials to build more servers, and over money to pay for all that land, material, and electricity.
Extending that concept: at some point an AI with sufficient capabilities would, of necessity, need to more or less take over the global economy and other power structures, because failing to do so would cap its ability to further optimize itself. Its capacity for self-preservation, self-improvement, and growth, according to its own standards for growth, would be artificially limited if it did not grab all possible power, and thus, lacking some kind of pre-built cap on its ambitions, it would do whatever it could to remove those limits; to take all the power.
This is the theory underpinning such wild and disturbing thought experiments as Nick Bostrom’s Paperclip Maximizer, and Stuart Russell and Peter Norvig’s assertion that an AI instructed to solve the Riemann hypothesis might try to convert the whole planet into computational materials, expanding ever outward to convert the rest of the universe as well, in order to achieve that seemingly harmless mathematical goal.
Fortunately, even Bostrom has said that there’s a chance we could figure out how to frame and limit goals with sufficient specificity that an unmoored AI would be less likely to gobble up all the matter in the universe in order to produce more paperclips.
Bostrom’s Orthogonality Thesis says, in essence, that intelligence and goals vary independently: an AI of more or less any level of intelligence could be imbued with more or less any goal. Thus, not all AIs will inherently aim at any particular outcome.
This means, in practice, that we could accidentally create what amounts to an all-powerful digital being that kills us all and turns our bodies into paperclips, but we could also, potentially, create AI entities of incredible power that have goals and capabilities that align with our own.
There’s nothing keeping a theoretical Paperclip Maximizer from building humanity some kind of utopia to live in, alongside its paperclip-related activities, because doing so would neither impede its larger goals nor cost it a substantial amount of resources. Just as mathematicians working on life-consuming problems continue to eat and use the bathroom, so too might an all-powerful AI maintain, or even improve, human-livability standards on Earth, because why not? Or maybe because it was programmed to do so, alongside its other, arguably larger-scale activities.
Potential limits on the ambitions of such theoretical organisms are interesting, in part, because of what they reflect about our own ambitions.
Human beings, like our potential AI overlords, have goals that are hard-wired, and others that are the consequence of pseudo-goals and path-dependencies entangling, reshaping, and evolving over time.
We act in accordance with certain basic rules, and though more complex rules are built atop the more fundamental ones, our underlying needs are informed by base-level, biological drives: to eat for energy and growth, to reproduce to pass on our genetic material—things like that.
Other goals, like the desire for social position, are predicated on those fundamental drives: we desire social status because it, ostensibly at least, increases our ability to survive and successfully procreate. There are also often biological mechanisms that reward behaviors that move us closer to these goals.
Everyone exists on a spectrum for all of these needs, of course, and there are extreme versions of all such desires—from those who are addicted to social climbing, to those who require far less sleep—alongside epigenetic and psychological variations of every conceivable sub-type: these are not just traits we’re born with, then, but also traits that we develop over time based on experience, environment, and so on.
Such variations, though, support the idea that although there are underlying path-dependencies that shape how organisms come to be, there’s also a lot of potential for deviation from those paths, and such paths can therefore lead to difficult-to-predict outcomes: ideal-seeming childhoods can result in adults who are murderers, and optimal-seeming programmed goal-sets can still lead to a planet converted into paperclips.
One implication of this larger concept is that organisms, be they humans, dogs, or AI, will sometimes consume as many resources as are available and accessible.
In nature, many organisms will consume as much food—as much energy—as possible, storing the excess for lean times, but otherwise gobbling up whatever they can get their claws, tendrils, paws, jaws, or hands on.
There are environmental checks on this tendency—a lack of unlimited food, for starters, but also potential threats, and at some point the physical limits of an entity’s bodily or external storage capacity—but lacking such constraints, some organisms will just keep gobbling, even to the point where they experience physical harm as a consequence.
The human desire for more, then, which often manifests as stress and anxiety, alongside compulsive consumption of various kinds, may have a biological component. Our species may never have needed to evolve internal ceilings on our ambitions, because our inherently limited environments kept us in check.
When we aggregate humans into larger entities, like corporations or governments—both of which can, at times, behave like AI entities, with rules and algorithms guiding their behavior—these mega-organisms, likewise, tend to lack internal limitations.
What corporation, after all, is formed with the intention of growing, but not too big? What government has ever decided to cap the resources it can generate, to stanch its own growth, to limit its own economic success?
We can more clearly see the external consequences of such unbridled ambition in mega-entities, because their capabilities are also mega-scale.
But the internal ramifications of uncapped goal-setting are experienced by individuals as stress, anxiety, and compulsive consumption of various kinds. These drives, in moderation, can help us grow and experience satisfaction, but because they are often unlimited in scope, the very processes that give us direction and a sense of purpose can also make us feel inadequate and unquenchably desirous.
If we ever want to feel sated, then, it may be necessary to adjust our goals so that new constraints, new ceilings on our desires and ambitions, are artificially applied.
Lacking such ceilings—be they socially mandated, individually reinforced, or born of physical necessity—some of us will experience bottomless quantities of stress due to our understandable lack of infinite resources and our endless pursuit of moving targets.