Predictive Thinking
Beginning today, I’ll be publishing two Brain Lenses essays each week: one that’s available free for everyone (as usual), and another for folks who become paid subscribers.
If you’re finding value in these essays, consider supporting Brain Lenses by becoming a paid subscriber. I’d love to be able to commit more time to this project each week (and produce additional writings and supplementary work), and making Brain Lenses economically sustainable is what will allow me to do that. Your support will also gain you access to those extra writings.
If you’re enjoying what you read but aren’t in a position to pay anything for it at the moment, no worries: consider sharing the essays with someone who might enjoy them, sharing on social media, and/or clicking the little heart icon to grant the project a little more algorithmic credibility.
Thanks very much, and happy new year to you and yours from here in London :)
A system is a collection of somethings that are in some way connected to other somethings, either within a larger context—a collection of systems called a meta-system—or in isolation, as a closed system.
A portion of a larger system is called a sub-system, and thinking about life, the world, your work, or whatever else as a collection of elements in a larger system is often called systems thinking.
Systems thinking has become a popular catchphrase in the business world, as it can help one optimally leverage effort to achieve certain outcomes: “equifinalities” in systems thinking parlance, referring to end states that can be reached via many different paths. But the concept of systems, and perceiving things through the lens of systems, can be of more universal value, as well.
If one can accept that things are connected to other things—that if you knock a mug off your desk it will land on the floor, spilling coffee all over the carpet—then one can likely accept that said mug is part of many systems: the “office” system, the “coffee supply chain” system, the “getting myself ready in the morning” system. Each of these labels encompasses a different collection of things, each of them reliant upon, influenced by, and perceived to be part of a larger whole containing other objects. Your elbow that does the knocking, the carpet that soaks up the liquid, the air you inhale as you gasp at the coffee’s probable destiny: they’re all components of many systems.
Within this broad structure of systems, we have different levels of consequences.
The primary effects, the things that happen as a direct consequence of acting upon a system, are called first-order consequences.
If you knock a mug of coffee off a desk, the mug drops to the floor and the coffee spills. Depending on how you choose to carve the system up and how granular you want to get with your thinking, this is arguably a first-order consequence of your actions.
Another first-order consequence might be that a co-worker looks up from their computer and makes a horrified face. Yet another might be your reflexive grab for the mug as it falls.
Most first-order consequences are fairly easy to predict as long as we’re within a system that we understand.
We know that coffee-spilling is a sad and even alarming event, we can likely predict that at least a few other people near enough to see what’s happening will respond as people often respond to sudden, startling developments, and we probably know that we’ll need to procure some cleaning supplies in the near future, depending on the extent of the spillage and the stainability of the carpet.
Second-order consequences are far trickier to predict with any accuracy, because each step outward massively increases the number of unknowns.
For each primary consequence, an array of possible secondary consequences emerges, branching outward. And although primary consequences are relatively easy to predict based on past experience, each level of removal from that initial act introduces vastly more potentialities, and thus, fewer reliable knowns.
A wonderful, illustrative example of this increasing unknowability, focused on the long-term shape and success of the car industry, was written by Ben Evans, a tech analyst and investor, back in 2017.
He posited that there are two key innovations in this industry that could change a whole lot very quickly, and perhaps even fundamentally: the shift from fossil-fuel powered cars to electricity-powered cars, and the segue from human-driven cars to autonomously driven cars that are fully controlled and steered by software.
A first-order consequence of shifting to electric cars alone would be a change in the shape, safety requirements, quantity and type of materials required to build, and expected lifespan of a vehicle before it needs to be replaced (500,000 to a million miles are the most common estimates I’ve seen), because an electric car requires two orders of magnitude fewer moving parts than its gas-guzzling kin: about 20 instead of the current 2,000-ish. That shift, in turn, would have immediate effects on the automotive industry, the automotive repair industry, all of the industries that harvest and refine the materials used in cars, the organizations that determine and police safety standards, and the petroleum industry, just to name a few.
From there, though, once those initial changes are considered, we might look yet further into the future—perhaps not further in time, as some first-order effects take longer to propagate than some second-order effects, which can follow almost immediately. But peering further in the causal sense, one thing causing another thing, and that second thing causing a third: what third-order consequences might emerge from the widespread or ubiquitous electrification of vehicles?
Well, maybe because cars no longer need gas, and because we’ll necessarily need to be capable of storing car-ready levels of electricity in our energy grid, convenience stores would disappear, or change shape and purpose: most gas stations actually sell fuel at very low margins, earning most of their profits on snacks and bottled water. Why would people visit these shops if their cars no longer need gas and will mostly be topping up their batteries at home or at their destination?
So the devastation or dramatic reshaping of the gas station/convenience store industry could occur as a secondary consequence of the electrification of cars.
A third-level knock-on effect from that secondary knock-on effect, though, might be that certain types of products—those that are primarily sold at convenience stores—would themselves take a hit or change shape.
In the US in particular, over half of tobacco sales occur in gas stations, and there’s data that shows non-heavy smokers buy fewer cigarettes when they aren’t available as an impulse buy at the check-out counter of a purposeful (non-tobacco-focused) destination.
Converting the US car market to electrification, then, could have some very consequential and immediate effects within the automobile and adjacent industries. But that change could also reverberate greatly, instigating substantial first-, second-, third-, and higher-order effects throughout the economy and society.
Every single thing we do results in this type of outward ripple: it’s just that some actions, some changes, make bigger or more measurable ripples than others and are thus more likely to be noticed and connected with their cause.
You could attempt to trace the system-based consequences of telling someone you love them or eating cake for breakfast instead of Cheerios, but the impact of those actions will likely be more personal than societal or ecological, and thus, the measurable effects will typically remain on that scale, as well.
What’s important to recognize here is that with each step we take away from the catalyzing event—the thing that changed—it becomes more difficult to predict with accuracy what will happen as a result.
The reason for this oracular fuzziness is that each step, each sequence of cause and effect, introduces a new, immensely large collection of potential outcomes, compounding with each additional step.
Some of these potential outcomes will be mutually exclusive: we probably won’t see convenience stores both disappearing because they can no longer sell fuel, and thriving because, lacking fuel, they start selling some other good or service that allows them to make far more money than they ever made slinging petroleum.
But a lot of these maybes will not be mutually exclusive; they’ll overlap. So it’s not always a matter of choosing one of a million possible futures; it’s choosing maybe one, maybe a million of those options, all of which may or may not come to be, and may or may not be influenced by those other new realities.
Each new step increases the number of possibilities, then, which in turn reduces our ability to accurately predict what happens next; to even imagine what might lead to what with any reliable resolution.
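A rough way to feel the scale of this compounding: if each consequence spawns even a handful of plausible follow-on consequences, the space of candidate futures grows exponentially with each order. A toy sketch of that arithmetic (the branching factor of 8 is an arbitrary assumption, not a figure from any study):

```python
# Toy model: each outcome spawns a fixed number of plausible follow-on
# outcomes, so the number of distinct consequence chains grows
# exponentially with each order of consequence we try to reach.

BRANCHING_FACTOR = 8  # assumed: plausible follow-ons per outcome


def candidate_futures(order: int, branching: int = BRANCHING_FACTOR) -> int:
    """Number of distinct consequence chains after `order` causal steps."""
    return branching ** order


for order in range(1, 5):
    print(f"order {order}: {candidate_futures(order)} possible chains")
```

Even with that modest branching factor, fourth-order prediction already means sifting through thousands of possible chains—and real systems branch far less tidily than this.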
Imagining a world in which cities have ceased to be rooted in one place, and where instead we move from room to room, building to building, each of these hubs constantly shifting their location in space to suit our needs, our whims, and our social expectations; that’s a bit of a leap from the world in which we live, today.
We can imagine autonomous cars, though, and we can imagine cleaner sources of fuel, enabling the creation of vehicles that are long-lasting and ever-moving—parking spots becoming a thing of the past.
But to imagine these technologies resulting in ubiquitous always-on autonomous cars that take us from place to place, Uber-style, eventually evolving into larger vehicles that serve as movie theaters and gyms and WeWork-style offices, those same vehicles then evolving into on-demand motorhome-style pseudo-buildings, our cities reshaping to accommodate the increased prevalence of these larger, more hospitable, increasingly high-end and functionally building-like “vehicles”? That’s another thing entirely.
It’s thinkable, if you stretch your brain a little, to imagine such a world. To imagine how things might change, from how we get groceries (delivered by other ever-moving autonomous vehicles? Or perhaps we’re delivered to the farms where the food is grown, because why not?) to how our social patterns and dating habits change (“Have your auto-pod schedule a meetup with my auto-pod”).
It’s not obvious or certain how such things would play out, however, because these potential futures require specific and fairly significant shifts to occur, first, and each of those outcomes is competing with a near-infinite number of other possible outcomes, some of which seem more likely than others, but all of which could happen.
Each step along the way, as new realities lock into place, some potential next-steps will become more likely while the others will become substantially less likely.
Second-order thinking is the attempt to imagine a step further than we’re generally taught to imagine, allowing our minds to operate in a less certain space with fewer reliable knowns. It’s devilishly difficult, and even the most skilled prognosticators fail more than they succeed.
Third-order thinking, then, is even less certain and more difficult than second-order thinking. It involves imagining outcomes based on outcomes that have themselves not yet been determined, so the likelihood of getting wrong some concept that’s foundational to your prediction is high.
Consider the wonderful science fiction works from the 70s and 80s that presented interesting and prescient-seeming takes on what the future might bring, but which ultimately failed to predict the emergence of the internet and the many knock-on effects of that technology.
Consider the also-quite-good science fiction of the 90s and early 2000s that failed to predict how smartphones and smartphone-related technologies would become the ever-present, all-encompassing centers of our digital and personal lives.
In both cases, the failure to predict a single disruptive technology rendered a lot of other very thoughtful, high-quality predictions moot. And the potential that we’ll miss a fundamental prognostication increases the further out we try to forecast—the number of things we must predict perfectly increasing with each fuzziness-inducing step.
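The arithmetic behind this is unforgiving: even when each individual prediction in a chain is quite likely to be right, the odds that the whole chain holds collapse quickly. A quick sketch, assuming (generously, and purely for illustration) that each step is an independent call with 90% accuracy:

```python
# If each link in a chain of predictions is right with probability p,
# and the links are roughly independent, the whole chain holds with
# probability p ** n -- which shrinks fast as the chain lengthens.

def chain_accuracy(p: float, steps: int) -> float:
    """Probability that every prediction in a chain of `steps` holds."""
    return p ** steps


for steps in (1, 3, 5, 10):
    odds = chain_accuracy(0.9, steps)
    print(f"{steps} steps at 90% each: {odds:.0%} chance the chain holds")
```

Ten chained 90%-accurate predictions leave barely a one-in-three chance of the full forecast surviving—which is roughly why those otherwise excellent sci-fi futures fell apart the moment one foundational guess failed.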
The way we think about ourselves and the world is biased by our (understandable) disposition toward first-order, relatively short-term thinking.
Part of the reason we’re thus predisposed, it’s thought, relates to a concept made famous in the book Thinking, Fast and Slow by psychologist and Nobel laureate (in economic sciences) Daniel Kahneman.
One of Kahneman’s main theses in the book is predicated on decades of research he and his collaborators conducted into two different modes of thought, which he calls System 1 and System 2.
System 1 thinking is quick, instinctive, and emotional. System 2 is more deliberate, logical, and slow.
There are a slew of pros and cons to both modes of thinking, but many of the cognitive biases that we fall prey to on a daily basis—things like loss aversion, the sunk-cost fallacy, and anchoring—are the consequence of our heavy reliance on System 1 thinking.
Meaning: a lot of our processing is based on subconscious heuristics—mental shortcuts—and those mental reflexes rely on all kinds of rules of thumb that keep us safe and thriving, but that are often partially or entirely wrong because of how reliant they are on our subjective experiences and emotional states.
Said another way: our more primitive brains are great at a lot of things, but sometimes they lead us in the wrong direction. And because they’re optimized for the kind of quick-twitch thinking that’s often required when we make snap judgements, they tend to pilot the ship even when pausing to utilize System 2 thinking might be the better option.
Even when making an initial cognitive leap, then, our predictions are informed by illogical impulses and irrelevant data, spuriously inserted by well-meaning but ultimately less rational portions of our brains.
And because each step, from first-order thinking onward, is informed and distorted by the steps that came before, it’s no wonder that we tend to be spectacularly bad at making predictions about most things, pretty much any distance into the future.
Enjoying what you read? Consider becoming a Brain Lenses subscriber.
Free subscribers receive one essay in their inbox each week, and paid subscribers receive two.