What You See Is All There Is
Free essays and podcast episodes are published every Tuesday, and paid subscribers receive an additional essay and episode on Thursdays.
The human brain grants us all kinds of interesting powers, and one of them is the ability to create narratives even when we have very few raw materials with which to build those narratives.
So if we’re walking down the street and we see someone walking toward us, and that person seems a little off—is maybe shifty-eyed or has a bulge in their jacket pocket or seems to be trying really hard not to look at us, and there’s no one else nearby, just us and this stranger about whom we know absolutely nothing except that which we can glean from their aesthetics and behavior—we might come to the conclusion that this person is dangerous and act accordingly.
In some circumstances, this sense of unease, fueled by what Nobel Laureate and psychologist Daniel Kahneman calls System 1 thinking and what the rest of us would probably call instinct or intuition, can be lifesaving.
In other circumstances, it’s mostly harmless, though also quite often the source of false assumptions.
But what’s interesting about this type of internal response to external stimuli is that when we have few data points upon which to base our understanding of something, we’re not just able to expand those points into a fully realized pseudo-truth, we’re essentially inclined to do so; we create such stories without even trying and without necessarily realizing we’re doing it.
This ability would have served our ancestors well, as avoiding predators and other threats is a lot easier when you’re able to weave pieces of evidence together into something more complete and meaningful.
The sound of breaking branches, the smell of fresh panther dung, all nearby birdsong suddenly going silent: these indicators can add up to very meaningful and perhaps even life-saving realizations, and humans who were able to combine these data points in this way were almost certainly more likely to survive than those for whom these indicators remained mere isolated bits of information.
The trouble with this sort of inferencing, though, is that the narrative we build around limited data is often pure extrapolation. We base it on past experience and hearsay and assumption, and each of these bases can be accurate in one circumstance but very wrong in another.
Sometimes the stranger who is behaving oddly will be a threat, and sometimes they’ll be a person who is just behaving oddly—or someone whom we perceive to be behaving oddly, when in reality we’re applying that spin to their actions, because we, ourselves, are feeling odd, alone, or in some way heightened by something else going on in our lives.
Kahneman also coined another term that’s useful here: What You See Is All There Is.
This term refers to our brain’s propensity to fill in the blanks—to make unknowns seem more known—by assuming that the data we have is the only important data, and that we can accurately infer everything else we need to know if we want to understand the world around us.
This is an appealing notion that is unfortunately untrue.
If you meet one person who is from France, you do not know enough to make a judgement about everyone who is from France: that tiny bit of information you have about a single person is insufficient to accurately extrapolate about an entire population of a country.
Likewise, having had an experience—direct or second-hand—in which a shifty-eyed person did something scary or violent does not imply that all people who seem to be warily scanning their environment will try to attack you or someone else. There are plenty of reasons why someone might behave in this way, including the possibility that they’re listening to a podcast through a tiny earbud that you can’t see, and their immersion in that audio content is manifesting as a lack of normal-seeming focus and locomotion.
Organizations often benefit from systems of checks and balances: they allow an individual within a business or similar group to make a What You See Is All There Is-based assumption, but to then have that assumption challenged by someone else who has a different perspective.
It’s possible for individuals to benefit from a similar system, though it’s tricky to make it a consistent, reflexive thing.
We humans are famously unreliable assessors of our own perceptual credibility, and thus, it’s common to grant questionable assumptions internal credibility because we can convince ourselves, in the moment, that a System 1-based supposition is predicated on actual knowledge, rather than narrative speculation and confirmation bias.
If we utilize what Kahneman calls System 2 thinking—which is a more deliberative, if much more ponderous cognitive method—we’re a lot more likely to catch our own confabulations and to mark them as such.
But doing so requires the in situ summoning of our System 2 mechanisms consistently and reliably, and an understanding that, in many cases, being more right will also mean being less certain.