Risk Perception
Free essays and podcast episodes are published every Tuesday, and paid subscribers receive an additional essay and episode on Thursdays.
When we’re deciding whether to smoke a cigarette, vote for a bill that would lead to the installation of a nearby nuclear power facility, or go outside and socialize during a pandemic, we’re engaged in a type of analysis called risk perception.
Fundamental to our perception of risk is a subconscious weighing of what we know about the matter in question, including the supposed pros and cons to various actions we might take in response to that information.
In the 21st century, most people in most places around the world are aware that smoking a cigarette comes with health risks. What’s important in determining who takes up smoking and who doesn’t, then, is often less about making sure people are informed and more about helping people engage with the available information in a way that is meaningful to their internal risk perception process.
Putting graphic images of people who have suffered from tobacco-related cancers and other afflictions on cigarette packaging, for instance, might cause a potential buyer to consider how they would feel if they had that kind of cancer or had their teeth discolored in that way.
Interestingly, there’s some evidence that graphics depicting less-dramatic consequences—discolored teeth rather than death-by-cancer—may have more of an impact on potential smokers’ behaviors than information about more dramatic consequences.
There isn’t a ton of published research on this potential tendency yet, but the theory is that people are better able to imagine the annoying, day-ruining effects of having to pay the government a few hundred dollars for a speeding ticket than the potentially financially devastating consequences of having to pay several thousand dollars; one seems more realistic than the other, and thus the threat of the imaginable punishment might prove to be the more effective one, for the intended outcomes.
Likewise, imagining oneself dying of cancer is more of a stretch than imagining oneself dealing with the potential social consequences of stained, and maybe even blackened or potholed, teeth.
Our ability to imagine the consequences of potential threats, then, may influence the math we do when determining how to behave, and which threats to take seriously: which ones to be aware of, intellectually, and which ones to feel in such a way that we integrate a consciousness of them into our behaviors.
Dread also seems to be influential in our perception of risk.
Nuclear power plants are far safer than comparable, conventional energy production methods by essentially every possible metric. But nuclear power plants, and truthfully anything nuclear or involving radiation, can evoke a sense of dread—fearful apprehension, bordering on existential terror—whereas other sources of power, like coal- and gas-powered plants, are unlikely to evoke the same.
Part of the issue here might be a fear of the big and seemingly unknowable, but part of that dread might stem from the fact that when nuclear power plants have failed at newsworthy levels, the failures have tended to be disasters: think Fukushima, think Three Mile Island, think Chernobyl.
The concept of nuclear power, for many, is psychologically entangled with the concepts of mushroom clouds, radiation poisoning, and Godzilla; only one of which (radiation poisoning) could actually result from a nuclear power plant disaster, by the way. And according to official numbers, all but three of the worst-ever nuclear power-related disasters have caused deaths in the single digits to the low teens, with the top three most deadly nuclear power-related disasters ranging from a few dozen to a few thousand deaths apiece, depending on whose figures you trust; the top two, Kyshtym and Chernobyl, are highly disputed, and the Russian government hasn’t been super keen to allow independent groups to check their numbers, for a variety of reasons.
That said, Kyshtym took place in 1957 and Chernobyl in 1986. The Windscale fire, the third most-deadly nuclear power-related disaster in history, also took place in 1957, and is thought to have caused thirty-something deaths: obviously still a tragedy, but one that pales next to the deaths linked to coal power alone—over 100,000 people died in coal mining accidents in the 20th century, and deaths directly caused by airborne particulates from coal power plants range from 3,000 to 30,000 a year, depending on local regulations and the type of coal plant being operated.
But while coal might be annoying and ugly, and we might have a sense that it’s bad for us, it doesn’t cause the visceral sense of terror that anything nuclear can, even in people who understand that their fears are probably misplaced.
This is similar, in some ways, to the fear many of us have of flying in planes, despite happily hopping in cars every single day.
We face vastly more danger while in a car than in a plane—about 1.25 million people die in car accidents each year, worldwide, and 20-50 million are estimated to be severely injured or disabled in such accidents, compared to the one fatal plane-related accident that takes place for every 16 million flights—but the mental image of dying in a plane crash is far more terrifying, stimulating a sense of dread that we don’t tend to feel when we buckle up and hit the road.
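To make that imbalance concrete, here’s a back-of-the-envelope sketch using the figures cited above. The per-person and per-flight framings are rough assumptions for illustration (real risk varies with how much you drive, where you fly, and so on), and the world population figure is an assumption not taken from the essay:

```python
# Rough comparison of the driving vs. flying figures cited above.
# Assumption: we spread the worldwide road-fatality figure evenly
# across an assumed world population of ~8 billion people.

car_deaths_per_year = 1_250_000        # worldwide road fatalities per year (cited above)
world_population = 8_000_000_000       # assumed, for a crude per-person rate

# One fatal accident per 16 million flights (cited above)
fatal_accident_odds_per_flight = 1 / 16_000_000

# Crude annual per-person odds of dying in a car accident
annual_car_death_odds = car_deaths_per_year / world_population

print(f"Rough annual odds of dying in a car accident: 1 in {1 / annual_car_death_odds:,.0f}")
print(f"Odds that a given flight ends in a fatal accident: 1 in {16_000_000:,}")
```

On these crude numbers, the annual per-person odds of dying on the road work out to roughly 1 in 6,400—orders of magnitude worse than the 1-in-16-million chance that any given flight ends in a fatal accident—yet it’s the plane that triggers the dread.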
Alongside information from our gut, influenced by dread and similar feelings, the benefits we gain from facing certain risks can likewise skew our perception of the risks, themselves.
Smoking might be bad for us, but if we’re hooked on the chemicals that cigarettes provide, we’re more incentivized to ignore the potential negative health outcomes; we know it’ll make us feel good, so the analysis stops there.
If many of our friendships are dependent on being able to socialize while out on a smoke break, we likewise may be more inclined to ignore what we know to be true, numerically, in favor of what we feel to be true: that we’d miss that moment of chilled-out camaraderie if we ever quit the habit, so we can’t allow ourselves to seriously consider data that might incline us to do so.
There are almost certainly individual psychological aspects at play here, as well, from a person’s history with risk—someone who grows up with a stable safety net might be more inclined to take risks because they understand from experience that even if the worst happens, they’ll probably be okay—to their neurological makeup, including how well they’re able to imagine possible negative consequences compared to positive ones. Even variables like who might be at risk, the scale of the threat in question, the level of certainty or uncertainty about the threat, and whether the threat is construed to be natural or unnatural seem to play a role.
Some research has also indicated that we probably succumb to a representativeness heuristic when it comes to analyzing new risks: we label a novel, unfamiliar thing with a known category, and then judge that thing based on the analysis we’ve previously done on that larger category.
So if there’s a new disease going around, we might make determinations about that new disease based on our experience with past diseases: which could lead us to over- or under-estimate it, in terms of contagiousness, deadliness, and so on.
Similarly, the availability heuristic, which helps us categorize and judge things based on the most available example that comes to mind, can help us make snap judgements about potential threats, but can also cause us to miscategorize and misjudge—especially if we’re making superficial connections that don’t actually apply, in practice.
The COVID-19 pandemic, for instance, has been frequently compared to the flu. This comparison works on some levels, but completely falls apart on others.
As a result, people who are making decisions about how to respond to this disease based on their understanding of and experience with past flus may subconsciously feel as if they have things properly figured out, when in reality they may be treating a nuclear power plant like a coal plant, or a plane like a car, not realizing that there are important distinctions in both type of threat and outcome that are being left out of their risk perception formula due to this reflex.
Despite all the research that’s been done in this space, it’s unlikely that we have anything close to a complete picture of how we make risk-related decisions, and how we map out the larger risk-landscape in our minds.
It’s probably prudent, though, to assume that many of the assessments we make about risks will be colored by a great many variables beyond the actual facts and figures.
Enjoying Brain Lenses? You might also enjoy my news analysis podcast, Let’s Know Things.
There’s also a podcast version of Brain Lenses, available on Substack or wherever you get your podcasts.