Preference Falsification
In the world of public polling, "preference falsification" refers to the tendency of those being surveyed to provide false answers in order to fit in with what they perceive to be the socially acceptable majority.
This concept was already being explored when it was given this moniker in 1995 by a social scientist named Timur Kuran, and it has since been of particular interest to pollsters looking to collect more accurate data, and to analysts looking to understand how protests, riots, and uprisings emerge seemingly out of nowhere.
In the former case, pollsters wanting to accumulate better data about political and brand-related views (who supports which policies, how different products are perceived by their intended customer base) try to avoid slanting the data they collect in numerous ways. It's tricky, though, to ensure that a socially unpopular product or brand (broadly, or within a given demographic) will be accurately assessed: a recent headline may cause respondents to tilt their answers one way or the other to avoid being perceived as "someone who likes the unpopular brand associated with that recent bad thing in the news," or, conversely, to avoid seeming like someone who is out of the loop and unaware of the cool new thing everyone suddenly seems to like.
This is similar in some ways to what's often called "virtue signaling," where an individual expresses some kind of moral, political, or other ideological belief on the assumption that doing so will associate them with a virtuous cause or group.
If everyone in my social clique is suddenly talking about environmentalism, I might post some stuff on Facebook about environmentalist causes, despite not actually knowing or caring much about them, because I suspect doing so will align my "personal brand" (other people's perception of me) with that cause and thereby elevate my social position.
The main difference between these concepts is that preference falsification tends to focus on the avoidance of being associated with certain ideas, especially in contexts where that association wouldn't necessarily have any immediate social consequences (like anonymous polling), while virtue signaling is a pejorative term for something like social posturing: performative acts we hope will make us seem cool or more "with it" in social groups we respect and amongst people whose respect we crave.
In the latter case, social scientists and think tanks in particular are keen to quantify preference falsification and make it more legible, because doing so may help them spot the early signs of unrest and work that awareness into their models and plans.
In retrospect, many uprisings and overthrows of governments that were surprising in the moment turn out to have been predictable based on the true opinions and beliefs of the majority of people in an area.
When those opinions and beliefs are measured ahead of such events, however, many people provide answers they believe will align with the mainstream (as they perceive it), rather than sharing their true opinions and, as a consequence, possibly flagging themselves as some kind of outsider with dangerous ideas.
As long as such ideas seem outsidery and dangerous, they tend to stay hidden, and a perhaps significant chunk of a burgeoning belief-based demographic will seem to be part of another demographic (with different beliefs): the numbers won't capture them.
At some point, though, that dynamic may flip: the previously hidden group emerges seemingly all at once, out of nowhere, and a trend that seemed to be moving in one direction shoves hard in the other.
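One way to see how such a flip can happen all at once is with a simple threshold model of the kind Kuran's work draws on (Mark Granovetter's classic example imagines a crowd of 100 would-be rioters). The sketch below is illustrative only: the population size and threshold values are stylized assumptions chosen for demonstration, not measurements of any real public.

```python
def cascade_size(thresholds):
    """Return how many people end up voicing dissent publicly.

    Each person speaks up only once the number of visible dissenters
    reaches their personal threshold; we iterate until no one new flips.
    """
    visible = 0
    while True:
        speaking = sum(1 for t in thresholds if t <= visible)
        if speaking == visible:  # stable: no one's threshold is newly met
            return visible
        visible = speaking

# Granovetter's stylized example: 100 people with thresholds 0 through 99.
# The threshold-0 instigator tips the threshold-1 person, who tips the
# threshold-2 person, and so on, producing a full cascade from nowhere.
print(cascade_size(list(range(100))))              # -> 100

# Nudge a single person's threshold from 1 to 2 and the chain breaks:
# the instigator acts alone, and everyone else stays silent.
print(cascade_size([0, 2] + list(range(2, 100))))  # -> 1
```

To a pollster measuring only public statements, those two populations look identical right up until one of them erupts, which is precisely the measurement problem preference falsification creates.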
This same effect can be seen in governments and organizations led by cults of personality. In some cases those cults grow not because everyone truly believes in the person at the top, but because the perception is that everyone does, so anyone who harbors doubts keeps those doubts hidden. This is key to the sustainability of strongman and other authoritarian governmental models, and to leadership positions within organized crime and similar organizations: as long as everyone believes that everyone else supports the person at the top, that person will be untouchable; but as soon as that belief changes, the person at the top becomes a target.
These intentionally incorrect, self-censored views, then, can help prop up systems by creating the appearance of conformity, but they can also lead to those systems' downfall if and when the truth comes out, and people look around and realize their closely held opinions are not as obscure (or heretical) as they suspected.
This effect can also distort our top-down understanding of movements, the popularity of politicians and products, and our assessment of both the composition and size of various demographics.