Observer-Expectancy Effect
There are a great many reasons to question the validity of categorization methodologies like the Myers-Briggs Type Indicator. Alongside the bizarre theories of its creators and the capitalistic nature of its popularizers, tests predicated on self-reported data are often biased by a phenomenon called “reactivity.”
In this context, reactivity refers to the tendency of those who are being studied or watched to alter their behavior based on a slew of variables introduced by the circumstances in which they find themselves.
Said another way: when we know we’re being observed, we behave differently, and the same is true when we know we’re being asked questions that may inform a judgment about who we are.
The specific type of reactivity that distorts self-reported data in psychological research is sometimes called the “Observer-Expectancy Effect.” This effect can sway our responses to questions and adjust our overall behavior when we’re in environments in which we know our actions are being observed and recorded, even if we don’t know precisely what’s being measured and to what end.
We find this effect in animals as well as humans. One of the best-documented and most archetypal examples of the Observer-Expectancy Effect in non-humans is the case of Clever Hans: a horse that seemed to be able to do math, presenting solutions to arithmetic problems by stomping its hooves to indicate numbers.
After some analysis, though, it was found that Clever Hans was actually watching the body language of whoever presented the math problem, stopping his stomps in response to changes in their posture and facial expression.
We humans often do the same, changing our behavior based on nonverbal signals from those in perceived positions of power. In research scenarios, this propensity can distort data, because the research subjects modify their answers and actions to match the expectations and/or hopes of those conducting the research.
In modern, controlled experiments, this phenomenon is often accounted for by using what’s called double-blind experiment design, which requires that not only the subjects but also those interacting with them know as little as possible about the research being conducted, so that expectations are less likely to subconsciously influence the results.
So not only are the people being observed kept in the dark about what’s being tested, how it’s being tested, and what the researchers think will happen; the people distributing the drugs and placebos, the folks asking the questions, and the research assistants greeting the subjects as they enter the building are also left as ignorant as possible about the work being done, all in an effort to avoid influencing the expectations, and thus the behaviors, of those involved.
Of course, there’s only so much that can be done to curtail issues of this kind, as someone who is aware they are part of an experiment will often behave differently than they normally would because they are part of an experiment.
This is one of the core arguments in favor of privacy legislation and against increased surveillance in public spaces: when we’re observed, we change our behavior. And that means we’re not able to be ourselves—fully ourselves—in any space in which it’s possible that we’re being monitored or recorded, even if those recordings are not actually being monitored and the recording devices are well-concealed.
Alongside the data we collect about animals, humanity, and society, then, our social interactions may also be distorted by the knowledge that we’re being observed, or even the possibility that we may be under scrutiny, which in turn can influence the decisions we make and the actions we take.
There’s also a podcast version of Brain Lenses, available at brainlenses.com or wherever you get your podcasts.