ELIZA Effect
Anthropomorphism refers to the human tendency to attribute human-like traits, motivations, and emotions to non-human things.
We might curse the weather for ruining our picnic, for instance, as if the concept of "weather" were an entity with consciousness that intentionally pushed rainclouds into our neighborhood at the exact moment we stepped outside.
We see faces in doors and in the grills of cars and in wood grain patterns.
We imbue our pets (and other animals) with human characteristics, preferences, and personality traits, even when these things very much do not apply—at least not in the sense we believe they do.
We may even describe the movement and character of shapes displayed in a simple animation in human terms.
Based on historical documentation, this is something we've always done: it's baked into our biology.
It's been posited that we anthropomorphize because our cognitive apparatus is optimized for noticing, processing, and remembering details about other human beings.
This may be a survival adaptation for members of a highly social species like humans: those who can keep up with the goings-on around the family group and larger tribe might be more likely to survive and pass on their genes.
Thus, over time, it became prudent, and then eventually unavoidable, to maintain a sense of who's who, who does what, how people feel and are likely to respond to various stimuli, and so on.
Our propensity to imbue non-human entities—from animals to random patterns to weather conditions—with these same characteristics, then, may also be survival-related.
When those human-tracking superpowers are recalibrated for keeping tabs on the weather and on how local fauna behave, we're more likely to notice worrying changes in weather trends, and more likely to successfully cohabitate with dangerous animals (or befriend those that might be useful or provide us with companionship).
The ELIZA effect (named after a 1960s-era chatbot developed at MIT) is a specific type of anthropomorphism related to the behaviors of computers and similar technologies.
If an ATM displays the phrase "thank you" after we use it to withdraw cash from our bank account, we might perceive this as the machine actually thanking us, as if it were pleased that we chose it for our cash-acquiring needs.
But of course, the machine is simply working through a script provided by a programmer: it was instructed to display those pixels after a series of other tasks is completed. A user might interpret those pixels as words, and those words as a "thank you" akin to the gratitude a human might express, but that assumption has no basis in reality.
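To make the "working through a script" point concrete, here's a minimal, hypothetical sketch of how an ELIZA-style program generates its seemingly attentive replies. Everything here (the rule list, the reflection table, the fallback line) is invented for illustration, but it follows the same basic pattern-matching approach Joseph Weizenbaum's original ELIZA used: match a keyword, reshuffle the user's own words into a canned template, and output the result. Nothing in the code understands anything.

```python
import re

# Swap first-person words for second-person ones, so the user's
# phrase can be echoed back at them ("my friends" -> "your friends").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a pattern to match, and a canned template to fill.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Replace first-person words with second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching template, filled with the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no rule matches

print(respond("I feel ignored by my friends"))
# prints "Why do you feel ignored by your friends?"
```

The reply can feel uncannily empathetic, yet it's produced by a few lines of string substitution: the same mechanism, at a different scale, as the ATM's "thank you."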
The ELIZA effect is worth understanding in part because of how personal many of our technologies have become.
Interactive gadgets are ubiquitous in the modern world, and when our voice assistants engage us with seeming small-talk, our ATMs thank us, and online chatbots help us with customer support issues, we may be tempted to engage with them as if they're real-deal consciousnesses, despite there being no consciousness behind these interactions.
We can be fooled, in other words, by increasingly sophisticated code if we're not careful, and that can result in manipulation.
We can likewise be fooled by humans who are acting in accordance with pre-scripted rules and algorithms (which is arguably a sub-type of the ELIZA effect, as many customer support workers are not allowed to deviate from a provided script for efficiency or legal reasons, and thus serve essentially the same purpose as the screen on the ATM displaying the words "thank you").
In both cases, we can be made to feel we’re having an actual interaction with someone who’s saying things we want to hear, but that interaction might primarily serve the purposes of a company that wants to sell us something, or which wants us to feel like we’re being heard and taken care of, when that is not actually the case.
This effect also has implications for the pursuit of artificial general intelligence (the kind that's intelligent in the same way we think of humans as being intelligent), because it's not just possible but likely that many of us will perceive intelligence in non-intelligent things, and this becomes ever more the case as those seeming intelligences grow more complex and versatile (despite still not being truly intelligent or conscious in the human sense).
Paid Brain Lenses subscribers receive twice as many essays and podcast episodes each week. They also fund the existence and availability of all the free stuff.
You can become a paid subscriber for $5/month or $50/year.
You can also support all my work (and receive gobs of bonus content) via Understandary.