Mistruths
In the world of communication theory and counterintelligence, misinformation is information that’s false, but unintentionally so; disinformation is false and purposefully created and disseminated to manipulate, confuse, or harm; and malinformation is fact-based, but shared in a way that misleads because of a change in context or a lack of clarity.
So if I, without malice, re-share something that’s false that someone I know posted on Facebook (the poster likewise not realizing it’s false), that’s misinformation.
If I create and share a post on Facebook that I know is false, intending to mess with my political enemies or to create an incorrect impression about a competitor’s product or a movie I don’t like, that’s disinformation.
If I edit a video clip of a public figure to make it seem like they said or did something they didn’t, maybe removing vital context or slowing it down to make it look like they’re unwell or drunk, that’s malinformation.
Over the past decade or so, an abundance of research has been conducted on the idea that this trio of false-information categories is not just common in online spaces, but can also be immensely influential on the understandings and opinions of those who encounter it.
Our online social spaces are flooded with nonsense, in other words, and a bunch of research suggests that all this nonsense is influencing life in the real world because people are buying the lies and mistruths they’re encountering all day, every day on these platforms.
There have been efforts to pre-bunk and otherwise inoculate the population against these sorts of informational distortions, predicated on the idea that people are, on average (if not individually), somewhat gullible. We have no reason to suspect we’re being lied to, in other words, so why wouldn’t we just slurp up whatever we’re told by folks we follow across various channels: those we know firsthand, and influential personalities we respect?
A counter-narrative that suggests this might not be the case is emerging, though, based on some newer research and reinterpretations of existing research that question the tenets of this assumption.
In essence, this opposing theory says people aren’t as gullible as researchers and other experts tend to assume. It holds that misinformation (of a sophisticated, effective variety) is actually somewhat rare in the Western world (where most of this research is being done), so people encounter relatively little of it, at least compared to legitimate news and other information sources. And it suggests there’s some evidence that concern about large numbers of people becoming wild-eyed conspiracy theorists is actually a misreading of the room: it’s not that people are falling down falsely premised rabbit holes and changing their minds about things, it’s that they’re seeking out information that reinforces their existing biases (however fact-based or non-fact-based those biases might be).
This is anything but a slam-dunk collection of assumptions, as there are documented cases of large numbers of people at least seeming to believe things that are demonstrably untrue.
Though one of the explanations for that seeming dynamic is that many of us will act as if we believe untrue things because those around us (people of our perceptual, social tribe) seem to believe them, and these seeming beliefs thus become social signals: not something most people in that group believe the way they believe in gravity or that the sky is blue, but instead a sort of filtering mechanism that allows them to easily differentiate friend from foe.
There’s also a chance that misinformation/disinformation/malinformation is impacting the root systems of discourse and knowledge because it’s much easier to spout mistruths than it is to debunk them: it takes longer to aggregate and disseminate accurate information than to invent falsities, because verifying claims is inherently slower than making them up.
In an era in which realistic-sounding words, strung together in a human-seeming way, can be mass-produced using AI-powered tools, then, it could be that before long the vast majority of information bits communicated via these channels are unbacked falsehoods, which would lend more weight to the claim that these are truly dangerous elements, capable of altering the fundaments of our base-level understandings of the world. That would, in turn, make it difficult to check and test the legitimacy of possible falsehoods moving forward, leaving us with fewer concrete data points and less-stable ground to stand on when attempting to distinguish reality from non-reality.
What’s annoying but perhaps not debilitating for our current collective discourse, then, might become something far more substantial and dangerous in the future, even if that counter-narrative about the role of various mistruths within our contemporary communication channels is accurate.