Mental Models
When thinking about a particular concept and attempting to derive meaning from it, understand how it works, or figure out how it overlaps or contrasts with other concepts and systems, we often use mental models.
A mental model is an abstraction of the thing being modeled; a conceptualization.
Like a map, a model isn’t a complete collection of facts and details about a concept; for many practical purposes, such completeness would actually be less useful than a simplified version.
These models help us grasp the gist of a thing so that we can more efficiently place it into a broader context or concentrate on higher-level concerns, using the concept as a foundation.
In practice, this might mean that I have a mental model of a city as a collection of buildings, with a bunch of infrastructure—roads, electrical cables, water pipes—connecting those buildings.
That mental model is far from complete, but for some purposes it’s correct enough to help me understand how some of these components work together, how changes to one part of a city might influence another, and so on.
Mental models are often useful, then, precisely because they’re not perfectly accurate or complete.
Bonini’s paradox captures this utility: a map that is a perfect representation of a geographic region would be no more useful than the region itself; it would, essentially, be that geography.
This truncation, simplification, and filtration of information, then, is often desirable, because it allows us to set aside the holistic complexity of a system or object so that we might derive other information from it: information that would, at times, be difficult to perceive or understand if we were burdened with the totality of all there is to see and process.
This method of understanding, though often useful, also has serious downsides.
My mental model of a city that focuses on the buildings and pipes is useful for some purposes, but largely useless for others. It helps me see and understand some facets of a system, but it blinds me to every other connection, purpose, and meaning within it.
Using such a mental model can also bring me into conflict with, or cause me to misunderstand, someone who holds a different mental model of the same system.
If I work alongside a group of other people, all of us hoping to improve our hometown’s economic, social, and infrastructural prospects—to invest in our city so that it performs and serves us all better in the future—I might propose plans based on what I know about the buildings and power grid, while my colleagues might base their recommendations on entirely different understandings of the city.
One might see cities as networks of humans and households, another might perceive cities as economic hubs and systems of value-exchange, while yet another might think of cities as natural environments that interact and contrast with both internal and surrounding ecosystems.
Every single one of us would be both right and wrong. The most complete understanding of a city would encompass all of these mental models and far more, besides. But within the context of our respective points of view, all these other people, though well-intentioned, may seem to be focusing on meaningless externalities.
The overlapping of each of these models would give us a more complete understanding of how things actually are, but the complexity introduced by creating more accurate models often reduces our functional understanding of individual systems, and consequently reduces our ability to make decisions.
Superficial grasps of complex systems can be useful, then, in that they can help us understand fundamental cause and effect, connectivity, and impact within the systems they simplify.
But they can also be harmful, at times, especially if we forget that they are mere models, not complete knowledge.
Models are not holistic representations of reality, and thus, by definition, they allow us to focus on a few attributes, functions, and outcomes of a system, at the expense of all the others.
Fortunately, research into perception and thinking indicates that we probably use mental models alongside other methods of understanding the world—though it’s generally assumed that we use something like these models much of the time, especially when making quick decisions and assumptions, and when trying to understand complex things, behaviors, and interactions.
Our mental models seem to become more accurate and detailed, while still maintaining their utility as comprehension aids, as we become more aware of other models: other ways of looking at things. And this is most easily achieved through general learning and education, broad and varied real-world experience, and the intentional, internal processing of what we learn as a result.