Complexity Theory
Free essays and podcast episodes are published every Tuesday, and paid subscribers receive an additional essay and episode on Thursdays.
In the world of human-computer interaction—the study of how people engage with computers—the Law of Conservation of Complexity states that every application has an inherent amount of complexity that cannot be removed or hidden; it can only be shifted from one place to another, typically between the user and the developer.
This concept was developed by Larry Tesler, who, among other things, worked on an early object-oriented programming language, the copy-and-paste function, and the concept of What You See Is What You Get user interfaces: all fundamental components of modern computer software.
Tesler is also considered the coiner of the term “user-friendly” with regard to software, as he was keen to make computers more mainstream, useful, and usable by anyone. He proposed that you can choose to make software less capable and therefore easier to understand and use, or you can make it more capable but less simple and harder to use.
He argued that reducing the complexity of software generally pays higher dividends than the alternative.
If you work a little harder up front to simplify a program, removing the things that didn’t matter anyway, the tool you’re building becomes accessible to and usable by more people.
Some software needs to be complex, though, and this is where specialized design fields like user-experience and interaction design come into play.
If you’re going to have complex features, it’s prudent to ensure those features are as usable as possible: all the buttons in the right places, the grammar of your programming language consistent and logical. It’s also ideal to aim for complex rather than complicated.
Complex means that there are a lot of components interacting with each other within a given system.
Complicated means that a system is difficult to learn about, understand, and/or use.
Ideally, a complex system has many parts that fit together just so, resulting in new, emergent benefits that wouldn’t exist without that complexity.
A complicated system can be complex, but it can also be simple: it can have few components and still be incomprehensible.
Simple is not always better, in other words, if that simplicity is less accessible or useful than its complex-but-comprehensible alternative.
Interestingly, it’s been shown that many of us tend to assume complexity when what we’re actually encountering is randomness, and it’s thought that this might be the case because otherwise we would feel helpless.
This bias seems to exist in other animals, as well, and often manifests in what we might call superstitious or cargo cult-like behavior in creatures that are able to notice apparent patterns, and then extrapolate meaning based on these misunderstood bits of data.
In general, this is a fairly productive tendency, as it means we look for meaning in our environments, which can eventually lead to understanding.
In some cases, however, it can lead to negative outcomes.
Sometimes there is either no meaning—we’re finding seeming-patterns in random noise, and thus, our assumptions about what’s happening are flawed—or there is no actionable meaning: we’ve noticed a phenomenon, but have no way of measuring or understanding it; not yet, at least.
You could argue that everything is potentially comprehensible, if we only had the proper tools and knowledge. This is probably true, in the sense that we may, someday, be able to compute even the most random-seeming behaviors and events: how a person will respond to a startling stimulus, which way the wind will blow, how a coin-flip will land every single time.
Deriving meaning from the cloud of data surrounding such seemingly simple situations, though, would require that we understand most or all of the factors leading up to those situations, which at a certain scale would imply perfect knowledge of not just the circumstance in question, and not just the physical locality of that event, but everything: the whole universe, seen and unseen.
Only then could we say with certainty how a coin-flip would land every time, because only then would we be able to work every possible factor into the outcome of that flip. The same is true with human instincts and weather-patterns: the amount of information required to predict such simple-seeming things is astronomical.
This leads us to the concept of complexity theory, which states, in essence, that a complex system is a system with emergent behavior: the interaction of such a system’s components leads to outcomes that cannot be directly attributed to the sum of its parts.
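Emergence of this kind is easy to see in a toy system. The sketch below (a minimal Python illustration, not anything from the essay itself) runs an elementary cellular automaton: each cell’s next state depends only on itself and its two immediate neighbors, via a fixed eight-entry rule table. The local rule shown here, Rule 110, is trivially simple, yet it is known to generate patterns rich enough to be Turing-complete—a whole that cannot be attributed to any one of its parts.

```python
# Emergence from trivial local rules: an elementary cellular automaton.
# Each cell is 0 or 1; its next state depends only on itself and its
# two neighbors. The rule number (here 110) encodes the 8-entry lookup
# table mapping each 3-cell neighborhood to the cell's next state.

RULE = 110

def step(cells):
    """Apply one synchronous update to the whole row; edges wrap around."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure appear row by row.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running this prints a triangle of interlocking structures growing out of one live cell: no line of the program describes that pattern, and nothing in the rule table predicts it directly; it emerges from the interaction of the parts.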
This can refer to the microbiomes of creatures, including humans, which seem ridiculously, almost unfathomably complex compared to the bits and pieces that make up the larger systems we recognize, but it can also refer to devices like computers, which, if you looked only at the physical goods and electricity they consume, would not seem capable of doing complex mathematics, much less running powerful software and other tools.
Nonetheless, these simple parts add up to far more than their ingredients, though the complexity is abstracted away by our engagement with those higher functions: we communicate with other human beings, not with their microbiota, and we use computers, not the bits of silicon, copper, and plastic of which they’re made.
Our perception of the world around us is thus doubly obscured, in part because of our default approach to the complex, and in part because of our own efforts to abstract away complexity when we find it, making it more user-friendly in some cases, and simplifying it out of existence, in others.
The non-abstracted, non-simplified complexity remains, but it typically lives beyond our perception: we perceive it as randomness or we fail to perceive it at all.
Enjoying Brain Lenses? You might also enjoy my news analysis podcast, Let’s Know Things.
There’s also a podcast version of Brain Lenses, available at brainlenses.com or wherever you get your podcasts.