Material Perception
The average sighted person can tell a great deal about the nature of an object or substance just by looking at it.
Show someone a zoomed-in photo of an elephant’s skin or a woolen sweater or a crinkled-up piece of wax paper, and there’s a good chance they’ll be able to imagine, often with great accuracy, what it would feel like to touch and hold and interact with that surface.
Said person would also likely have a decent, intuitive sense of how fragile or resilient that material would be, whether it would be wet or dry, warm or cool to the touch, pleasant or unpleasant—all sorts of data that range from general to ultra-specific, in some cases computed after a mere glimpse at a perhaps unfamiliar material.
The study of “Material Perception” is relatively new: a seminal paper on the subject was published back in 2001, and before that there had been little focused investigation into how our brains and nervous systems are able to perform this particular magic trick.
Since that paper, a far larger body of research has emerged, including mappings of so-called “dimensions” of material characteristics that (it’s been posited) our brains use as a sort of shorthand to determine what the things we see would feel like, were we to touch, hold, and otherwise engage with them.
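To make the idea of “dimensions” a bit more concrete, here is a minimal sketch, not drawn from any specific study, of how a small set of underlying dimensions can be distilled from observer ratings of material properties using principal component analysis; the materials, properties, and ratings below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: six materials rated on five properties
# (rows = materials, columns = average ratings on a 1-7 scale).
properties = ["glossiness", "roughness", "hardness", "coldness", "fragility"]
materials = ["silk", "wax paper", "wool", "glass", "jelly", "elephant skin"]
ratings = np.array([
    [5.5, 1.5, 2.0, 3.0, 2.5],   # silk
    [4.0, 3.5, 2.5, 3.0, 4.0],   # wax paper
    [1.5, 5.5, 2.0, 1.5, 1.5],   # wool
    [6.5, 1.0, 6.0, 5.5, 6.0],   # glass
    [5.0, 1.5, 1.0, 4.0, 5.5],   # jelly
    [2.0, 6.0, 4.5, 2.5, 2.0],   # elephant skin
])

# Standardize each property, then look for a few axes that explain
# most of the variation in how the materials are rated.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(z)

print("variance explained:", pca.explained_variance_ratio_)
for dim, loadings in enumerate(pca.components_, start=1):
    top = sorted(zip(properties, loadings), key=lambda p: -abs(p[1]))
    print(f"dimension {dim}:", [(name, round(w, 2)) for name, w in top])
```

The properties that load most heavily on each component are, in this toy setup, the candidate “dimensions” along which the materials are being mentally organized.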
Prior research established that humans can capture a remarkably large volume of data about the things we see after a short glimpse of them.
One study limited viewing time to 40 milliseconds (0.04 seconds) and found that subjects had no trouble sorting images glimpsed during that fraction of a second into material categories, even when the images were blurred or otherwise degraded to make recognition more difficult.
Some research has looked into the possibility that our brains use color and light data to quickly make assumptions about the materiality of the things we see, alongside secondary visual cues like glossiness and roughness (our perception of which depends, in turn, on how we perceive light being reflected by the surface in question).
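One concrete example of such a light-based cue: the skewness of an image’s luminance histogram has been proposed as a simple statistic that tracks perceived glossiness. The sketch below is an illustration rather than a reconstruction of any particular study’s pipeline, and the file name in the usage comment is hypothetical.

```python
import numpy as np
from PIL import Image
from scipy.stats import skew

def luminance_skewness(path: str) -> float:
    """Positive skew (a long bright tail, e.g. from specular highlights)
    has been linked to a glossier appearance; values near zero or below
    suggest a more matte surface."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return skew(gray.ravel())

# Hypothetical usage:
# print(luminance_skewness("wax_paper_closeup.jpg"))
```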
Other studies have asked whether our brains are actually creating something like “statistical generative models” to rapidly collect and compare multiple images of an object, allowing us to make assumptions about the nature of a material based on how it appears in slightly different lighting contexts.
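The “generative model” idea is hard to pin down in a few lines, but the underlying intuition, that statistics which stay stable as the lighting changes are the ones most informative about the material itself, can be shown with a toy sketch; the feature choices and file names below are invented for illustration and are not a claim about how the brain actually does this.

```python
import numpy as np
from PIL import Image

def image_stats(path: str) -> np.ndarray:
    """A tiny feature vector: mean luminance, contrast (standard deviation),
    and the fraction of near-saturated pixels (a crude highlight measure)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0
    return np.array([gray.mean(), gray.std(), (gray > 0.95).mean()])

def pooled_description(paths: list[str]) -> dict:
    """Pool statistics over several photos of the same object taken under
    different lighting; features with low spread across lighting are the
    ones most plausibly tied to the material rather than the illumination."""
    stats = np.stack([image_stats(p) for p in paths])
    return {"mean": stats.mean(axis=0), "spread": stats.std(axis=0)}

# Hypothetical usage with three photos of the same waxed surface:
# print(pooled_description(["wax_morning.jpg", "wax_noon.jpg", "wax_lamp.jpg"]))
```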
Interestingly, other researchers have found that subjects have little trouble making tactile assumptions about materials represented only by animated dots that mimic the movement of, for instance, a cotton garment blowing in the wind, or a blob of jelly wobbling after the table on which it sits is bumped (a demonstration of “material motion”).
This suggests that while light probably plays at least some role in our material-intuiting capacity, a material’s movement (or lack thereof) may also help us determine what it is like.
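As a rough illustration of how such dot stimuli can be built, the sketch below animates a ring of dots with a damped wobble, the kind of motion a bumped blob of jelly might show, without rendering any surface at all; it is a simplification, since actual studies use physically simulated or motion-captured trajectories, and every parameter here is invented.

```python
import numpy as np

def jelly_dot_frames(n_dots=40, n_frames=120, fps=60.0,
                     freq_hz=3.0, damping=2.0, amplitude=0.15):
    """Return an array of shape (n_frames, n_dots, 2): x/y positions of dots
    on a unit circle, squashed and stretched by a decaying oscillation.
    Only the dots move; there is no surface, texture, or shading."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dots, endpoint=False)
    base = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # resting shape
    t = np.arange(n_frames) / fps
    # Damped sinusoid: the wobble dies out, as a real blob's would.
    wobble = amplitude * np.exp(-damping * t) * np.sin(2.0 * np.pi * freq_hz * t)
    frames = np.empty((n_frames, n_dots, 2))
    for i, w in enumerate(wobble):
        # Stretch horizontally while squashing vertically (and vice versa).
        frames[i, :, 0] = base[:, 0] * (1.0 + w)
        frames[i, :, 1] = base[:, 1] * (1.0 - w)
    return frames

frames = jelly_dot_frames()
print(frames.shape)  # (120, 40, 2); feed these positions to any plotting loop
```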
At the moment, then, we have a lot of theories as to what might contribute to our remarkable capacity to quickly gauge and intuit the properties of the many materials we interact with on a daily basis, but we don’t know for certain which of these possible cognitive routes is most vital or dominant, and we don’t yet know precisely how all this data is aggregated and parsed with such practical (and reliable) rapidity.