People familiar with education or with cognitive load theory will recognize this instantly: people perceive objects differently depending on their prior knowledge of and experience with those objects. We have known the importance of prior knowledge for ages; the classic example is the chess board. A person who doesn’t know how to play chess will see only a board with pieces, while an experienced chess player, drawing on extensive prior knowledge of the game, will see moves.
But this new study adds some extra insight, suggesting that different brain areas are involved depending on what we know about an object.
From the press release (relevant part bolded by me):
Researchers at the George Washington University have gained important insight into how the human brain processes an object in the visual system and where in the brain this processing takes place. Their study, “Mugs and Plants: Object Semantic Knowledge Alters Perceptual Processing with Behavioral Ramifications,” shows people perceive objects differently depending on their prior knowledge and experience with that object.
The findings could have important implications in applied settings such as medical displays, cognitive assistants, and product and environmental design, according to the researchers.
“Since the way we perceive objects determines how we interact with them, it is important to visually process them quickly and with high detail,” Sarah Shomstein, a professor of cognitive neuroscience at GW, said. “However, the way our eyes perceive and process an object can be different depending on what we know about this object. Our study shows, for the first time, that if we recognize an object as a tool, we perceive it faster but with less detail. If we recognize an object as a non-tool, we perceive it slower but with higher detail.”
To determine how the human brain processes an object visually, Shomstein and Dick Dubbelde, a recent PhD graduate at GW and co-author on the study, showed participants several images of objects that can be easily manipulated by hand such as a coffee mug, snow shovel or screwdriver, and several images of objects that are infrequently manipulated by hand, such as a potted plant, a picture frame or a fire hydrant. For half of the experiment, a small gap could be cut out of the bottom of each object. For the other half of the experiment, the objects could flicker on the screen. The team asked participants to report the presence or absence of a gap or the flicker, which helped the researchers figure out the speed and detail of object processing, and also which regions of the brain were being used to process the object.
Researchers found that objects usually manipulated by your hands are perceived faster than non-manipulable objects, making it easier to see the flickering. Conversely, objects that we usually do not manipulate are perceived with greater detail than manipulable objects, making it easier to see the small gaps.
“The differences in perception between ‘mugs’ and ‘plants’ in both speed and detail of perception means that these objects are sorted by the visual system for processing in different brain regions,” Dubbelde said. “In other words, your knowledge of the object’s purpose actually determines where in the brain object processing will occur and how well you will perceive it.”
The study also showed that if you interfere with object recognition by making it harder to recognize an object as either manipulable or not manipulable — for example, by turning it upside down — then the differences in the speed and detail perception of the objects disappear.
Shomstein and Dubbelde note that this study could help explain individual differences in object perception, and it underscores that what you know about, and what your personal experience is with, any particular object has direct consequences for how you perceive it.
Abstract of the study:
Neural processing of objects with action associations recruits dorsal visual regions more than the neural processing of objects without such associations. We hypothesized that because the dorsal and ventral visual pathways have differing proportions of magno- and parvocellular input, there should be behavioral differences in perceptual tasks between manipulable and nonmanipulable objects. This hypothesis was tested in college-age adults across five experiments (Ns = 26, 26, 30, 25, and 25) using a gap-detection task, suited to the spatial resolution of parvocellular processing, and an object-flicker-discrimination task, suited to the temporal resolution of magnocellular processing. Directly predicted from the cellular composition of each pathway, a strong nonmanipulable-object advantage was observed in gap detection, and a small manipulable-object advantage was observed in flicker discrimination. Additionally, these effects were modulated by reducing object recognition through inversion and by suppressing magnocellular processing using red light. These results establish perceptual differences between objects dependent on semantic knowledge.
One thought on “What you know changes how you see things (bis)”
In my PhD thesis (and articles therein) I wrote in 1991(!): what you know determines what you see, and not the other way around.