Morgan Barense, PhD

Professor, Canada Research Chair
Department of Psychology
University of Toronto
(September 10, 2019)

When seeing becomes knowing: How the brain integrates perceptual and conceptual information

A fascinating aspect of the science of perception is not just how the brain makes sense of visual inputs, but also how it comes to understand the concepts behind those inputs. How does the concept of an object, such as the role that object might play, differ from the visual perception of that object? Dr. Barense discussed her work parsing the perceptual from the conceptual, and explained that in one particular brain area, the perirhinal cortex, neurons represent both the visual and conceptual aspects of an object.

How do we know what we see? The problem of how we understand the nature of the world external to our mind is one of the oldest and most difficult in philosophy and cognitive science. Dating as far back as Plato, philosophers, neuropsychologists, and cognitive scientists have wrestled with a question central to human cognition: When we interact with an item in the world, how do we come to know what it is? This is a complicated problem for the mind to solve because objects that look similar, such as a gun and a hairdryer, may do different things, whereas objects that look different, such as tape and glue, may have similar roles. That is, because there is no one-to-one relationship between how an object looks and what it does, adaptive behaviour requires a fully specified object representation that integrates perceptual and conceptual information. However, the precise relationship between perceptual and conceptual object representations in the brain is poorly understood.

I discussed recent work from my laboratory that has taken an innovative methodological approach to offer new insights into this question. We created a novel word stimulus set in which we selected pairs of object concepts that look alike but have different functions (e.g., hairdryer and gun), and other pairs of object concepts that do not look alike but have similar functions (e.g., bullet and gun). We first created two behavioural models that captured the visual and conceptual similarity of the object concepts, using data from 2,785 separate individuals who either rated the objects’ visual similarity or described their conceptual features. We then used these models to characterize the patterns of brain activity of a separate group of participants who were scanned during two separate tasks that involved judgments about either the objects’ visual properties (e.g., is it round, light in colour?) or their conceptual properties (e.g., is it manmade, pleasant?). We used representational similarity analysis (RSA) to relate patterns of brain activity on the two tasks to the behavioural models of visual and conceptual similarity. Remarkably, although we systematically dissociated visual and conceptual features, we found that one region in the brain – the perirhinal cortex – coded both the visual and conceptual similarity of the objects regardless of whether the task involved visual or conceptual judgments.
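The core logic of RSA can be sketched in a few lines of code. The sketch below is a minimal illustration, not the study's actual pipeline: the array names, sizes, and random placeholder data are assumptions, and the real analysis would use the behavioural similarity models and fMRI activity patterns described above. The idea is to convert both the behavioural models and the neural patterns into representational dissimilarity matrices (RDMs) and then correlate their upper triangles.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

# Hypothetical inputs (names and sizes are illustrative, not from the study):
#   neural_patterns:  n_objects x n_voxels activity patterns from one region
#                     of interest (e.g., perirhinal cortex) in one task context
#   visual_model:     n_objects x n_objects behavioural visual-similarity model
#   conceptual_model: n_objects x n_objects behavioural conceptual-similarity model
rng = np.random.default_rng(0)
n_objects, n_voxels = 40, 200
neural_patterns = rng.normal(size=(n_objects, n_voxels))
visual_model = squareform(pdist(rng.normal(size=(n_objects, 5))))
conceptual_model = squareform(pdist(rng.normal(size=(n_objects, 5))))

# Neural RDM: 1 - Pearson correlation between each pair of object patterns.
neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))

def rdm_similarity(neural, model):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(neural, k=1)
    rho, _ = spearmanr(neural[iu], model[iu])
    return rho

print("visual model fit:    ", rdm_similarity(neural_rdm, visual_model))
print("conceptual model fit:", rdm_similarity(neural_rdm, conceptual_model))
```

In a study of this kind, such a comparison would be repeated for each brain region and each task context, which is what allows a single region to be tested for both visual and conceptual coding.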

One of the most interesting findings from this work was that the similarity structure in the perirhinal cortex was transiently reshaped to reflect task goals. That is, although the perirhinal cortex always coded both visual and conceptual similarity, visual similarities were coded more strongly during the visual task, and conceptual similarities were coded more strongly during the conceptual task. This suggests that attentional control flexibly reshapes the multidimensional representational structure in the perirhinal cortex to meet task demands. To test this directly, we conducted a neuropsychological study to determine the behavioural consequences of damage to the perirhinal cortex. Using a stimulus creation approach similar to that of our past work, we developed a discrimination task in which visual and conceptual similarity were not linked. In this task, participants were asked to match the top referent word (e.g., “gun”) to either the most visually similar object concept (e.g., “hairdryer”) or the most conceptually similar object concept (e.g., “bullet”). We found that focal damage to the perirhinal cortex did not impair conceptual or visual knowledge in the absence of interference (that is, when the competing target was removed), but it did impair performance when the choices competed on both conceptual and visual semantic features. These results reveal a novel semantic deficit in a case with perirhinal damage, suggesting that this structure is necessary to dynamically emphasize task-relevant object features.
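The structure of this discrimination task can also be sketched in code. The class, field names, and example items below are hypothetical placeholders rather than the study's actual materials; the sketch only illustrates how the interference manipulation works: on high-interference trials the task-irrelevant match appears among the choices and competes with the correct answer, whereas removing that competing target eliminates the feature competition.

```python
from dataclasses import dataclass

# Illustrative trial structure for the matching task described above.
# Field names and example items are hypothetical, not the study's stimuli.
@dataclass
class Trial:
    referent: str          # word shown at the top, e.g. "gun"
    visual_match: str      # looks like the referent, e.g. "hairdryer"
    conceptual_match: str  # shares the referent's function, e.g. "bullet"
    instruction: str       # "visual" or "conceptual" matching judgment
    with_competitor: bool  # True = competing target also shown (interference)

def choices(trial: Trial) -> list[str]:
    """Return the choice words presented on this trial."""
    target = (trial.visual_match if trial.instruction == "visual"
              else trial.conceptual_match)
    if trial.with_competitor:
        # Interference condition: the task-irrelevant match competes.
        lure = (trial.conceptual_match if trial.instruction == "visual"
                else trial.visual_match)
        return [target, lure]
    # No-interference condition: the competing target is removed.
    return [target, "unrelated filler"]

# Example: a conceptual-matching trial with visual interference.
trial = Trial("gun", "hairdryer", "bullet",
              instruction="conceptual", with_competitor=True)
print(choices(trial))  # ['bullet', 'hairdryer']
```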

We live in a complex environment that bombards us with interfering sensory inputs across many domains. The ability to integrate information from multiple modalities and make sense of the sensory cacophony is critical to survival. The data I presented in my talk indicate that the perirhinal cortex represents both the visual and conceptual attributes of objects, and it does so in a flexible manner that allows for seamless discrimination between objects whose visual and conceptual features are orthogonal. When these representations are damaged, as occurs in medial temporal lobe (MTL) amnesia, we observe vulnerability to interference from competing visual and conceptual information because the representations can no longer be reshaped in a task-appropriate manner. In sum, at the level of the perirhinal cortex, it may not be possible to fully disentangle visual and conceptual processing. That is to say, for the computational operations conducted in the perirhinal cortex, seeing implies knowing.