In dark, researchers get a glimpse of how brain sees
Internal model makes the brain more efficient than computers at interpreting images
Imagine browsing through a friend’s holiday pictures: a trekking path crossing a stream in the Alps, a snapshot of a relaxed afternoon in a Budapest spa, and, among the other pleasant and colorful images, a pitch-black photo.
Recognizing the subject of these photos is a complex but fascinating process -- textures of light and color reach the eyes, where they are converted to electrical impulses by receptors on the retina. From there, the information is sent to the visual cortex, the part of the brain dedicated to visual processing. Neurons in the visual cortex change their response patterns according to the content of the visual input; for example, the first picture would evoke patterns of activity that correspond to mountains, trees, and alpine villages.
The one black photo, however, proves thought-provoking. Naively, one would expect very little activity in the visual cortex without visual stimulation. Surprisingly, however, when neural activity in the dark (so-called spontaneous activity) was first analyzed, researchers found very strong, coordinated neural responses. This observation puzzled neuroscientists: Why would the brain waste precious resources representing a dark image? Does the stimulus matter to the brain at all? The answer to these questions turns out to concern the role of the visual system in processing not what there is in an image, but rather what there may be.
To gain an intuition for why this is the case, one must consider that, despite our effortless ability to interpret an image, the image alone is not enough for us to understand its content.
A closer examination of the first photograph reveals many difficult problems that must be solved: The hiker in the foreground is taller and occupies a larger area than the houses and trees in the background, yet in reality she is smaller than they are. Her dog is partially hidden by one of her legs and is thus visually split in two, yet we perceive it as a single animal. These and countless other examples reveal that our visual system must complement the information contained in the picture with an internal model of the world, in order to fill in the missing information and find the interpretations of the image that are most consistent with reality.
In a study published Jan. 7 in the journal Science, we and our colleagues Jozsef Fiser and Mate Lengyel, from Brandeis University and the University of Cambridge, UK, proposed that neural activity in the dark might be a hallmark of this internal model. Intuitively, we might imagine gradually reducing the brightness of the photograph. As the details begin to fade, the visual system must rely increasingly on its internal model to make sense of the information it receives, until most of its activity is dominated by the internal model.
We reasoned that if this is the explanation behind spontaneous activity, the patterns of neural activity in the dark should correspond to possible contents of images, and should thus be similar to the patterns of neural activity evoked by natural images, but not to those evoked by stimuli that are very unlikely in real situations.
In our study, we analyzed neural activity in the primary visual cortex -- the first stage of visual processing in the cortex -- of ferrets at different ages that were either sitting in darkness or watching natural scenes and artificial patterns on a computer screen. In young animals, which had little or no experience of the world, neural activity in darkness was found to be very different from visually evoked neural activity. As the age of the animals increased, however, spontaneous activity became increasingly similar to neural activity recorded in response to visual stimuli. Moreover, spontaneous activity was more similar to the responses to natural scenes than to the artificial stimuli, just as we predicted. Thus, the activity of neurons in the dark is strong and complex because the brain considers possible natural scenes that are compatible with the blank, featureless input.
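The kind of comparison described above can be illustrated with a toy computation. This is only a simplified sketch, not the study's actual analysis pipeline: it assumes hypothetical binarized activity snapshots (rows of 0s and 1s marking which neurons fired), estimates a distribution over those patterns for each condition, and measures how far the "spontaneous" distribution is from each "evoked" one with a Kullback-Leibler divergence. All names and the random toy data are invented for illustration.

```python
import numpy as np

def pattern_distribution(patterns):
    """Estimate a discrete distribution over binarized activity patterns.

    `patterns` is an (n_samples, n_neurons) 0/1 array; each row is one
    snapshot of which neurons were active."""
    keys, counts = np.unique(patterns, axis=0, return_counts=True)
    probs = counts / counts.sum()
    return {tuple(k): p for k, p in zip(keys, probs)}

def kl_divergence(p, q, eps=1e-6):
    """KL(p || q) over the union of observed patterns, with a small
    floor `eps` so patterns seen in one condition only stay finite."""
    support = set(p) | set(q)
    return sum(p.get(s, eps) * np.log(p.get(s, eps) / q.get(s, eps))
               for s in support)

# Toy data: spontaneous firing statistics resemble the responses to
# "natural" stimuli more closely than those to "artificial" ones.
rng = np.random.default_rng(0)
natural     = rng.binomial(1, 0.60, size=(500, 4))
artificial  = rng.binomial(1, 0.20, size=(500, 4))
spontaneous = rng.binomial(1, 0.55, size=(500, 4))

d_nat = kl_divergence(pattern_distribution(spontaneous),
                      pattern_distribution(natural))
d_art = kl_divergence(pattern_distribution(spontaneous),
                      pattern_distribution(artificial))
print(d_nat < d_art)  # prints True: spontaneous is closer to natural
```

In this simplified picture, the developmental result corresponds to the divergence between spontaneous and natural-scene distributions shrinking with age, while the divergence from artificial-stimulus distributions stays large.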
The internal model is what makes the brain so much more efficient than computers at interpreting complex images. Progress in understanding spontaneous activity promises to help explain how the brain makes sense of images so efficiently. By looking at neural activity in the dark, we may come to understand some of the ingredients that make all the difference between human and machine.
Pietro Berkes is a postdoctoral research fellow in the Fiser Lab at Brandeis University's Volen Center for Complex Systems. Gergo Orban is a researcher in the Computational and Biological Learning Lab at the University of Cambridge.