Seeing the meaning of visual signals

March 01, 2019

Marieke Mur, Assistant Professor, Department of Psychology

Marieke Mur has joined the Department of Psychology, and the Brain and Mind Institute.

Using fMRI and computational modelling, Mur researches how the brain extracts meaning from incoming visual signals, and how it flexibly integrates that meaning with ongoing behavioural goals to produce appropriate responses. For example, when we are driving, our brain interprets the scene around us: we may recognise pedestrians, other cars, and perhaps a yellow traffic light. How we respond to these objects depends on their meaning and on the current situation and goals: if we are in a hurry, we may decide to drive through; otherwise, we may stop.

Over the past decade, Mur and other researchers have developed novel brain imaging analysis methods for measuring the patterns of activity that different objects elicit in the brain. This work led to the discovery that the visual cortex acts as an object classifier, generating patterns of activity that distinguish biologically meaningful categories of objects. For instance, the visual cortex generates very distinct patterns of activity for animate versus inanimate objects, such as pedestrians versus cars.
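To make the idea concrete, here is a minimal sketch of this kind of pattern analysis, written in Python with scikit-learn; the data are synthetic stand-ins for real voxel responses, and the numbers are illustrative rather than taken from Mur's studies.

```python
# Minimal sketch of multivariate pattern analysis (MVPA): decoding
# object category (animate vs. inanimate) from fMRI response patterns.
# All data here are synthetic stand-ins for real voxel responses.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 100           # object presentations x voxels

# Simulate response patterns: one category shifts a subset of voxels.
labels = rng.integers(0, 2, n_trials)   # 0 = inanimate, 1 = animate
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.8       # animate objects drive some voxels more

# If category information is present in the patterns, a linear classifier
# should decode it above chance (0.5) under cross-validation.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```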

Mur’s recent work suggests that the visual cortex is especially sensitive to features of a visual stimulus that provide clues about category membership. As an example, Mur points to an apple and a face: both are round, but the presence of facial features leads the brain to interpret the stimulus as a face and to respond accordingly.

Precisely how the brain accomplishes this feat remains unclear. “We know where it is happening, and we know what is happening, but now we want to know how it is happening,” said Mur.

To understand how this process occurs, Mur uses deep neural networks. Deep neural networks are computational models of visual processing that are loosely inspired by the brain. They consist of units (‘neurons’) that are organised in multiple layers (‘brain regions’) and connected by weights that can be modified through training (‘synapses’). Deep nets can be trained to categorise objects, and once trained can assign new object images to existing categories. In recent years, with increased computational power and many images available for training, deep nets have come to rival human observers at object recognition. They are therefore increasingly used for object recognition in computer vision and artificial intelligence applications, such as self-driving cars.
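As an illustration, below is a minimal deep net of the kind described, sketched in PyTorch; the library choice, layer sizes, and synthetic images are illustrative assumptions, not details of Mur's own models.

```python
# Minimal sketch of a deep neural network for object categorisation.
import torch
import torch.nn as nn

# Units ('neurons') organised in layers ('brain regions'), connected by
# trainable weights ('synapses').
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 256),  # input: a 32x32 RGB image
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),            # output: scores for 10 object categories
)

# One training step on a synthetic batch of labelled images.
images = torch.randn(8, 3, 32, 32)      # stand-in for real photos
targets = torch.randint(0, 10, (8,))    # stand-in category labels
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

loss = loss_fn(model(images), targets)  # how wrong are the predictions?
optimizer.zero_grad()
loss.backward()                         # compute how to adjust the 'synapses'...
optimizer.step()                        # ...and adjust them to reduce the error
```

Networks used in vision research are far deeper and are trained on millions of real photographs, but the ingredients are the same: layered units, trainable weights, and an error signal that adjusts them.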

Researchers have begun to use deep nets to understand how the brain transforms incoming visual signals into meaningful object representations. Deep nets may provide a good model of the neural computations that support this transformation. To determine how good a model deep nets are, researchers assess whether their internal representations match the object representations computed by the brain.
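One widely used way to make such comparisons is representational similarity analysis: characterise each system by how similarly it responds to pairs of images, then correlate those similarity structures. The sketch below assumes this approach, with synthetic data standing in for real brain recordings and model activations.

```python
# Sketch of representational similarity analysis (RSA): build a
# representational dissimilarity matrix (RDM) from each system's responses
# to the same images, then correlate the two RDMs. Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images = 50

brain_patterns = rng.standard_normal((n_images, 100))     # voxels per image
model_activations = (brain_patterns @ rng.standard_normal((100, 500))
                     + 0.5 * rng.standard_normal((n_images, 500)))  # a model layer

# Pairwise dissimilarities between image responses, within each system.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_activations, metric="correlation")

# A high rank correlation means the model treats the images as similar or
# different in the same way the brain does.
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"model-brain RDM correlation: {rho:.2f}")
```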

“We want to see if it is the same so we can build a plausible model of how visual processing in humans works,” said Mur. “The better the model, the better we understand how the brain works.”

Mur plans to work on these questions with researchers from the Department of Psychology and other faculties, including Jörn Diedrichsen, a professor in the Department of Computer Science.

“Western is a great place to be, because there is an institutional focus on neuroscience and a lot of opportunity for interdisciplinary work,” said Mur. “I want to bridge cognitive neuroscience and computer science.”

In her future research, Mur will examine how the brain flexibly integrates visual meanings with behavioural goals. Much of this processing takes place in the frontal cortex.

“My research is extending to include higher-order cognition,” said Mur, “and that is another exciting reason to be at Western.”