How does the brain navigate? Artificial intelligence can provide new insights

Researchers have developed an artificial intelligence model that recreates how the brain represents space. Among other things, this could lead to smarter robots and a better understanding of memory and navigation.

This article was originally published in forskning.no in Norwegian, and has been translated with Google Gemini and reviewed by a communications advisor.

How do we actually find our way? The answer lies in a group of nerve cells deep inside the brain: so-called place cells and grid cells. Researchers at Simula Research Laboratory have developed neural networks that mimic how such cells can arise – and what they are used for.

"When we move in an environment, certain brain cells are only activated in specific places. This is what we have tried to model to better understand what the brain does when we navigate," explains Mikkel Lepperød.

The research, carried out by Markus Pettersen, Mikkel Lepperød (both Simula) and Frederik Rogge (UiO), was recently published at NeurIPS, the world's leading conference in artificial intelligence and machine learning.

From experiments on rats to computer models

The idea that the brain creates mental maps dates back to the 1940s. Experiments on rats at the time showed that animals not only react to stimuli, but also form a spatial understanding that allows them to find shortcuts.

Later, John O'Keefe's discovery of place cells and the Moser couple's discovery of grid cells were recognized with the 2014 Nobel Prize in Physiology or Medicine. The Mosers' groundbreaking work has spread nationally and internationally, and researchers who worked closely with them – such as Torkel Hafting and Marianne Fyhn – have continued and extended the findings.

Place cells in the hippocampus typically activate only when one is in a specific place in an environment – for example, when you are sitting on the sofa in your living room. The brain contains hundreds or thousands of such cells, which are believed to collectively form a kind of cognitive map of the world around you.

Research has shown that they can also react to contextual signals such as smell, and that they can "remap" – that is, swap out the cognitive map. Nevertheless, much remains unknown about why these cells develop and fire precisely as they do.

Lepperød was trained at the Hafting-Fyhn lab at UiO; Hafting and Fyhn were involved in the discovery of grid cells at the Moser lab at NTNU. He now continues this line of work in artificial networks at Simula.

"It is easier to observe the patterns and behavior of these cells in artificial networks. In the brain, you will always be able to measure that they are present, but not necessarily what they are actually used for," says Lepperød.

The observations from the model are compared with experimental data from brain research.

"The computer and the brain seem to solve the same task in the same way. Even when the machine has not received clear instructions on this."

Recreating the brain's place cells – with some surprises

The researchers chose an unconventional method. Instead of programming in known functions from the brain, they gave a neural network a simple navigation task: "The model 'walks' in a simulated space. The only task is to keep track of its location," explains Pettersen.

They have trained the network to become really good at "guessing" its position in a specific way.

"Places that are close to each other in space activate 'brain' states that are similar to each other – places that are far from each other activate states that are different from each other," says Pettersen.
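The objective Pettersen describes can be sketched in a few lines. The following toy example is my own construction for illustration, not the authors' actual architecture: a tiny network maps positions to internal states, and gradient descent pushes the pairwise distances between states to match the pairwise distances between positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the distance-matching objective (illustrative only,
# not the authors' actual network): nearby positions should yield
# similar internal states, distant positions dissimilar ones.
positions = rng.uniform(0, 1, size=(40, 2))     # agent locations in a unit room
W = rng.normal(0, 0.1, size=(2, 16))            # one small "representation" layer

def represent(pos, weights):
    return np.tanh(pos @ weights)               # internal state per position

def pairwise_dist(x):
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1) + 1e-9)

def loss(weights):
    d_space = pairwise_dist(positions)          # true distances in the room
    d_state = pairwise_dist(represent(positions, weights))
    return ((d_state - d_space) ** 2).mean()    # match state to space

loss_before = loss(W)

# Crude finite-difference gradient descent, enough to show the idea works.
eps, lr = 1e-4, 0.5
for _ in range(100):
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (loss(Wp) - loss(W)) / eps
    W -= lr * grad

loss_after = loss(W)   # clearly below loss_before after training
```

Driving this loss down is what forces the network's internal states to mirror the geometry of the room, which is the property place-cell-like units can then emerge from.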

The really interesting thing was what emerged during this process: an internal representation in the AI model that strongly resembles the place cells in the brain.

"This is how place cells in the brain also behave. The results suggest that these cells are a natural solution for keeping track of one's location, based on distances."

In addition to explaining classical properties of place cells, the model also reproduced properties that are less intuitive.

"For example, we found cells that were not only active in one place, but in several places – as if the brain is saying 'you are here – or here'. That might sound ambiguous, but it makes sense in the model," he says.

The position is not represented in any single cell but is spread across many hundreds of cells, each encoding a small part of the position estimate. Read out across the whole population at once, it therefore becomes completely clear where in the environment one is.
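This kind of population code can be illustrated with a small simulation. This is my own sketch with assumed Gaussian tuning curves, not the paper's model: each simulated cell fires most strongly near its preferred location, so any single cell's activity is ambiguous, but averaging the cell centers weighted by the whole population's activity recovers the position.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative population code (assumed Gaussian tuning, not the paper's
# model): no single cell knows the position, but the population does.
n_cells = 300
centers = rng.uniform(0, 1, size=(n_cells, 2))   # each cell's preferred spot
width = 0.1                                      # tuning-curve width

def activity(pos):
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))        # Gaussian firing profile

def decode(act):
    # Population-vector readout: activity-weighted average of cell centers.
    return (act[:, None] * centers).sum(axis=0) / act.sum()

true_pos = np.array([0.3, 0.7])
estimate = decode(activity(true_pos))
error = np.linalg.norm(estimate - true_pos)      # small: the population pins it down
```

A readout like this also tolerates noise or ambiguity in individual cells, including cells with several firing fields, because each cell contributes only a small vote.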

AI that navigates without training

One of the model's most exciting properties is that it can adapt to entirely new environments without additional training – just as the brain can navigate a new environment from the very first moment.

"Just as you can add any numbers after learning the general rule for addition, the model has learned a general understanding that can be reused to navigate in other spaces," Lepperød explains.

The robot vacuum cleaner and the cognitive map

So what is the benefit of such a cognitive map? It means that one can find shortcuts and form knowledge that is also useful in new, unknown spaces.

Imagine a robot vacuum cleaner trained to clean specific rooms. Then you renovate, and one of the rooms changes shape.

"A traditional robot trained on specific instructions will become confused. But one that uses a 'cognitive map' will understand that 'OK, here's a new wall, but I know there's a room behind it'. Then it can find shortcuts completely on its own," says Lepperød.

More than just spatial navigation

Although the model is designed to mimic spatial navigation, the principle can probably be transferred to other areas.

"There is much to suggest that the brain uses the same structures to navigate in more abstract spaces, such as memories, concepts or language."

Just as one can create a mental map of a house, one can also create a "mind map" of concepts that are similar to each other. This makes the model relevant for several fields – from memory research to the development of better, more generalizing AI systems.

The next step is to open the black box

An important goal for the researchers has been interpretability. Until now, the model has been a form of "black box" where the researchers do not have full control over what happens inside. Now the team is working on making the model completely transparent.

"We believe we can create a completely understandable version of the model – a mathematical framework where we know all the building blocks. This will be the next step," says Lepperød.

If they achieve that, they can probably develop models that do not require any form of training.

At the same time, they are investigating whether the model can be used for more classical AI tasks, such as image recognition or language understanding – especially where distance and similarity between concepts play a role.

References:

Markus Pettersen, Frederik Rogge, Mikkel Elle Lepperød: Learning Place Cell Representations and Context-Dependent Remapping. Advances in Neural Information Processing Systems 37, 2024.
