Andrej Bicanski
  • Home
  • Publications
  • Research
  • Code
  • WRITING
  • CONTACT
  • LINKS

Research


Scientists:   "Why is there so much misunderstanding of science?"
Grad student:   "I want to do some public outreach."
Scientists:   "What a waste of time."



Brief Bio
I obtained a Master's degree in Physics (a "Diplom" in German) from the University of Heidelberg. For my master's thesis I spent a year at the Universitat Pompeu Fabra (UPF) in Barcelona. I obtained my PhD from the École Polytechnique Fédérale de Lausanne (EPFL), and then moved on to work at the Institute of Cognitive Neuroscience (ICN) at University College London. Since September 2021 I have been a Lecturer in Psychology at Newcastle University.



Figure: Human brain with highlighted hippocampus (blue), a brain structure crucial for memory. Source: Wikimedia Commons, Henry Gray (1918) Anatomy of the Human Body (public domain image).
Current Work
My research focuses on the computational mechanisms underlying spatial navigation and spatial memory. I am particularly interested in how behavior and cognition can be related to mechanistic computational models, and how current ideas about spatial memory (e.g. the role of place cells and other spatially selective cell types) can be extended to episodic memory in general. Other topics of interest include motor pattern generation, biologically inspired robotics, and large-scale brain models (in particular computational models of high-level cognition).

See below for descriptions of recent work

- Recognition memory via grid cells
- A neural model of spatial memory and imagery
- Reference frame transformations for head direction (to be added)
- Navigation in cluttered environments (to be added)
- Models of spinal central pattern generators (to be added)



 

Recognition Memory via Grid Cells

Models of face, object, and scene recognition traditionally focus on massively parallel processing of low-level visual features, with higher-order representations emerging at later processing stages. However, visual perception is tightly coupled to eye movements (saccades), which necessarily occur in sequence. That is, the focus of the eyes moves from one part of a stimulus (say, the nose on a face) to another (e.g. the left eye), then another, and so forth. Our recent model shows that grid cells (a type of neuron hitherto studied mainly for its involvement in spatial navigation and memory) could enable the calculation of eye-movement vectors in the service of recognition memory for familiar stimuli. Three ideas/findings underlie this model.

1. Sequences of eye movements in service of perception
The idea that sequences of saccades (rapid target-driven eye movements) might underlie complex pattern recognition has a long history in neuroscience. Though many researchers have worked on the topic over the past decades, I was introduced to the idea years ago by Dietrich Doerner's wonderful book Bauplan für die Seele (Blueprint for the Soul). It is a popular science book in the vein of Braitenberg's Vehicles, though it goes a lot further. Sadly, no English translation exists, but I highly recommend the book to anyone who reads German.

To the best of my knowledge, there was hitherto no explicit neural model of how these computations (hypothesis-driven saccades in the service of recognition memory) might be implemented at the level of single neurons. This is where grid cells come in.

2. Grid cells and vector navigation
Grid cells are so-called 'spatially selective cells': a single cell fires at certain locations in an environment as an animal explores it, thereby signalling position (see grid cell figure). However, a grid cell signals position periodically, firing on a hexagonal lattice of locations in space. Nevertheless, multiple grid cells together can encode positions across a large environment uniquely, despite the periodicity of individual cells.
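As a toy illustration of how a periodic code can nevertheless be unique, the sketch below encodes a 1-D position only by its phase within each of several grid modules. The grid periods are made-up values, not fitted to real entorhinal data: any single module confuses positions one period apart, but the combined phases across modules disambiguate them, much like a residue number system.

```python
import numpy as np

# Hypothetical grid periods (in cm) for three grid modules --
# illustrative values only, not measurements.
SCALES = [30.0, 42.0, 59.0]

def grid_code(position_cm):
    """Phase of the animal's position within each grid module."""
    return np.array([position_cm % s for s in SCALES])

# A single module confuses positions one period apart...
assert np.isclose(grid_code(10.0)[0], grid_code(40.0)[0])
# ...but the combined population code tells them apart.
assert not np.allclose(grid_code(10.0), grid_code(40.0))
```

The combined code only repeats after the (much larger) common period of all modules, which is how a small number of periodic cells can cover a large environment.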

Grid cells probably underlie path integration (the ability to maintain an estimate of one's position as one moves). However, multiple computational studies have also shown that grid cells can be used to calculate movement vectors in various ways. If the grid cell population expresses distinct activity patterns for the current/starting location and for a distant goal location, those two patterns can serve as input to a separate neural network which calculates the distance and direction to the goal.
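The goal-vector readout can be sketched in the same toy 1-D setting: given the grid codes of the start and goal, find the displacement whose predicted phase shifts best match the observed difference between the two codes. The brute-force search below merely stands in for the separate readout network described above; the grid periods and search range are illustrative assumptions.

```python
import numpy as np

SCALES = [30.0, 42.0, 59.0]  # illustrative grid periods in cm

def grid_code(pos):
    """Phase of a 1-D position within each grid module."""
    return np.array([pos % s for s in SCALES])

def decode_displacement(start_code, goal_code, max_range=500.0):
    """Toy stand-in for the vector-readout network: pick the candidate
    displacement whose per-module phase shifts best match the observed
    shift between goal and start codes."""
    candidates = np.arange(-max_range, max_range, 0.5)
    def mismatch(d):
        err = 0.0
        for s, a, b in zip(SCALES, start_code, goal_code):
            shift = (b - a) % s          # observed phase shift in this module
            dm = d % s                   # phase shift predicted by displacement d
            err += min((shift - dm) % s, (dm - shift) % s)  # circular distance
        return err
    return candidates[np.argmin([mismatch(d) for d in candidates])]

# Start at 25 cm, goal at 180 cm: the decoded goal vector is ~155 cm.
vec = decode_displacement(grid_code(25.0), grid_code(180.0))
assert abs(vec - 155.0) < 1.0
```

The point of the sketch is that the displacement is recovered from the two population codes alone, without ever decoding the absolute start or goal positions.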

3. Grid cells in vision
Grid cells are known predominantly from studies of spatial navigation and memory. However, grid cells have recently also been found to be active in visual tasks: rather than responding to the position of an animal in space, they can respond to gaze position, i.e. the location of the focus of the eyes as a visual scene or image is scanned with eye movements.

From spatial to recognition memory via grid cells
These three ideas/findings (sequences of saccades in service of perception, grid cells calculating vectors, grid cells mapping the visual world) together prompted the development of the model. I propose that the visual system focuses on salient features of a stimulus (say, a face) in sequence and maps those sensory features (e.g. the eyes, nose, lips, etc.) to grid cell activity patterns. That is, the locations of these features relative to each other are memorized. This happens in a learning phase, when the model is exposed to a stimulus for the first time. The relational aspect is key (locations are memorized relative to each other), because the distances among facial features are highly specific to each individual and go beyond differences between individual features (e.g. the difference between two noses belonging to two distinct faces). In addition, the mapped features are associated not only with locations in the visual field (via grid cells), but also with a cell which indicates the abstract identity of the stimulus (e.g. one for each stimulus we know). These 'identity neurons' can be thought of as similar to concept cells previously found in the hippocampus.

Eye-movements confirm hypotheses
Once the salient features of a stimulus have been memorized (in the training phase), that stimulus can be recognized at a later time. We assume the first salient feature the visual system focuses on is selected by the attention system. This feature is then compared to the stored features (by sensory neurons). The cell representing the memorized feature that best matches the sensory input activates its associated grid cells (corresponding to the starting point of the next eye movement), together with the associated stimulus identity neuron. That is, at this point the brain has 'formed a belief' about the identity of the stimulus (alongside competing beliefs carried by less activated identity cells). This hypothesis then determines (via the active stimulus identity cell) the next feature to be visited, which is associated with its own grid cell pattern (yielding the endpoint/goal of the next eye movement). The system thus predicts the sensory feature to be found at the endpoint of that eye movement. If the stimulus identity cell which instructed the eye movement remains the most active, the hypothesis has been confirmed, and so on for successive eye movements, until sufficient confidence about the identity of the stimulus is reached.
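This hypothesis-testing loop can be caricatured in a few lines of Python. Everything below (the random feature vectors, the stimulus names, the simple summed-evidence rule) is a hypothetical stand-in for the actual neural model; the sketch is meant only to show the logic of successive fixations confirming or revising an identity belief.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase (toy stand-in) ---
# Each known stimulus is a set of salient features: a sensory vector
# (random here, standing in for visual descriptors) plus its location
# in the stimulus, playing the role of a grid cell activity pattern.
def make_stimulus(n_features=4):
    return [{"vec": rng.normal(size=16), "loc": rng.uniform(0, 1, 2)}
            for _ in range(n_features)]

memory = {"spock": make_stimulus(), "kirk": make_stimulus()}

# --- Recognition phase ---
def recognize(observed, n_fixations=4):
    """Each fixation compares the feature at the current gaze location
    to the stored features, boosts the identity cell of the best-matching
    stimulus, and the winning hypothesis selects the next feature
    (saccade target) to check."""
    identity = {name: 0.0 for name in memory}
    idx = 0  # first fixation: attention picks a feature (here feature 0)
    for _ in range(n_fixations):
        sensed = observed[idx]["vec"]
        for name, feats in memory.items():
            # Evidence: similarity of the sensed input to stored features.
            sims = [float(f["vec"] @ sensed) for f in feats]
            identity[name] += max(sims)
        best = max(identity, key=identity.get)
        # Saccade to the next feature predicted by the leading hypothesis.
        idx = (idx + 1) % len(memory[best])
    return best

assert recognize(memory["spock"]) == "spock"
```

A familiar stimulus keeps confirming its own identity cell at every fixation, so its evidence pulls away from the competing hypotheses, mirroring the identity-cell traces in the figure below.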
Figure: Bauplan für die Seele, Dietrich Doerner; publisher: Rowohlt Taschenbuch Verlag, ISBN-10: 349961193
Figure: Grid cells in medial entorhinal cortex (MEC) exhibit periodic, hexagonally arranged firing fields, originally characterized as spatially selective cells in rodent experiments. Right: spiking locations (red dots) superimposed on a rodent's trajectory during foraging; bottom left: a stereotypical, smoothed firing rate map. Adapted from Barry and Bush 2012, Creative Commons licence.
Figure: Illustration of a rodent performing vector navigation to visit memorized locations.
Figure: Eye-movement trajectories recorded during scanning of a face. Our model proposes that, just like navigation trajectories, saccade trajectories may be encoded by grid cells. Image credit: Wikimedia Commons; Creative Commons license.
Figure: Left: a sequence of saccades (red lines) among salient facial features (centers marked by cyan circles). Right: activity among stimulus identity cells as the number of fixations (saccades) increases. Spock is recognized (dark blue line); other lines represent competing hypotheses (stimulus identity cells for other stimuli, activated by partial sensory matches). Image credit: Spock, public domain image.

 

A Neural Model of Spatial Memory and Imagery

Episodic and Spatial Memory
Recalling life events can be thought of as 're-experiencing' them in imagery. For example, having met someone at the train station, one can later conjure up a mental image of that scene (e.g. the person at a given distance and direction from oneself, against the backdrop of the train station, a train to one's left, etc.). That is, the individual elements of the scene (the person, the train, ...) and their spatial relationships are part of the memory. But how is this process implemented at the level of single neurons and interacting neural ensembles?

A Computational Model of Spatial Memory and Imagery
We have built a computational model that shows how the neuronal activity across multiple brain regions underlying such an experience could be encoded and subsequently used to enable re-imagination of the event. The model provides a mechanistic account of spatial memory and imagery, including cognitive concepts such as ‘episodic future thinking’ and ‘scene-construction’. It also explains how different types of brain damage might differently affect these types of cognition, e.g. producing different aspects of amnesia.

Egocentric-Allocentric Transformations
An important aspect of the model is that when we perceive a scene, our sensory experience is encoded 'egocentrically' by neurons responding to objects according to their location ahead of, to the left of, or to the right of oneself. However, neurons in memory-related brain areas such as the hippocampus represent our location 'allocentrically', i.e. relative to the scene around us, as observed in the activity of place cells, head direction cells, boundary- and object-vector cells, and grid cells.

The model explains how egocentric representations are transformed into allocentric representations which are then memorized in the connections between cells in and around the hippocampus. Importantly, this transformation can also act in reverse. That is, the activity patterns of neurons encoding elements of a scene in egocentric terms are re-instantiated by synaptic connections originating in memory-related brain areas, as opposed to having been driven by sensory inputs during perception. Thus memories can drive imagery of a scene corresponding to the original event. The same mechanism can also be used to imagine scenes from viewpoints that have not been experienced, such as those that might correspond to a future event.
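In two dimensions, the geometric core of such a transformation is a rotation by the current head direction, run bottom-up during perception and top-down during imagery. The sketch below is a minimal caricature of that geometry, not the gain-field circuitry of the actual model.

```python
import numpy as np

def ego_to_allo(ego_vec, head_direction):
    """Bottom-up (perception): rotate an egocentric object vector
    (ahead/left of the agent) by the current head direction to obtain
    its allocentric direction (fixed world axes)."""
    c, s = np.cos(head_direction), np.sin(head_direction)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.asarray(ego_vec)

def allo_to_ego(allo_vec, head_direction):
    """Top-down (imagery): the same transformation run in reverse,
    re-instantiating the egocentric pattern from the allocentric
    memory representation."""
    return ego_to_allo(allo_vec, -head_direction)

# An object 2 m straight ahead of an agent facing some direction.
ego = np.array([2.0, 0.0])
allo = ego_to_allo(ego, np.pi / 2)
# Running the transformation in reverse recovers the egocentric view.
assert np.allclose(allo_to_ego(allo, np.pi / 2), ego)
```

Because the allocentric representation is independent of head direction, feeding a novel head direction into the reverse transformation yields an egocentric "view" from a viewpoint never actually experienced, which is how the model accounts for imagining future or hypothetical scenes.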

 

Reference Frame Transformations for Head Direction

To be added soon