Using computer modelling to understand fear learning

The complexity of the brain calls for approaches that complement biological studies.

The study

Since the 1990s, the study of fear has been considered an ideal model for understanding how associative learning occurs. The experiments are relatively simple to conduct, and it was initially believed that the underlying neural circuitry was also simple. It turns out, however, that the circuitry is far more complicated than first envisioned. This complexity is a major incentive for using computational modelling to understand how fear learning occurs.

In a typical laboratory recreation of fear learning, an auditory tone is played prior to the delivery of a mild electric shock to the foot. The animal soon learns to associate the tone with danger, and subsequent presentations of the tone—without the foot shock—become sufficient to evoke fear. The dominant early model for fear learning held that neurons carrying foot shock and tone information converged in the lateral amygdala (LA), where associative learning occurred.
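To make that convergence idea concrete, here is a minimal, purely illustrative rate-based sketch in Python: tone (CS) and shock (US) inputs converge on a single LA unit, and a bounded Hebbian rule strengthens the tone pathway only when the two coincide. Every variable name and parameter value below is hypothetical, chosen for illustration rather than taken from the paper.

```python
# Minimal rate-based sketch of the classic convergence model:
# tone (CS) and foot shock (US) inputs converge on a lateral
# amygdala (LA) unit, and a Hebbian rule strengthens the CS -> LA
# synapse only when presynaptic CS activity coincides with
# US-driven postsynaptic activity. All values are illustrative.

eta = 0.5    # learning rate (hypothetical)
w_cs = 0.0   # plastic tone -> LA weight; starts naive
w_us = 1.0   # fixed shock -> LA weight (the shock is innately aversive)

def la_response(cs, us):
    """LA output as a thresholded sum of weighted inputs."""
    return max(0.0, w_cs * cs + w_us * us)

# Conditioning: repeated tone + foot shock pairings.
for trial in range(5):
    cs, us = 1.0, 1.0                       # tone and shock together
    post = la_response(cs, us)              # coincident postsynaptic drive
    w_cs += eta * cs * post * (1.0 - w_cs)  # bounded Hebbian update
    print(f"trial {trial + 1}: w_cs = {w_cs:.3f}")

# Test: the tone alone now drives the LA unit (a learned fear response).
print("tone-alone response:", la_response(cs=1.0, us=0.0))
```

This single-site, coincidence-driven picture is exactly the formulation that the evidence described next has complicated.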

A host of data now shows that this formulation was a vast oversimplification. Plasticity underlying learning occurs not just in the LA but also upstream and downstream of that site; the absolute requirement for coincident tone- and shock-evoked activity is being questioned; structures beyond the amygdala, such as the hippocampus and prefrontal cortex, critically regulate how fear is learnt and extinguished; and, as with other neural processes, fear learning involves changes at multiple scales of investigation (molecules, synapses, cells, networks, behaviour), making an experimental synthesis impossible with current technologies.

Computational modelling is well suited to managing this unwieldy complexity. Nair and colleagues advocate for what they call biologically based neural circuit modelling (Box 1 in the article). This is a multi-scale model incorporating cell-type-specific ion channel complements, short- and long-term synaptic plasticity, and network connectivity based on experimental data. Where data are sparse, parameters are varied within experimentally observed bounds until the model recreates the phenomenon of interest. Such a model overcomes the problem of scale encountered with current experimental techniques, as all factors can be assessed simultaneously. A good model will reproduce biological phenomena that it was not explicitly set up to recreate, and can also generate mechanistic insights for subsequent experimental testing. In this way, a two-way dialogue between experiment and model can drive the field forward.
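The "vary sparse parameters within experimentally observed bounds" step can be sketched as a simple accept-reject search. The sketch below is illustrative only: the bounds, the stand-in model, and the target behaviour are all hypothetical, and in the real workflow the stand-in run_model would be the full biophysical circuit simulation judged against recorded physiology and behaviour.

```python
import random

# Illustrative parameter search: sample candidate parameter sets
# within experimentally observed bounds, run the model, and keep
# the sets that reproduce a target phenomenon (here, a
# conditioned-response magnitude). Everything here is hypothetical.

BOUNDS = {                     # (low, high), from hypothetical data
    "g_K":       (5.0, 20.0),  # potassium conductance (mS/cm^2)
    "ltp_rate":  (0.01, 0.1),  # long-term potentiation rate
    "p_connect": (0.05, 0.2),  # network connection probability
}

def run_model(params):
    """Stand-in for the full circuit simulation; returns a
    conditioned-response magnitude for the sampled parameters."""
    return (params["ltp_rate"] * 100 * params["p_connect"]
            / (params["g_K"] / 10))

def sample_params():
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in BOUNDS.items()}

target, tolerance = 0.5, 0.1   # phenomenon to reproduce (hypothetical)
accepted = [p for p in (sample_params() for _ in range(10_000))
            if abs(run_model(p) - target) < tolerance]
print(f"{len(accepted)} parameter sets reproduce the target behaviour")
```

A useful by-product of this approach is that the whole family of accepted parameter sets, not just a single best fit, can be examined to see which parameters the phenomenon actually constrains.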

The bigger picture

Nair and colleagues argue that biologically based computational models can help produce a more complete understanding of fear learning, bridging data across multiple scales and helping to make sense of a complex system. If we are to understand learning, it certainly won't be through experiments alone. Mathematicians, data scientists and computational modellers will be needed to help interpret the data, develop new theories and provide insights that experiments and intuition alone cannot. Beyond investigating the biological mechanisms, the study of learning will also require us to forge links with policymakers, teachers and parents to implement changes to educational practice, to ensure the rationale for those changes is understood, and to build broad support for them. There's no question it's a big task, and it won't be achieved by working alone.
