Heteroclinic networks – probing the hidden influence of unstable fixed points

The stable fixed points of a dynamical system attract obvious interest as potential resting points, or final states. Without continued forcing, for example, a physical pendulum will ultimately end up hanging motionless in the downward position. The inverted position for a pendulum – an unstable fixed point – would seem to be of less practical interest, as only persistent external forcing can keep the pendulum nearly upright. Similarly, no one anticipates finding a pencil balanced upright on its tip.
Yet in many systems, unstable fixed points still exert a subtle but powerful influence over the character of more complex trajectories. These points, together with the special trajectories that link them – known as heteroclinic networks – may offer clues to understanding the behaviour of systems that spend long periods in one of several quasi-stable states, but occasionally make rapid transitions between them. In recent research with several colleagues, LML Fellow Claire Postlethwaite of the University of Auckland has explored the power of this framework in understanding problems in areas ranging from population dynamics and noisy computing to the dynamics of the human brain.
Unstable fixed points typically have both stable and unstable manifolds. Generically, trajectories near the fixed point approach it along some directions, and move away along others. Trajectories of special importance move precisely on the unstable manifold of one unstable fixed point, and then approach another fixed point along its stable manifold. Such “heteroclinic orbits” link pairs of unstable fixed points, and in a system with many such pairs, heteroclinic orbits may form a pathway, A → B → C → … → A, beginning and ending on the same point. This might be called a heteroclinic network. Near such a path, there exists an infinite set of system trajectories that pass repeatedly close to the fixed points A, B, C, etc., and dwell near each one for a long time before eventually moving on. The dynamics are intermittent: trajectories linger near one quasi-stable state and then jump rapidly to the next.
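To make this concrete, here is a minimal numerical sketch in Python of a standard textbook example, the May-Leonard model of three competing species (closely related to the Rock-Scissors-Paper model discussed below; the parameter values are chosen purely for illustration and are not taken from the papers). Its three single-species equilibria are linked by exactly such a cycle, and integrating the equations shows the trajectory dwelling near each equilibrium for progressively longer times.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumed): 0 < alpha < 1 < beta and alpha + beta > 2
# make the heteroclinic cycle attracting.
alpha, beta = 0.8, 1.3

def may_leonard(t, u):
    x, y, z = u
    return [x * (1 - x - alpha * y - beta * z),
            y * (1 - y - alpha * z - beta * x),
            z * (1 - z - alpha * x - beta * y)]

# Integrate from a point well away from the equilibria, keeping a dense output.
sol = solve_ivp(may_leonard, (0, 250), [0.6, 0.2, 0.2],
                dense_output=True, rtol=1e-9, atol=1e-13)

t = np.linspace(0, 250, 2500)
dominant = np.argmax(sol.sol(t), axis=0)                 # largest species at each time
dwell = np.diff(t[np.flatnonzero(np.diff(dominant))])    # time between successive switches
print("successive dwell times:", np.round(dwell, 1))
# Each visit to a single-species equilibrium lasts roughly 1.5 times longer than the
# previous one (the ratio of the contracting rate beta - 1 = 0.3 to the expanding rate
# 1 - alpha = 0.2), which is the hallmark of an attracting heteroclinic cycle.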
In a paper last year, Postlethwaite and Alastair Rucklidge (University of Leeds) noted that this concept is useful in the analysis of microbial populations with a cyclic structure of dominance, akin to the simple game Rock-Scissors-Paper. In the game, Scissors cut Paper, Paper wraps Rock and Rock blunts Scissors. Cyclic dominance of this kind arises among some competing populations of three species of microbes, and also between three colour-morphs of a particular species of lizard. A dynamical model of such competition has three fixed points, each with only one species surviving. For species able to move in space, analysis finds the possibility of spiral waves that travel outward from a point. As a wave passes through a region, each of the three microbial species comes to dominate in turn, before giving way to the next. Using the perspective of heteroclinic networks, Postlethwaite and Rucklidge were able to derive results for the waves’ wavelength and speed.
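The following sketch, in the same illustrative spirit, adds diffusion to the toy model in one space dimension (the model form, parameters and boundary conditions are assumptions made here, not details taken from the paper). Starting from three bands, each dominated by one species, every interface advances as the resident species is invaded by the species that beats it, and the code estimates the resulting front speed.

import numpy as np

# Assumed set-up for illustration: 1000 grid points with spacing 0.5 (domain length 500),
# periodic boundaries, a common diffusion coefficient D, explicit Euler time stepping.
N, dx = 1000, 0.5
dt, steps = 0.05, 6000            # total time 300
D = 0.1
alpha, beta = 0.8, 1.3            # same competition strengths as in the sketch above

u = np.zeros((3, N))
third = N // 3
u[0, :third] = 1.0                # "Rock" dominates the left band
u[1, third:2 * third] = 1.0       # "Scissors" the middle band
u[2, 2 * third:] = 1.0            # "Paper" the right band

def step(u):
    x, y, z = u
    growth = np.array([x * (1 - x - alpha * y - beta * z),
                       y * (1 - y - alpha * z - beta * x),
                       z * (1 - z - alpha * x - beta * y)])
    lap = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / dx**2
    return u + dt * (growth + D * lap)

def front_01(u):
    # position of the front where species 0 hands over to species 1
    dom = np.argmax(u, axis=0)
    return np.flatnonzero((dom[:-1] == 0) & (dom[1:] == 1))[0] * dx

for k in range(steps):
    if k == steps // 2:
        p_half = front_01(u)      # front position at half time
    u = step(u)

speed = (front_01(u) - p_half) / (steps // 2 * dt)
print("measured front speed:", round(speed, 3))
# The banded pattern drifts at a roughly constant speed, of the order of the pulled-front
# estimate 2*sqrt(D*(1 - alpha)), about 0.28 for these assumed parameters.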
But applications of this idea may be broader. In a paper published earlier this year, Peter Ashwin (University of Exeter) and Postlethwaite showed how the notion of heteroclinic orbits may prove useful in modelling computation within the brain, where neurons carry out computations in an environment that is far noisier than today’s digital computing systems. We understand the mathematics and engineering of finite-state computing machines very well, as we do many aspects of the central nervous systems of animals. Yet it remains unclear how neural systems make finite-state computations in a reliable way, while remaining sensitive to very weak environmental inputs. In their work, Ashwin and Postlethwaite developed neurally-inspired networks composed of simple nonlinear elements that can be made to function as a finite-state computational machine, and showed that in one regime the dynamics of the network were extremely sensitive to inputs, responding to arbitrarily small perturbations. Surprisingly, in some regimes, increasing the strength of noise actually leads to more, rather than less, accurate computation, suggesting that noisy network attractors may be useful for understanding neural networks.
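As a hedged illustration of how noise changes the picture (again using the toy cycle from the first sketch, not the network-attractor construction of Ashwin and Postlethwaite), the sketch below adds small additive noise to the deterministic cycle. The noise kicks trajectories off the invariant boundary planes, so instead of ever-slower switching the system steps through its three states with a roughly constant dwell time set by the noise amplitude.

import numpy as np

alpha, beta = 0.8, 1.3
sigma = 1e-6                      # noise amplitude (assumed value)
dt, steps = 0.01, 200000          # total time 2000
rng = np.random.default_rng(1)

u = np.array([0.6, 0.2, 0.2])
dominant = np.empty(steps, dtype=int)
for k in range(steps):
    x, y, z = u
    f = np.array([x * (1 - x - alpha * y - beta * z),
                  y * (1 - y - alpha * z - beta * x),
                  z * (1 - z - alpha * x - beta * y)])
    # Euler-Maruyama step with additive noise; the clip keeps populations non-negative.
    u = np.clip(u + dt * f + sigma * np.sqrt(dt) * rng.standard_normal(3), 0.0, None)
    dominant[k] = np.argmax(u)

switch_times = np.flatnonzero(np.diff(dominant)) * dt
print("typical dwell time:", round(float(np.median(np.diff(switch_times))), 1))
# With sigma = 1e-6 the dwell times settle near -ln(sigma) / (1 - alpha), roughly 70 here;
# larger noise gives shorter and more regular dwell times, in contrast to the ever-slower
# switching of the deterministic cycle.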
This groundwork may help pave the way to more specific insights into brain dynamics, which Postlethwaite and Ashwin, together with collaborators, are now pursuing. In 2017 they were awarded a grant to explore modelling cognitive functions as dynamics in systems with heteroclinic networks. The brain, however, is strongly affected by noise, whereas the existing theory of heteroclinic networks is largely deterministic. The overall aim of their current work is to understand how microscopic noise affects the macroscopic properties of heteroclinic network dynamics, and to use this knowledge to construct and evaluate heteroclinic network models of cognitive functions.
Among other things, Postlethwaite and Ashwin hope to model two specific phenomena that look promising as candidates for the approach. One area involves so-called EEG microstates, which appear in EEG data as quasi-stable equilibria of the brain. In resting patients, EEG data consistently show switching between four such microstates. One hope is that the perspective of heteroclinic networks may help to explain what triggers the transitions between these states. Another area of interest is cognitive task switching, which psychologists know causes delays in mental processing. Here, research using the notion of heteroclinic networks may shed light on how such switching costs could be linked to memory-type effects within the network dynamics.
Links:
Papers mentioned are available at
http://iopscience.iop.org/article/10.1209/0295-5075/117/48006/meta
https://ieeexplore.ieee.org/document/8331267/