Summer School 2019 Projects

There are 10 research projects taking place at this year’s summer school.

  1. Social balance and polarization
  2. Noise-induced bifurcations
  3. Anderson localization on random graphs
  4. Analysis of interactions among discrete events with the nonparametric Hawkes self-exciting model
  5. Optimal leverage in high frequency trading data
  6. Stochastic thermodynamics of Active Non-Markovian Processes
  7. Anomalous diffusion in deterministic and stochastic dynamics
  8. Quantifying Ignorance – Evaluation of Forecasters using Bets
  9. Inequality, poverty, mobility and mixing
  10. A dynamical system model of atmospheric mid-latitude jet dynamics

Project 1. Social balance and polarization
Supervisors: Fabio Caccioli, Imre Kondor and Matteo Marsili

Social balance theory posits that agents (human beings, groups or states) strive to eliminate imbalanced arrangements in their relationships. An imbalanced arrangement among, say, three agents is one where one of them has two friends who detest each other, or where all three are mutually hostile. It often happens that in such a situation one of the three links flips over, resulting in a balanced arrangement. If we assume, for simplicity, that the agents are placed at the nodes of a complete graph (everybody knows everybody else) and the signed links of the graph represent their relationships, then the gradual elimination of imbalanced triangles eventually leads to a final arrangement where either everybody has a friendly relationship with everybody else, or the agents organize themselves into two cliques or factions, such that within a given faction every agent is a friend of every other member, but they are enemies of each member of the other clique. One can ask interesting questions already at this level. For example: how does the final distribution of agents between the two opposing cliques depend on the initial distribution of friendly/hostile relationships? What is the probability of ending up with two nearly equal groups, as often observed in present-day societies?

Up to now we have assumed that the dynamics of the links is autonomous: changes are driven solely by the configuration of the links themselves. Let us now assign a more active role to the agents: assume they are confronted by a binary choice, as in a national election in a two-party system, or in a referendum on a clear-cut alternative. According to recent experience, an individual’s vote will significantly influence her relationships: she may revise her links to become friends with like-minded citizens, and turn hostile towards supporters of the opposing view. Democracies are not supposed to work like this, but it is obvious that in some countries such divisions cut through friendships and even families in today’s political atmosphere. If such a mechanism prevails, society will immediately fall into two hostile groups. Of course, this mechanism can act less decisively, merely modifying rather than completely determining the new system of relationships, with the final outcome depending on the relative strength of the initial relationships and the influence of political division.

There are other important factors at play. Individuals may have personal convictions, family traditions, etc., which may influence or even determine which way they vote, but can also serve to shield the individual against the pressure from the other agents.

Studying models built along similar lines may help one conceptualize the present-day polarization of societies, but on a more general level they may provide simple examples of networks which not only transmit interactions between the nodes, but also become reorganized as a result of these interactions.

The project aims to test the behaviour of these simple social balance models on moderate size networks by numerical simulation. Accordingly, the successful candidate(s) will have a good knowledge of coding, and prior experience with Monte Carlo simulations will be highly valued.
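
A minimal numerical sketch of the basic triangle-flip dynamics is given below (Python with numpy). The ±1 link initialization, the update rule of picking a random triangle and flipping one of its links if imbalanced, and all parameter values are illustrative assumptions in the spirit of refs. 2-3, not a prescription of the model to be studied.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 50                       # number of agents (nodes of the complete graph)
    p_friend = 0.5               # probability that an initial link is friendly (+1)

    # signed adjacency matrix: +1 friendly, -1 hostile (diagonal unused)
    S = np.where(rng.random((N, N)) < p_friend, 1, -1)
    S = np.triu(S, 1)
    S = S + S.T

    def n_imbalanced(S):
        """Count triangles whose three links have a negative product."""
        count = 0
        for i in range(N):
            for j in range(i + 1, N):
                for k in range(j + 1, N):
                    if S[i, j] * S[j, k] * S[i, k] < 0:
                        count += 1
        return count

    # local triad dynamics: pick a random triangle; if it is imbalanced,
    # flip one of its three links at random (this balances that triangle,
    # but may create imbalance in the triangles sharing the flipped link)
    for _ in range(200000):
        i, j, k = rng.choice(N, size=3, replace=False)
        if S[i, j] * S[j, k] * S[i, k] < 0:
            a, b = [(i, j), (j, k), (i, k)][rng.integers(3)]
            S[a, b] *= -1
            S[b, a] *= -1

    print("imbalanced triangles left:", n_imbalanced(S))
    if n_imbalanced(S) == 0:                 # balanced state: read off the two factions
        faction_of_0 = 1 + int(np.sum(S[0] > 0))
        print("faction sizes:", faction_of_0, "and", N - faction_of_0)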

Bibliography
1. Norman P. Hummon, Patrick Doreian: Some dynamics of social balance processes: bringing Heider back into balance theory, Social Networks 25 (2003) 17–49
2. T. Antal, P. L. Krapivsky, and S. Redner: Dynamics of social balance on networks: Phys. Rev. E (2005) 72, 036121
3. T. Antal, P.L. Krapivsky, S. Redner: Social balance on networks: The dynamics of friendship and enmity, Physica D 224 (2006) 130–136
4. G.C.M.A. Ehrhardt, M. Marsili, F. Vega-Redondo: Phenomenological models of socioeconomic network dynamics, Phys. Rev. E 74, 036106 (2006)
5. Seth A. Marvel, Steven H. Strogatz, and Jon M. Kleinberg: Energy Landscape of Social Balance, PRL (2009) 103, 198701
6. Seth A. Marvel, Jon Kleinberg, Robert D. Kleinberg, and Steven H. Strogatz: Continuous-time model of structural balance, PNAS, (2011) 108 (5) 1771-1776
7. M. Sadilek, P. Klimek, S. Thurner: Asocial balance – how your friends determine your enemies, J. Comput. Soc. Sc. (2018) 1:227–239


Project 2. Noise-induced bifurcations
Supervisors: Jeroen Lamb, Stefano Luzzatto, and Yuzuru Sato

Random dynamical systems are characterized by equations of motion that have some dependence on a probabilistic process. Their features are to be understood by balancing probabilistic with nonlinear deterministic insights. Of particular interest are bifurcations that arise as a function of changes in the strength or amplitude of the noise. The project will address one or more simple “toy” examples at the boundary of current understanding.

For instance, Matsumoto and Tsuda [1] described an example of a one-dimensional dynamical system which, once the noise amplitude passes a critical threshold, displays a transition from chaos (positive Lyapunov exponent) to order (negative Lyapunov exponent). The dynamical mechanism that causes this transition remains to be understood. Also, Sato et al. [2] identified a two-step transition from order to chaos in a random logistic map with varying noise amplitude. There remain various questions concerning a more differentiated description of the dynamics in the intermediate phase between order (uniformly attracting random dynamics) and chaos (characterized by a positive Lyapunov exponent causing sensitive dependence on initial conditions).
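
As a warm-up, the Lyapunov exponent of a noisy logistic map can be estimated numerically. The sketch below is a minimal Python example under assumptions of our own (bounded uniform additive noise, clipping at the interval boundaries, illustrative parameter values); the precise noise models and parameters used in [1, 2] differ.

    import numpy as np

    rng = np.random.default_rng(1)

    def lyapunov(a, eps, n_iter=100000, n_transient=1000):
        """Finite-time Lyapunov exponent of x_{n+1} = a x_n (1 - x_n) + eps xi_n,
        with xi_n uniform on [-1, 1] and the orbit clipped back into [0, 1]."""
        x = rng.random()
        lam = 0.0
        for n in range(n_iter + n_transient):
            x = a * x * (1.0 - x) + eps * rng.uniform(-1.0, 1.0)
            x = min(max(x, 0.0), 1.0)              # crude handling of boundary escapes
            if n >= n_transient:
                lam += np.log(abs(a * (1.0 - 2.0 * x)) + 1e-300)
        return lam / n_iter

    # scan the noise amplitude at fixed a and watch the sign of the exponent
    for eps in [0.0, 0.01, 0.05, 0.1, 0.2]:
        print(f"eps = {eps:5.2f}   lambda = {lyapunov(3.83, eps):+.4f}")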

The project will focus on combining topological/geometric and probabilistic/measure-theoretic points of view to gain an understanding of noise-induced bifurcations in these examples.

Bibliography
1. Matsumoto and Tsuda, J. Stat. Phys. 31 (1983), 87-106
2. Sato et al, arXiv:1811.03994


Project 3. Anderson localization on random graphs
Supervisors: Antonello Scardicchio and Fernando Metz

Anderson localization (AL) is the absence of diffusion in a quantum system due to a random potential. In his original work [1], Philip Anderson considered a simple model for quantum diffusion, in which an electron can hop between the sites of a lattice. In the case of a spatially periodic potential, the wave-functions describing the propagation of the electron are plane waves and the probability of finding the electron is the same at every site. The electron diffuses freely throughout the environment. In the case of a spatially random potential, Anderson predicted that the wave-functions can end up having appreciable weight on only a few sites, so that the particle becomes trapped in a finite region. He won the Nobel Prize for this groundbreaking work, and this phenomenon, which characterizes the absence of diffusion in a quantum disordered system, is called Anderson localization.

Anderson localization arises from the interplay between the wave nature of quantum propagation and the randomness of the environment [2]. Being essentially a wave phenomenon, its importance extends beyond the realm of quantum physics. Indeed, AL has been experimentally observed in other contexts, including the localization of light and acoustic waves [2].

In his original work, Anderson considered a model in which the particle hops along the sites of a cubic lattice in three-dimensional space. This class of models leads to serious theoretical difficulties, and most of the progress in the study of AL has relied on numerical approaches. The aim of the present project is to study AL on a simpler geometry: a random graph. A random graph is a collection of points randomly connected by edges, where the standard notion of distance is absent and some of the problems arising in finite-dimensional geometries are avoided. The student will investigate different aspects of the localization phenomenon on a random graph using analytical as well as numerical techniques. Although much work has been devoted to the study of AL on graphs, there are still many interesting open questions [3], and this class of models has recently experienced a burst of activity due to its connection with localization in the Fock space of many-particle quantum systems.
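
As an illustration of the numerical side, the sketch below builds a small Anderson Hamiltonian on an Erdős–Rényi random graph and computes the inverse participation ratio of its eigenstates, a standard localization diagnostic. The graph ensemble, disorder distribution and parameter values are illustrative assumptions, not the specific setup of the project.

    import numpy as np

    rng = np.random.default_rng(2)

    N = 500          # number of sites (nodes of the random graph)
    c = 3.0          # mean degree of the Erdos-Renyi graph
    W = 5.0          # width of the box distribution of on-site energies

    # adjacency matrix of the random graph (hopping amplitude set to 1)
    A = (rng.random((N, N)) < c / N).astype(float)
    A = np.triu(A, 1)
    A = A + A.T

    # Anderson Hamiltonian: hopping on the graph plus a random on-site potential
    H = -A + np.diag(rng.uniform(-W / 2, W / 2, size=N))

    eigvals, eigvecs = np.linalg.eigh(H)

    # inverse participation ratio: ~1/N for extended states, O(1) for localized ones
    ipr = np.sum(np.abs(eigvecs) ** 4, axis=0)

    centre = np.argsort(np.abs(eigvals))[:20]        # states closest to the band centre
    print("mean IPR near the band centre:", ipr[centre].mean(),
          " (compare with 1/N =", 1.0 / N, ")")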

Bibliography
1. Absence of diffusion in certain random lattices, P. W. Anderson, Phys. Rev. 109, 1492  (1958).
2. Fifty years of Anderson localization, A. Lagendijk, B. Tiggelen, and D. S. Wiersma,  Physics Today 62, 8, 24 (2009).
3. Perturbation theory approaches to Anderson and many-body localization: some lecture notes, A. Scardicchio and T. Thiery, lecture notes given during the Topical School on Many-Body-Localization (2016).


Project 4.  Analysis of interactions among discrete events with the nonparametric Hawkes self-exciting model 
Supervisor:  Jiancang Zhuang

Point processes have been widely used in the analysis of series of discrete events/particles in nature and the social sciences. The Hawkes self-exciting model has become the most popular model in seismology (the ETAS model), criminology, the analysis of terrorist behaviour, interactions in social networks, financial data, and genomic or neuronal activity, because of its capacity for capturing clustering effects and positive interactions among individual events/particles. The idea of stochastic reconstruction of the ETAS model came from Zhuang et al. (2004) and was then developed into a nonparametric clustering model for earthquakes by Marsan and Lengliné (2008). In crime analysis, Mohler et al. (2011) used this nonparametric method for the construction and estimation of a model of burglary occurrences in Los Angeles, and their model has become a commonly used method in crime data modelling. Zhuang and Mateu (2018) developed a semi-parametric estimation method for the functional model formulation, in which a periodic background rate is included. This summer school project aims at constructing finer models for such data analysis. In particular, extending the semi-parametric Hawkes model to a multivariate version, by incorporating new dimensions of the study data and influences from external covariate processes, is of special interest.
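
As a starting point, a univariate Hawkes process with an exponential kernel can be simulated with Ogata's thinning algorithm; a minimal Python sketch follows. The kernel choice, the parameter values and the stationarity condition alpha < 1 are illustrative assumptions, and the nonparametric and semi-parametric estimation discussed above goes well beyond this.

    import numpy as np

    rng = np.random.default_rng(3)

    # conditional intensity: lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta (t - t_i))
    mu, alpha, beta, T = 0.5, 0.6, 1.0, 1000.0     # alpha < 1 keeps the process stationary

    def simulate_hawkes(mu, alpha, beta, T):
        """Ogata's thinning algorithm for an exponential-kernel Hawkes process."""
        events = []
        t = 0.0
        while True:
            past = np.array(events)
            # lambda(t+) bounds the intensity from above until the next event
            lam_bar = mu + (alpha * beta * np.sum(np.exp(-beta * (t - past))) if events else 0.0)
            t += rng.exponential(1.0 / lam_bar)
            if t >= T:
                return np.array(events)
            lam_t = mu + (alpha * beta * np.sum(np.exp(-beta * (t - past))) if events else 0.0)
            if rng.random() < lam_t / lam_bar:
                events.append(t)

    ev = simulate_hawkes(mu, alpha, beta, T)
    print("events simulated:", len(ev), "  empirical rate:", len(ev) / T,
          "  theoretical rate mu/(1-alpha):", mu / (1 - alpha))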

Requirements
1. Good knowledge in probability and statistics.
2. Programming skill in R, Fortran, or C/C++.

Bibliography
1. Zhuang J., Ogata Y. and Vere-Jones D. (2002). Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, 97: 369-380.
2. Zhuang J., Ogata Y., Vere-Jones D. (2004). Analyzing earthquake clustering features by using stochastic reconstruction. Journal of Geophysical Research, 109, No. B5, B05301, doi:10.1029/2003JB002879.
3. Mohler G. O., Short M. B., Brantingham P. J., Schoenberg F. P. & Tita G. E. (2011) Self-Exciting point process modeling of crime, J. Amer. Statist. Assoc., 106:493, 100-108.
4. Marsan D., Lengliné O (2008) Extending earthquakes’ reach through cascading, Science 319 (5866), 1076-1079.
5. Zhuang, J. and Mateu, J. (2016). A semi-parametric spatiotemporal Hawkes-type point process model with periodic background for crime data. Journal of the Royal Statistical Society, Ser. A.


Project 5. Optimal leverage in high frequency trading data
Supervisors: Alex Adamou, Yonatan Berman, Mark Kirstein, and Ole Peters

Common investment advice has it that stock markets outperform less volatile investments in the long run. If this is true, why not borrow money and leverage stock market investments?

The answer can be found in a 2011 publication [1], where it was predicted that the leverage that optimizes time-average growth in a portfolio containing a riskless and a freely traded risky asset should be close to 1, meaning that borrowing does not help in the long run.

We later confirmed this prediction for various indices using daily data. One message that emerges is that, at least on short time scales, changes in price are dictated by a simple stability argument. If the prices of two assets were to change smoothly but at different rates, it would be optimal to leverage up on the faster-growing asset and short the slower one – at infinite leverage. This creates an instability, so stability requires fluctuations.

In the project we want to test this prediction using data from IEX (https://iextrading.com/). This is an exchange where trading frequency is limited by a delay of 350 microseconds in accessing the exchange servers. This is done to prevent unwanted behavior by high-frequency traders. You can read more about it in Michael Lewis’s book “Flash Boys” (https://en.wikipedia.org/wiki/Flash_Boys). IEX makes its data freely available, and we want to test various predictions of leverage efficiency.

The student will have to be comfortable working with data and coding simple models. As preparation, the following material should be studied: chapter 5 of [2], and [1]; it would also be useful to become familiar with IEX’s data interface.
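
A minimal sketch of the computation involved is given below. Since it is written before touching the IEX interface, it generates a synthetic price series as a stand-in for real data and estimates the time-average growth rate of a constantly rebalanced portfolio as a function of leverage, in the spirit of [1, 2]; all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in for real (IEX) data: a synthetic daily return series of the risky asset.
    mu, sigma, r_riskless, dt = 0.05, 0.2, 0.01, 1.0 / 252
    n = 252 * 20
    risky_returns = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

    def time_average_growth(returns, leverage, r=r_riskless * dt):
        """Time-average growth rate of a constantly rebalanced portfolio that holds
        `leverage` units of the risky asset and (1 - leverage) of the riskless one."""
        portfolio = leverage * returns + (1.0 - leverage) * r
        if np.any(portfolio <= -1.0):
            return -np.inf                      # wealth wiped out at some point
        return np.mean(np.log1p(portfolio)) / dt

    leverages = np.linspace(-1.0, 3.0, 81)
    growth = np.array([time_average_growth(risky_returns, l) for l in leverages])
    print("optimal leverage on this synthetic series:",
          round(float(leverages[np.argmax(growth)]), 2))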

Main goal
In this project we will study the optimal leverage criterion on very short time scales using high frequency trading data.

Bibliography
1. O. Peters. Optimal leverage from non-ergodicity. Quantitative Finance, 11(11):1593–1602, 2011.
2. O. Peters and A. Adamou. Ergodicity economics lecture notes, June 2018.


Project 6. Stochastic Thermodynamics of Active Non-Markovian Processes
Supervisors: Rosemary Harris and Edgar Roldan

Amongst theoretical physicists there has been much recent interest in the energetics of small systems where fluctuations in heat and work are especially significant. At the level of probability distributions, a series of intriguing symmetry properties and inequalities turn out to be universally applicable, at least for Markovian processes. However, in many real systems (such as the “run and tumble” active motion of bacteria, pictured below), memory effects play an important role. Understanding fluctuations in non-Markovian processes is therefore a hot topic! We sketch below some possible directions which could be taken but the precise project (and the balance between theoretical and computational work) will depend on the background and interests of the student.


[Figure: run-and-tumble motion of bacteria. Source: http://biologicalexceptions.blogspot.com/2014/09/bacteria-can-really-get-around.html?m=1]

Work Plan

Records in non-Markovian systems: Perform numerical simulations of a minimal stochastic model of a run-and-tumble active swimmer describing the motion of a bacterium. Start by simulating solutions of a system of stochastic differential equations that contains a parameter which, when close to zero, gives motion approaching standard Brownian diffusion; a minimal sketch of such a simulation follows the list below.

  • The student should play with the parameters to understand when the dynamics becomes non-Markovian and how the extreme values and the dispersion of the motion change with the parameter values.
  • Next, we will extract from these simulations statistics of extreme values (maxima, minima) and the variance of the position of the bacterium. The student should learn how to export the data to a text file, plot the data, and compare the results with previous knowledge on extrema and uncertainty statistics. There is scope for new theory based on this.
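
The sketch below is a minimal, vectorised Python example of such a simulation. It assumes the simplest one-dimensional run-and-tumble (telegraph) model with the persistence time tau as the memory parameter; the actual system of stochastic differential equations used in the project may look different, and all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    def run_and_tumble(v=1.0, tau=1.0, T=100.0, dt=0.01, n_traj=200):
        """Ensemble of 1D run-and-tumble trajectories: speed v, tumbling rate 1/tau.
        For tau -> 0 with v**2 * tau held fixed the motion approaches Brownian diffusion."""
        n_steps = int(T / dt)
        flips = rng.random((n_traj, n_steps)) < dt / tau          # tumble events
        signs = np.cumprod(np.where(flips, -1.0, 1.0), axis=1)    # running direction
        direction = rng.choice([-1.0, 1.0], size=(n_traj, 1)) * signs
        return np.cumsum(v * direction * dt, axis=1)              # positions x(t)

    # extreme values and dispersion as the memory (persistence) parameter is varied
    for tau in [0.1, 1.0, 10.0]:
        v = np.sqrt(1.0 / tau)               # keeps the effective diffusivity ~ v**2 * tau fixed
        x = run_and_tumble(v=v, tau=tau)
        print(f"tau = {tau:5.1f}   mean running maximum = {x.max(axis=1).mean():6.2f}"
              f"   std of final position = {x[:, -1].std():6.2f}")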

Stochastic thermodynamics of non-Markovian systems: Include in the numerical simulations the possibility to evaluate the stochastic heat and work exchanges in non-equilibrium conditions. Then do the same things as above. Questions:

  • Do universal statistics of heat extrema constrain the motion of non-Markovian systems?
  • What is the cost (heat dissipation) of reducing the uncertainty in the navigation path of a single bacterium?

Ultimately we aim to (i) understand whether the presence of “memory” in the dynamics of non-Markovian systems is beneficial for reducing unwanted extreme excursions against the net flow, or whether, on the contrary, increasing the amplitude of extreme events helps bacteria to navigate during chemotaxis; (ii) study the energetic cost of reducing the uncertainty in the navigation of non-Markovian systems.

Preparatory reading
Depending on the project chosen and the prior knowledge of the student, some/all of the following could be useful.

  • [1], or similar, for general background on stochastic processes and Markov chains; [2, 3] for some ideas about how to test a time series for Markovianity.
  • [4] for a review on extreme-value statistics of correlated stochastic processes; [5, 6, 7] for examples on how to calculate distributions of records.
  • [8] (Ch. 4 and 5) to familiarise with stochastic thermodynamics for Langevin systems and [9] with stochastic thermodynamics for Markov chains.
  • [10, 11] for introductions to active matter and run-and-tumble motion.
  • [12, 13] for some of the latest results on uncertainty relations and [14] on extreme-value statistics in stochastic thermodynamics.

Bibliography
1. G. R. Grimmett and D. R. Stirzaker, (2001), Probability and Random Processes, O.U.P.
2. F. Bickenbach and E. Bode, (2001), Markov or not Markov – this should be a question, Kiel Working Papers 1086,  https://EconPapers.repec.org/RePEc:zbw:ifwkwp:1086.
3. B. Chen and Y. Hong, (2012), Testing for the Markov Property in Time Series, Econometric Theory 28(1), 130–178.
4. C. Godreche, S. N. Majumdar, and G. Schehr, (2017) Record statistics of a strongly correlated time series: random walks and Lévy flights. J. Phys. A, 50(33), 333001.
5. G. Wergen, M. Bogner, and J. Krug (2011), Record statistics for biased random walks, with an application to financial data Phys. Rev. E, 83(5), 051109.
6. S. N. Majumdar and A. Pal, (2014) Extreme value statistics of correlated random variables. arXiv:1406.6768.
7. P. Vivo (2015) Large deviations of the maximum of independent and identically distributed random variables Eur. J. Phys., 36(5), 055037.
8. K. Sekimoto (2010) Stochastic energetics. Springer.
9. C. Van den Broeck and M. Esposito, (2015) Ensemble and trajectory thermodynamics: A brief introduction. Phys. A, 418, 6-16.
10. S. Ramaswamy (2017) Active matter J. Stat. Mech., 054002.
11. T. Demaerel and C. Maes, (2018) Active processes in one dimension Phys. Rev. E, 97(3), 032604.
12. A. C. Barato, and U. Seifert (2015) Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett., 114(15), 158101.
13. J. M. Horowitz and T. R. Gingrich (2017) Proof of the finite-time thermodynamic uncertainty relation for steady-state currents. Phys. Rev. E, 96(2), 020103.
14. I. Neri, E. Roldan, and F. Julicher (2017) Statistics of infima and stopping times of entropy production and applications to active molecular processes. Phys. Rev. X, 7(1), 011019.


Project 7. Anomalous diffusion in deterministic and stochastic dynamics
Supervisors: Rainer Klages, Yuzuru Sato and Stefano Ruffo

This project is at the interface between dynamical systems and stochastic theory. Depending on the scientific interests of the student it may focus on one of the following three sub-projects:

  1. Superdiffusion in periodic lattices. A point particle moves in a periodic lattice of overlapping Fermi potentials. This simple Hamiltonian system exhibits non-trivial diffusive dynamics under variation of control parameters [1-2]. The project explores superdiffusion in this model, where particles move ‘faster’ than ordinary Brownian motion [3]. First install an available software package for performing computer simulations [4]. Then extract relevant physical quantities, such as the mean square displacement, from your simulations. On this basis construct a stochastic model reproducing the Hamiltonian superdiffusive dynamics.
  2. Search efficiency in anomalous diffusion. Anomalous diffusion denotes a spreading of particles different from ordinary Brownian motion, for which the mean square displacement grows linearly in the long time limit [5]. In [6] a simple stochastic model was studied, which consists of a combination of subdiffusion (spreading slower than linearly in time) and superdiffusion. This model should be used to understand computer simulation results of active Brownian particles modeling biological dynamics [7]. First learn about continuous-time random walk theory [5] by reproducing the analytical calculations in [6]. Then perform simulations of the model in [6], using the method of [5] for computing the efficiency [7] of finding a target. Simulation results should be matched to stochastic theory.
  3. Anomalous diffusion in random dynamical systems. A combination of different deterministic dynamics by sampling randomly between them in time is called a random dynamical system. This project continues recent research [8] in which it was shown that a certain class of random dynamical systems exhibits subdiffusion. An interesting open question is whether anomalous diffusion can be achieved along the same lines by using different types of perturbations. For this purpose straightforward computer simulations of diffusion in simple one-dimensional maps need to be carried out. The numerical results should be matched to analytical predictions by continuous time random walk theory [6].

The student will learn methods of stochastic theory by matching analytical results to data from computer simulations. Sub-projects 1 and 3 also relate to dynamical systems theory.
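
To give a flavour of the kind of matching involved, the sketch below simulates a continuous-time random walk with heavy-tailed waiting times (a standard subdiffusion generator, cf. [5, 6]) and estimates the anomalous exponent from the growth of the mean square displacement. The waiting-time distribution and the parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    def ctrw(alpha=0.7, T=1000.0, n_traj=500):
        """CTRW with Pareto waiting times ~ t**(-1-alpha), 0 < alpha < 1, and unit +/-1
        jumps; positions are recorded on a logarithmic grid of observation times."""
        t_grid = np.logspace(0.0, np.log10(T), 30)
        X = np.zeros((n_traj, t_grid.size))
        for k in range(n_traj):
            t, x = 0.0, 0.0
            times, positions = [0.0], [0.0]
            while t < T:
                t += (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto waiting time (>= 1)
                x += rng.choice([-1.0, 1.0])
                times.append(t)
                positions.append(x)
            # position at an observation time = position after the last jump before it
            idx = np.searchsorted(times, t_grid, side="right") - 1
            X[k] = np.asarray(positions)[idx]
        return t_grid, X

    t_grid, X = ctrw()
    msd = (X ** 2).mean(axis=0)
    # subdiffusion: MSD ~ t**alpha, so the log-log slope estimates the anomalous exponent
    slope = np.polyfit(np.log(t_grid[10:]), np.log(msd[10:]), 1)[0]
    print("estimated MSD exponent:", round(float(slope), 2), " (input alpha = 0.7)")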

Bibliography
1. R.Klages et al., preprint arXiv:1811.06976
2. R.Klages et al., preprint arXiv:1811.11661
3. R. Klages, G.Radons, I.M.Sokolov (Eds.), Anomalous transport: Foundations and Applications  (Wiley-VCH, Weinheim, 2008)
4. J. Solanpaa, P. Luukko, and E. Rasanen, Comp. Phys. Commun. 199, 133 (2016)
5. J. Klafter, I. M. Sokolov, First steps in random walks (Oxford, 2011)
6. G. Zumofen, J. Klafter, Phys.Rev.E 47, 851 (1993)
7. G. Volpe and G. Volpe, PNAS 114, 11350 (2017)
8. Y. Sato, R.Klages, preprint arXiv:1810.02674


Project 8. Quantifying Ignorance – Evaluation of Forecasters using Bets
Supervisors: Mark Kirstein, Yonatan Berman, Alexander Adamou, and Ole Peters

Crane [1] suggests three measures to evaluate the statistical performance of forecasters:

  1. unbiasedness,
  2. accuracy and
  3. the profit or loss produced when the individual probability forecasts are used in an ensemble or a series of betting games.

If the average profit across an ensemble is used as the measure of profit, and if the ensemble’s size tends to infinity, then this criterion maximises the expectation value of profit. Used as a behavioural protocol, this criterion usually quickly leads to ruin because it is insensitive to fluctuations in wealth. Profit in a series of bets is usually interpreted as the time-average growth rate of the wealth of a gambler. Computing it requires knowledge of the gambler’s wealth and knowledge of a dynamic, i.e. it has to be specified what it means for a gamble to be repeated. Often one assumes multiplicative repetition, in which case the behavioural protocol of maximising time-average growth has many desirable properties – there is a reasonable trade-off between fluctuations and likely profit, and ruin is avoided. The measure of betting performance implements Crane’s Fundamental Principle of Probability (FPP), which states:

If you assign a probability to an outcome happening, then you must accept a bet on the other side (that the outcome will not happen) at the correct implied odds [2].

In this project we will study probabilistic forecasts (e.g. of sports, elections, securities markets, etc.) and evaluate them based on the implied betting performance of the forecaster over time, using different assumptions about dynamics and wealth. We will study how the acceptable odds depend on wealth and bet size, and on our uncertainty about the probabilities.
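
A minimal sketch of how such a betting evaluation could look in code follows. It assumes synthetic binary events with known true probabilities, a forecaster who must accept bets at the odds implied by his quoted probabilities (the FPP), and a bettor who stakes a Kelly-type fraction of current wealth (the multiplicative-repetition setting discussed above); these are assumptions for illustration, not the protocol to be studied. The bettor's time-average growth rate then quantifies how exploitable the forecasts are, and is zero for a perfectly calibrated forecaster in this toy setup.

    import numpy as np

    rng = np.random.default_rng(7)

    n = 5000
    p_true = rng.uniform(0.2, 0.8, size=n)                 # true event probabilities
    outcomes = rng.random(n) < p_true                      # realised outcomes

    def exploit_growth(p_quoted):
        """Time-average growth rate of a bettor who backs each event at the odds
        implied by the forecaster's quoted probability, staking the Kelly fraction
        of current (multiplicatively repeated) wealth; abstains when there is no edge."""
        log_w = 0.0
        for q, p, won in zip(p_quoted, p_true, outcomes):
            b = (1.0 - q) / q                                  # implied net odds on the event
            f = np.clip((p * b - (1.0 - p)) / b, 0.0, 0.99)    # Kelly fraction (bettor knows p)
            log_w += np.log1p(f * b) if won else np.log1p(-f)
        return log_w / n

    calibrated = p_true.copy()                                         # quoted q = p
    overconfident = np.clip(0.5 + 1.5 * (p_true - 0.5), 0.01, 0.99)    # q pushed away from 1/2
    for name, p_quoted in [("calibrated", calibrated), ("overconfident", overconfident)]:
        print(f"{name:13s}  time-average growth per bet = {exploit_growth(p_quoted):+.4f}")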

Aim of the project
We would like to understand the relationship between ergodicity economics [3], with its focus on time-average growth rates, and Crane’s Fundamental Principle of Probability.

Bibliography
1. Crane, Harry (2018). ‘Polls, Pundits, or Prediction Markets: An assessment of election forecasting’. In: url: https://www.researchers.one/article/2018-11-6
2. Crane, Harry (2018). ‘The Fundamental Principle of Probability: Resolving the Replication Crisis with Skin in the Game’. In: url: https://www.researchers.one/article/2018-08-16
3. Peters, Ole and Alexander Adamou (2018). Ergodicity Economics. Lecture Notes. 5.0, 2018/06/30. url: https://ergodicityeconomics.com/lecture-notes/


Project 9. Inequality, poverty, mobility and mixing
Supervisors: Yonatan Berman, Mark Kirstein and Alex Adamou

Economic inequality is a hot topic. It is often conflated with related topics like poverty and social mobility. While inequality is measured as a static property of a resource distribution, mobility depends on dynamics.

We can link these concepts by understanding ergodicity. An individual’s relative income or wealth is ergodic if, over time, it samples all parts of some stable distribution. So ergodicity implies mixing and mobility, while knowledge of the stable distribution – if it exists – tells us about inequality.

Rising inequality in many countries has made researchers curious about its economic consequences. One concern is equality of opportunity, where mobility matters. For instance, if children’s economic outcomes depend strongly on those of their parents – in other words, if intergenerational mobility is low – then high inequality among parents can suppress the potential of poorer children. The link between inequality and intergenerational mobility has been studied recently [1, 2, 4, 5].

In this project we will take a new approach to such problems by studying mobility, inequality, and poverty in stochastic growth models. Our starting point will be geometric Brownian motion (GBM), which is a general model of the dynamics of quantities that grow multiplicatively and randomly. In it, the fractional change in quantity x_i over a time period ∆t is a normally-distributed random variable of mean µ∆t and variance σ²∆t,

    ∆x_i / x_i ~ N(µ∆t, σ²∆t),

where i = 1, …, N, for a population of size N. If wealth follows GBM, then relative wealth is non-ergodic because its distribution does not stabilise: it is an ever-broadening lognormal. GBM represents a winner-takes-all economy, in which almost all resources are eventually owned by almost no people. This partitioning of the economy into mutually inaccessible classes of “winners” and “losers” predicts mobility measures lower than those observed in real economies.

To address this, we will also study reallocating GBM (RGBM) [3], in which a simple resource reallocation mechanism is added to GBM. When reallocation is from richer to poorer, RGBM predicts stable and realistic resource distributions and ergodic relative wealth. In practical terms, this means that, if we wait long enough, we get to spend some time as a billionaire! We will study mixing in GBM and RGBM; ask how much churn these models predict; and try to connect these predictions to reality.
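
The sketch below gives a minimal Python setup of this comparison, using an Euler scheme for GBM and RGBM and a simple rank-correlation measure of mixing. The reallocation term follows the form used in [3], while the initial condition, the parameter values and the specific mobility measure are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)

    N, T, dt = 1000, 50.0, 0.01
    mu, sigma = 0.02, 0.2

    def simulate(tau):
        """Euler scheme for reallocating GBM,
        dx_i = x_i (mu dt + sigma dW_i) - tau (x_i - <x>) dt;  tau = 0 is plain GBM."""
        x = rng.lognormal(0.0, 1.0, N)            # heterogeneous initial wealths
        x0 = x.copy()
        for _ in range(int(T / dt)):
            dW = np.sqrt(dt) * rng.standard_normal(N)
            x = x + x * (mu * dt + sigma * dW) - tau * (x - x.mean()) * dt
        return x0, x

    def rank_correlation(a, b):
        """Spearman-type rank correlation between two wealth snapshots;
        low values indicate strong mixing, i.e. high mobility."""
        ra = np.argsort(np.argsort(a))
        rb = np.argsort(np.argsort(b))
        return np.corrcoef(ra, rb)[0, 1]

    for tau in [0.0, 0.1]:
        x0, xT = simulate(tau)
        print(f"tau = {tau:4.2f}   rank correlation after T = {T:.0f}: "
              f"{rank_correlation(x0, xT):+.3f}")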

Main goal
In this project we will study mixing in GBM and RGBM models. We will see whether they allow for realistic levels of social mobility and what relationship between inequality and social mobility this yields.

Bibliography
1. D. Aaronson and B. Mazumder. Intergenerational economic mobility in the United States, 1940 to 2000. Journal of Human Resources, 43(1):139–172, 2008.
2. Y. Berman. Growth, inequality and absolute mobility in the United States, 1962–2014. Mimeo, 2018.
3. Y. Berman, O. Peters, and A. Adamou. An empirical test of the ergodic hypothesis: Wealth distributions in the United States. Available at SSRN, 2017.
4. R. Chetty, N. Hendren, P. Kline, and E. Saez. Where is the land of opportunity? The geography of intergenerational mobility in the United States. The Quarterly Journal of Economics, 129(4):1553–1623, 2014.
5. M. Corak. Income inequality, equality of opportunity, and intergenerational mobility. The Journal of Economic Perspectives, 27(3):79–102, 2013.


Project 10. A dynamical system model of atmospheric mid-latitude jet dynamics
Supervisors: Davide Faranda and Yuzuru Sato

We have derived [1] a coupled map lattice model for the northern hemisphere mid-latitude jet dynamics by embedding atmospheric data, and investigated its properties (bifurcation structure, stability, local dimensions) for different atmospheric flow regimes. Equipped with the model of the jet defined in [1], we will introduce another important variable in the description of the mid-latitude circulation, namely the jet speed. The jet speed is intimately related to the thermodynamic properties of the atmosphere, and in particular to the pole-to-tropics temperature gradient. Its inclusion in the model will therefore be crucial for studying the modifications of the jet dynamics under the possible scenarios induced by climate change: i) the temperature gradient (and thus the jet speed) remains unaltered, ii) the temperature gradient increases (and so does the jet speed), iii) the temperature gradient decreases. These three scenarios are still debated in the climate community. The use of simple models such as the one proposed in [1], modified to take the jet speed into account, will certainly be a good starting point for understanding whether these scenarios favor a more or less wavy jet stream (zonalization vs. Arctic amplification of the mid-latitude flow). During the summer school, the student will explore the phase space of this model and perform numerical integrations to assess the role of the temperature gradient in the dynamics.

Bibliography
1. Davide Faranda, Yuzuru Sato, Gabriele Messori, Nicholas Moloney & Pascal Yiou. Minimal dynamical systems model of the northern hemisphere jet stream via embedding of climate data. Earth System Dynamics Discussions, 2018 (url: https://www.earth-syst-dynam-discuss.net/esd-2018-80/)