Summer School 2020 Projects
The following research projects are taking place at this year’s summer school.
- Experiments in ergodicity
- Statistics and dynamics of multiplicative random processes
- Extreme values and first-passage times for non-Markovian models
- Stability of ecosystems with a core-periphery structure
- Growth incidence curves in reallocating geometric Brownian motion
- Optimal allocation and risk parity
- Verifying interaction among different types of discrete events by using the multivariate Hawkes process
- Water network probes of collective behaviour
- Time correlation functions for random dynamical systems with anomalous diffusion
- Machine learning techniques for intermittent systems
- Non-convex optimization and regularization
- Reconstructing Dynamical Systems with Machine Learning
- Attractor reconstruction for dynamical systems from infrequent high-dimensional observations in the presence of noise
- Topological Quantum Matter
- Resolution-regularization in statistical inference and machine learning
Project 1. Experiments in ergodicity
Supervisors: Ollie Hulme (Danish Research Centre for Magnetic Resonance), Ole Peters, Alex Adamou, Yonatan Berman, Mark Kirstein (London Mathematical Laboratory)
As a relatively new field, ergodicity economics (EE) makes a number of falsifiable predictions which deviate both quantitatively and qualitatively from other theories of decision making (e.g. expected utility theory, prospect theory, isoelastic utility, and their variants). Though some experimental work has begun, very few of these behavioral predictions have been adequately tested, replicated, or generalised. The existence of conflicting predictions, and a paucity of evidence to adjudicate the conflict, is the motivation for a series of behavioral experiments planned in collaboration between LML and the Danish Research Centre for Magnetic Resonance. The experiments will be designed with the aim of discriminating between these candidate theories as rigorously as possible.
Central to EE is the premise that agents should modulate their decision-making strategies as a function of the dynamics they face. Though other theories are typically not predicated explicitly on dynamics, many make predictions that depend on dynamics implicitly, via other features that may themselves depend on the dynamics. Thus, deriving predictions from non-EE theories under different dynamics is not always straightforward. In this project you will work with theorists and experimentalists to design behavioral experiments capable of discriminating between such theories. The focus will be on either risk or time preferences at the level of the individual agent.
The risk preference project will largely centre on the protocol of Meder et al. 2019, focusing primarily on replication in a larger sample and on generalisation to new dynamics which predict qualitatively different risk preferences. Particular attention will be paid to addressing issues raised by the community concerning the control of moments, the consequentiality of the design, and end-wealth computations. The time preference project will focus on testing the experimental predictions of Adamou et al. 2019, using strategies equivalent to those of the risk preference project. Other experimental ideas can be discussed and developed, subject to the interests of the student, but the main development work will centre on one or both of these experimental questions. Time permitting, there will be scope for considering different experimental scales, from smaller case studies comprising tens of subjects, through larger cohorts of hundreds, to larger-scale app-based paradigms sampling potentially thousands of subjects.
Goal of the project
The overarching goal of this project is to develop the experimental designs such that they are optimised to maximally discriminate between the key theories of interest. The experimental design, along with all code, as well as synthetic data will be pre-specified as part of registered reports that will be submitted to a journal prior to data collection.
There are several challenges embedded in this endeavour which will shape the work of the project:
- Model specification. If a decision theory is widely used in the problem domain then we will take it seriously, regardless of theoretical considerations. The challenge is to specify all candidate models in their strongest form, taking measures to strengthen them if necessary. This may be achieved via a close reading of the literature, through your existing experience, and through open dialogue with domain-specific experts who ideally, though not necessarily, will be sceptical of EE. This is mirrored by the EE experts on this team specifying the EE model in its strongest form. A working knowledge of theories in these domains will be advantageous, but not strictly necessary.
- Modeling. The candidate models will primarily be specified as hierarchical Bayesian models, estimated via MCMC techniques, though other techniques can be considered. Working knowledge of Bayesian statistics and models will be advantageous.
- Simulation. Theoretical predictions must be pre-registered and coded prior to data collection, as far as possible. As such, experiments should be designed iteratively with the models, maximising the probability of obtaining compelling evidence one way or the other. Simulations of synthetic agents, derived from the candidate theories, can be used to evaluate the effect of experimental design choices on evidence accumulation rates under plausible assumptions. A strong background in coding will be essential; expertise in app development will be considered a bonus.
- Timeframe. Only some of the work described above will be completed during the summer school, so care will be taken to ensure that the project undertaken is achievable under the time constraints.
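As a flavour of the simulation work involved, the sketch below contrasts the expected value of a repeated multiplicative gamble with its time-average growth rate for a synthetic agent. The gamble parameters (growth factors 1.5 and 0.6) are illustrative choices, not those of Meder et al. 2019:

```python
import numpy as np

def simulate_agent(factors, w0=100.0, n_rounds=10_000, seed=0):
    """Wealth trajectory of one agent repeatedly playing a multiplicative
    gamble whose per-round growth factor is drawn uniformly from `factors`."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(factors, size=n_rounds)
    return w0 * np.cumprod(draws)

# A gamble with positive expected value but negative time-average growth:
factors = [1.5, 0.6]
expected_factor = np.mean(factors)           # 1.05: looks attractive
time_avg_growth = np.mean(np.log(factors))   # < 0: almost every trajectory decays
wealth = simulate_agent(factors)
print(expected_factor, time_avg_growth, wealth[-1])
```

Under additive dynamics the same gamble would be favourable, which is the kind of dynamics-dependent divergence between theories the experiments aim to probe.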
- EE lecture notes: https://ergodicityeconomics.com/lecture-notes/
- Meder et al 2019: https://arxiv.org/abs/1906.04652
- Adamou et al 2019: https://arxiv.org/abs/1910.02137
- Bayesian Cognitive Modelling: http://faculty.sites.uci.edu/mdlee/files/2011/03/BB_Free.pdf
Project 2. Statistics and dynamics of multiplicative random processes
Supervisors: Mark Kirstein (London Mathematical Laboratory) and Colm Connaughton (Warwick Mathematics Institute and London Mathematical Laboratory)
A discrete-time multiplicative random process is generated by successively multiplying together the elements of a sequence of random variables drawn from a given distribution. Such processes have several applications. The discrete-time geometric random walk is an example that is commonly used to model and simulate market fluctuations in finance and economics. Variations of this are also used as models of growth and fragmentation in physics.
For additive processes in discrete time, the central limit theorem tells us that, under a broad set of circumstances, the sum of a sequence of N random variables becomes normally distributed as N grows. There is, however, no analogue of the central limit theorem in the multiplicative case. The question thus arises: how is the product of a sequence of N random variables distributed as N grows? In a 1990 paper, Sid Redner addressed this question in detail for the case of independent binomial random variables. Some interesting features are worth noting:
- The mean value of the product is exponentially different from the typical value of the product as N grows.
- Short range correlations in the sequence can have a dominant effect on the mean value of the product.
- Higher-order moments of the product, such as the variance and skewness, all scale at different rates as N grows.
All three of these properties are different from what one might expect based on intuition acquired from applying the central limit theorem to additive processes.
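The first of these features is easy to see in a small simulation. The sketch below uses factors of 2 and 1/2 with equal probability, an illustrative binomial-type choice in the spirit of Redner's tutorial:

```python
import numpy as np

def product_statistics(n_steps=100, n_samples=20_000, seed=1):
    """Sample mean and typical value of a product of i.i.d. random factors,
    each equal to 2 or 1/2 with probability 1/2."""
    rng = np.random.default_rng(seed)
    factors = rng.choice([2.0, 0.5], size=(n_samples, n_steps))
    products = np.prod(factors, axis=1)
    mean_value = products.mean()
    typical_value = np.exp(np.log(products).mean())  # exp(<log product>)
    return mean_value, typical_value

mean_v, typ_v = product_statistics()
# <x> = 1.25 per factor, so the true mean grows like 1.25**N, while
# <log x> = 0 keeps the typical value near 1. The sample mean sits far
# above the typical value and is dominated by rare, very large products.
print(mean_v, typ_v)
```

Note that even with 20,000 samples the sample mean badly underestimates the true mean 1.25**N: the mean is controlled by trajectories too rare to be sampled, which is precisely the failure of self-averaging at issue.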
Goal of the project
The goals of this project are to understand the implications of these results for the statistical properties of discrete time geometric random walks and to establish a connection to recent work on the so-called “Hot Hand” paradox, a counter-intuitive property of the frequency of occurrence of particular subsequences in long random sequences.
- Sidney Redner. Random multiplicative processes: An elementary tutorial. Am. J. Phys., 58(3):267–273, March 1990.
- Sidney Redner. A Fresh Look at the “Hot Hand” Paradox. arXiv:1910.09707, 2019.
Project 3: Extreme values and first-passage times for non-Markovian models
Supervisors: Rosemary J. Harris (Queen Mary University of London) and Edgar Roldan (The Abdus Salam ICTP)
Extreme and first-passage fluctuations of stochastic processes are nowadays attracting much attention in diverse fields of science, e.g., biophysics, finance and climate change. What are the statistics of yearly extreme temperature fluctuations? Can we predict the time when the stock market will reach a certain critical value, i.e., its first-passage-time statistics? An important question is whether there exist universal laws governing the extreme and first-passage phenomena of complex systems in nature.
Prominent recent results in statistical mechanics concern the extreme-value and first-passage-time properties of a variety of random-walk processes [1,2], which can describe, for instance, the movement of molecular motors. In particular, a counter-intuitive symmetry is generically present in the first-passage-time distributions of biased random walks and can be understood in the mathematical framework of martingales.
The project aims to combine the expertise of the two supervisors by extending this work to non-Markovian models where memory plays an important role [5,6]. Specifically, we plan to focus on the so-called elephant random walk, which has a natural martingale description. The student will start by constructing a biased version of this model and investigating its extreme-value and first-passage-time properties via Monte Carlo simulation, before trying to understand the results mathematically.
Goal of the project
To understand whether symmetry properties present in Markovian random-walks extend to non-Markovian models and, if so, under which conditions.
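A minimal Monte Carlo sketch of the unbiased elephant random walk, with a naive first-passage-time estimator, as a starting point; the biased variant to be constructed in the project is not specified here:

```python
import numpy as np

def elephant_walk(n_steps, p=0.75, first_step=1, seed=2):
    """One trajectory of the elephant random walk (Schütz & Trimper 2004):
    each new step recalls a uniformly chosen earlier step and repeats it
    with probability p, or reverses it with probability 1 - p."""
    rng = np.random.default_rng(seed)
    steps = np.empty(n_steps, dtype=int)
    steps[0] = first_step
    for n in range(1, n_steps):
        recalled = steps[rng.integers(0, n)]
        steps[n] = recalled if rng.random() < p else -recalled
    return np.concatenate(([0], np.cumsum(steps)))

def first_passage_time(traj, threshold):
    """First index at which the walk reaches `threshold`, or None."""
    hits = np.flatnonzero(traj >= threshold)
    return int(hits[0]) if hits.size else None

traj = elephant_walk(5000, p=0.75)
print(traj[-1], first_passage_time(traj, 10))
```

For p > 1/2 the memory reinforces early steps; averaging first-passage times over many trajectories (and over the sign of the bias) is what would be compared against the Markovian symmetry results.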
(1) General experience with probability theory / stochastic processes; (2) Knowledge of martingales [recommended]; (3) Familiarity with a computer language which can be used for simulations.
- C. Godreche, S. N. Majumdar, and G. Schehr, J. Phys. A. 50(33), 333001 (2017).
- J. Krug, J. Stat. Mech. 2007(07) P07001 (2007)
- A. Guillet, E. Roldan, and F Julicher, arXiv:1908.03499 (2019)
- I. Neri, E. Roldan, and F. Julicher, Phys. Rev. X 7(1) 011019 (2017)
- R. J. Harris, New J. Phys. 17(5), 053049 (2015)
- M. Shreshtha, and R. J. Harris, EPL 126 (4) 40007 (2019)
- G. M. Schütz and S. Trimper, Phys. Rev. E 70(4), 045101 (2004)
- B. Bercu, J. Phys. A 51(1), 015201 (2017)
Project 4. Stability of ecosystems with a core-periphery structure
Supervisors: Jacopo Grilli (The Abdus Salam ICTP) and Fernando Metz (Federal University of Rio Grande do Sul)
Understanding which properties of ecological interactions make ecosystems stable to perturbations is of paramount importance. Stable stationary states are usually beneficial and associated with a well-functioning system. In a seminal paper, Robert May used random matrix theory to derive universal criteria for the linear stability of the fixed points of ecological communities modeled as randomly coupled differential equations. May’s model relies on the assumption that species interact randomly with each other. In this setup, any ecological system is unstable, provided it contains a sufficiently large number of species. Although this random-matrix approach led to important insights, the result is clearly unrealistic, since real ecosystems are often both large and stable.
In contrast to May’s original model, ecological interactions are organized in a network structure, where nodes represent species and links stand for the pairwise interactions among them. In particular, interactions are sparse, which means that a given species influences or interacts with a finite number of others. The stability of large ecosystems, whether the interactions are sparse or not, boils down to the study of the leading eigenvalue of a certain large random matrix encoding the interactions among different species – the community matrix.
Goal of the project
The main goal of the present project is to explore how the core-periphery architecture of certain ecosystems affects their stability. Core-periphery networks are characterized by a densely connected subset of species (the core) containing a smaller number of links with the rest of the network (the periphery). Such models are important in ecology because many empirical interaction networks have this feature. The central idea of the project is to compute the leading eigenvalue of the random community matrix as a function of the structural parameters characterizing a simple model of a core-periphery ecosystem. The student will employ analytical techniques from statistical physics and numerical diagonalization routines to study this problem.
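A minimal numerical sketch of the central computation, with an illustrative choice of core-periphery structure; the block connectivities, interaction variance, and self-regulation strength are all placeholders, not the model the project will ultimately study:

```python
import numpy as np

def core_periphery_matrix(n_core=50, n_per=200, c_core=10.0, c_cp=2.0,
                          sigma=1.0, d=1.0, seed=3):
    """Random community matrix with a core-periphery block structure:
    a densely connected core, sparse core-periphery links, and (in this
    illustrative choice) no periphery-periphery interactions. Nonzero
    entries are i.i.d. Gaussian; the diagonal holds self-regulation -d."""
    n = n_core + n_per
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n))
    mask[:n_core, :n_core] = rng.random((n_core, n_core)) < c_core / n
    mask[:n_core, n_core:] = rng.random((n_core, n_per)) < c_cp / n
    mask[n_core:, :n_core] = rng.random((n_per, n_core)) < c_cp / n
    a = mask * rng.normal(0.0, sigma, (n, n))
    np.fill_diagonal(a, -d)
    return a

def leading_eigenvalue(a):
    """Largest real part among the eigenvalues (stability iff < 0)."""
    return np.linalg.eigvals(a).real.max()

a = core_periphery_matrix()
print(leading_eigenvalue(a))
```

Sweeping the structural parameters (core size, block connectivities) and tracking the leading eigenvalue is the numerical counterpart of the analytical computation planned in the project.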
- J. Grilli, T. Rogers, and S. Allesina, Modularity and stability in ecological communities, Nat. Communications 7, 12031 (2016).
- F. L. Metz, I. Neri and T. Rogers, Spectral theory of sparse non-Hermitian random matrices, J. Phys. A: Math. Theor. 52, 434003 (2019).
Project 5. Growth incidence curves in reallocating geometric Brownian motion
Supervisor: Yonatan Berman (London Mathematical Laboratory)
There are two ways of evaluating changes in the distribution of wealth in a given population, depending on whether initial wealth levels are taken into account or not. If they are ignored, the new distribution is compared to the original one without considering the identity of wealth owners. The comparison is thus ‘anonymous’: the poorest (richest) in the initial period are compared to the poorest (richest) in the final period, whoever they are. In the other case, the comparison is made between the two distributions conditionally on initial wealth. The comparison is thus ‘non-anonymous’.
It has become common to represent the anonymous distributional change brought by economic growth by the Growth Incidence Curve (GIC). This very convenient tool simply shows the rate of growth of successive quantiles of the distribution between the initial and final periods. It turns out that there is a relationship between this curve sloping downward and the Lorenz curve of the marginal distribution shifting upward. Non-anonymous Growth Incidence Curves (NAGICs) are the non-anonymous equivalent. Interestingly, in many periods in which the anonymous GICs were generally upward sloping, non-anonymous GICs were largely flat.
With these results in mind we study reallocating geometric Brownian motion (RGBM). This is a simple model of an economy in which wealth undergoes noisy exponential growth. We add to that framework a reallocation mechanism in which a fraction of everyone’s wealth is pooled and shared. The goal of this project is to study the anonymous and non-anonymous growth incidence curves predicted by the RGBM model, and their properties. We will compare these predictions to empirical evidence on growth incidence curves.
The student will have to be comfortable working with data and coding simple models.
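A minimal sketch of RGBM and the anonymous GIC, assuming the standard RGBM form dW_i = W_i(μ dt + σ dB_i) − τ(W_i − ⟨W⟩)dt discretized by an Euler scheme; all parameter values are illustrative:

```python
import numpy as np

def simulate_rgbm(n_agents=1000, n_steps=500, mu=0.05, sigma=0.2,
                  tau=0.1, dt=0.1, w0=1.0, seed=4):
    """Euler scheme for reallocating geometric Brownian motion:
    dW_i = W_i (mu dt + sigma dB_i) - tau (W_i - <W>) dt.
    Returns the final wealths of all agents."""
    rng = np.random.default_rng(seed)
    w = np.full(n_agents, w0)
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(dt), n_agents)
        w = w + w * (mu * dt + sigma * noise) - tau * (w - w.mean()) * dt
    return w

def anonymous_gic(w_initial, w_final, n_quantiles=10):
    """Anonymous GIC: growth of each quantile of the wealth
    distribution, ignoring the identities of the wealth owners."""
    qs = np.linspace(0, 100, n_quantiles + 1)
    qi = np.percentile(w_initial, qs)
    qf = np.percentile(w_final, qs)
    return qf[1:] / qi[1:] - 1.0

# same seed, so the shorter run is the first half of the longer trajectory
w_half = simulate_rgbm(n_steps=250)
w_full = simulate_rgbm(n_steps=500)
print(anonymous_gic(w_half, w_full))
```

The non-anonymous curve would instead track each agent's own growth conditional on initial rank, which requires keeping agent identities rather than comparing quantiles.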
- Martin Ravallion and Shaohua Chen. Measuring pro-poor growth. Economics Letters, 78(1):93– 99, 2003.
- François Bourguignon. Non-anonymous growth incidence curves, income mobility and social welfare dominance. The Journal of Economic Inequality, 9(4):605–627, 2011.
- Yonatan Berman, Ole Peters, and Alexander Adamou. Wealth inequality and the ergodic hypothesis: Evidence from the United States. Available at SSRN, December 2019.
Project 6. Optimal allocation and risk parity
Supervisors: Mark Kirstein, Yonatan Berman (London Mathematical Laboratory)
Common investment advice has it that stock markets outperform less volatile investments in the long run. If this is true, why not borrow money and leverage stock market investments? The answer to this question can be found in a 2011 publication, where it was predicted that the leverage that optimizes time-average growth in a portfolio containing a riskless asset and a freely traded risky asset should be close to 1, meaning borrowing won’t help in the long run.
Another piece of common investment advice, especially popular in the last decade, has it that when selecting assets for a portfolio it is advisable to allocate risk, usually defined as volatility. This approach is called risk parity. It asserts that when asset allocations are adjusted to the same risk level, the resulting portfolio is more resistant to market downturns. For example, a traditional portfolio allocation of 60% stocks and 40% bonds carries about 90% of its risk in the stock portion of the portfolio. In a risk parity portfolio, volatility is allocated equally between assets. How is such an allocation compatible with the optimally leveraged portfolio? Answering this question is the goal of this project.
First, we will extend the optimal leverage concept to an allocation between several risky assets that still optimizes time-average growth. We will extend this result to a case in which the assets are cross-correlated in time, compare the predictions of this approach to risk parity, and study how the two are related. We will also test the performance of the two approaches on data.
The student will have to be comfortable working with data and coding simple models. As preparation, the material listed below should be studied, in particular chapter 5 of the lecture notes.
Goal of the project
In this project we will extend the optimal leverage criterion to optimal allocation (with more than two assets) and compare it to the risk parity approach in portfolio management.
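For the two-asset case, the time-average growth rate of a portfolio holding leverage l in a geometric-Brownian-motion risky asset has a closed form, g(l) = r + l(μ − r) − l²σ²/2, maximized at l* = (μ − r)/σ². The sketch below checks this numerically with illustrative parameter values:

```python
import numpy as np

def time_average_growth(leverage, mu=0.05, r=0.01, sigma=0.2):
    """Time-average growth rate of a portfolio with leverage l in a GBM
    risky asset (drift mu, volatility sigma), the rest at riskless rate r:
    g(l) = r + l (mu - r) - l**2 sigma**2 / 2."""
    l = np.asarray(leverage, dtype=float)
    return r + l * (mu - r) - 0.5 * l**2 * sigma**2

def optimal_leverage(mu=0.05, r=0.01, sigma=0.2):
    """Closed-form maximizer of g(l): l* = (mu - r) / sigma**2."""
    return (mu - r) / sigma**2

ls = np.linspace(-1, 3, 401)
l_star = optimal_leverage()                       # 1.0 for these parameters
numeric = ls[np.argmax(time_average_growth(ls))]  # grid search agrees
print(l_star, numeric)
```

The project's multi-asset extension replaces the scalar l by a weight vector and σ² by a covariance matrix, which is where the comparison with risk parity becomes non-trivial.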
- Ole Peters. Optimal leverage from non-ergodicity. Quantitative Finance, 11(11):1593–1602, 2011.
- Sébastien Maillard, Thierry Roncalli, and Jêrôme Teïletche. The properties of equally weighted risk contribution portfolios. The Journal of Portfolio Management, 36(4):60–70, 2010.
- Ole Peters and Alexander Adamou. Ergodicity economics. Lecture notes, v. 5.0, 2018/06/30.
Project 7. Verifying interaction among different types of discrete events by using the multivariate Hawkes process
Supervisors: Jiancang Zhuang (Institute of Statistical Mathematics) and Max Werner (University of Bristol)
A multivariate Hawkes process (MHP) is a point process with several subprocesses of discrete events, in which each subprocess affects the occurrence rates of the other subprocesses. The MHP’s ability to capture mutual excitation among different types of phenomena makes it a popular model in many areas. For a D-dimensional MHP, the intensity function of the i-th subprocess takes the following form:
λ_i(t) = μ_i + Σ_{j=1}^{D} ∫_0^t g_ij(t − s) dN_j(s),
where the constant μ_i is the background intensity of the i-th subprocess, N_j(t) counts the number of arrivals in the j-th subprocess, and g_ij(t) is the excitation effect caused by an event at time 0 in the j-th subprocess on the occurrence rate at time t in the i-th subprocess. Therefore, estimating the excitation response functions g_ij is the key to learning an MHP model.
In this project, we intend to use MHPs to analyze some real datasets. The supervisors have prepared data on earthquake hypocenters from different regions and are interested in finding the correlations between deep and shallow earthquakes. Some extensions of the multivariate Hawkes process may also be considered during this project, including (1) complicated background rates and (2) inhibition effects among some types of events. The student is encouraged to bring their own data.
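As an illustration of the kind of simulation involved, here is a sketch of a bivariate MHP generated by Ogata's thinning method, assuming exponential excitation kernels with illustrative parameters; the data analysis in the project would of course estimate such kernels rather than assume them:

```python
import numpy as np

def simulate_mhp(mu, alpha, beta, t_max, seed=5):
    """Ogata's thinning method for a multivariate Hawkes process with
    exponential kernels g_ij(t) = alpha[i, j] * exp(-beta * t)
    (an illustrative choice). Returns a list of (time, type) events."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    d = len(mu)
    events = []

    def intensity(t):
        lam = mu.copy()
        for s, j in events:
            lam += alpha[:, j] * np.exp(-beta * (t - s))
        return lam

    t = 0.0
    while True:
        lam_bar = intensity(t).sum()   # valid bound: intensity decays between events
        t += rng.exponential(1.0 / lam_bar)
        if t > t_max:
            return events
        lam = intensity(t)
        if rng.random() * lam_bar < lam.sum():      # thinning acceptance
            k = int(rng.choice(d, p=lam / lam.sum()))  # assign event type
            events.append((t, k))

events = simulate_mhp(mu=[0.5, 0.2],
                      alpha=[[0.3, 0.1], [0.2, 0.1]],
                      beta=1.0, t_max=200.0)
print(len(events))
```

The branching matrix here (∫g_ij = alpha/beta) is chosen subcritical so the simulation terminates; fitting μ and g_ij to data, e.g. by maximum likelihood or EM, is the estimation problem described above.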
- Good knowledge of probability and statistics.
- Programming skills in R, Fortran, or C/C++.
- Ogata Y., Akaike H. and Katsura K. (1982). The application of linear intensity models to the investigation of causal relations between a point process and another stochastic process, Annals of the Institute of Statistical Mathematics, Vol.34, No.2, B, pp.373-387
- Zhuang J., Ogata Y. and Vere-Jones D. (2002). Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, 97: 369-380.
- Zhuang J., Ogata Y., Vere-Jones D. (2004). Analyzing earthquake clustering features by using stochastic reconstruction. Journal of Geophysical Research, 109, No. B5, B05301, doi:10.1029/2003JB002879.
- Mohler G. O., Short M. B., Brantingham P. J., Schoenberg F. P. & Tita G. E. (2011) Self-Exciting point process modeling of crime, J. Amer. Statist. Assoc., 106:493, 100-108.
- Zhuang, J. and Mateu, J. (2019). A semi-parametric spatiotemporal Hawkes-type point process model with periodic background for crime data. Journal of the Royal Statistical Society, Ser. A, 182(3), 919-942. doi:10.1111/rssa.12429.
Project 8. Water network probes of collective behaviour
Supervisors: Ali Hassanali, Edgar Roldan, Uriel N. Morzan, Alex Rodriguez, and Asja Jelic (The Abdus Salam ICTP)
Water is the basis of life as we know it. Its molecular structure can be described as an intriguing complex system whose specific interactions lead to a unique set of micro- and macroscopic properties. Water is an integral component of biomolecular systems and the solvent in which key biochemical processes take place. Solvation of molecules in water is thus at the core of key molecular phenomena and essential to understanding chemical reactivity and biomolecular function.
Recent work has revealed the active role of water, e.g. in regulating the structure, dynamics, and function of biomolecules. For example, water molecules at the protein/water interface, called “hydration water”, have been shown not only to thermodynamically stabilize the equilibrium structure of proteins, but also to affect their dynamics, e.g. the folding process and enzymatic catalysis, which are crucial for their biological function. What the structure of water surrounding proteins is, and how it drives and controls protein equilibrium and nonequilibrium dynamics, is one of the most important questions in biophysics. While much effort has been devoted to studying the motion and dynamical properties of individual water molecules, there are only qualitative descriptions of the structure of the water network and its emergent collective phenomena.
In this project, we will use molecular dynamics simulations combined with smart data-mining techniques, as well as analytical tools of statistical physics, to elucidate the collective behavior of interacting water molecules. In particular, we will apply techniques borrowed from network theory [3,4,5] to the analysis of the hydrogen bond networks of water molecules. The aim of this analysis is to establish a connection between the dynamics of the system and the underlying topological properties of the water network.
Goal of the project
(1) Develop new models, based on network and information theory, to shed light on the collective behavior of water near biological environments; (2) Apply the graph theory approach to decipher the unique vibrational spectra of water, both in and out of thermal equilibrium; (3) Identify a pattern of collective behavior of water molecules and understand the mechanism of fluctuation propagation in the water network.
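As a toy illustration of the network-theoretic viewpoint in goal (1), the sketch below builds a random geometric graph as a crude stand-in for a hydrogen-bond network (no real water geometry or bonding criterion is modelled) and measures a simple topological property:

```python
import numpy as np
from collections import deque

def toy_bond_network(n=300, box=10.0, cutoff=1.2, seed=6):
    """Toy stand-in for a hydrogen-bond network: n random points in a
    cubic box, 'bonded' whenever closer than a distance cutoff."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n, 3))
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (dist < cutoff) & ~np.eye(n, dtype=bool)

def largest_component(adj):
    """Size of the largest connected component (breadth-first search)."""
    n = len(adj)
    seen, best = np.zeros(n, dtype=bool), 0
    for start in range(n):
        if seen[start]:
            continue
        queue, size = deque([start]), 0
        seen[start] = True
        while queue:
            u = queue.popleft()
            size += 1
            for v in np.flatnonzero(adj[u]):
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        best = max(best, size)
    return best

adj = toy_bond_network()
print(adj.sum() // 2, largest_component(adj))
```

In the actual project the adjacency matrix would come from hydrogen-bond criteria applied to molecular dynamics snapshots, and the interesting quantities are how such topological observables evolve in time.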
(i) Knowledge of classical statistical mechanics; (ii) Previous experience in Molecular Dynamics; (iii) High motivation and ability to work independently.
- A Hassanali, F. Giberti, J. Cuny, T. D. Kühne, and M. Parrinello, Proc. Natl. Acad. Sci. 110 (34), 13723 (2013)
- A Rodriguez, A Laio, Science 344, 6191, 1492-1496 (2014)
- A. Attanasi, A. Cavagna, L. Del Castello, I. Giardina, T. S. Grigera, A. Jelíc, S. Melillo, L. Parisi, O. Pohl, E. Shen, M. Viale, Nature Physics 10, 691–696 (2014)
- C. F. A. Negre, U. N. Morzan, H. P. Hendrickson, R. Pal, G. P. Lisi, J. P. Loria, I. Rivalta, J. Ho, V. S. Batista, Proc. Natl. Acad. Sci. 115 (52), E12201-E12208 (2018)
- L. Lacasa, I. P. Mariño, J. Miguez, V. Nicosia, E. Roldán, A. Lisica, S. W. Grill, and J. Gómez-Gardeñes, Physical Review X 8(3), 031038 (2018)
Project 9. Time correlation functions for random dynamical systems with anomalous diffusion
Supervisors: Rainer Klages (Queen Mary University of London), Yuzuru Sato (Hokkaido University) and Stefano Ruffo (SISSA)
Keywords: dynamical systems theory, stochastic theory, anomalous diffusion, computer simulations, nonequilibrium statistical physics
A combination of different deterministic dynamics, sampled randomly in time, is called a random dynamical system; see the figure below for the principle. This project is motivated by very recent research in which it was shown that random dynamical systems can generate anomalous diffusion. While ordinary Brownian motion is characterized by a mean square displacement that grows linearly in the long-time limit, for anomalous diffusion the spreading of particles grows either slower than linearly in time (subdiffusion) or faster (superdiffusion) [2,3]. Diffusion is furthermore intimately linked, via Taylor-Green-Kubo formulas, to the temporal decay of velocity autocorrelation functions. Calculating these correlation functions, as well as the generalized diffusion coefficient, for this model is a very interesting open question.
As a warm-up the student should familiarize themselves with the model. This entails understanding analytical results for calculating the invariant density of the underlying map of the model, as well as studying the system numerically.
Goal of the project
The main goal of the project is to calculate the velocity autocorrelation function of this model, comparing analytical with numerical results. There are two theoretical approaches that can be explored for this purpose: a direct one, and another based on fractal generalized Takagi functions. Building on this, one could try to calculate the generalized diffusion coefficient of the model analytically and investigate its dependence on control parameters, which should be subtle.
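A sketch of the numerical side of such a study: estimating the mean-square-displacement exponent from an ensemble of trajectories. For illustration the "random dynamical system" here is just a random choice between two trivial maps, x → x + 1 and x → x − 1, which gives normal diffusion; the model of Sato and Klages is not implemented:

```python
import numpy as np

def msd_exponent(trajectories, times):
    """Fit <x(t)**2> ~ t**gamma by least squares in log-log space."""
    msd = np.mean(trajectories**2, axis=0)
    gamma, _ = np.polyfit(np.log(times), np.log(msd), 1)
    return gamma, msd

def random_map_ensemble(n_traj=2000, n_steps=1000, p=0.5, seed=7):
    """At each step apply one of two maps on the line, x -> x + 1 or
    x -> x - 1, chosen independently with probability p / (1 - p)."""
    rng = np.random.default_rng(seed)
    steps = np.where(rng.random((n_traj, n_steps)) < p, 1.0, -1.0)
    return np.cumsum(steps, axis=1)

traj = random_map_ensemble()
times = np.arange(1, traj.shape[1] + 1)
gamma, msd = msd_exponent(traj, times)
print(gamma)   # normal diffusion: gamma close to 1
```

Replacing the toy maps by the actual random map composition, and the MSD by the velocity autocorrelation function, gives the measurement pipeline the project needs for the numerical half of the comparison.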
- Y.Sato, R.Klages, Phys.Rev.Lett. 122, 174101 (2019)
- R. Klages, G.Radons, I.M.Sokolov (Eds.), Anomalous transport: Foundations and Applications (Wiley-VCH, Weinheim, 2008)
- J. Klafter, I. M. Sokolov, First steps in random walks (Oxford, 2011)
- S Pelikan, Trans.Am.Math.Soc. 281, 813 (1984)
- G. Knight, R. Klages, Nonlinearity 24, 227 (2011)
Project 10. Machine learning techniques for Intermittent systems
Supervisors: Davide Faranda (CNRS), Yuzuru Sato (Hokkaido University), Jean Barbier (The Abdus Salam ICTP)
The advent of high-performance computing has paved the way for advanced analyses of high-dimensional datasets. These successes have naturally raised the question of whether it is possible to learn the dynamical behavior of a system without simulating the underlying evolution equations. Several efforts have recently been made to apply machine learning to the prediction of geophysical data: to learn parameterizations of subgrid processes in climate models, and for nowcasting and forecasting of weather variables. A first great step in this direction was the use of Echo State Networks (ESNs) to forecast the behavior of chaotic systems, such as the Lorenz 1963 and the Kuramoto-Sivashinsky dynamics. It was shown that ESN predictions of both systems attain performances comparable to those obtained with the real equations. This success motivated several follow-up studies with a focus on meteorological and climate data, including a proof of concept of the applicability of ESNs to short-term forecasting and long-term reconstruction of stationary signals such as global sea-level pressure fields. During that study, the authors realized that a great limitation in the application of ESNs to geophysical datasets was the appearance of intermittent behavior. The authors showed that some marginal improvements in the performance of ESNs for intermittent systems could be obtained by separating the small-scale from the large-scale dynamics.
Goal of the project
The goal of the project is to improve ESNs for intermittent systems, such as the Pomeau-Manneville dynamics or the stochastic intermittency in random dynamical systems. The strategy will be to start from the algorithms proposed in the references below, change the activation function from the hyperbolic tangent to a less smooth function, and study whether this improves the forecasting of intermittent systems.
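A minimal ESN sketch in the spirit of the project: a fixed random reservoir with tanh activation (the function the project proposes to replace) and a ridge-regressed linear readout, trained for one-step prediction of a Pomeau-Manneville trajectory. All hyperparameters are illustrative and untuned:

```python
import numpy as np

def pomeau_manneville(n, z=2.0, x0=0.3):
    """Orbit of the Pomeau-Manneville map x -> x + x**z (mod 1),
    a standard example of intermittent dynamics."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = (x + x**z) % 1.0
    return xs

def esn_fit_predict(u, n_res=200, rho=0.9, ridge=1e-6, seed=8):
    """Minimal echo state network: fixed random input weights and
    reservoir, tanh activation, ridge-regressed linear readout trained
    for one-step-ahead prediction."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.normal(0.0, 1.0, (n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # set spectral radius
    states = np.zeros((len(u), n_res))
    x = np.zeros(n_res)
    for t, ut in enumerate(u):
        x = np.tanh(w @ x + w_in * ut)
        states[t] = x
    X, y = states[100:-1], u[101:]        # discard a transient of 100 steps
    w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return states[:-1] @ w_out            # predictions for u[1], ..., u[-1]

u = pomeau_manneville(2000)
pred = esn_fit_predict(u)
err = np.sqrt(np.mean((pred[100:] - u[101:])**2))
print(err)
```

Swapping `np.tanh` for a less smooth activation, and comparing the prediction error across the laminar and chaotic phases, is essentially the experiment the project proposes.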
The applicant should have an M2-level background in climate sciences, mathematics, statistics or physics, and basic knowledge of machine learning algorithms and of dynamical systems theory. Good programming skills (Python or Matlab) are highly recommended.
- P. Gentine et al. Geophysical Research Letters 45, 5742 (2018).
- S. Xingjian et al. Advances in neural information processing systems (2015) pp. 802–810
- S. Scher and G. Messori, QJRMS 144, 2830 (2018).
- J. Pathak et al., Physical review letters 120, 024102 (2018).
- D. Faranda et al., Physical review letters (submitted) (2019), https://hal.archives-ouvertes.fr/hal-02337839 .
- Y. Sato and R. Klages, Physical Review Letters, 122, 174101, (2019).
Project 11. Non-convex optimization and regularization
Supervisors: Fabio Caccioli (University College London) and Imre Kondor (London Mathematical Laboratory)
Convex optimization problems can be solved efficiently, even for very large sizes (large numbers of variables). Non-convex optimization problems arise when either the objective function or any of the constraints is non-convex. It can take time exponential in the number of variables and constraints to determine whether such a problem is feasible, whether the objective is unbounded, or to select the global optimum across all feasible regions. Non-convex optimization problems unavoidably arise in the context of machine learning and deep neural networks.
The purpose of this project is to study what may be one of the simplest examples of a non-trivial non-convex optimization problem: to “learn” the optimum of the variance from N×T data, where N is the number of variables and T the length of the observed time series, under a linear constraint on the variables. In the high-dimensional regime where N/T > 1, one needs to introduce a regularizer, which we choose to be a constraint on the sum of absolute values of the variables (an l1 regularizer).
As long as the coefficient of the l1 regularizer is positive, this is a convex optimization problem. A formally similar model, but with a negative regularizer, has appeared in various disciplines (in the problem of margin accounts, in the dynamics of replicators in evolution [2,3], in the perceptron and in a toy model of the jamming transition in glasses, etc.). The negative regularizer renders the problem non-convex. The planned project will study this non-convex optimization problem using synthetic time series, determine the distribution and nature of local minima, and find the global optimum and the domain of feasibility.
The tool to be used is Monte Carlo simulation, therefore the successful candidate(s) will have a good knowledge of coding, and familiarity with the elements of statistical physics. Prior experience with Monte Carlo simulations will be extremely useful.
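A crude sketch of the setup: the sample-variance objective with an l1 term of coefficient eta (non-convex for eta < 0), explored by a naive Monte Carlo local search on synthetic data; a proper study would use a more careful annealing scheme:

```python
import numpy as np

def objective(w, returns, eta):
    """Sample variance of the portfolio returns plus an l1 term with
    coefficient eta; for eta < 0 the problem is non-convex."""
    return (returns @ w).var() + eta * np.abs(w).sum()

def random_search(returns, eta, n_iter=20_000, step=0.05, seed=9):
    """Naive Monte Carlo local search under the budget constraint
    sum(w) = 1; a stand-in for a proper simulated annealing scheme."""
    rng = np.random.default_rng(seed)
    n = returns.shape[1]
    w = np.full(n, 1.0 / n)
    best = objective(w, returns, eta)
    for _ in range(n_iter):
        trial = w + step * rng.normal(size=n)
        trial /= trial.sum()              # rescale back onto sum(w) = 1
        val = objective(trial, returns, eta)
        if val < best:
            w, best = trial, val
    return w, best

rng = np.random.default_rng(0)
returns = rng.normal(size=(200, 10))      # T = 200 observations, N = 10
w_opt, val = random_search(returns, eta=-0.01)
print(val, w_opt.sum())
```

In the regime of interest (N/T > 1 and eta < 0) many local minima appear, and restarting the search from different initial conditions is one way to map out their distribution.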
- S. Galluccio, J.-P. Bouchaud and M. Potters: Rational decisions, random matrices and spin glasses, Physica A 259 (1998) 449-456
- S. Diederich and M. Opper: Replicators with random interactions: A solvable model, Phys. Rev. A, 39 (1989) 4333-4336
- P. Biscari and G. Parisi: Replica symmetry breaking in the random replicant model, J. Phys A: Math.Gen. 28 (1995) 4697
- S. Franz and G. Parisi: The simplest model of jamming, J. Phys. A: Math. Theor. 49 (2016) 145001
Project 12. Reconstructing Dynamical Systems with Machine Learning
Supervisors: Jean Barbier and Edgar Roldan (The Abdus Salam ICTP)
Prediction from time series generated by a dynamical system is a key problem in statistics. Recently the field has seen a revival, mainly due to the application of modern machine learning techniques that have strongly boosted prediction capabilities and robustness to noise or lack of data. These have proven highly effective in, e.g., predicting the future behavior of chaotic systems, a task long thought unreachable. Unfortunately, such approaches often result in black-box models that are hard to interpret. Another recent line of work, part of the field of theory-guided data science, instead tries to reconstruct the true model using sparse regression techniques such as compressive sensing. This makes it possible to recover the fewest meaningful physical parameters underlying the dynamical system of interest. Even more recent work exploits the advantages of both approaches, combining the prediction power of deep neural networks with the reconstruction of interpretable, physically meaningful models.
In this mostly numerical project, we will develop and benchmark machine-learning techniques to reconstruct models from stochastic time series with hidden variables. We will adapt cutting-edge algorithms to take into account different sources of noise including partial observability and inherent stochasticity. We expect this project to establish the performance of existing inference techniques and to improve them for specific applications relevant to physics and biology.
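As a flavour of the sparse-regression approach of Brunton et al. (PNAS 2016), the following Python sketch recovers a toy one-dimensional model dx/dt = x − x³ (a hypothetical example, not one of the project's target systems) from simulated trajectories, by sequentially thresholded least squares over a library of candidate terms:

```python
import numpy as np

# Toy system (hypothetical example): dx/dt = x - x**3, a pitchfork normal form.
def simulate(x0, dt=1e-4, n=50000):
    """Euler integration of dx/dt = x - x**3 from initial condition x0."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = x[i] + dt * (x[i] - x[i] ** 3)
    return x

dt = 1e-4
trajs = [simulate(x0, dt=dt) for x0 in (-2.0, -0.3, 0.3, 2.0)]
x = np.concatenate(trajs)
dxdt = np.concatenate([np.gradient(tr, dt) for tr in trajs])

# Library of candidate terms: 1, x, x^2, x^3, x^4
lib = np.column_stack([x ** k for k in range(5)])

# Sequentially thresholded least squares: fit, zero out small coefficients,
# refit on the surviving terms (the sparsifying step of SINDy).
xi = np.linalg.lstsq(lib, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    xi[~small] = np.linalg.lstsq(lib[:, ~small], dxdt, rcond=None)[0]

print(np.round(xi, 3))  # approximately [0, 1, 0, -1, 0]
```

The project will extend this clean setting to the harder one of hidden variables and genuine stochasticity, where numerical derivatives and naive libraries are no longer enough.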
- Programming in Python language
- Basics on machine learning
- Background in physics (statistical physics, stochastic processes) is recommended
- S. Brunton, J. L. Proctor, and J. N. Kutz, PNAS 113, 3932 (2016).
- G-J. Both, et al. arXiv:1904.09406 (2019).
- S. Ouala, et al. arXiv:1907.02452 (2019).
- S. Rangan, P. Schniter, and A. K. Fletcher, IEEE Trans. Inf. Theory 65, 6664 (2019).
Project 13. Attractor reconstruction for dynamical systems from infrequent high-dimensional observations in the presence of noise
Supervisor: Prof Jeroen S.W. Lamb (Imperial College London)
Attractor reconstruction from time series was made famous by "Takens embedding", which builds on the observation that generic projections of finite-dimensional objects, such as attractors, into any sufficiently high-dimensional ambient space are one-to-one. The precision of attractors obtained in this way typically relies on the availability of sufficiently long time series. In many application areas, however, such as medicine, long time series are very expensive to obtain, while infrequent, very high-dimensional observations are feasible. Prof Luonan Chen and collaborators have managed to use the information contained in such infrequent high-dimensional observations for attractor reconstruction, with some remarkable success. It remains an open problem how such methods can be used in the presence of noise.
The project will begin with a review of the methods proposed by Chen et al., before proceeding, using some toy models, to consider how a random dynamical system can be reconstructed in the presence of noise. While the project will focus on toy models, subsequent applications to real data will be pursued with Prof Chen in due course.
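For orientation, the classical delay-coordinate (Takens) embedding that the high-dimensional snapshot methods are set against can be sketched in a few lines of Python; the signal and parameters below are illustrative only:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Delay-coordinate (Takens) embedding of a scalar time series.

    Returns an array of shape (n_points, dim) whose rows are
    [s(t), s(t + tau), ..., s(t + (dim - 1) * tau)].
    """
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

# Illustrative scalar observable: a sinusoid with a little observational noise
t = np.linspace(0, 50, 5000)
s = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
emb = delay_embed(s, dim=3, tau=100)
print(emb.shape)  # (4800, 3)
```

Note that this construction needs a long scalar record; the point of the Chen et al. approach is precisely to trade record length for many simultaneously observed variables.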
- H Ma, K Aihara, L Chen, Detecting causality from nonlinear dynamics with short-term time series, Scientific reports 4 (2014), 7464
- H Ma, T Zhou, K Aihara, L Chen, Predicting time series from short-term high-dimensional data, International Journal of Bifurcation and Chaos 24 (2014), 1430033
Project 14. Topological Quantum Matter
Supervisors: Joe Bhaseen (King’s College London) and Isaac Pérez Castillo (Institute of Physics, UNAM)
Topological states of matter exhibit many striking phenomena due to the inherent topological properties of their ground-state wavefunctions. Experimental signatures include the robust quantization of electrical transport, with direct links to topological invariants. The recent discovery of topological insulators extends the reach of topology to a wider class of materials and dimensionalities, giving rise to exotic phases such as topological superconductors. Topological materials could also have potential applications in quantum computation, due to the robust topological character of their ground states.
Goal of the project
The aim of the project is to study models of topological quantum matter in one- and two-dimensions, and to explore the role of topological invariants in determining their properties. Depending on the student’s interest this can involve a combination of analytical and numerical techniques.
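As a concrete example of a topological invariant in one dimension, the following Python sketch computes the winding number of the SSH chain (covered in the Asbóth et al. short course listed below); the parameter values are illustrative:

```python
import numpy as np

# Winding number of the SSH chain, the simplest 1D topological invariant.
# H(k) = d_x(k) sigma_x + d_y(k) sigma_y with d_x = v + w cos k,
# d_y = w sin k; the winding of (d_x, d_y) around the origin is 1 in the
# topological phase (w > v) and 0 in the trivial phase (w < v).

def winding_number(v, w, nk=2001):
    k = np.linspace(-np.pi, np.pi, nk)
    dx = v + w * np.cos(k)
    dy = w * np.sin(k)
    theta = np.unwrap(np.arctan2(dy, dx))   # continuous winding angle
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

print(winding_number(v=0.5, w=1.0))  # 1: topological phase
print(winding_number(v=1.0, w=0.5))  # 0: trivial phase
```

The invariant jumps only when the gap closes at v = w, which is the kind of topological robustness the project will explore in richer one- and two-dimensional models.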
Knowledge of quantum mechanics and second quantization, as well as programming skills, would be helpful.
- R. E. Prange and S. M. Girvin, The Quantum Hall Effect, Springer (1994).
- M. Z. Hasan and C. L. Kane, Colloquium: Topological Insulators, Rev. Mod. Phys. 82, 3045 (2010).
- János K. Asbóth, László Oroszlány and András Pályi, A Short Course on Topological Insulators, Springer (2016)
- Shun-Qing Shen, Topological Insulators: Dirac Equation in Condensed Matter, Springer (2017)
Project 15. Resolution-regularization in statistical inference and machine learning
Supervisors: Matteo Marsili (The Abdus Salam ITCP) and Isaac Pérez Castillo (Institute of Physics, UNAM)
Standard regularization schemes resort to adding ad-hoc penalty terms (e.g. ridge regression or LASSO) to the log-likelihood, which suppress large parameter fluctuations in the fitted models. While the log-likelihood has dimensions of bits, because it can be represented as the sum of the entropy of the empirical distribution and the Kullback-Leibler divergence of the latter from the model, these penalty terms have, in general, no information-theoretic origin. Their effect, however, is to shift models away from the maximum-likelihood point, increasing the entropy of the regularized model.
From first principles, it seems more natural to adopt a regularization that constrains the entropy of the model, which is a measure of resolution. We call this Resolution-regularization.
The project, hence, aims at using the entropy of the regularized model directly as a regularizer, i.e. to study the maximization over the parameters θ of a functional of the form

F_β(θ) = Σ_i log p(s_i | θ) + β H[s],

where β is a Lagrange multiplier and

H[s] = − Σ_s p(s | θ) log p(s | θ)

is the entropy, or resolution, of the model.
Goal of the Project
The first task of the project is to check whether, and how, standard regularizers increase the entropy H[s] of the regularized model. The second is to assess whether resolution-regularized models generalize better than models regularized in other ways.
These ideas will be tested on applications to inference using Boltzmann machines (Ising models) and restricted Boltzmann machines, on benchmark datasets.
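As a toy illustration of the first task, the following Python sketch computes the exact entropy H[s] of a small, fully visible Boltzmann machine (Ising model) with hypothetical random couplings, and shows that shrinking the parameters toward zero, as an L2 penalty does, raises the entropy toward its maximum of n bits:

```python
import itertools
import numpy as np

# Entropy ("resolution") of a small, exactly enumerable Ising model.
# Scaling the couplings toward zero pushes the model toward the uniform
# distribution over the 2^n states and therefore raises H[s].

def ising_entropy(J, h):
    """Entropy in bits of p(s) ∝ exp(s·J·s/2 + h·s) over s in {-1,+1}^n."""
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    energies = 0.5 * np.einsum('si,ij,sj->s', states, J, states) + states @ h
    logp = energies - np.log(np.sum(np.exp(energies)))
    p = np.exp(logp)
    return -np.sum(p * logp) / np.log(2)

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 3))
J = (J + J.T) / 2                      # symmetric couplings
np.fill_diagonal(J, 0.0)
h = rng.standard_normal(3)

for lam in (1.0, 0.5, 0.1, 0.0):       # lam = 0.0: maximally shrunk model
    print(lam, ising_entropy(lam * J, lam * h))
```

The monotone growth of H[s] as the parameters shrink is exactly the effect the project will quantify for standard regularizers before turning entropy itself into the penalty.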
- Robert Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1): 267-288 (1996).
- David J.C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge Univ. Press, 2005.