last update: 18.08.2021
We are happy to announce the following plenary talks:
(Some plenary talks have been recorded. The recordings are available to registered participants.)
Zdravko Botev, University of New South Wales Sydney, Australia.
In the age of multi-layer neural networks, we ask the question: Do we really understand the simple linear ridge regression model?
In this back-to-basics talk, I argue that, more than 50 years after its widespread adoption, the simplest Tikhonov regularization method is still a treasure trove of insights waiting to be explored.
Along the way, we explain why current popular methods for selecting the regularization parameters do not perform well and how to resolve this issue. In addition, we explain how collinearity amongst the features affects both prediction and inference in precise quantifiable ways.
It also turns out that a more sophisticated version of the ridge regression model can be used for model selection, in some cases performing better than much of the current 21st-century technology. In view of these findings, the talk calls for a reassessment of some model selection methods currently in use.
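As a point of reference for this discussion, the basic objects of the talk can be sketched in a few lines of NumPy — a hypothetical illustration of the closed-form ridge estimator together with leave-one-out cross-validation, one popular family of selection methods the talk scrutinizes. The data, parameter grid and function names below are invented for illustration only:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge (Tikhonov) estimator: beta = (X'X + lam*I)^{-1} X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def loocv_score(X, y, lam):
    # Leave-one-out CV via the hat matrix H = X (X'X + lam*I)^{-1} X'
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

# Toy collinear-free example data
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = 1.0
y = X @ beta + 0.5 * rng.standard_normal(n)

# Select the regularization parameter over a grid
lams = np.logspace(-3, 2, 30)
best = min(lams, key=lambda lam: loocv_score(X, y, lam))
coef = ridge_fit(X, y, best)
```

The talk argues that such standard grid-plus-cross-validation recipes can perform poorly in precisely quantifiable ways, which is what the proposed alternatives address.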
Lester Mackey, Microsoft Research New England and Stanford University, USA.
Stein’s method is a powerful tool from probability theory for bounding the distance between probability distributions. In this talk, I’ll describe how this tool, designed to prove central limit theorems, can be adapted to assess and improve the quality of practical inference procedures. Along the way, I’ll highlight applications to Markov chain Monte Carlo sampler selection, goodness-of-fit testing, variational inference, de novo sampling, post-selection inference, and nonconvex optimization, and close with several opportunities for future work.
Sandeep Juneja, Tata Institute of Fundamental Research, Mumbai, India.
Agent-based simulators are a popular epidemiological modelling tool for studying the impact of various non-pharmaceutical interventions in managing an evolving pandemic. They provide the flexibility to accurately model a heterogeneous population with time- and location-varying, person-specific interactions. To model detailed behaviour accurately, each person is typically modelled separately. This, however, may make the computation time prohibitive when the regional population and the time horizons involved are large.
We observe that simply considering a smaller aggregate model and scaling up the output leads to inaccuracies. In this talk we focus primarily on the COVID-19 pandemic and dig deeper into the underlying probabilistic structure of an associated agent-based simulator (ABS) to arrive at modifications that allow smaller models to give accurate statistics for larger ones. We exploit the observations that in the initial disease spread phase the starting infections behave like a branching process, and that later, once enough people have been infected, the infected population closely follows its mean-field approximation. We build upon these insights to develop a shifted, scaled and reset version of the simulator that accurately evaluates the ABS's performance using a much smaller model, while essentially eliminating the bias that otherwise arises from smaller models.
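The branching-process observation about the early spread phase can be illustrated with a toy Galton-Watson sketch — a hypothetical stand-in for the full agent-based simulator, with an invented Poisson offspring law and invented parameter values:

```python
import numpy as np

def branching_epidemic(R0, generations, rng):
    # Toy Galton-Watson model of the early spread phase: each infected
    # person independently infects Poisson(R0) others in the next
    # generation, so the expected generation size grows like R0**g.
    counts = [1]  # one initial infection
    for _ in range(generations):
        counts.append(int(rng.poisson(R0, size=counts[-1]).sum()))
    return counts
```

In the full simulator this approximation only holds while infections are rare; once a sizeable fraction of the population is infected, the dynamics are instead tracked through their mean-field limit, as described above.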
Larisa Yaroslavtseva, Universität Passau, Germany.
The classical assumption in the literature on numerical approximation of stochastic differential equations (SDEs) is global Lipschitz continuity of the coefficients of the equation. However, many SDEs arising in applications fail to have globally Lipschitz continuous coefficients.
In the last decade, an intensive study of numerical approximation of SDEs with non-globally Lipschitz continuous coefficients has begun. In particular, strong approximation of SDEs with a drift coefficient that is discontinuous in space has recently gained a lot of interest. Such SDEs arise, e.g., in mathematical finance, insurance, neuroscience and stochastic control problems. Classical techniques of error analysis are not applicable to such SDEs, and well-known convergence results for standard methods do not carry over in general. In this talk I will give an overview of recent results in this area.
Mireille Bossy, Inria Sophia Antipolis – Méditerranée, France.
Particle-laden flows occur in many industrial and environmental systems, such as pollutant dispersion, agglomeration, or deposition phenomena. When dealing with turbulent flow, at play in numerous situations, complex SDE descriptions are coupled with numerical models for the flow, leading to systems of SDEs representing, for instance, near-wall particle dynamics, or colliding systems of Langevin or Brownian type. This talk will present some typical SDE models of this field and the associated numerical difficulties. These difficulties, or numerical constraints, increase as the represented physics is refined, which requires adapting the time integration schemes and their convergence analysis.
Starting with the simple situation of a stochastic particle in turbulence (a Langevin-type SDE) bouncing on a wall, we show how to obtain the optimal rate of weak convergence of a symmetrised Euler scheme for the specular boundary problem. We then introduce and discuss more complex situations, such as SDE models for the transport of non-spherical particles in a turbulent flow, which require splitting-scheme strategies and a more trajectorial view of the convergence in order to preserve the transmission of correlation induced by the complex Brownian diffusion between steps, as well as other numerical issues related to collisions.
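For the first, simple situation, the reflection step of a symmetrised Euler scheme can be sketched as follows — a minimal hypothetical illustration for a one-dimensional Langevin-type SDE with a specular wall at x = 0; the drift and diffusion used in the usage note are invented placeholders:

```python
import numpy as np

def symmetrised_euler(b, sigma, x0, T, n_steps, rng):
    # Symmetrised Euler scheme for a one-dimensional SDE with specular
    # reflection at the wall x = 0: take a plain Euler-Maruyama step,
    # then reflect any excursion below the boundary back into the domain.
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x += b(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        x = abs(x)  # specular reflection at x = 0
    return x
```

For example, with a toy mean-reverting drift b(x) = -x and constant diffusion sigma(x) = 1, every simulated path remains in the half-line by construction; the weak-convergence analysis of such symmetrised schemes is the subject of the talk.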
Elena Akhmatskaya, Basque Center for Applied Mathematics Bilbao, Spain.
With the recently increased interest in probabilistic models, such as Bayesian epidemic models or probabilistic deep learning, the efficiency of an underlying sampler becomes a crucial consideration. A Hamiltonian Monte Carlo (HMC) sampler is one popular choice for models of this kind. We revisit the standard practices of the HMC, and propose several promising alternatives. The topics of our discussion include a formulation of the HMC, numerical integration methods for Hamiltonian dynamics, and the choice of simulation parameters and settings.
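As a baseline for that discussion, the standard HMC practice mentioned above — a leapfrog integrator for the Hamiltonian dynamics followed by a Metropolis accept/reject step — can be sketched as follows. This is a minimal illustration for a generic differentiable log-density; the function names and parameter values are invented, not those proposed in the talk:

```python
import numpy as np

def hmc_step(q, log_prob, log_prob_grad, eps=0.1, n_leapfrog=20, rng=None):
    # One standard HMC transition: resample a Gaussian momentum, integrate
    # Hamiltonian dynamics with the leapfrog scheme, then accept/reject
    # based on the change in total energy.
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * log_prob_grad(q_new)      # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * log_prob_grad(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * log_prob_grad(q_new)      # final half step
    h_old = -log_prob(q) + 0.5 * p @ p             # total energy before
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new # total energy after
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return q_new
    return q
```

Each of the default choices here — the Gaussian momentum, the leapfrog integrator, and the fixed step size and trajectory length — corresponds to one of the standard practices that the talk revisits and proposes alternatives to.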
Heping Wang, Capital Normal University, Beijing, China.
In this talk we discuss two multivariate approximation problems.
First, we study approximation of multivariate functions from a separable Hilbert space in the randomized setting, with the error measured in the weighted L2 norm. We consider the power of standard information Λstd for tractability under the normalized or absolute error criterion and show that it is the same as that of general linear information Λall for all notions of tractability. Specifically, we solve Open Problems 98, 101 and 102, and almost solve Open Problem 100, as posed by Novak and Woźniakowski in the book Tractability of Multivariate Problems, Volume III: Standard Information for Operators, EMS Tracts in Mathematics, Zürich, 2012.
Second, we investigate uniform approximation of functions in tensor product reproducing kernel Hilbert spaces using Λall in the randomized setting and show that the problem is polynomially tractable under the normalized error criterion. This is in contrast to the deterministic setting, in which uniform approximation problems on tensor product spaces suffer from the curse of dimensionality.
Rob Scheichl, Ruprecht-Karls-Universität Heidelberg, Germany.
Eigenvalue problems involving partial differential operators appear naturally when modelling physical phenomena, such as the buckling of mechanical structures, the criticality of nuclear reactors or the optical properties of photonic crystals. The model parameters are often only partially known or uncertain. To quantify these uncertainties, stochastic approaches are common. The stochasticity in the coefficients causes the eigenvalues and eigenfunctions to also be stochastic, and so our goal will be to compute statistics, such as expectation or variance, of these eigenvalues and eigenfunctions. Spatially distributed uncertainty, e.g. in the PDE coefficients or in the geometry of the domain, leads to problems with infinitely many stochastic parameters.
In this talk, we will present a multilevel quasi-Monte Carlo method for approximating the expectation of the minimal eigenvalue of a second-order elliptic eigenvalue problem and provide a rigorous error analysis guaranteeing dimension-independent convergence at a rate of (almost) 1/n.
This is joint work with Alexander Gilbert.
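The multilevel idea underlying the method can be sketched with plain Monte Carlo samples standing in for the quasi-Monte Carlo points of the talk. In this toy illustration, an invented scalar Q(level, omega) with bias decaying like 2^{-level} plays the role of the level-l finite element approximation of the minimal eigenvalue:

```python
import numpy as np

def Q(level, omega):
    # Invented level-l approximation of a quantity of interest (a toy
    # stand-in for the minimal eigenvalue); its bias decays like 2^{-level}.
    return np.sin(omega) + 2.0 ** (-level) * np.cos(omega)

def multilevel_estimate(L, n_per_level, rng):
    # Telescoping sum E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}].
    # Each correction term is estimated from coupled samples (the same
    # omega on both levels), so its variance shrinks as l grows; the talk
    # replaces the random omegas by quasi-Monte Carlo points.
    est = 0.0
    for l in range(L + 1):
        omega = rng.standard_normal(n_per_level)
        if l == 0:
            est += np.mean(Q(0, omega))
        else:
            est += np.mean(Q(l, omega) - Q(l - 1, omega))
    return est
```

The savings come from taking many cheap samples on the coarse levels and only a few expensive ones on the fine levels, which is what makes the approach feasible for PDE eigenvalue problems with infinitely many stochastic parameters.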
Daniel Rudolf, Georg-August-Universität Göttingen, Germany.
For approximate sampling of a partially known distribution, the slice sampling methodology provides machinery for the design and simulation of a Markov chain with desirable properties. In the machine learning community it is a frequently used approach, appearing there not only as a standard sampling tool. In particular, the elliptical slice sampler has attracted considerable attention over the last decade as a tuning-free and dimension-robust algorithm. From a theoretical point of view, however, it is not well understood. In general, the theoretical results that testify to the qualitatively robust and "good" convergence properties of classical slice sampling methods are mostly not applicable because of idealized implementation assumptions. Motivated by this, the aim of the talk is
1. to provide an introduction into the slice sampling methodology;
2. to discuss different interpretations;
3. to talk about convergence results; as well as
4. to point to open questions.
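As a concrete starting point for item 1, the basic univariate slice sampling step (stepping-out and shrinkage, in the spirit of Neal's formulation) can be sketched as follows — a minimal hypothetical implementation, not the elliptical variant discussed in the talk, and with invented parameter values:

```python
import numpy as np

def slice_sample(log_f, x0, n, w=1.0, rng=None):
    # Univariate slice sampler: alternately draw a vertical level under
    # the (unnormalized) density f, then draw the next point uniformly
    # from the horizontal "slice" {x : f(x) > level}.
    if rng is None:
        rng = np.random.default_rng(0)
    xs, x = [], x0
    for _ in range(n):
        log_y = log_f(x) + np.log(rng.random())   # vertical level
        # stepping out: expand an interval of width w until it brackets the slice
        left = x - w * rng.random()
        right = left + w
        while log_f(left) > log_y:
            left -= w
        while log_f(right) > log_y:
            right += w
        # shrinkage: propose uniformly, shrinking the interval on rejection
        while True:
            xp = rng.uniform(left, right)
            if log_f(xp) > log_y:
                x = xp
                break
            if xp < x:
                left = xp
            else:
                right = xp
        xs.append(x)
    return np.array(xs)
```

Note that this idealized implementation assumes the slice can be bracketed and sampled exactly up to the stepping-out resolution; it is precisely such idealized assumptions that, as mentioned above, limit the applicability of the classical convergence results.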