last update: 11.06.2021
Each special session consists of several talks on a closely related theme.
Stein's method is an abstract tool for bounding distances between distributions. Although the method originated in probability theory as a technique for proving central limit theorems, it has recently caught the attention of the computational statistics community.
Post-processing of Monte Carlo Output
Organized by: Chris J. Oates (Newcastle University)
One of the main reasons for this interest is the concept of a Stein discrepancy, which provides a computable discrepancy with convergence control in the important setting of an un-normalised probability model. Stein discrepancies have underpinned several major recent advances in the context of Monte Carlo methods, with applications including de novo sampling, variational inference, goodness-of-fit testing and measuring the performance of algorithms for Markov chain Monte Carlo (MCMC).
The aim of this session is to showcase the potential for Stein discrepancies to be used in the post-processing of Monte Carlo output.
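To make the post-processing idea concrete, here is a minimal illustrative sketch (not drawn from the session itself) of a kernel Stein discrepancy in one dimension. It assumes the Langevin Stein operator, an RBF kernel with a fixed lengthscale, and a standard normal target; only the score function of the (possibly un-normalised) target is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def ksd(x, score, ell=1.0):
    """Squared kernel Stein discrepancy (V-statistic) between the empirical
    measure of x and a target with score function d/dx log p, using an
    RBF kernel with lengthscale ell and the Langevin Stein operator."""
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * ell**2))
    dkx = -d / ell**2 * k                      # d/dx k(x, y)
    dky = d / ell**2 * k                       # d/dy k(x, y)
    dkxy = (1 / ell**2 - d**2 / ell**4) * k    # d^2/(dx dy) k(x, y)
    s = score(x)
    kp = dkxy + s[:, None] * dky + s[None, :] * dkx + s[:, None] * s[None, :] * k
    return kp.mean()

score = lambda x: -x               # score of the standard normal target
good = rng.standard_normal(500)    # samples from the target
bad = good + 1.0                   # samples from a shifted distribution
# ksd(good, score) is close to zero; ksd(bad, score) is clearly positive
```

Ranking sample sets by such a discrepancy is one way Monte Carlo output can be assessed or thinned after the fact; the kernel and lengthscale here are illustrative choices.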
Alternatives to Monte Carlo
Organized by: Francois-Xavier Briol (University College London)
A notable advantage of Stein's method is its ability to create rich classes of functions which integrate to zero under some distribution of interest. This can be a particularly convenient computational trick for enhancing, or even replacing, Monte Carlo methods. This session will highlight several recent applications, including the development of novel sampling schemes, variance reduction tools, and novel inference schemes in which inference can be made conjugate.
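As a small illustration of the zero-mean trick (an assumption-laden toy, not a method from the session): applying the Langevin Stein operator to any smooth f gives h(x) = f'(x) + f(x) d/dx log p(x), which integrates to zero under p and can serve as a control variate:

```python
import numpy as np

rng = np.random.default_rng(0)

def stein_cv_estimate(g, f, df, score, x):
    """Estimate E[g(X)] using h = f' + f * score (the Langevin Stein
    operator applied to f) as a control variate; E[h] = 0 under the target."""
    h = df(x) + f(x) * score(x)
    gx = g(x)
    c = np.cov(gx, h)[0, 1] / np.var(h)   # estimated optimal coefficient
    return np.mean(gx - c * h)

x = rng.standard_normal(100_000)          # samples from N(0, 1)
score = lambda x: -x                      # d/dx log p(x) for the standard normal
naive = np.mean(x**2)
cv = stein_cv_estimate(lambda x: x**2, lambda x: x,
                       lambda x: np.ones_like(x), score, x)
# for this pair (g = x^2, f = x) the control variate removes essentially all variance
```

Note that the score function is available even when p is only known up to a normalising constant, which is what makes this construction so convenient.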
Session A
Organized by: Kengo Kamatani (ISM, Japan)
Most of the Markov chains used in Monte Carlo integration satisfy the so-called reversibility (detailed-balance) condition. Although reversible Markov chains form a nice class, the condition is not necessary for Monte Carlo integration, and breaking reversibility sometimes improves performance dramatically. However, without the condition, constructing a valid Markov chain is not an easy task. Recently, some non-reversible piecewise deterministic Markov processes have been popularized as efficient Monte Carlo integration tools, and many related non-reversible Markov chains have been proposed, inspired by these developments. Non-reversible chains and processes now form an active research area, and this session presents some of these recent advances.
Session B
Organized by: Gareth Roberts (University of Warwick)
Non-reversible MCMC is now an active area, and there have been tremendous advances in recent years, particularly in the theory underpinning the use of piecewise deterministic Markov processes (PDMPs) for Monte Carlo. On the other hand, other directions are currently less well developed, including numerical methods for the automated implementation of PDMPs, understanding how robust the efficiency of these methods is across different classes of target distributions, and the development of completely new classes of non-reversible algorithms not related to PDMPs. The session will therefore explore some of these new directions.
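As a minimal sketch of the PDMP samplers discussed above (illustrative only, not taken from the session materials): the one-dimensional Zig-Zag process targeting a standard normal, for which the event rate max(0, theta * x) can be integrated and inverted in closed form, so event times are simulated exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def zigzag_gaussian(n_events=100_000):
    """Zig-Zag sampler for the standard normal target N(0, 1). The position
    moves with velocity theta in {-1, +1}; the velocity flips at events of
    rate lambda(t) = max(0, theta * x(t)), whose integrated rate is invertible
    here, giving exact event times. Time averages along the piecewise linear
    trajectory estimate moments under the target."""
    x, theta = 0.0, 1.0
    total_t, m1, m2 = 0.0, 0.0, 0.0
    for _ in range(n_events):
        a = theta * x
        e = rng.exponential()
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)   # next event time
        # exact time integrals of x(s) and x(s)^2 over the linear segment
        m1 += x * tau + theta * tau**2 / 2.0
        m2 += x**2 * tau + x * theta * tau**2 + tau**3 / 3.0
        total_t += tau
        x += theta * tau
        theta = -theta                                    # flip the velocity
    return m1 / total_t, m2 / total_t

mean, second_moment = zigzag_gaussian()
# ergodic averages approach E[X] = 0 and E[X^2] = 1
```

For general targets the integrated rate is not invertible and thinning is used instead; automating that step is precisely one of the open implementation questions mentioned above.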
Organized by: Toni Karvonen and Jonathan Cockayne (The Alan Turing Institute)
Probabilistic numerical methods (PNMs) are a class of numerical methods that use ideas from probability and statistics in their construction. Since their inception these methods have been applied to numerical tasks such as integration, where they are used to quantify uncertainty in the value of the integral or to inform other aspects of algorithm design. This session will present recent practical and theoretical advances from the literature on PNMs, including applications to Monte Carlo integration, algorithmic design, and theoretical connections to optimal approximation in reproducing kernel Hilbert spaces.
Organized by: Fred Hickernell (Illinois Institute of Technology)
This session describes research to bring the theoretical benefits of quasi-Monte Carlo methods to bear in various applications by providing accessible, state-of-the-art low-discrepancy generators, use cases, and stopping rules.
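A simple example of a low-discrepancy generator (a toy sketch, not one of the generators contributed by the session): a randomly shifted rank-1 lattice rule in two dimensions, with a Fibonacci generating vector as an illustrative choice, compared against plain Monte Carlo on a smooth integrand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-1 lattice: points { (i * z / n + shift) mod 1 }, i = 0..n-1.
# n = F_18 and z = (1, F_17) is the classical Fibonacci choice in d = 2.
n, z = 2584, np.array([1, 1597])
i = np.arange(n)[:, None]
shift = rng.random(2)                      # random shift makes the rule unbiased
u_qmc = (i * z / n + shift) % 1.0          # shifted lattice points in [0,1)^2
u_mc = rng.random((n, 2))                  # plain Monte Carlo points

f = lambda u: u[:, 0] * u[:, 1]            # exact integral over [0,1]^2 is 1/4
qmc_est = f(u_qmc).mean()
mc_est = f(u_mc).mean()
# the lattice estimate is typically far closer to 1/4 than the MC estimate
```

For smooth integrands the lattice rule converges at close to O(1/n) rather than the O(1/sqrt(n)) Monte Carlo rate, which is the "theoretical benefit" the session aims to make accessible.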
Variance reduction techniques play a crucial role in reducing the computational cost of Monte Carlo (MC) methods in many applications.
Rare Events
Organized by: Nadhir Ben Rached (RWTH Aachen) and Raul Tempone (RWTH Aachen, KAUST)
Rare events are events with small probabilities whose occurrence is nonetheless critical in many real-life applications. The problem of estimating rare-event probabilities is encountered in various engineering applications (finance, wireless communications, system reliability, biology, etc.), where naive Monte Carlo simulation is prohibitively expensive. This session focuses on methods belonging to the class of variance reduction techniques. When used appropriately, these alternative methods deliver accurate estimates with substantially reduced variance compared to the naive Monte Carlo estimator.
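A textbook instance of the problem and of one such variance reduction technique (importance sampling by exponential tilting; a generic sketch, not a method presented in the session): estimating P(X > 4) for a standard normal, where the true probability is about 3.167e-5 and naive Monte Carlo rarely hits the rare set:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a = 100_000, 4.0

# Naive Monte Carlo: only a handful of the n samples land in {X > a}.
x = rng.standard_normal(n)
naive = np.mean(x > a)

# Importance sampling: sample from N(a, 1), centred on the rare set, and
# reweight by the likelihood ratio phi(y) / phi(y - a) = exp(-a*y + a^2/2).
y = rng.normal(a, 1.0, n)
w = np.exp(-a * y + a**2 / 2)
is_est = np.mean(w * (y > a))
# is_est estimates P(X > 4) = Phi(-4) ~ 3.167e-5 with small relative error
```

The naive estimator's relative error blows up as the event gets rarer, while the tilted estimator keeps it bounded; choosing a good tilting (or other change of measure) is exactly the design question such sessions address.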
Pure Jumps and Stochastic Reaction Networks
Organized by: Chiheb Ben Hammouda (RWTH Aachen), Nadhir Ben Rached (RWTH Aachen) and Raul Tempone (RWTH Aachen, KAUST)
In this mini-symposium, we are interested in advanced topics related to variance reduction methods for estimating statistical quantities of pure jump processes and stochastic reaction networks, with a particular focus on stochastic supply chains and stochastic biological and chemical systems. We present recent improvements to Monte Carlo (MC), multilevel Monte Carlo (MLMC), and quasi-Monte Carlo (QMC) methods.
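The standard baseline that these improved methods build upon is Gillespie's stochastic simulation algorithm (SSA) for exact paths of a reaction network. A minimal sketch for a toy birth-death network (the network and rates are illustrative assumptions, not an example from the session):

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_birth_death(lam=10.0, mu=1.0, t_end=20.0, x0=0):
    """Gillespie's SSA for the birth-death network
    0 -> X with rate lam, and X -> 0 with rate mu * x.
    Returns the copy number X(t_end); the stationary law is Poisson(lam/mu)."""
    t, x = 0.0, x0
    while True:
        a = lam + mu * x                    # total propensity
        tau = rng.exponential(1.0 / a)      # time to the next reaction
        if t + tau > t_end:
            return x
        t += tau
        x += 1 if rng.random() < lam / a else -1

samples = [ssa_birth_death() for _ in range(1000)]
# the sample mean should be close to lam / mu = 10
```

Because the SSA simulates every reaction event, its cost explodes for stiff or high-copy-number systems; MLMC and QMC variants of exactly this kind of simulation are what the mini-symposium is about.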
Computational Finance
Organized by: Chiheb Ben Hammouda (RWTH Aachen) and Raul Tempone (RWTH Aachen, KAUST)
In this mini-symposium, we focus on advanced topics related to variance reduction methods for option pricing and estimating statistical quantities for computational finance. We are interested in recent improvements to Monte Carlo (MC), multilevel Monte Carlo (MLMC), and quasi-Monte Carlo (QMC) methods.
Organized by: Sebastian Krumscheid (RWTH Aachen) and Abdul-Lateef Haji-Ali (Heriot-Watt University)
Extensions of the Monte Carlo method, such as quasi-Monte Carlo and multilevel Monte Carlo, aim to reduce the computational complexity of the classical Monte Carlo method for a wide range of applications involving complex computational models. However, these methods are known to provide less significant performance improvements when the quantity of interest depends non-smoothly on the stochastic parameters. Examples of such problems arise in financial engineering, transport and flow problems, and the computation of risk measures and distribution functions, to name but a few. In this session we will discuss recent developments and extensions of the Monte Carlo method, and their computational complexity, tailored to the presence of a discontinuous dependence on the stochastic parameter.
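For reference, the smooth case where multilevel Monte Carlo works well can be sketched in a few lines (a toy geometric Brownian motion with a smooth payoff; the parameters and fixed per-level sample sizes are illustrative assumptions, not the adaptive choices a production MLMC would make):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, r, sig, T = 1.0, 0.05, 0.2, 1.0        # toy geometric Brownian motion

def level_estimator(level, n_paths, m0=2):
    """MC estimate of E[P_0] (level 0) or E[P_l - P_{l-1}] (level > 0), where
    P_l is the Euler-Maruyama approximation of X_T on m0 * 2^l time steps.
    Fine and coarse paths are coupled by sharing Brownian increments."""
    nf = m0 * 2**level
    dt = T / nf
    dw = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
    xf = np.full(n_paths, x0)
    for k in range(nf):                     # fine path
        xf = xf + r * xf * dt + sig * xf * dw[:, k]
    if level == 0:
        return xf.mean()
    xc = np.full(n_paths, x0)
    dwc = dw[:, 0::2] + dw[:, 1::2]         # coarse increments from fine ones
    for k in range(nf // 2):                # coupled coarse path
        xc = xc + r * xc * (2 * dt) + sig * xc * dwc[:, k]
    return (xf - xc).mean()

n_samples = [200_000, 50_000, 20_000, 8_000, 4_000]  # more samples on cheap levels
mlmc_est = sum(level_estimator(l, n_samples[l]) for l in range(5))
# the telescoping sum estimates E[X_T] = x0 * exp(r * T)
```

The efficiency rests on the coupled differences P_l - P_{l-1} having small variance; a discontinuous payoff (e.g. an indicator) destroys exactly this decay, which is the regime the session targets.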
Organized by: Abdul-Lateef Haji-Ali (Heriot-Watt University) and Raul Tempone (RWTH Aachen, KAUST)
Particle systems are versatile modelling tools that are easy to build from simple ODEs or SDEs, yet can exhibit complicated emergent behaviour. One drawback of these systems is their computational cost, since hundreds of thousands of coupled ODEs or SDEs must be solved to sufficient accuracy. Continuum methods are usually employed to alleviate this cost, but since they assume an infinite number of particles, they introduce a modelling error. More recently, applying Monte Carlo methods directly to particle systems has become more attractive, owing to the availability of computational resources, especially parallel architectures, and to recent advances in Monte Carlo methods that exploit the structure of the underlying particle systems. In this special session, we intend to present the latest algorithmic and theoretical contributions to Monte Carlo methods applied to particle systems.
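A minimal example of such a system (a mean-field Ornstein-Uhlenbeck toy model, chosen for illustration because its stationary variance is known in the infinite-particle limit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each particle is confined and attracted to the empirical mean:
#   dX_i = -(X_i + theta * (X_i - Xbar)) dt + sigma dW_i.
# In the mean-field limit the stationary variance is sigma^2 / (2 * (1 + theta)).
N, theta, sigma = 2000, 1.0, np.sqrt(2.0)
dt, n_steps = 0.01, 2000
x = rng.normal(2.0, 1.0, N)                # particles start away from equilibrium

for _ in range(n_steps):
    drift = -(x + theta * (x - x.mean()))  # O(N) cost via the shared mean
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
# the empirical variance should approach sigma^2 / (2 * (1 + theta)) = 0.5
```

Here the interaction is through the empirical mean, so each step costs O(N); for general pairwise interactions the naive cost is O(N^2) per step, which is precisely the complexity the session's Monte Carlo methods aim to tame.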
Organized by: Madalina Deaconu (INRIA, France) and Francisco Bernal (Carlos III of Madrid University)
Thanks to its many computational advantages, the modelling of phenomena in terms of functionals of spatially bounded stochastic differential equations is increasingly common in applications, such as biology, geophysics and finance. The complexity of the interaction between the simulated diffusions and the boundary (whether absorbing, reflecting, or sometimes "interior") typically leads to numerical issues such as degraded accuracy and convergence rates. In recent years, many new ideas and numerical schemes have been proposed that improve the situation in both classical and novel scenarios. In this special session, several of them are discussed from a common perspective.
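The accuracy degradation at the boundary is easy to see in the simplest setting (a sketch under toy assumptions, not one of the session's schemes): estimating the mean exit time of a Brownian motion from an interval with a plain Euler scheme, which systematically misses excursions across the boundary between grid points:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_exit_time(n_paths=5000, dt=1e-3, a=-1.0, b=1.0):
    """Euler (discrete-monitoring) estimate of E[tau], the first exit time of
    a standard Brownian motion started at 0 from (a, b). The exact value is
    (0 - a) * (b - 0) = 1; the estimate is biased high by O(sqrt(dt)) because
    exits between grid points go undetected."""
    x = np.zeros(n_paths)
    tau = np.zeros(n_paths)
    active = np.ones(n_paths, dtype=bool)
    sqdt = np.sqrt(dt)
    while active.any():
        x[active] += sqdt * rng.standard_normal(active.sum())
        tau[active] += dt
        active &= (x > a) & (x < b)
    return tau.mean()

est = mean_exit_time()
# est overshoots the exact value 1 slightly, by roughly 0.58 * sqrt(dt) * 2
```

Boundary-shift corrections, exact exit-time sampling and walk-on-spheres-type constructions are among the ideas that recover the lost accuracy in this kind of problem.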
Organized by: Stefan Heinrich (TU Kaiserslautern), Thomas Müller-Gronbach (Universität Passau) and Larisa Yaroslavtseva (Universität Passau)
The session is devoted to algorithms and complexity for
including aspects of
Organized by: Andreas Roessler (Universität Lübeck) and Claudine von Hallern (Universität Hamburg)
This special session focuses on recent contributions in the field of stochastic partial differential equations. The topics of the session cover numerical analysis and simulation of solutions as well as modelling and applications with stochastic partial differential equations in any area of science and technology.
Organized by: Daniel Rudolf (University of Göttingen), Claudia Schillings (University of Mannheim), Björn Sprungk (TU Bergakademie Freiberg) and Philipp Wacker (FAU Erlangen-Nürnberg)
The Laplace approximation is a standard tool in computational statistics and was already used in the early days of Bayesian neural networks for approximating predictive distributions. In recent years the underlying Gaussian proxy of the posterior measure has regained interest, with a focus on high-dimensional applications in Bayesian inverse problems, deep learning and optimal experimental design. For instance, the Laplace approximation has been exploited as a suitable reference measure for constructing efficient sampling methods for concentrated posterior distributions. In this special session we bring together experts from various fields to discuss novel theoretical and practical insights on the Laplace approximation as a tool for efficient computation, including recent results on convergence and explicit error bounds.
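The construction itself fits in a few lines (a one-dimensional toy posterior, chosen only to illustrate the Gaussian proxy and its improving accuracy as the posterior concentrates):

```python
import numpy as np

# Toy concentrated posterior: pi_n(x) ~ exp(-n * U(x)), U(x) = x^2/2 + x^4/4.
n = 50
U = lambda x: x**2 / 2 + x**4 / 4
dU = lambda x: x + x**3
d2U = lambda x: 1 + 3 * x**2

# MAP point by Newton's method.
m = 0.5
for _ in range(20):
    m -= dU(m) / d2U(m)

# Laplace approximation: the Gaussian N(m, (n * U''(m))^{-1}); its
# normalising constant approximates Z = int exp(-n * U(x)) dx.
z_laplace = np.sqrt(2 * np.pi / (n * d2U(m))) * np.exp(-n * U(m))

# Reference value by trapezoidal quadrature on a fine grid.
x = np.linspace(-2.0, 2.0, 200_001)
fx = np.exp(-n * U(x))
z_quad = np.sum((fx[:-1] + fx[1:]) / 2) * (x[1] - x[0])
# the relative error of z_laplace shrinks like O(1/n) as n grows
```

In the high-dimensional settings of the session the same recipe is used with the MAP estimate and the Hessian of the negative log-posterior, either as an approximation in its own right or as a reference measure for sampling.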
Organized by: Simone Göttlich and Thomas Schillinger (University of Mannheim)
This special session is concerned with recent advances in uncertainty quantification for hyperbolic systems of conservation laws. These nonlinear partial differential equations are used to describe transport phenomena that arise, for instance, in energy supply systems or traffic flow. Typically, uncertainty enters conservation laws through the initial or boundary data, coefficients or vector fields, and hence propagates in space and time. The contributions under consideration range from theoretical investigations to efficient sampling methods.
Organized by: Stefan Geiss (University of Jyväskylä)
This session is devoted to quantitative aspects of stochastic differential equations of various types, together with the corresponding connections and applications to stochastic analysis, simulation, and modelling.
Organized by: Daniel Adams (University of Edinburgh), Goncalo dos Reis (University of Edinburgh), Hong Duong (University of Birmingham)
This special session focuses on gradient flows of functionals in the space of probability measures. The theory provides a powerful framework for studying evolutionary partial differential equations: it can be used to prove well-posedness, regularity and stability, and, by studying the geodesic convexity of the functional, it can provide quantitative rates of convergence to equilibrium. The variational formulation of Bayesian inference shares a common structure with the study of gradient flows. These techniques have found application in a variety of domains, such as physics, image processing, graphics, biology and machine learning. Recently developed numerical schemes, together with advances in computational optimal transport, have paved the way for efficient simulations. In this session, we will explore the extensive theory of Wasserstein gradient flows, with special interest devoted to numerical schemes and the links to Bayesian inference and machine learning.
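The best-known computational instance of this picture is Langevin dynamics: the law of the SDE dX = -grad U(X) dt + sqrt(2) dW follows the Wasserstein gradient flow of the KL divergence to pi ~ exp(-U). A minimal particle discretisation (the unadjusted Langevin algorithm, on a toy Gaussian target; an illustration, not a scheme from the session):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unadjusted Langevin algorithm: explicit Euler for
#   dX = -grad U(X) dt + sqrt(2) dW,
# whose law follows the Wasserstein gradient flow of KL(. || pi), pi ~ exp(-U).
# Target here: standard normal, U(x) = x^2 / 2.
grad_U = lambda x: x
h, n_steps = 0.01, 1500
x = rng.normal(3.0, 0.1, 20_000)   # a cloud of particles far from the target

for _ in range(n_steps):
    x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.standard_normal(x.size)
# the empirical law of the particle cloud is now close to N(0, 1)
```

Watching the particle cloud relax toward pi is exactly the gradient-flow convergence to equilibrium described above; the step size h introduces an O(h) bias in the stationary law, which more refined (e.g. JKO-type) schemes are designed to control.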