

Invited speakers
ABSTRACTS

A giant python and a lazy elephant talking in a Chinese restaurant (Bernard Bercu)
The talk is devoted to the so-called elephant random walk with stops (ERWS). In contrast with the standard elephant random walk, the elephant is allowed to be lazy by staying at its own position. We shall show that the number of ones of the ERWS, properly normalized, converges almost surely to a Mittag-Leffler distribution. This allows us to carry out a sharp analysis of the asymptotic behavior of the ERWS. A connection with the Chinese restaurant process will be briefly touched upon, under the watchful eye of a giant python.

Stochastic approach to model reduction in computational fluid mechanics. Application to wind time series (Mireille Bossy)
How can we reconcile the time series of the wind (at one point) with the fluid mechanics PDEs and their complex range of parameterizations? By adopting a Lagrangian and stochastic point of view, we construct, from the 3D+time fluid mechanics PDE, a 0D+time stochastic model for the time series of the local variability of the measured wind. Remarkably, this reduction procedure leads to a well-known stochastic process, the CIR process, which allows for a simple calibration technique. The resulting calibration of the model against wind observations (without, then with uncertainty) is consistent with the physical parameterisations classically injected into this type of PDE problem, often obtained from much more controlled laboratory measurements. This result opens perspectives for parameterizing even more sophisticated models (fluid dynamics PDEs and time series).

Efficient sampling methods to solve inverse problems with credibility intervals (Pierre Chainais)
Bayesian methods for inverse problems in signal and image processing have the advantage of giving access to the a posteriori distribution of the parameters of interest. Thus, one obtains not only a solution to the problem, but also valuable credibility intervals.
For example, in astrophysics or medicine, there is generally no ground truth. Providing predictions with confidence intervals is essential: the reconstructed image is then read with a controlled level of confidence. Nevertheless, Monte Carlo simulations of a posteriori distributions are reputed to be computationally intensive and to scale poorly in large dimensions or with a large number of parameters to be estimated. We will present a family of approaches called “Asymptotically Exact Data Augmentation” (AXDA). This approach, inspired by splitting in optimization, makes it possible to systematically construct an approximate distribution that is less expensive to sample than the target distribution of the initial model, within the framework of a trade-off between numerical efficiency and quality of the approximation. These methods pave the way to many variations that we will discuss and illustrate through applications to the resolution of inverse problems.

Still blind deconvolution (Elisabeth Gassiat)
I consider the deconvolution problem in the case where no information is known about the noise distribution. More precisely, no assumption is made on the noise distribution and no samples are available to estimate it: the deconvolution problem is solved based only on observations of the corrupted signal. I will present an identifiability result and its use for statistical inference in various applications.

Parameter estimation and uncertainty quantification for Gaussian process interpolation (Toni Karvonen)
In this talk I discuss some results on maximum likelihood estimation of covariance kernel parameters in Gaussian process interpolation, where Gaussian processes are used to approximate deterministic functions in the absence of noise, and review the effect that the kernel and its parameters have on the reliability of the predictive variance as a measure of predictive uncertainty.
I focus on the Matérn class of kernels, which is commonly used in machine learning and spatial statistics, and on the extent to which maximum likelihood estimation is capable of detecting the smoothness of the latent function. The results rely on connections between Gaussian process interpolation and the theory of optimal approximation in reproducing kernel Hilbert spaces. The talk is primarily based on several recent papers.
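As a rough numerical illustration of the setting in the abstract above, the sketch below fits the length-scale of a Matérn 3/2 kernel by maximum likelihood on noise-free observations and then forms the Gaussian process interpolant. The test function, the grid search over length-scales, the fixed signal variance, and the jitter level are all illustrative choices, not the talk's actual methodology.

```python
import numpy as np

def matern32(x, y, ell, sigma2=1.0):
    # Matérn 3/2 covariance between two 1-D input arrays
    r = np.abs(x[:, None] - y[None, :])
    s = np.sqrt(3.0) * r / ell
    return sigma2 * (1.0 + s) * np.exp(-s)

def neg_log_marginal_likelihood(ell, x, f):
    # Negative log marginal likelihood of noise-free data f at inputs x
    K = matern32(x, x, ell) + 1e-8 * np.eye(len(x))  # jitter for stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    return 0.5 * f @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(x) * np.log(2 * np.pi)

def gp_interpolate(x, f, xs, ell):
    # Posterior mean and pointwise variance of the GP interpolant at xs
    K = matern32(x, x, ell) + 1e-8 * np.eye(len(x))
    Ks = matern32(xs, x, ell)
    Kss = matern32(xs, xs, ell)
    mean = Ks @ np.linalg.solve(K, f)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 15))
f = np.sin(2 * np.pi * x)              # latent deterministic function
ells = np.linspace(0.05, 1.0, 40)      # candidate length-scales
ell_hat = ells[np.argmin([neg_log_marginal_likelihood(e, x, f) for e in ells])]
xs = np.linspace(0.0, 1.0, 5)
mean, var = gp_interpolate(x, f, xs, ell_hat)
```

The predictive variance `var` is the quantity whose reliability as an uncertainty measure the talk examines; in this noise-free setting it shrinks to zero at the observed inputs.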
New estimation of sensitivity indices using kernels (Agnès Lagnoux)
The use of complex computer models for the simulation and analysis of natural systems from physics, engineering and other fields is by now routine. These models usually depend on many input variables. Thus, it is crucial to understand which input parameter, or which set of input parameters, has an influence on the output. This is the aim of sensitivity analysis, which has become an essential tool for system modeling and policy support.

Modelling and simulation in pharmacometrics: some methods, tools and still open problems (Marc Lavielle)
Pharmacometric models are used to analyze data from clinical trials to understand and quantify the relationship between drug exposure, pharmacological effects, and clinical outcomes. They can be used to optimize drug dosing, predict drug safety and efficacy, and support regulatory decision-making. Mixed effects models are the reference tool in a population framework, in order to take into account the variability between individuals. Very efficient algorithms (stochastic approximation of EM, MCMC, Monte Carlo importance sampling, ...) exist in this context to help the modeler build his model. All these algorithms require a large number of evaluations of the structural model. Such an evaluation can be very fast when the model is relatively simple, such as a compartmental model described by a small number of ODEs. Complex mechanistic models, such as some models of viral dynamics or physiologically based models, are described by a very large number of equations and depend on a very large number of parameters. Algorithms must then be adapted to limit the number of evaluations of these models in order to deliver a result in an acceptable time.
Other problems are still worth considering, such as the propagation of uncertainties (the population parameters of the model being estimated with a certain uncertainty, what is the impact on the estimation of some individual outcome?) or sensitivity analysis (evaluating how sensitive the clinical outcome is to changes in each pharmacometric parameter).

Uncertainty quantification in a federated learning setting (Eric Moulines)
Many machine learning applications require training a centralized model on decentralized, heterogeneous, and potentially private data sets. Federated learning (FL; McMahan et al., 2017; Kairouz et al., 2021; Wang et al., 2021) has emerged as a privacy-friendly training paradigm that does not require clients’ private data to leave their local devices. FL brings new challenges in addition to “traditional” distributed learning.

Scalable engineering decision-making via multifidelity Sobol sensitivity analysis (Elizabeth Qian)
Sobol global sensitivity indices quantify how uncertainty in model inputs contributes to uncertainty in the model output. Such sensitivity indices allow inputs to be ranked in importance and are typically computed using Monte Carlo estimation, but the many samples required for Monte Carlo to be sufficiently accurate can make this computation intractable when the model is expensive. This talk will present a multifidelity Monte Carlo approach to estimating Sobol indices that combines samples from both cheaper lower-fidelity models (e.g., models learned from data) and expensive high-fidelity models to achieve computational acceleration with accuracy guarantees. We present new multifidelity Sobol index estimators based on rank statistics that can estimate Sobol indices for all inputs from a single set of independent samples. This significantly reduces the cost of Sobol analysis when the number of inputs is large.
The approach accelerates the computation time required for sensitivity analysis of the James Webb Space Telescope thermal models from more than two months to less than two days. This demonstrates the power of the multifidelity approach to make sensitivity analysis tractable for large-scale engineering systems in the real world.

Walking with Fabrice among large deviations and sum rules (Alain Rouault)
This talk will be devoted to describing the path starting from Kullback information minimization (1988) and leading to sum rules in spectral theory (2010–2023), following Fabrice's scientific route. I have always been glad to share a part of this adventure and will now try to explain the pivotal role played by large deviations in analysis and probability.

Rare event estimation with PDE-based models (Elisabeth Ullmann)
The estimation of the probability of rare events is an important task in reliability and risk assessment of critical societal systems, for example, groundwater flow and transport, and engineering structures. In this talk we consider rare events that are expressed in terms of a limit state function which depends on the solution of a partial differential equation (PDE). We give a brief overview of current estimation strategies and present two novel estimators for the rare event probability based on (1) the ensemble Kalman filter for inverse problems, and (2) a consensus-building mechanism. Both approaches use particles which follow a suitable stochastic dynamics to reach the failure states. Particle methods have historically been used for Bayesian inverse problems; we connect them to rare event estimation. Joint work with Konstantin Althaus, Fabian Wagner and Iason Papaioannou (TUM).
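To fix ideas on the rare event setting in the last abstract, the sketch below computes a crude Monte Carlo estimate of a failure probability P(g(U) ≤ 0). The scalar limit state function and the Gaussian input are toy stand-ins for the PDE-based setting, and plain Monte Carlo is the baseline that the particle-based estimators of the talk improve upon, not the talk's estimators themselves.

```python
import numpy as np

def limit_state(u):
    # Toy limit state: failure occurs when g(u) <= 0. In the talk's setting,
    # g would depend on a PDE solution; here a cheap algebraic stand-in is used.
    return 3.0 - u

rng = np.random.default_rng(1)
n = 200_000
u = rng.standard_normal(n)        # uncertain input samples
fail = limit_state(u) <= 0.0      # failure indicator per sample
p_hat = fail.mean()               # crude Monte Carlo estimate of P(g(U) <= 0)

# The relative error of crude Monte Carlo behaves like 1/sqrt(n * p): the rarer
# the event, the more samples are needed, which is what motivates the more
# sophisticated particle methods discussed in the talk.
rel_err = np.sqrt((1.0 - p_hat) / (n * p_hat))
```

With the parameters above the target is P(U > 3) for a standard normal U, roughly 1.3e-3, so a few hundred of the 200,000 samples fail; for genuinely rare events (say p around 1e-6) this approach becomes intractable.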