27.04.2026 15:00 Murad Alim: Resurgence and Exact WKB: From Divergent Series to Nonperturbative Physics
Divergent asymptotic expansions are ubiquitous in mathematical physics, yet they often encode far more information than their formal nature suggests. In this talk, I will present ideas from resurgence theory, which provide a systematic way to reconstruct analytic functions from such expansions.
As an example, I will consider the exact WKB method, where asymptotic series arise as formal solutions of Schrödinger operators. Resurgence reveals how different analytic realizations of these series are related through Stokes phenomena: discrete jumps that encode nonperturbative effects and geometrically correspond to changes of triangulation of an underlying Riemann surface.
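To fix notation (a textbook sketch, not taken from the abstract): for the Schrödinger equation
\[ \left(-\hbar^2 \frac{d^2}{dx^2} + Q(x)\right)\psi(x,\hbar) = 0, \qquad \psi = \exp\!\left(\frac{1}{\hbar}\int^{x} S(y,\hbar)\,dy\right), \]
the exponent \(S = \sum_{n\ge 0} S_n(x)\,\hbar^n\) solves the Riccati equation \(S^2 + \hbar S' = Q\), which determines the coefficients recursively:
\[ S_0 = \pm\sqrt{Q}, \qquad S_{n+1} = -\frac{1}{2S_0}\Big(S_n' + \sum_{k=1}^{n} S_k\, S_{n+1-k}\Big). \]
These coefficients generically grow factorially, so the series diverges, and Borel summation in different directions produces the distinct analytic solutions whose jumps are the Stokes phenomena above.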
27.04.2026 16:30 Francesco Mattesini: Adapted Wasserstein Barycenters of Gaussian Processes: Existence, Uniqueness and Characterization
Optimal transport has become a central tool for comparing probability measures and extracting representative distributions from heterogeneous data — yet in many applications the objects of interest are stochastic processes, and the classical framework ignores a key structural feature: time and information. Indeed, classical Wasserstein barycenters ignore the filtration structure, making them ill-suited for problems in mathematical finance, stochastic control, and sequential decision-making.
We study Fréchet means of Gaussian process laws in adapted Wasserstein space, where transport plans must respect the temporal flow of information. We prove that barycenters of Gaussian inputs exist, are Gaussian, and are unique. The key insight is a decomposition of the adapted Bures–Wasserstein distance into independent classical Bures–Wasserstein problems, one per time step, which yields both a clean characterization of the barycenter and a tractable fixed-point algorithm for its computation. Finally, we briefly discuss possible applications to robust stress testing of financial models and illustrate adapted Wasserstein barycenters of autoregressive models with numerical examples.
Based on joint work with Johannes Wiesel.
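As a rough illustration of the per-time-step building block, here is the well-known fixed-point iteration of Álvarez-Esteban et al. (2016) for the classical Bures–Wasserstein barycenter of centered Gaussians. This is a generic NumPy/SciPy sketch, not the authors' algorithm; in the adapted setting, a problem of this kind would be solved once per time point.

import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_barycenter(covs, weights=None, iters=200, tol=1e-10):
    # Fixed point of S -> S^{-1/2} (sum_i w_i (S^{1/2} C_i S^{1/2})^{1/2})^2 S^{-1/2},
    # the barycenter of the centered Gaussians N(0, C_i) with weights w_i.
    w = np.full(len(covs), 1.0 / len(covs)) if weights is None else np.asarray(weights)
    S = np.mean(covs, axis=0)  # any positive-definite initialization works
    for _ in range(iters):
        R = np.real(sqrtm(S))
        M = sum(wi * np.real(sqrtm(R @ C @ R)) for wi, C in zip(w, covs))
        R_inv = np.linalg.inv(R)
        S_next = R_inv @ M @ M @ R_inv
        S_next = (S_next + S_next.T) / 2  # symmetrize against round-off
        if np.linalg.norm(S_next - S) < tol:
            return S_next
        S = S_next
    return S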
30.04.2026 14:00 Sebastian Kassing: Stochastic modified flows, mean-field limits and dynamics of stochastic gradient descent
Stochastic Gradient Descent (SGD) and its variants are the standard tools for training deep neural networks. In this talk, we examine stochastic optimization methods through the lens of dynamical systems, employing techniques traditionally used in statistical physics or mean-field games. In particular, we introduce an interacting-particle framework and derive the mean-field limit for training shallow neural networks in the infinite-width regime.
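As a toy illustration of the interacting-particle viewpoint (my sketch, not the speaker's setup): for a shallow network f(x) = (1/m) sum_j a_j tanh(w_j x), each neuron (a_j, w_j) is a particle, and the 1/m scaling together with a learning rate of order m puts one-sample SGD in the mean-field regime.

import numpy as np

rng = np.random.default_rng(0)
m, lr, steps = 512, 0.5, 20000                 # width, m-scaled rate, SGD steps
a, w = rng.normal(size=m), rng.normal(size=m)  # each (a_j, w_j) is one particle
target = lambda x: np.sin(2 * x)

for _ in range(steps):
    x = rng.uniform(-np.pi, np.pi)       # one-sample stochastic gradient
    phi = np.tanh(w * x)
    err = a @ phi / m - target(x)        # residual couples all m particles
    grad_a = err * phi                   # the true gradient carries a 1/m factor,
    grad_w = err * a * (1 - phi**2) * x  # absorbed here into the m-scaled rate
    a -= lr * grad_a
    w -= lr * grad_w

As m grows, the empirical measure of the particles (a_j, w_j) approaches a deterministic mean-field limit.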
06.05.2026 16:15 Daniela M. Witten (University of Washington, Seattle): Data Thinning and beyond
Contemporary data analysis pipelines often involve the use and reuse of data. For instance, a scientist may explore a dataset to select an interesting hypothesis, and then wish to test this hypothesis with the same data. From a statistical perspective, this double use of data is highly problematic: it induces dependence between the hypothesis generation and testing stages, which complicates inference. Failure to account for this dependence renders classical inference techniques invalid.
I will present "data thinning", a set of strategies for obtaining independent training and test sets so that the former can be used to select a hypothesis, and the latter to test it. Data thinning enables valid selective inference in settings for which no solutions were previously available. However, it is also restrictive, in the sense that it requires strong distributional assumptions. Therefore, I will also present two strategies inspired by data thinning that enable valid post-selection inference without such assumptions. One strategy considers thinning summary statistics of the data, rather than the data itself, in order to take advantage of asymptotic properties of the summary statistics. The second strategy involves generating training and test sets that are not independent, and then orthogonalizing the latter with respect to the former in order to conduct valid inference.
08.05.2026 10:00 Mahsa Taheri Ganjhobadi (Universität Hamburg): Non-asymptotic error bounds for probability flow ODEs under weak log-concavity
Score-based generative modeling, implemented through probability flow ODEs, has shown impressive results in numerous practical settings. However, most convergence guarantees rely on restrictive regularity assumptions on the target distribution, such as strong log-concavity or bounded support. This work establishes non-asymptotic convergence bounds in the 2-Wasserstein distance for a general class of probability flow ODEs under considerably weaker assumptions: weak log-concavity and Lipschitz continuity of the score function. Our framework accommodates non-log-concave distributions, such as Gaussian mixtures, and explicitly accounts for initialization errors, score approximation errors, and the effects of discretization via an exponential integrator scheme. Addressing a key theoretical challenge in diffusion-based generative modeling, our results extend convergence theory to more realistic data distributions and practical ODE solvers. We provide concrete guarantees for the efficiency and correctness of the sampling algorithm, complementing the empirical success of diffusion models with rigorous theory. Moreover, from a practical perspective, our explicit rates might be helpful in choosing hyperparameters, such as the step size in the discretization.
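For orientation, a generic first-order exponential integrator for the semi-linear probability flow ODE looks as follows (a standard sketch; the scheme analyzed in the talk may differ in its details). With a learned score \(s_\theta \approx \nabla \log p_t\), the generative ODE is
\[ \frac{dx}{dt} = f(t)\,x - \tfrac{1}{2} g(t)^2\, s_\theta(x,t), \]
and one step from \(t\) to \(t' < t\) freezes the score at the current state while integrating the linear drift exactly:
\[ x(t') \approx e^{\int_t^{t'} f(s)\,ds}\, x(t) - \left(\int_t^{t'} e^{\int_s^{t'} f(u)\,du}\, \tfrac{1}{2} g(s)^2\, ds\right) s_\theta\big(x(t),\,t\big). \]
Since the linear (Gaussian) part of the dynamics is treated exactly, the discretization error stems only from freezing the score, which is one reason such schemes are popular in this setting.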