05.11.2025 12:15 Nicolas-Domenic Reiter (TUM): A frequency domain approach to causal inference in discrete-time processes
The talk is divided into two parts. In the first part, I will introduce structural equation processes as a model for causal inference in discrete-time stationary processes. A structural equation process (SEP) consists of a directed graph, an independent stationary (zero-mean) process for every vertex of the graph, and a filter (i.e., an absolutely summable sequence) for every link of the graph. Every structural vector autoregressive (SVAR) process, a commonly used linear time series model, admits a representation as an SEP. Furthermore, the Fourier-transformed SEP representation of an SVAR process is parameterized over the field of rational functions with real coefficients. Using this frequency-domain parameterization, we will see that d- and t-separation statements about the causal graph (associated with the SVAR process) are generically characterized by rank conditions on the spectral density of the SVAR process. Here, the spectral density is considered as a matrix over the field of rational functions with real coefficients. Additionally, we will see that the Fourier-transformed SEP parameterization of an SVAR process comes with a notion of rational identifiability for the Fourier-transformed link filters. This notion makes it possible to reason about identifiability in the presence of latent confounding processes. For instance, the recent latent factor half-trek criterion can be used to determine whether the effect (i.e., the associated link function) between two potentially confounded processes is a rational function of the spectral density of the observed processes.
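As a rough numerical illustration of such rank conditions (my own sketch, not part of the abstract), consider a three-variable SVAR(1) process with chain graph 1 -> 2 -> 3 and independent noise processes. The separation of processes 1 and 3 by process 2 shows up as a singular (rank-one) 2x2 submatrix of the spectral density at every frequency; the coefficient matrices below are purely illustrative.

```python
import numpy as np

# Hypothetical SVAR(1): X_t = A X_{t-1} + eps_t with chain structure 1 -> 2 -> 3.
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.3, 0.0],
              [0.0, 0.6, 0.2]])
Sigma = np.diag([1.0, 0.5, 0.8])   # independent (diagonal) noise covariance

def spectral_density(omega):
    """Spectral density S(omega) = H(omega) Sigma H(omega)^* / (2 pi),
    with transfer function H(omega) = (I - A e^{-i omega})^{-1}."""
    H = np.linalg.inv(np.eye(3) - A * np.exp(-1j * omega))
    return H @ Sigma @ H.conj().T / (2 * np.pi)

# The separation of processes 1 and 3 by process 2 corresponds to the 2x2
# submatrix S[{1,2},{2,3}] being singular (rank 1) at every frequency.
for omega in np.linspace(0.1, np.pi, 5):
    S = spectral_density(omega)
    sub = S[np.ix_([0, 1], [1, 2])]          # rows {1,2}, columns {2,3} (0-indexed)
    print(f"omega={omega:.2f}  |det submatrix| = {abs(np.linalg.det(sub)):.2e}")
# The determinant is numerically zero, matching the separation statement; for a
# generic graph without this separation it would not vanish.
```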
In the second part of the talk, I will expand the SEP framework to include a specific class of non-stationary linear processes. This class of non-stationary SEPs includes SVAR processes with periodically changing coefficients. I will also demonstrate how this framework can be used to reason about identifiability in subsampled processes, i.e., when observations are gathered at a lower frequency than the frequency at which causal effects occur.
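The subsampling issue can be made concrete with a small computation (again my own illustration, reusing the hypothetical chain SVAR(1) from above): observing only every second time step yields another VAR(1) whose coefficient matrix is the square of the original, so indirect effects can masquerade as direct lag-one effects at the coarse time scale.

```python
import numpy as np

# Hypothetical chain SVAR(1) coefficient matrix (1 -> 2 -> 3), as above.
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.3, 0.0],
              [0.0, 0.6, 0.2]])

# Observing only every second time step gives another VAR(1):
#   X_{2t} = A^2 X_{2(t-1)} + (eps_{2t} + A eps_{2t-1}).
A_sub = A @ A
print(A_sub.round(3))
# The (3,1) entry of A^2 is nonzero although there is no direct edge 1 -> 3, so a
# lag-one model fitted to the subsampled data suggests a spurious direct effect;
# this is the kind of ambiguity the identifiability analysis has to address.
```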
10.11.2025 14:15 Gemma Sedrakjan (TU Berlin): How much should we care about what others know? Jump signals in optimal investment under relative performance concerns
We present a multi-agent and mean-field formulation of a game between investors who receive private signals informing their investment decisions and who interact through relative performance concerns. A key tool in our model is a Poisson random measure which drives jumps in both market prices and signal processes and thus captures common and idiosyncratic noise.
Upon receiving a jump signal, an investor evaluates not only the signal's implications for stock price movements but also its implications for the signals received by her peers and for their subsequent investment decisions. A crucial aspect of this assessment is the distribution of investor types in the economy. These types determine their risk aversion, performance concerns, and the quality and quantity of their signals. We demonstrate how these factors are reflected in the corresponding HJB equations, characterizing an agent's optimal response to her peers' signal-based strategies. The existence of equilibria in both the multi-agent and the mean-field game is established using Schauder's Fixed Point Theorem under suitable conditions on investor characteristics, particularly their signal processes. Finally, we present numerical case studies that illustrate these equilibria from a financial-economic perspective. This allows us to address questions such as how much investors should care about the information known by their peers.
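A minimal simulation sketch of the jump-signal structure (illustrative only; the intensities, the signal-quality parameter p, and the split into common and idiosyncratic jumps are my assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)
T, lam_common, lam_idio = 1.0, 5.0, 3.0   # horizon and jump intensities (illustrative)

def jump_times(lam):
    """Jump times of a Poisson process with intensity lam on [0, T]."""
    n = rng.poisson(lam * T)
    return np.sort(rng.uniform(0.0, T, size=n))

common = jump_times(lam_common)   # jumps hitting the market price (common noise)
idio = jump_times(lam_idio)       # jumps specific to one investor's signal (idiosyncratic noise)

# A signal-quality parameter p: the investor observes each common jump with probability p,
# plus purely idiosyncratic signal jumps that carry no price information.
p = 0.7
observed_common = common[rng.random(common.size) < p]
signal_times = np.sort(np.concatenate([observed_common, idio]))

print("price jump times:  ", np.round(common, 3))
print("signal jump times: ", np.round(signal_times, 3))
# Upon each signal jump, the investor must judge whether it reflects a price-moving
# (common) event that her peers likely also observed, or merely idiosyncratic noise.
```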
10.11.2025 15:00 Paul Sanders: Early Warning of Critical Transitions: Distinguishing Tipping Points from Turing Patterns
In our uncertain and ever-changing world, many systems face the danger of crossing tipping thresholds in the future. Therefore, there is a growing interest in developing swift and reliable early warning methods to signal such crossings ahead of time. Until now, most approaches have relied on critical slowing down, typically assuming white noise and neglecting spatial effects.
We introduce a data-driven method that reconstructs the linearised reaction–diffusion dynamics directly from spatio-temporal data. From the inferred model, we compute the dispersion relation and analyse the stability of Fourier modes, allowing early detection of both homogeneous and spatial instabilities.
By framing early detection as a data-driven stability analysis, this approach provides a unified and quantitative way to indicate whether and when a system is approaching a tipping point or a Turing-type transition.
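A minimal sketch of the dispersion-relation step (my illustration; the reaction Jacobian J and diffusion matrix D below are assumed to have been inferred already and are purely illustrative): for linearised reaction–diffusion dynamics, the growth rate of a Fourier mode with wavenumber k is the largest real part of the eigenvalues of J - k^2 D; a Turing-type instability corresponds to a stable k = 0 mode together with an unstable band of k > 0 modes.

```python
import numpy as np

# Illustrative linearised reaction Jacobian J and diffusion matrix D
# (in the method these would be reconstructed from spatio-temporal data).
J = np.array([[0.5, -1.0],
              [1.0, -1.2]])
D = np.diag([0.01, 1.0])

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D (dispersion relation)."""
    return np.linalg.eigvals(J - (k ** 2) * D).real.max()

ks = np.linspace(0.0, 5.0, 501)
rates = np.array([growth_rate(k) for k in ks])

homogeneous_unstable = rates[0] > 0                       # instability of the k = 0 mode
turing_like = (rates[0] <= 0) and (rates[1:].max() > 0)   # stable k = 0, unstable k > 0
print(f"max growth rate: {rates.max():.3f} at k = {ks[rates.argmax()]:.2f}")
print(f"homogeneous instability: {homogeneous_unstable}, Turing-type instability: {turing_like}")
```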
10.11.2025 15:15 Patricio Herbst (University of Michigan): Using simulations to teach about problem-based instruction in mathematics: Design considerations and research opportunities
In this talk I’ll describe the use of Design-Based Research in an ongoing project to develop and improve a set of digital simulations that seek to develop mathematics teachers’ capacities to teach geometry through problems and classroom discussions. The intervention is inscribed in the approach known as practice-based teacher education, and the simulations are conceived as approximations of practice, in which the teacher-learner practices key tasks of teaching. The first simulation acquaints teachers with the problem space: they act as observers of an avatar teacher who uses a problem about constructing a circle tangent to two lines to teach the tangent segments theorem. Teachers then simulate teaching the whole lesson to a class of student avatars for the first time, with the opportunity to select and sequence students’ work for classroom discussion and to elicit and respond to comments from students. The following two simulations give them the opportunity to notice students’ work and to respond to students’ contributions, respectively. Finally, they have another opportunity to teach the whole lesson. I’ll share considerations for the design of these four simulations and how they developed as a result of waves of data collection and analysis. Initial observations of performance differences when simulating the teaching of the whole lesson (before and after the two learning simulations) suggest that the learning simulations are associated with performance gains. I’ll share how we are using participants’ responses during the learning activities to reveal learning traces that might help explain those gains.
_______________________
Invited by Prof. Stefan Ufer
10.11.2025 15:15 Gero Junike (LMU): From characteristic functions to multivariate distribution functions and European option prices by the damped COS method
We provide a unified framework for obtaining, numerically, certain quantities such as the distribution function, absolute moments, and prices of financial options from the characteristic function of some (unknown) probability density function using the Fourier-cosine expansion (COS) method. The classical COS method is numerically very efficient in one dimension, but it cannot deal well with certain integrands in general dimensions. Therefore, we introduce the damped COS method, which can handle a large class of integrands very efficiently. We prove the convergence of the (damped) COS method and study its order of convergence. The method converges exponentially if the characteristic function decays exponentially. To apply the (damped) COS method, one has to specify two parameters: a truncation range for the multivariate density and the number of terms used to approximate the truncated density by a cosine series. We provide an explicit formula for the truncation range and an implicit formula for the number of terms. Numerical experiments in up to five dimensions confirm the theoretical results.
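For orientation, here is a sketch of the classical one-dimensional COS method (the damped variant and the truncation-range formulas from the talk are not reproduced; the truncation range below is hand-picked): it recovers the standard normal density from its characteristic function.

```python
import numpy as np

def cos_density(phi, a, b, N, x):
    """Approximate a density on [a, b] from its characteristic function phi
    via the Fourier-cosine (COS) expansion with N terms."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Cosine coefficients F_k = (2 / (b - a)) * Re[ phi(u_k) * exp(-i u_k a) ]
    F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
    F[0] *= 0.5                                   # first term gets weight 1/2
    return F @ np.cos(np.outer(u, np.asarray(x) - a))

phi_normal = lambda u: np.exp(-0.5 * u ** 2)      # characteristic function of N(0, 1)
x = np.linspace(-3, 3, 7)
approx = cos_density(phi_normal, a=-10.0, b=10.0, N=64, x=x)
exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
print(np.abs(approx - exact).max())               # tiny error: exponentially decaying
                                                  # characteristic function, so fast convergence
```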