12.01.2026 14:15 Thomas Mikosch (University of Copenhagen): Modeling extremal clusters in time series
Real-life financial time series exhibit heavy tails and clusters of extreme values. In this talk we will address models that exhibit these stylized facts. This is the class of regularly varying time series, introduced by Davis and Hsing (1995, AoP) and further developed by Basrak and Segers
(2009, SPA). The marginal distribution of a regularly varying time series has tails of power-law type, and the dynamics caused by an extreme event in this time series are described by the spectral tail process. Perhaps the best-known financial time series models of this kind are Engle’s (1982) ARCH process, Bollerslev’s (1986) GARCH process, and Engle and Russell’s (1998) Autoregressive Conditional Duration (ACD) model. The length and magnitude of extremal clusters in such a series can be described by an analog of the autocorrelation function for extreme events: the extremogram. The extremal index is another useful tool for describing expected extremal cluster sizes. Both objects can be expressed in terms of the spectral tail process and allow for statistical estimation. The probabilistic and statistical aspects of regularly varying time series are summarized in the recent monograph by Mikosch and Wintenberger (2024), “Extreme Value Theory for Time Series. Models with Power-Law Tails”. The talk is based on joint work with Olivier Wintenberger (Sorbonne).
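For orientation, here is a minimal sketch of the objects named above, using standard definitions from the cited literature rather than anything specific to the talk. A strictly stationary sequence \((X_t)\) is regularly varying with index \(\alpha>0\) if its finite-dimensional distributions have jointly power-law tails; by Basrak and Segers (2009), this is equivalent to the existence of a spectral tail process \((\Theta_h)\) such that, for every \(k\),
\[ \mathcal{L}\bigl((X_h/|X_0|)_{|h|\le k}\,\big|\,|X_0|>x\bigr)\ \xrightarrow{\,w\,}\ \mathcal{L}\bigl((\Theta_h)_{|h|\le k}\bigr), \qquad x\to\infty. \]
A basic instance of the extremogram at lag \(h\), and one common representation of the extremal index \(\theta\) (valid under the usual anti-clustering conditions), are
\[ \rho(h)\;=\;\lim_{x\to\infty} P\bigl(|X_h|>x\,\big|\,|X_0|>x\bigr)\;=\;E\bigl[\min\bigl(1,|\Theta_h|^{\alpha}\bigr)\bigr], \qquad \theta\;=\;E\Bigl[\bigl(1-\sup_{h\ge 1}|\Theta_h|^{\alpha}\bigr)_{+}\Bigr], \]
so that \(1/\theta\) can be read as the expected size of an extremal cluster.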
14.01.2026 13:00 Fadoua Balabdaoui (ETH Zürich): Unmatched linear regression: Asymptotic results under identifiability
Consider the regression problem where the response and the covariate are unmatched. In this setting, we do not have access to pairs of observations from their joint distribution; instead, we have separate data sets of responses and covariates, possibly collected from different sources. We study this problem assuming that the regression function is linear and the noise distribution is known or can be estimated. We introduce an estimator of the regression vector based on deconvolution (the DLSE) and establish its consistency and asymptotic normality under parametric identifiability. Under non-identifiability of the regression vector but identifiability of the distribution of the predictor, we construct an estimator of the latter based on the DLSE and show that it converges to the true distribution of the predictor at the parametric rate in the Wasserstein distance of order 1. We illustrate the theory with several simulation results.
This talk is based on my joint work with Mona Azadkia, Antonio di Noia and Cécile Durot.
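As a minimal sketch of the setting (a standard formulation; the precise construction of the DLSE is in the underlying paper): the model is the linear one,
\[ Y \;=\; X^{\top}\beta_0 + \varepsilon, \qquad \varepsilon \ \text{independent of}\ X,\ \text{with known (or estimable) law } P_{\varepsilon}, \]
but only two unmatched samples \(Y_1,\dots,Y_n\) and \(X_1,\dots,X_m\) are observed, never pairs \((X_i,Y_i)\). The two samples are linked through the convolution identity
\[ P_{Y} \;=\; P_{X^{\top}\beta_0} \ast P_{\varepsilon}, \]
which identifies \(\beta_0\) whenever the map \(\beta \mapsto P_{X^{\top}\beta} \ast P_{\varepsilon}\) is injective; the DLSE is, roughly, a least-squares-type estimator built on this identity (the exact contrast is defined in the paper). The Wasserstein distance of order 1 appearing in the distributional result is
\[ W_1(\mu,\nu) \;=\; \inf_{\pi\in\Pi(\mu,\nu)} \int |x-y|\,\mathrm{d}\pi(x,y), \]
the infimum being taken over all couplings \(\pi\) of \(\mu\) and \(\nu\).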