Statistics of Inverse Problems Research Group, Institute for Mathematics, Faculty of Mathematics and Computer Science, Ruprecht-Karls-Universität Heidelberg





Last edited on Apr 18, 2024 by XL.
Talk (.pdf):
13th German Open Conference on Probability and Statistics in Freiburg, Germany

Presented by:
Xavier Loizeau

Title:
A family of adaptive Bayesian methods for statistical ill-posed inverse problems

Abstract:
Consider an infinite-dimensional parameter space Θ and a linear operator λ mapping Θ to itself. Given a family of sample distributions indexed by λθ and with noise level ε, we consider the estimation of θ◦ in Θ from a Bayesian point of view. Studying the asymptotics as ε tends to 0, we first introduce a notion of oracle optimal concentration over a family of prior distributions which does not rely on a comparison to a frequentist oracle optimal convergence rate. Considering a statistical ill-posed inverse problem, we then introduce a family of sieve prior distributions, indexed by a tuning parameter which has to be chosen. We show that Bayesian oracle optimality is achieved if the tuning parameter is chosen optimally, which requires knowledge of the true parameter. Hence, a hierarchical approach is used to construct a fully data-driven prior from this family. Facing the difficulty of justifying the choice of a particular prior in the non-parametric context, we then study a non-informative prior obtained by iterating the posterior. This procedure generates a family of posterior distributions, giving more and more weight to the observations while the prior information fades away. We show that, interestingly, each element of the family retains the oracle optimality property and that, as the number of iterations tends to infinity, the posterior distribution degenerates to the so-called model-selection frequentist estimator, providing a new proof of its optimality. Three examples are used throughout the presentation: the inverse Gaussian sequence space model, circular deconvolution, and deconvolution on the real line. In all three examples, the estimate given by the posterior mean optimally aggregates so-called projection estimators and does not require splitting the sample.
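As a loose numerical illustration of the sequence space setting, the sketch below simulates an indirect Gaussian sequence space model Y_j = λ_j θ_j + ε ξ_j and aggregates projection estimators with exponential weights over the truncation level, mimicking how a posterior mean under a hierarchical sieve prior spreads mass over models. All names, decay rates, and the penalized-contrast weighting are illustrative assumptions for the demo, not the talk's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 200, 0.05
j = np.arange(1, n + 1)
theta = j ** -1.5            # hypothetical true parameter (polynomial decay)
lam = 1.0 / j                # mildly ill-posed operator: eigenvalues decay like 1/j
Y = lam * theta + eps * rng.standard_normal(n)

def projection_estimator(m):
    """Invert the operator on the first m coefficients, set the rest to zero."""
    est = np.zeros(n)
    est[:m] = Y[:m] / lam[:m]
    return est

# Penalized contrast for each truncation level m (illustrative penalty choice).
stat = np.cumsum((Y / lam) ** 2)                 # cumulative recovered signal energy
pen = 2.0 * eps ** 2 * np.cumsum(lam ** -2.0)    # variance-type penalty
contrast = stat - pen

# Exponential weights over m: the aggregate is a convex combination of the
# projection estimators, computed from the same single sample (no splitting).
w = np.exp((contrast - contrast.max()) / (2.0 * eps ** 2))
w /= w.sum()
theta_agg = sum(w[m - 1] * projection_estimator(m) for m in range(1, n + 1))

risk_agg = np.sum((theta_agg - theta) ** 2)
risk_full = np.sum((projection_estimator(n) - theta) ** 2)
```

The weights concentrate sharply on truncation levels balancing squared bias against the inflated variance of inverting small eigenvalues, so the aggregate's risk is far below that of the fully inverted (m = n) estimator.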


References: Johannes, J., Simoni, A. and Schenk, R. (2015). Adaptive Bayesian estimation in indirect Gaussian sequence space models. Discussion paper, arXiv:1502.00184.