Ruprecht-Karls-Universität Heidelberg Institut für Mathematik Arbeitsgruppe Statistik inverser Probleme

Poster (.pdf):
Workshop “Inverse problems: theory and statistical inference” at the IWH (International Academic Forum) in Heidelberg

Presented by:
Xavier Loizeau

Title:
A Bayesian interpretation of data-driven estimation by model selection

Abstract:
Considering an indirect Gaussian sequence space model and a hierarchical prior, Johannes, Simoni and Schenk (2016) show oracle and minimax-optimal concentration and convergence rates for the associated posterior distribution and Bayes estimator, respectively, as the noise level tends to zero. Notably, the hierarchical prior depends neither on the parameter value θ◦ that generates the data nor on the given class. In this paper the posterior is taken iteratively as a new prior and the associated posterior is computed for the same observation again and again. Thereby, a family of fully data-driven prior distributions, indexed by the iteration parameter, is constructed. Each element of this family leads to oracle and minimax-optimal concentration and convergence rates for the associated posterior distribution and Bayes estimator as the noise level tends to zero. Moreover, the Bayes estimators can be interpreted either as fully data-driven shrinkage estimators or as a fully data-driven aggregation of projection estimators. Interestingly, increasing the value of the iteration parameter gives, in some sense, more weight to the information contained in the observations than to the hierarchical prior. In particular, for a fixed noise level, letting the iteration parameter tend to infinity, the associated posterior distribution shrinks to a point measure. The limiting distribution is degenerate at the value of a projection estimator with a fully data-driven choice of the dimension parameter via a model selection approach as in Massart (2003), where a penalised contrast criterion is minimised. Thereby, the classical model selection approach gives, in some sense, an infinitely increasing weight to the information contained in the observations in comparison to the prior distribution. It is further shown that the limiting distribution and the associated Bayes estimator converge with oracle and minimax-optimal rates as the noise level tends to zero. Simulations illustrate the influence of the iteration parameter, which provides, in some sense, a compromise between the fully data-driven hierarchical Bayes estimator proposed by Johannes, Simoni and Schenk (2016) and the fully data-driven projection estimator by model selection.
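
To make the iteration scheme concrete, the following is a minimal numerical sketch (Python) of the posterior-as-new-prior recursion in a toy indirect Gaussian sequence space model with a fixed-dimension, coefficient-wise conjugate Gaussian prior. It is not the hierarchical prior or the data-driven dimension choice of the poster: the operator sequence, true coefficients, noise level and prior variances below are assumed purely for illustration. The sketch only shows the mechanism behind the point-mass limit, namely that reusing the same observation and taking each posterior as the next prior drives the posterior mean towards the naive coefficient-wise estimator Y_j/λ_j while the posterior variance collapses.

import numpy as np

rng = np.random.default_rng(0)

# Toy indirect Gaussian sequence space model: Y_j = lambda_j * theta_j + eps * xi_j
m = 50
j = np.arange(1, m + 1)
lam = j ** (-1.0)          # assumed mildly ill-posed operator sequence
theta0 = j ** (-1.5)       # assumed true coefficient sequence
eps = 0.05                 # noise level
Y = lam * theta0 + eps * rng.standard_normal(m)

# Coefficient-wise conjugate Gaussian prior N(0, s2_j): a simple stand-in for
# the hierarchical prior of the poster; the variances s2_j are assumed here.
s2 = j ** (-2.0)

def iterated_posterior(Y, lam, eps, s2, iterations):
    # Take the posterior as the new prior and update with the SAME observation.
    # Each pass adds lam_j^2 / eps^2 to the posterior precision, so the mean
    # moves towards Y_j / lam_j and the variance shrinks towards zero.
    prior_mean, prior_var = np.zeros_like(Y), s2.copy()
    for _ in range(iterations):
        post_prec = 1.0 / prior_var + lam ** 2 / eps ** 2
        post_var = 1.0 / post_prec
        post_mean = post_var * (prior_mean / prior_var + lam * Y / eps ** 2)
        prior_mean, prior_var = post_mean, post_var
    return prior_mean, prior_var

naive = Y / lam            # naive coefficient-wise (projection-type) estimator
for k in (1, 5, 50, 500):
    mean_k, var_k = iterated_posterior(Y, lam, eps, s2, k)
    print(f"k={k:4d}  max|posterior mean - Y/lambda| = {np.max(np.abs(mean_k - naive)):.2e}"
          f"  max posterior variance = {np.max(var_k):.2e}")

In the setting of the poster, the hierarchical prior additionally randomises the truncation dimension, and the limit of the iteration is a projection estimator whose dimension minimises a penalised contrast criterion; the sketch above omits that layer and illustrates only the collapse mechanism.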

References:
Bunke, O. and Johannes, J. (2005). Selfinformative limits of Bayes estimates and generalized maximum likelihood. Statistics, 39(6):483–502.
Johannes, J., Simoni, A. and Schenk, R. (2015). Adaptive Bayesian estimation in indirect Gaussian sequence space models. Discussion paper, arXiv:1502.00184.
Massart, P. (2003). Concentration inequalities and model selection. École d’Été de Probabilités de Saint-Flour XXXIII, volume 1869. Springer.