Ruprecht-Karls-Universität Heidelberg, Institute of Mathematics, Research Group Statistics of Inverse Problems





Last updated on 18 Apr 2024 by XL.
Poster (.pdf):
Workshop "Inverse problems: theory and statistical inference" at the IWH (International Academic Forum) in Heidelberg

Presented by:
Xavier Loizeau

Title:
A Bayesian interpretation of data-driven estimation by model selection

Abstract:
Considering an indirect Gaussian sequence space model and a hierarchical prior, Johannes, Simoni and Schenk (2016) show oracle and minimax-optimal concentration and convergence rates for the associated posterior distribution and Bayes estimator, respectively, as the noise level tends to zero. Notably, the hierarchical prior depends neither on the parameter value θ◦ that generates the data nor on the given class. In this paper the posterior is taken iteratively as a new prior and the associated posterior is computed again and again for the same observation. Thereby, a family of fully data-driven prior distributions, indexed by the iteration parameter, is constructed. Each element of this family leads to oracle and minimax-optimal concentration and convergence rates for the associated posterior distribution and Bayes estimator as the noise level tends to zero. Moreover, the Bayes estimators can be interpreted either as fully data-driven shrinkage estimators or as a fully data-driven aggregation of projection estimators. Interestingly, increasing the value of the iteration parameter gives, in some sense, more weight to the information contained in the observations than to that in the hierarchical prior. In particular, for a fixed noise level, letting the iteration parameter tend to infinity, the associated posterior distribution shrinks to a point measure. The limit distribution is degenerate at the value of a projection estimator with a fully data-driven choice of the dimension parameter using a model selection approach as in Massart (2003), where a penalized contrast criterion is minimised. Thereby, the classical model selection approach gives, in some sense, an infinitely increasing weight to the information contained in the observations in comparison to the prior distribution. It is further shown that the limit distribution and the associated Bayes estimator converge with oracle and minimax-optimal rates as the noise level tends to zero.
Simulations are presented to illustrate the influence of the iteration parameter, which provides, in some sense, a compromise between the fully data-driven hierarchical Bayes estimator proposed by Johannes, Simoni and Schenk (2016) and the fully data-driven projection estimator by model selection.
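The shrinkage-to-a-point-measure effect described above can be illustrated in a minimal conjugate setting. The sketch below is not the authors' method or code: it considers a single coordinate of a Gaussian sequence model, y ~ N(θ, ε²) with prior N(m, s²), and reuses the same observation t times, taking each posterior as the next prior. All variable names are illustrative assumptions.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# after t conjugate Gaussian updates with the SAME observation y, the
# posterior is again Gaussian, its mean puts weight t on the data, and
# its variance tends to zero as t grows, i.e. the posterior degenerates
# to a point mass at the data-driven estimate.

def iterated_posterior(y, eps2, m, s2, t):
    """Posterior mean and variance after t updates with the same y."""
    prec = 1.0 / s2 + t / eps2            # accumulated precision
    mean = (m / s2 + t * y / eps2) / prec  # data weight grows with t
    return mean, 1.0 / prec

if __name__ == "__main__":
    y, eps2, m, s2 = 1.5, 0.1, 0.0, 1.0
    for t in (1, 10, 1000):
        mean, var = iterated_posterior(y, eps2, m, s2, t)
        print(t, round(mean, 4), round(var, 6))
```

As t increases, the posterior mean moves toward y and the variance vanishes, mirroring the abstract's claim that the iteration parameter gives ever more weight to the observations relative to the prior.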

References:
Bunke, O. and Johannes, J. (2005). Selfinformative limits of Bayes estimates and generalized maximum likelihood. Statistics, 39(6):483–502.
Johannes, J., Simoni, A. and Schenk, R. (2015). Adaptive Bayesian estimation in indirect Gaussian sequence space models. Discussion paper, arXiv:1502.00184.
Massart, P. (2003). Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII, Lecture Notes in Mathematics, vol. 1869. Springer.