Journal of Machine Learning Research 6 (2005) 883–904    Submitted 12/04; Revised 3/05; Published 5/05

Learning from Examples as an Inverse Problem

Ernesto De Vito    [email protected]
Dipartimento di Matematica
Università di Modena e Reggio Emilia
Modena, Italy
and INFN, Sezione di Genova, Genova, Italy

Lorenzo Rosasco    [email protected]
Andrea Caponnetto    [email protected]
Umberto De Giovannini    [email protected]
Francesca Odone    [email protected]
DISI
Università di Genova, Genova, Italy

Editor: Peter Bartlett

Abstract

Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless, the connection with inverse problems was considered only for the discrete (finite sample) problem, and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows us to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.
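For reference, the regularized least-squares (Tikhonov) estimator mentioned in the abstract can be sketched in its standard finite-sample form; the notation below (kernel $K$, hypothesis space $\mathcal{H}$, regularization parameter $\lambda$) is generic and is not taken from this paper.
\[
f_\lambda^{\mathbf{z}} = \operatorname*{arg\,min}_{f \in \mathcal{H}} \; \frac{1}{n}\sum_{i=1}^{n}\bigl(f(x_i)-y_i\bigr)^2 + \lambda \|f\|_{\mathcal{H}}^2,
\qquad
f_\lambda^{\mathbf{z}}(x) = \sum_{i=1}^{n}\alpha_i K(x,x_i),
\quad
\boldsymbol{\alpha} = (\mathbf{K} + n\lambda I)^{-1}\mathbf{y},
\]
where $\mathbf{K}_{ij}=K(x_i,x_j)$ is the kernel matrix computed on the training sample $\mathbf{z}=\{(x_i,y_i)\}_{i=1}^{n}$. The closed-form expression for $\boldsymbol{\alpha}$ follows from the representer theorem applied to the penalized empirical risk above.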