Show simple item record

dc.contributor.advisor      Tomaso Poggio
dc.contributor.author       Caponnetto, Andrea
dc.contributor.other        Center for Biological and Computational Learning (CBCL)
dc.date.accessioned         2006-09-29T18:36:42Z
dc.date.available           2006-09-29T18:36:42Z
dc.date.issued              2006-09-10
dc.identifier.other         MIT-CSAIL-TR-2006-062
dc.identifier.other         CBCL-264
dc.identifier.uri           http://hdl.handle.net/1721.1/34216
dc.description.abstract     We develop some new error bounds for learning algorithms induced by regularization methods in the regression setting. The "hardness" of the problem is characterized in terms of the parameters r and s, the first related to the "complexity" of the target function, the second connected to the effective dimension of the marginal probability measure over the input space. We show, extending previous results, that by a suitable choice of the regularization parameter as a function of the number of available examples, it is possible to attain the optimal minimax rates of convergence for the expected squared loss of the estimators, over the family of priors fulfilling the constraint r + s > 1/2. The setting considers both labelled and unlabelled examples, the latter being crucial for the optimality results on the priors in the range r < 1/2.
dc.format.extent            16 p.
dc.format.extent            776374 bytes
dc.format.extent            738421 bytes
dc.format.mimetype          application/postscript
dc.format.mimetype          application/pdf
dc.language.iso             en_US
dc.relation.ispartofseries  Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.subject                  optimal rates, regularized least-squares algorithm, regularization methods, adaptation
dc.title                    Optimal Rates for Regularization Operators in Learning Theory
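
For a concrete picture of the estimator discussed in the abstract, the following is a minimal, illustrative sketch of kernel regularized least-squares (ridge regression in a reproducing kernel Hilbert space) in which the regularization parameter is chosen as a decaying function of the sample size n. This is not the report's implementation: the Gaussian kernel, the decay exponent theta, and all names below are placeholder assumptions; the report derives the optimal parameter choice from the priors r and s.

# Minimal sketch (not the report's implementation) of regularized
# least-squares with a sample-size-dependent regularization parameter.
# The Gaussian kernel and the exponent `theta` are illustrative
# assumptions; the report ties the optimal choice to the priors r, s.
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Kernel matrix k(x, z) = exp(-||x - z||^2 / (2 * width^2))
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-d2 / (2.0 * width**2))

def rls_fit(X, y, lam):
    # Coefficients c solve (K + n * lam * I) c = y
    n = len(y)
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def rls_predict(X_train, c, X_new):
    # f(x) = sum_i c_i k(x, x_i)
    return gaussian_kernel(X_new, X_train) @ c

# Toy usage: lam_n = n**(-theta); theta = 0.5 is a stand-in value only.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(n)
theta = 0.5
c = rls_fit(X, y, lam=n ** (-theta))
X_new = np.linspace(-1.0, 1.0, 50)[:, None]
y_hat = rls_predict(X, c, X_new)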

