Show simple item record

dc.contributor.advisor: Tomaso Poggio
dc.contributor.author: Rosasco, Lorenzo
dc.contributor.author: Pereverzyev, Sergei
dc.contributor.author: De Vito, Ernesto
dc.contributor.other: Center for Biological and Computational Learning (CBCL)
dc.date.accessioned: 2008-10-17T15:30:10Z
dc.date.available: 2008-10-17T15:30:10Z
dc.date.issued: 2008-10-16
dc.identifier.uri: http://hdl.handle.net/1721.1/42896
dc.description.abstract: The choice of the regularization parameter is a fundamental problem in supervised learning, since the performance of most algorithms depends crucially on one or more such parameters. In particular, a central theoretical issue is how much prior knowledge about the problem is needed to choose the regularization parameter suitably and obtain learning rates. In this paper we present a strategy, the balancing principle, for choosing the regularization parameter without knowledge of the regularity of the target function; this choice adaptively achieves the best error rate. Our main result applies to regularization algorithms in reproducing kernel Hilbert spaces with the square loss, though we also study how a similar principle can be used in other settings. As a straightforward corollary, we immediately derive adaptive parameter choices for several recently studied kernel methods. Numerical experiments with the proposed parameter choice rules are also presented.
dc.format.extent: 24 p.
dc.relation.ispartofseries: MIT-CSAIL-TR-2008-062
dc.relation.ispartofseries: CBCL-275
dc.subject: Adaptive Model Selection
dc.subject: Learning Theory
dc.subject: Inverse Problems
dc.subject: Regularization
dc.title: Adaptive Kernel Methods Using the Balancing Principle
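The abstract describes a Lepskii-type balancing principle for picking the regularization parameter adaptively. The sketch below is only an illustration of that general idea, not the paper's implementation: the geometric grid of parameters, the Gaussian kernel, the constant `C`, and the sample-error proxy `sigma(lam) ~ 1/(sqrt(n) * sqrt(lam))` are all assumptions made for the example. The rule keeps increasing lambda as long as each estimator stays within a fixed multiple of the sample-error bound of every estimator computed with a smaller lambda.

```python
import numpy as np

def krr_fit(K, y, lam):
    """Kernel ridge regression: fitted values on the training inputs."""
    n = len(y)
    return K @ np.linalg.solve(K + n * lam * np.eye(n), y)

def balancing_choice(K, y, lambdas, C=1.0):
    """Lepskii-type balancing rule (illustrative sketch).

    lambdas must be an increasing grid. Returns the largest lambda whose
    estimator stays within 4 * sigma(lam_j) of every estimator with a
    smaller lambda_j, where sigma is an *assumed* sample-error proxy.
    """
    n = len(y)
    fits = [krr_fit(K, y, lam) for lam in lambdas]
    # Assumed sample-error proxy; the correct bound depends on the setting.
    sigma = [C / (np.sqrt(n) * np.sqrt(lam)) for lam in lambdas]
    chosen = 0
    for i in range(1, len(lambdas)):
        within = all(
            np.linalg.norm(fits[i] - fits[j]) / np.sqrt(n) <= 4 * sigma[j]
            for j in range(i)
        )
        if within:
            chosen = i
        else:
            break
    return lambdas[chosen]
```

A typical use would run this over a geometric grid, e.g. `balancing_choice(K, y, np.logspace(-6, 0, 10))`; no knowledge of the target function's regularity enters the selection, which is the point of the principle.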

