dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2020-03-25T20:02:45Z
dc.date.available: 2020-03-25T20:02:45Z
dc.date.issued: 2020-03-25
dc.identifier.uri: https://hdl.handle.net/1721.1/124343
dc.description.abstract: We consider here the class of supervised learning algorithms known as Empirical Risk Minimization (ERM). The classical theory by Vapnik and others characterizes universal consistency of ERM in the classical regime, in which the architecture of the learning network is fixed and n, the number of training examples, goes to infinity. According to the classical theory, the minimizer of the empirical risk is consistent if the hypothesis space has finite complexity. We do not have a similar general theory for the modern regime of interpolating regressors and over-parameterized deep networks, in which d > n and d/n remains constant as n goes to infinity. In this note I propose the outline of such a theory based on the specific notion of CVloo stability of the learning algorithm with respect to perturbations of the training set. The theory shows that for interpolating regressors and separating classifiers (either kernel machines or deep ReLU networks): (1) minimizing CVloo stability minimizes the expected error, and (2) the most stable solutions are minimum norm solutions. The hope is that this approach may lead to a unified theory encompassing both the modern regime and the classical one.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.publisher: Center for Brains, Minds and Machines (CBMM)
dc.relation.ispartofseries: CBMM Memo;103
dc.title: Stable Foundations for Learning: a foundational framework for learning theory in both the classical and modern regime.
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
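
The abstract's second claim, that among interpolating solutions the most CVloo-stable ones are the minimum norm solutions, can be made concrete with a small numerical sketch. The code below is not taken from the memo; it is a minimal illustration assuming over-parameterized linear regression (d > n), where the minimum norm interpolant is computed with the pseudoinverse and CVloo stability is estimated as the average change in the prediction at a training point when that point is left out. The function names (min_norm_fit, other_interpolant, cvloo_instability) are ad hoc.

# Illustrative sketch only (not from CBMM Memo 103): in over-parameterized
# linear regression (d > n) every interpolating solution fits the training
# data exactly, but the minimum norm one (pseudoinverse) tends to change
# least when a training point is left out, i.e. it is the most CVloo stable.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # modern regime: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
V = rng.standard_normal(d)           # fixed direction used to build a competing interpolant

def min_norm_fit(X, y):
    # Minimum norm interpolating solution w = X^+ y.
    return np.linalg.pinv(X) @ y

def other_interpolant(X, y, scale=5.0):
    # Minimum norm solution plus the component of V lying in the null space
    # of X; it still satisfies X @ w = y but has larger norm.
    w = min_norm_fit(X, y)
    v_null = V - np.linalg.pinv(X) @ (X @ V)
    return w + scale * v_null

def cvloo_instability(fit, X, y):
    # Average |f_S(x_i) - f_{S minus i}(x_i)| over left-out points i
    # (smaller value = more CVloo stable).
    w_full = fit(X, y)
    diffs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w_loo = fit(X[mask], y[mask])
        diffs.append(abs(X[i] @ w_full - X[i] @ w_loo))
    return float(np.mean(diffs))

print("minimum norm interpolant:", cvloo_instability(min_norm_fit, X, y))
print("perturbed interpolant   :", cvloo_instability(other_interpolant, X, y))

With these settings the minimum norm interpolant should show a noticeably smaller leave-one-out perturbation than the norm-inflated interpolant, which is the direction of the abstract's claim; the sketch only makes the two quantities concrete and does not reproduce the memo's analysis.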

