dc.contributor.advisor | Leslie Kaelbling | |
dc.contributor.author | Kawaguchi, Kenji | en_US |
dc.contributor.author | Kaelbling, Leslie Pack | en_US |
dc.contributor.author | Bengio, Yoshua | en_US |
dc.contributor.other | Learning and Intelligent Systems | en |
dc.date.accessioned | 2018-05-09T19:55:51Z | |
dc.date.available | 2018-05-09T19:55:51Z | |
dc.date.issued | 2018-05-01 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/115274 | |
dc.description.abstract | With a direct analysis of neural networks, this paper presents a mathematically tight generalization theory to partially address an open problem regarding the generalization of deep learning. Unlike previous bound-based theory, our main theory is quantitatively as tight as possible for every dataset individually, while remaining competitive in the qualitative insights it provides. Our results give insight into why and how deep learning can generalize well despite its large capacity, complexity, possible algorithmic instability, nonrobustness, and sharp minima, answering an open question in the literature. We also discuss the limitations of our results and propose additional open problems. | en_US |
dc.format.extent | 31 pages | en_US |
dc.relation.ispartofseries | MIT-CSAIL-TR-2018-014 | |
dc.rights | Creative Commons Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_US |
dc.subject | neural network | en_US |
dc.subject | learning theory | en_US |
dc.title | Generalization in Deep Learning | en_US |
dc.date.updated | 2018-05-09T19:55:51Z | |