dc.contributor.author | Poggio, Tomaso | |
dc.contributor.author | Mhaskar, Hrushikesh | |
dc.contributor.author | Rosasco, Lorenzo | |
dc.contributor.author | Miranda, Brando | |
dc.contributor.author | Liao, Qianli | |
dc.date.accessioned | 2016-11-28T17:38:30Z | |
dc.date.available | 2016-11-28T17:38:30Z | |
dc.date.issued | 2016-11-23 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/105443 | |
dc.description.abstract | [formerly titled "Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: a Review"] The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures. | en_US |
dc.description.sponsorship | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM), arXiv | en_US |
dc.relation.ispartofseries | CBMM Memo Series;058 | |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | * |
dc.subject | Deep Learning | en_US |
dc.subject | deep convolutional networks | en_US |
dc.title | Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |
dc.identifier.citation | arXiv:1611.00740v5 | en_US |