dc.contributor.author | Mhaskar, Hrushikesh | |
dc.contributor.author | Liao, Qianli | |
dc.contributor.author | Poggio, Tomaso | |
dc.date.accessioned | 2016-03-08T17:13:57Z | |
dc.date.available | 2016-03-08T17:13:57Z | |
dc.date.issued | 2016-03-08 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/101635 | |
dc.description.abstract | We describe computational tasks - especially in vision - that correspond to compositional/hierarchical functions. While the universal approximation property holds for both hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with exponentially lower VC-dimension and exponentially fewer training parameters. This leads to the question of approximation by sparse polynomials (in the number of independent parameters) and, as a consequence, by deep networks. We also discuss connections between our results and the learnability of sparse Boolean functions, settling an old conjecture by Bengio. | en_US |
dc.description.sponsorship | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. HNM was supported in part by ARO Grant W911NF-15-1-0385. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM), arXiv | en_US |
dc.relation.ispartofseries | CBMM Memo Series;045 | |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/3.0/us/ | * |
dc.subject | Computational tasks | en_US |
dc.subject | Computer vision | en_US |
dc.subject | Hierarchy | en_US |
dc.title | Learning Real and Boolean Functions: When Is Deep Better Than Shallow | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |
dc.identifier.citation | arXiv:1603.00988 | en_US |