Show simple item record

dc.contributor.author: Mhaskar, Hrushikesh
dc.contributor.author: Liao, Qianli
dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2016-03-08T17:13:57Z
dc.date.available: 2016-03-08T17:13:57Z
dc.date.issued: 2016-03-08
dc.identifier.uri: http://hdl.handle.net/1721.1/101635
dc.description.abstract: We describe computational tasks, especially in vision, that correspond to compositional/hierarchical functions. While the universal approximation property holds for both hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with an exponentially lower VC-dimension and number of training parameters. This leads to the question of approximation by polynomials that are sparse in the number of independent parameters and, as a consequence, by deep networks. We also discuss connections between our results and the learnability of sparse Boolean functions, settling an old conjecture by Bengio.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. HNM was supported in part by ARO Grant W911NF-15-1-0385.
dc.language.iso: en_US
dc.publisher: Center for Brains, Minds and Machines (CBMM), arXiv
dc.relation.ispartofseries: CBMM Memo Series;045
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/
dc.subject: computational tasks
dc.subject: Computer vision
dc.subject: Hierarchy
dc.title: Learning Real and Boolean Functions: When Is Deep Better Than Shallow
dc.type: Technical Report
dc.type: Working Paper
dc.type: Other
dc.identifier.citation: arXiv:1603.00988
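The compositional functions discussed in the abstract can be illustrated with a minimal sketch (a hypothetical example, not taken from the memo): a function of 8 variables built from bivariate constituents arranged in a binary tree, which a deep network can mirror node by node, while a generic shallow approximator must treat it as an arbitrary 8-variable function.

```python
def h(a, b):
    # One bivariate constituent node; an arbitrary smooth choice
    # used here purely for illustration.
    return a * b + a + b

def compositional(xs):
    # Evaluate the binary tree bottom-up: repeatedly combine adjacent
    # pairs with h until a single value remains. The number of h-nodes
    # is linear in len(xs), unlike a generic 8-variable function.
    level = list(xs)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(compositional([1, 0, 1, 0, 1, 0, 1, 0]))  # evaluates the 8-leaf tree
```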

