Show simple item record

dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2025-06-02T17:39:04Z
dc.date.available: 2025-06-02T17:39:04Z
dc.date.issued: 2025-02-01
dc.identifier.uri: https://hdl.handle.net/1721.1/159332
dc.description.abstract: In previous papers [4, 6] we claimed that for each efficiently Turing-computable function there exists a deep and sparse network that approximates it arbitrarily well. We also claimed a key role for compositional sparsity in this result. Though the general claims are correct, some of our statements may have been imprecise and thus potentially misleading. In this short paper we formally restate our claims and provide definitions and proofs.
dc.description.sponsorship: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.publisher: Center for Brains, Minds and Machines (CBMM)
dc.relation.ispartofseries: CBMM Memo;156
dc.title: On efficiently computable functions, deep networks and sparse compositionality
dc.type: Article
dc.type: Technical Report
dc.type: Working Paper
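The abstract's central notion of compositional sparsity — a high-dimensional function built from constituents that each depend on only a constant number of inputs — can be illustrated with a minimal sketch. This example is not from the memo itself; the constituent function `g` and the binary-tree structure are hypothetical choices used only to show the shape of such a composition.

```python
def g(a, b):
    # A generic smooth two-argument constituent (hypothetical choice).
    return a * b + a

def f(x):
    # A depth-3 binary-tree composition of 8 inputs: f depends on all
    # 8 variables, yet every node has arity 2 (constant fan-in).
    # This bounded fan-in is the sparsity property the abstract says
    # deep, sparse networks can exploit when approximating f.
    h1 = g(x[0], x[1])
    h2 = g(x[2], x[3])
    h3 = g(x[4], x[5])
    h4 = g(x[6], x[7])
    return g(g(h1, h2), g(h3, h4))

print(f([1, 2, 3, 4, 5, 6, 7, 8]))
```

A dense function of 8 variables would need constituents of arity 8; here each node sees only 2 inputs, so a network whose layers mirror the tree can approximate each low-arity constituent separately.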

