On efficiently computable functions, deep networks and sparse compositionality

Author(s)
Poggio, Tomaso
Download: CBMM-Memo-156.pdf (586.7 KB)
Abstract
In previous papers [4, 6] we have claimed that for every efficiently Turing-computable function there exists a deep, sparse network that approximates it arbitrarily well. We also claimed a key role for compositional sparsity in this result. Though the general claims are correct, some of our statements may have been imprecise and thus potentially misleading. In this short paper we restate our claims formally and provide definitions and proofs.
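
The memo itself is not reproduced on this page, but the abstract's central notion, compositional sparsity, refers to a function of many variables being expressible as a composition of constituent functions that each depend on only a few variables. The minimal Python sketch below illustrates such a structure; the particular constituent functions and names are illustrative assumptions, not taken from the memo.

    import math

    # Each constituent function has low arity (here: two inputs).
    # Compositional sparsity means the overall function can be computed
    # as a DAG of such low-arity constituents.
    def h1(a, b):
        return math.tanh(a + 2.0 * b)

    def h2(a, b):
        return a * b

    def h3(a, b):
        return math.sin(a - b)

    def f(x1, x2, x3, x4):
        # f depends on four variables, but it is evaluated as a binary-tree
        # composition of the two-input constituents above.
        return h3(h1(x1, x2), h2(x3, x4))

    print(f(0.1, 0.2, 0.3, 0.4))

A deep, sparse network can mirror such a graph by devoting a small subnetwork to each low-arity constituent, which is the intuition behind the abstract's claim that efficiently computable functions admit deep and sparse approximants.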
Date issued
2025-02-01
URI
https://hdl.handle.net/1721.1/159332
Publisher
Center for Brains, Minds and Machines (CBMM)
Series/Report no.
CBMM Memo;156

Collections
  • CBMM Memo Series
