dc.contributor.author | Zhang, Chiyuan | |
dc.contributor.author | Evangelopoulos, Georgios | |
dc.contributor.author | Voinea, Stephen | |
dc.contributor.author | Rosasco, Lorenzo | |
dc.contributor.author | Poggio, Tomaso | |
dc.date.accessioned | 2015-12-08T20:45:08Z | |
dc.date.available | 2015-12-08T20:45:08Z | |
dc.date.issued | 2014-03-17 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/100163 | |
dc.description.abstract | Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream: modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel, mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed of similar classes, and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks, for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification. | en_US
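The abstract describes a projection-and-pooling module: project a signal onto a set of templates and their transformed versions, then summarize the projections with an empirical distribution to obtain a transformation-invariant signature. The following is a minimal sketch of that idea for shift transformations only; it is not the authors' implementation, and the function name, histogram pooling range, and toy data are illustrative assumptions.

```python
import numpy as np

def invariant_signature(x, templates, shifts, n_bins=20):
    """Hypothetical sketch: pool projections of a signal onto shifted
    templates into per-template histograms (empirical distributions),
    following the projection-and-pooling scheme described in the abstract."""
    signature = []
    for t in templates:
        # Project x onto every shifted version of the template
        # (a sampled orbit of the shift transformation).
        projections = [np.dot(x, np.roll(t, s)) for s in shifts]
        # Pool with an empirical distribution (histogram) so the
        # resulting component is insensitive to shifts of x.
        hist, _ = np.histogram(projections, bins=n_bins, range=(-1.0, 1.0))
        signature.append(hist / len(projections))
    return np.concatenate(signature)

# Toy usage on random unit-norm signals and templates (assumed data).
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
x /= np.linalg.norm(x)
templates = [rng.standard_normal(256) for _ in range(4)]
templates = [t / np.linalg.norm(t) for t in templates]
sig = invariant_signature(x, templates, shifts=range(0, 256, 8))
print(sig.shape)  # (80,) = 4 templates x 20 histogram bins
```

In the paper's framework, several such modules stacked in layers would yield the composite (deep) representations mentioned in the abstract; the sketch above corresponds to a single projection-and-pooling layer.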
dc.description.sponsorship | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM), arXiv | en_US |
dc.relation.ispartofseries | CBMM Memo Series;002 | |
dc.rights | Attribution-NonCommercial 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/3.0/us/ | * |
dc.subject | Audio Representation | en_US |
dc.subject | Hierarchy | en_US |
dc.subject | Invariance | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Theories for Intelligence | en_US |
dc.title | A Deep Representation for Invariance And Music Classification | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |
dc.identifier.citation | arXiv:1404.0400v1 | en_US |