I-theory on depth vs width: hierarchical function composition
(Center for Brains, Minds and Machines (CBMM), 2015-12-29)
Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can ...
Predicting Actions Before They Occur
(Center for Brains, Minds and Machines (CBMM), 2015-10-26)
Humans are experts at reading others’ actions in social contexts. They efficiently process others’ movements in real-time to predict intended goals. Here we designed a two-person reaching task to investigate real-time body ...
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex
(Center for Brains, Minds and Machines (CBMM), bioRxiv, 2015-04-26)
Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to ...
Parsing Occluded People by Flexible Compositions
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-06-01)
This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model with a tree structure, building on recent work [32, 6], and exploit the connectivity prior ...
On Invariance and Selectivity in Representation Learning
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
We discuss data representations which can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if they are one ...
The infancy of the human brain
(Center for Brains, Minds and Machines (CBMM), Neuron, 2015-10-07)
The human infant brain is the only known machine able to master a natural language and develop explicit, symbolic, and communicable systems of knowledge that deliver rich representations of the external world. With the ...
How Important is Weight Symmetry in Backpropagation?
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-11-29)
Gradient backpropagation (BP) requires symmetric feedforward and feedback connections—the same weights must be used for forward and backward passes. This “weight transport problem” [1] is thought to be one of the main ...
Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...