Now showing items 11-20 of 150
Fast, invariant representation for human action in the visual system
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-06)
The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography ...
Where do hypotheses come from?
(Center for Brains, Minds and Machines (CBMM), 2016-10-24)
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available ...
Double descent in the condition number
(Center for Brains, Minds and Machines (CBMM), 2019-12-04)
In solving a system of n linear equations in d variables Ax=b, the condition number of the (n,d) matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse ...
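The condition number described in this abstract bounds how much a relative error in b is amplified in the solution x of Ax = b. A minimal sketch of that bound, assuming the simplest case of a diagonal matrix (where the 2-norm condition number is just the ratio of the largest to smallest absolute diagonal entry); this is an illustration, not code from the paper:

```python
import math

def solve_diag(diag, b):
    """Solve Ax = b when A is diagonal with the given entries."""
    return [bi / ai for ai, bi in zip(diag, b)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

diag = [100.0, 1.0]                                 # singular values 100 and 1
kappa = max(map(abs, diag)) / min(map(abs, diag))   # condition number = 100

b = [1.0, 1.0]
db = [0.0, 0.01]                                    # small perturbation of b
x = solve_diag(diag, b)
x_pert = solve_diag(diag, [bi + di for bi, di in zip(b, db)])

rel_err_b = norm(db) / norm(b)
rel_err_x = norm([xp - xi for xp, xi in zip(x_pert, x)]) / norm(x)

# The amplification of the relative error never exceeds kappa:
assert rel_err_x <= kappa * rel_err_b
```

For well-conditioned matrices (kappa near 1) the bound is tight and benign; as kappa grows, tiny errors in b can dominate the recovered x.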
Brain Signals Localization by Alternating Projections
(Center for Brains, Minds and Machines (CBMM), arXiv, 2019-08-29)
We present a novel solution to the problem of localization of brain signals. The solution is sequential and iterative, and is based on minimizing the least-squares (LS) criterion by the alternating projection (AP) algorithm, ...
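The alternating-projection (AP) idea invoked here can be illustrated in its classic form: repeatedly projecting a point onto two convex sets drives it toward a point in their intersection. A hypothetical sketch with two lines in the plane (not the paper's localization algorithm):

```python
import math

def project_onto_line(p, a, d):
    """Orthogonally project point p onto the line through a with unit direction d."""
    t = (p[0] - a[0]) * d[0] + (p[1] - a[1]) * d[1]
    return (a[0] + t * d[0], a[1] + t * d[1])

# Set 1: the x-axis; Set 2: the line y = x. They intersect at the origin.
s = 1 / math.sqrt(2)
p = (3.0, 4.0)
for _ in range(50):
    p = project_onto_line(p, (0.0, 0.0), (1.0, 0.0))  # project onto x-axis
    p = project_onto_line(p, (0.0, 0.0), (s, s))      # project onto y = x

# Each double projection halves the distance to the intersection,
# so the iterates converge to (0, 0).
assert abs(p[0]) < 1e-6 and abs(p[1]) < 1e-6
```

In the paper's setting the projections are instead onto subspaces chosen to minimize the least-squares criterion sequentially, one source at a time.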
Deep compositional robotic planners that follow natural language commands
(Center for Brains, Minds and Machines (CBMM), Computation and Systems Neuroscience (Cosyne), 2020-05-31)
We demonstrate how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands in a continuous configuration space to move and manipulate objects. Our approach combines ...
Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas
(Center for Brains, Minds and Machines (CBMM), The Ninth International Conference on Learning Representations (ICLR), 2020-10-25)
We demonstrate a reinforcement learning agent which uses a compositional recurrent neural network that takes as input an LTL formula and determines satisfying actions. The input LTL formulas have never been seen before, ...
Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset
(Center for Brains, Minds and Machines (CBMM), The 22nd Annual Conference of the International Speech Communication Association (Interspeech), 2021-08-30)
Visually-grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance ...
Social Interactions as Recursive MDPs
(Center for Brains, Minds and Machines (CBMM), Conference on Robot Learning (CoRL), 2021-11-08)
While machines and robots must interact with humans, providing them with social skills has been a largely overlooked topic. This is mostly a consequence of the fact that tasks such as navigation, command following, and ...
Representation Learning in Sensory Cortex: a theory
(Center for Brains, Minds and Machines (CBMM), 2014-11-14)
We review and apply a computational theory of the feedforward path of the ventral stream in visual cortex based on the hypothesis that its main function is the encoding of invariant representations of images. A key ...
A Review of Relational Machine Learning for Knowledge Graphs
(Center for Brains, Minds and Machines (CBMM), arXiv, 2015-03-23)
Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, ...