
    • Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? 

      Liao, Qianli; Leibo, Joel Z; Mroueh, Youssef; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-27)
      The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be ...
    • Can we Contain Covid-19 without Locking-down the Economy? 

      Shalev-Shwartz, Shai; Shashua, Amnon (Center for Brains, Minds and Machines (CBMM), 2020-03-26)
      We present an analysis of a risk-based selective quarantine model where the population is divided into low- and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd-immunity. We tackle ...
    • Classical generalization bounds are surprisingly tight for Deep Networks 

      Liao, Qianli; Miranda, Brando; Hidary, Jack; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-07-11)
      Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the ...
    • Complexity of Representation and Inference in Compositional Models with Part Sharing 

      Yuille, Alan L.; Mottaghi, Roozbeh (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-05)
      This paper performs a complexity analysis of a class of serial and parallel compositional models of multiple objects and shows that they enable efficient representation and rapid inference. Compositional models are generative ...
    • The Compositional Nature of Event Representations in the Human Brain 

      Barbu, Andrei; Narayanaswamy, Siddharth; Xiong, Caiming; Corso, Jason J.; Fellbaum, Christiane D.; e.a. (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-07-14)
      How does the human brain represent simple compositions of constituents: actors, verbs, objects, directions, and locations? Subjects viewed videos during neuroimaging (fMRI) sessions from which sentential descriptions of ...
    • Compositional Networks Enable Systematic Generalization for Grounded Language Understanding 

      Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei (Center for Brains, Minds and Machines (CBMM), Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021-11-07)
      Humans are remarkably flexible when understanding new sentences that include combinations of concepts they have never encountered before. Recent work has shown that while deep networks can mimic some human language ...
    • Compositional RL Agents That Follow Language Commands in Temporal Logic 

      Kuo, Yen-Ling; Barbu, Andrei; Katz, Boris (Center for Brains, Minds and Machines (CBMM), Frontiers in Robotics and AI, 2021-07-19)
      We demonstrate how a reinforcement learning agent can use compositional recurrent neural networks to learn to carry out commands specified in linear temporal logic (LTL). Our approach takes as input an LTL formula, ...
    • Compositional Sparsity of Learnable Functions 

      Poggio, Tomaso; Fraser, Maia (Center for Brains, Minds and Machines (CBMM), 2024-02-08)
      Neural networks have demonstrated impressive success in various domains, raising the question of what fundamental principles underlie the effectiveness of the best AI systems and quite possibly of human intelligence. This ...
    • Compositional Sparsity: a framework for ML 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2022-10-10)
      The main claim of this perspective is that compositional sparsity of the target function, which corresponds to the task to be learned, is the key principle underlying machine learning. I prove that under restrictions of ...
    • Computational role of eccentricity dependent cortical magnification 

      Poggio, Tomaso; Mutch, Jim; Isik, Leyla (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-06)
      We develop a sampling extension of M-theory focused on invariance to scale and translation. Quite surprisingly, the theory predicts an architecture of early vision with increasing receptive field sizes and a high resolution ...
    • Concepts in a Probabilistic Language of Thought 

      Goodman, Noah D.; Tenenbaum, Joshua B.; Gerstenberg, Tobias (Center for Brains, Minds and Machines (CBMM), 2014-06-14)
      Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive ...
    • Constant Modulus Algorithms via Low-Rank Approximation 

      Adler, Amir; Wax, Mati (Center for Brains, Minds and Machines (CBMM), 2018-04-12)
      We present a novel convex-optimization-based approach to the solutions of a family of problems involving constant modulus signals. The family of problems includes the constant modulus and the constrained constant modulus, ...
    • Contrastive Analysis with Predictive Power: Typology Driven Estimation of Grammatical Error Distributions in ESL 

      Berzak, Yevgeni; Reichart, Roi; Katz, Boris (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-06-05)
      This work examines the impact of crosslinguistic transfer on grammatical errors in English as Second Language (ESL) texts. Using a computational framework that formalizes the theory of Contrastive Analysis (CA), we demonstrate ...
    • Cross-validation Stability of Deep Networks 

      Banburski, Andrzej; De La Torre, Fernanda; Plant, Nishka; Shastri, Ishana; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-02-09)
      Recent theoretical results show that gradient descent on deep neural networks under exponential loss functions locally maximizes classification margin, which is equivalent to minimizing the norm of the weight matrices under ...
    • Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) 

      Mao, Junhua; Xu, Wei; Yang, Yi; Wang, Jiang; Huang, Zhiheng; e.a. (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-05-07)
      In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. ...
    • Deep compositional robotic planners that follow natural language commands 

      Kuo, Yen-Ling; Katz, Boris; Barbu, Andrei (Center for Brains, Minds and Machines (CBMM), Computation and Systems Neuroscience (Cosyne), 2020-05-31)
      We demonstrate how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands in a continuous configuration space to move and manipulate objects. Our approach combines ...
    • Deep Convolutional Networks are Hierarchical Kernel Machines 

      Anselmi, Fabio; Rosasco, Lorenzo; Tan, Cheston; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
      We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...
    • Deep Nets: What have they ever done for Vision? 

      Yuille, Alan L.; Liu, Chenxi (Center for Brains, Minds and Machines (CBMM), 2018-05-10)
      This is an opinion paper about the strengths and weaknesses of Deep Nets. They are at the center of recent progress on Artificial Intelligence and are of growing importance in Cognitive Science and Neuroscience since they ...
    • Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning 

      Lotter, William; Kreiman, Gabriel; Cox, David (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-03-01)
      While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning — leveraging unlabeled examples to learn about the structure of a domain — remains ...
    • Deep Regression Forests for Age Estimation 

      Shen, Wei; Guo, Yilu; Wang, Yan; Zhao, Kai; Wang, Bo; e.a. (Center for Brains, Minds and Machines (CBMM), 2018-06-01)
      Age estimation from facial images is typically cast as a nonlinear regression problem. The main challenge of this problem is the facial feature space w.r.t. ages is inhomogeneous, due to the large variation in facial ...