Now showing items 1-20 of 62

    • 3D Object-Oriented Learning: An End-to-end Transformation-Disentangled 3D Representation 

      Liao, Qianli; Poggio, Tomaso (2017-12-31)
We provide a more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...
    • An analysis of training and generalization errors in shallow and deep networks 

      Mhaskar, H.N.; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2019-05-30)
      This paper is motivated by an open problem around deep networks, namely, the apparent absence of overfitting despite large over-parametrization which allows perfect fitting of the training data. In this paper, we analyze ...
    • An analysis of training and generalization errors in shallow and deep networks 

      Mhaskar, Hrushikesh; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-02-20)
      An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit ...
    • Biologically Inspired Mechanisms for Adversarial Robustness 

      Vuyyuru Reddy, Manish; Banburski, Andrzej; Plant, Nishka; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2020-06-23)
      A convolutional neural network strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream seems to be robust to small ...
    • Biologically-Plausible Learning Algorithms Can Scale to Large Datasets 

      Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-09-27)
The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address ...
    • Biologically-plausible learning algorithms can scale to large datasets 

      Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv.org, 2018-11-08)
      The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address ...
    • Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex 

      Liao, Qianli; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-04-12)
      We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with ...
    • Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? 

      Liao, Qianli; Leibo, Joel Z; Mroueh, Youssef; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-27)
      The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be ...
    • Classical generalization bounds are surprisingly tight for Deep Networks 

      Liao, Qianli; Miranda, Brando; Hidary, Jack; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2018-07-11)
      Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the ...
    • Compositional Sparsity of Learnable Functions 

      Poggio, Tomaso; Fraser, Maia (Center for Brains, Minds and Machines (CBMM), 2024-02-08)
      Neural networks have demonstrated impressive success in various domains, raising the question of what fundamental principles underlie the effectiveness of the best AI systems and quite possibly of human intelligence. This ...
    • Compositional Sparsity: a framework for ML 

      Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2022-10-10)
      The main claim of this perspective is that compositional sparsity of the target function, which corresponds to the task to be learned, is the key principle underlying machine learning. I prove that under restrictions of ...
    • Computational role of eccentricity dependent cortical magnification 

      Poggio, Tomaso; Mutch, Jim; Isik, Leyla (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-06-06)
      We develop a sampling extension of M-theory focused on invariance to scale and translation. Quite surprisingly, the theory predicts an architecture of early vision with increasing receptive field sizes and a high resolution ...
    • Cross-validation Stability of Deep Networks 

      Banburski, Andrzej; De La Torre, Fernanda; Plant, Nishka; Shastri, Ishana; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-02-09)
      Recent theoretical results show that gradient descent on deep neural networks under exponential loss functions locally maximizes classification margin, which is equivalent to minimizing the norm of the weight matrices under ...
    • Deep Convolutional Networks are Hierarchical Kernel Machines 

      Anselmi, Fabio; Rosasco, Lorenzo; Tan, Cheston; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2015-08-05)
      We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, ...
    • A Deep Representation for Invariance And Music Classification 

      Zhang, Chiyuan; Evangelopoulos, Georgios; Voinea, Stephen; Rosasco, Lorenzo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2014-03-17)
      Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream; modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this ...
    • Deep vs. shallow networks: An approximation theory perspective 

      Mhaskar, Hrushikesh; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2016-08-12)
      The paper briefly reviews several recent results on hierarchical architectures for learning from examples, that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in ...
    • Do Deep Neural Networks Suffer from Crowding? 

      Volokitin, Anna; Roig, Gemma; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), arXiv, 2017-06-26)
      Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the ...
    • Double descent in the condition number 

      Poggio, Tomaso; Kur, Gil; Banburski, Andrzej (Center for Brains, Minds and Machines (CBMM), 2019-12-04)
      In solving a system of n linear equations in d variables Ax=b, the condition number of the (n,d) matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse ...
    • Dreaming with ARC 

      Banburski, Andrzej; Ghandi, Anshula; Alford, Simon; Dandekar, Sylee; Chin, Peter; et al. (Center for Brains, Minds and Machines (CBMM), 2020-11-23)
      Current machine learning algorithms are highly specialized to whatever it is they are meant to do, e.g. playing chess, picking up objects, or object recognition. How can we extend this to a system that could solve a ...
    • The Effects of Image Distribution and Task on Adversarial Robustness 

      Kunhardt, Owen; Deza, Arturo; Poggio, Tomaso (Center for Brains, Minds and Machines (CBMM), 2021-02-18)
      In this paper, we propose an adaptation to the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε_0, ε_1] (interval of adversarial perturbation strengths) ...