Now showing items 1-8 of 8
The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors
(Center for Brains, Minds and Machines (CBMM), 2018-11-01)
The digital information age has generated new outlets for content creators to publish so-called “fake news”, a new form of propaganda that is intentionally designed to mislead the reader. With the widespread effects of the ...
Foveation-based Mechanisms Alleviate Adversarial Examples
(Center for Brains, Minds and Machines (CBMM), arXiv, 2016-01-19)
We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations---applying the CNN in ...
Theory of Deep Learning III: explaining the non-overfitting puzzle
(arXiv, 2017-12-30)
THIS MEMO IS REPLACED BY CBMM MEMO 90
A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ...
Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results
(Center for Brains, Minds and Machines (CBMM), 2018-11-02)
Deep neural networks have been shown to predict neural responses in higher visual cortex. The mapping from the model to a neuron in the brain occurs through a linear combination of many units in the model, leaving open the ...
Transformer Module Networks for Systematic Generalization in Visual Question Answering
(Center for Brains, Minds and Machines (CBMM), 2022-02-03)
Transformer-based models achieve great performance on Visual Question Answering (VQA). However, when we evaluate them on systematic generalization, i.e., handling novel combinations of known concepts, their performance ...
Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations
(Center for Brains, Minds and Machines (CBMM), 2022-01-26)
The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations ...
Do Neural Networks for Segmentation Understand Insideness?
(Center for Brains, Minds and Machines (CBMM), 2020-04-04)
The insideness problem is an image segmentation modality that consists of determining which pixels are inside and outside a region. Deep Neural Networks (DNNs) excel in segmentation benchmarks, but it is unclear that they ...
On the Capability of Neural Networks to Generalize to Unseen Category-Pose Combinations
(Center for Brains, Minds and Machines (CBMM), 2020-07-17)
Recognizing an object’s category and pose lies at the heart of visual understanding. Recent works suggest that deep neural networks (DNNs) often fail to generalize to category-pose combinations not seen during training. ...