Can we Contain Covid-19 without Locking-down the Economy?
(Center for Brains, Minds and Machines (CBMM), 2020-03-26)
We present an analysis of a risk-based selective quarantine model where the population is divided into low- and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd immunity. We tackle ...
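The selective-quarantine idea in this abstract can be sketched with a standard two-group SIR model: the high-risk group's contacts are scaled down until the low-risk group passes the herd-immunity threshold 1 − 1/R0. This is a minimal illustration only; the parameters, group split, and dynamics below are assumptions, not taken from the paper.

```python
def simulate(beta=0.3, gamma=0.1, days=300, q=0.05, seed_frac=1e-4):
    # S/I/R fractions within each group. The low-risk group (l) mixes
    # freely; the high-risk group (h) has its contact rate scaled by q
    # while quarantined. Quarantine ends once the low-risk group's
    # recovered fraction passes the herd-immunity threshold 1 - 1/R0.
    S_l, I_l, R_l = 1.0 - seed_frac, seed_frac, 0.0
    S_h, I_h, R_h = 1.0, 0.0, 0.0
    herd = 1.0 - gamma / beta  # herd-immunity threshold, 1 - 1/R0
    for _ in range(days):
        quarantined = R_l < herd
        c = q if quarantined else 1.0            # high-risk contact scaling
        new_l = beta * I_l * S_l                 # new low-risk infections
        new_h = c * beta * (I_l + I_h) * S_h     # new high-risk infections
        S_l, I_l, R_l = S_l - new_l, I_l + new_l - gamma * I_l, R_l + gamma * I_l
        S_h, I_h, R_h = S_h - new_h, I_h + new_h - gamma * I_h, R_h + gamma * I_h
    return {"S_l": S_l, "I_l": I_l, "R_l": R_l,
            "S_h": S_h, "I_h": I_h, "R_h": R_h, "herd": herd}
```

With these illustrative parameters (R0 = 3), the low-risk epidemic runs its course well past the threshold while the high-risk group is shielded during the growth phase.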
Universal Format Conversions
(2020-06-05)
Information is the fuel for intelligence: any competitive intelligence system should be information-hungry. “Formats”, on the other hand, are the containers for information. Accessing information without the ability to decipher ...
Flexible Intelligence
(2020-06-18)
We discuss the problem of flexibility in intelligence, a relatively little-studied topic in machine learning and AI. Flexibility can be understood as out-of-distribution generalization, and it can be achieved by converting ...
Do Neural Networks for Segmentation Understand Insideness?
(Center for Brains, Minds and Machines (CBMM), 2020-04-04)
The insideness problem is an image segmentation modality that consists of determining which pixels lie inside and which lie outside a region. Deep Neural Networks (DNNs) excel in segmentation benchmarks, but it is unclear that they ...
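The insideness task can be made concrete with a classical, non-learned baseline: flood-fill from the image border marks everything reachable without crossing the curve as "outside", and the remaining non-curve pixels are "inside". This sketch is for illustration only; it is not the networks studied in the paper.

```python
def insideness(grid):
    # grid: 2D list of 0/1, where 1 marks pixels on the closed curve.
    # Returns a same-shaped grid with 1 for pixels inside the curve.
    h, w = len(grid), len(grid[0])
    outside = [[False] * w for _ in range(h)]
    # Seed the flood fill with every non-curve pixel on the image border.
    stack = [(r, c) for r in range(h) for c in range(w)
             if (r in (0, h - 1) or c in (0, w - 1)) and grid[r][c] == 0]
    for r, c in stack:
        outside[r][c] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and not outside[nr][nc]:
                outside[nr][nc] = True
                stack.append((nr, nc))
    # Inside = not on the curve and not reachable from the border.
    return [[int(grid[r][c] == 0 and not outside[r][c]) for c in range(w)]
            for r in range(h)]
```

Part of what makes insideness interesting for DNNs is that this globally correct algorithm is trivially expressible as iterative local computation, yet networks trained on segmentation benchmarks may not recover it.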
On the Capability of Neural Networks to Generalize to Unseen Category-Pose Combinations
(Center for Brains, Minds and Machines (CBMM), 2020-07-17)
Recognizing an object’s category and pose lies at the heart of visual understanding. Recent works suggest that deep neural networks (DNNs) often fail to generalize to category-pose combinations not seen during training. ...
Implicit dynamic regularization in deep networks
(Center for Brains, Minds and Machines (CBMM), 2020-08-17)
Square loss has been observed to perform well in classification tasks, at least as well as cross-entropy. However, a theoretical justification is lacking. Here we develop a theoretical analysis for the square loss that also ...
Loss landscape: SGD can have a better view than GD
(Center for Brains, Minds and Machines (CBMM), 2020-07-01)
Consider a loss function L = Σ_{i=1}^n ℓ_i² with ℓ_i = f(x_i) − y_i, where f(x) is a deep feedforward network with R layers, no bias terms and scalar output. Assume the network is overparametrized, that is, d ≫ n, where d is the ...
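The setup in this abstract, a square loss L = Σ ℓ_i² over a bias-free network with scalar output in the overparametrized regime d ≫ n, can be sketched in a few lines of numpy. The sizes and the single ReLU hidden layer here are illustrative assumptions (the paper considers R layers), not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny overparametrized instance: n = 3 samples, far more parameters than n.
n, d_in, hidden = 3, 2, 50
X = rng.standard_normal((n, d_in))
y = rng.standard_normal(n)

# Bias-free feedforward network with scalar output, as in the abstract
# (one ReLU hidden layer here for brevity).
W1 = rng.standard_normal((d_in, hidden)) / np.sqrt(d_in)
w2 = rng.standard_normal(hidden) / np.sqrt(hidden)

def f(x):
    return np.maximum(x @ W1, 0.0) @ w2  # scalar output per sample, no biases

ell = f(X) - y          # residuals ell_i = f(x_i) - y_i
L = np.sum(ell ** 2)    # square loss L = sum_i ell_i^2
```

Overparametrization (d ≫ n) is what makes the loss landscape analysis interesting: the network typically has enough capacity to drive all n residuals to zero, so the question becomes which of the many global minima SGD finds.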