CBMM Memo Series: Recent submissions
- Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”
  (Center for Brains, Minds and Machines (CBMM), 2017-12-05) In [42] we suggested that any memory stored in the human/animal brain is forgotten following the Ebbinghaus curve; in this follow-on paper, we define a novel algebraic structure, a Forgetting Neural Network, as a simple ...
- Theory of Deep Learning III: explaining the non-overfitting puzzle
  (arXiv, 2017-12-30) THIS MEMO IS REPLACED BY CBMM MEMO 90. A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly ...
- 3D Object-Oriented Learning: An End-to-end Transformation-Disentangled 3D Representation
  (2017-12-31) We provide a more detailed explanation of the ideas behind a recent paper on “Object-Oriented Deep Learning” [1] and extend it to handle 3D inputs/outputs. Similar to [1], every layer of the system takes in a list of ...