dc.contributor.author | Yuille, Alan L. | |
dc.contributor.author | Liu, Chenxi | |
dc.date.accessioned | 2018-05-10T15:36:52Z | |
dc.date.available | 2018-05-10T15:36:52Z | |
dc.date.issued | 2018-05-10 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/115292 | |
dc.description.abstract | This is an opinion paper about the strengths and weaknesses of Deep Nets. They are at the center of recent progress in Artificial Intelligence and are of growing importance in Cognitive Science and Neuroscience, since they enable the development of computational models that can deal with a large range of visually realistic stimuli and visual tasks. They have clear limitations but also enormous successes, and there is gradual, though incomplete, understanding of their inner workings. It seems unlikely that Deep Nets in their current form will be the best long-term solution either for building general-purpose intelligent machines or for understanding the mind/brain, but it is likely that many aspects of them will remain. At present, Deep Nets do very well on specific types of visual tasks and on specific benchmarked datasets, but they are much less general-purpose, flexible, and adaptive than the human visual system. Moreover, methods like Deep Nets may run into fundamental difficulties when faced with the enormous complexity of natural images. To illustrate our main points while keeping the reference list short, this paper is slightly biased towards work from our own group. | en_US
dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
dc.relation.ispartofseries | CBMM Memo Series;088 | |
dc.title | Deep Nets: What have they ever done for Vision? | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |