dc.contributor.author | Wang, Yu-Siang | |
dc.contributor.author | Liu, Chenxi | |
dc.contributor.author | Zeng, Xiaohui | |
dc.contributor.author | Yuille, Alan L. | |
dc.date.accessioned | 2018-05-15T15:59:52Z | |
dc.date.available | 2018-05-15T15:59:52Z | |
dc.date.issued | 2018-05-10 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/115375 | |
dc.description.abstract | In this paper, we study the problem of parsing structured knowledge graphs from textual descriptions. In particular, we consider the scene graph representation, which encodes objects together with their attributes and relations; this representation has proven useful across a variety of vision and language applications. We begin by introducing an alternative but equivalent edge-centric view of scene graphs that connects them to dependency parses. Together with a careful redesign of the label and action space, we combine the two-stage pipeline used in prior work (generic dependency parsing followed by simple post-processing) into a single stage, enabling end-to-end training. The scene graphs generated by our learned neural dependency parser achieve an F-score similarity of 49.67% to ground truth graphs on our evaluation set, surpassing the best previous approach by 5%. We further demonstrate the effectiveness of our learned parser on image retrieval applications. | en_US |
dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
dc.relation.ispartofseries | CBMM Memo Series;082 | |
dc.title | Scene Graph Parsing as Dependency Parsing | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |