Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/135813
Full metadata record
dc.contributor.author: Qi, Y.
dc.contributor.author: Pan, Z.
dc.contributor.author: Hong, Y.
dc.contributor.author: Yang, M.H.
dc.contributor.author: Van Den Hengel, A.
dc.contributor.author: Wu, Q.
dc.date.issued: 2022
dc.identifier.citation: Proceedings / IEEE International Conference on Computer Vision. IEEE International Conference on Computer Vision, 2022, pp.1635-1644
dc.identifier.isbn: 9781665428125
dc.identifier.issn: 1550-5499
dc.identifier.uri: https://hdl.handle.net/2440/135813
dc.description.abstract: Vision-and-Language Navigation (VLN) requires an agent to find a path to a remote location on the basis of natural-language instructions and a set of photo-realistic panoramas. Most existing methods take the words in the instructions and the discrete views of each panorama as the minimal unit of encoding. However, this requires a model to match different nouns (e.g., TV, table) against the same input view feature. In this work, we propose an object-informed sequential BERT to encode visual perceptions and linguistic instructions at the same fine-grained level, namely objects and words. Our sequential BERT also enables the visual-textual clues to be interpreted in light of the temporal context, which is crucial to multi-round VLN tasks. Additionally, we enable the model to identify the relative direction (e.g., left/right/front/back) of each navigable location and the room type (e.g., bedroom, kitchen) of its current and final navigation goal, as such information is widely mentioned in instructions implying the desired next and final locations. We thus enable the model to know where the objects lie in the images, and where they stand in the scene. Extensive experiments demonstrate the effectiveness of our method against several state-of-the-art methods on three indoor VLN tasks: REVERIE, NDH, and R2R. Project repository: https://github.com/YuankaiQi/ORIST
dc.description.statementofresponsibility: Yuankai Qi, Zizheng Pan, Yicong Hong, Ming-Hsuan Yang, Anton van den Hengel, Qi Wu
dc.language.iso: en
dc.publisher: IEEE
dc.rights: Copyright © 2021, IEEE
dc.source.uri: https://ieeexplore.ieee.org/xpl/conhome/9709627/proceeding
dc.title: The Road to Know-Where: An Object-and-Room Informed Sequential BERT for Indoor Vision-Language Navigation
dc.type: Conference paper
dc.contributor.conference: IEEE/CVF International Conference on Computer Vision (ICCV) (10 Oct 2021 - 17 Oct 2021 : Montreal, QC, Canada (virtual online))
dc.identifier.doi: 10.1109/ICCV48922.2021.00168
dc.publisher.place: online
dc.relation.grant: http://purl.org/au-research/grants/arc/DE190100539
pubs.publication-status: Published
dc.identifier.orcid: Van Den Hengel, A. [0000-0003-3027-8364]
dc.identifier.orcid: Wu, Q. [0000-0003-3631-256X]
Appears in Collections: Computer Vision publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.