Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/138395
Full metadata record
dc.contributor.author: Garg, S.
dc.contributor.author: Suenderhauf, N.
dc.contributor.author: Milford, M.
dc.date.issued: 2018
dc.identifier.citation: IEEE International Conference on Robotics and Automation, 2018, pp. 3645-3652
dc.identifier.isbn: 1538630818
dc.identifier.isbn: 9781538630815
dc.identifier.issn: 1050-4729
dc.identifier.issn: 2577-087X
dc.identifier.uri: https://hdl.handle.net/2440/138395
dc.description.abstract: When a human drives a car along a road for the first time, they later recognize where they are on the return journey, typically without needing to look in their rear-view mirror or turn around to look back, despite significant viewpoint and appearance change. Such navigation capabilities are typically attributed to our semantic visual understanding of the environment [1] beyond geometry: recognizing the types of places we are passing through, such as “passing a shop on the left” or “moving through a forested area”. Humans are in effect using place categorization [2] to perform specific place recognition even when the viewpoint is 180 degrees reversed. Recent advances in deep neural networks have enabled high-performance semantic understanding of visual places and scenes, opening up the possibility of emulating what humans do. In this work, we develop a novel methodology for using the semantics-aware higher-order layers of deep neural networks to recognize specific places from within a reference database. To further improve robustness to appearance change, we develop a descriptor normalization scheme that builds on the success of normalization schemes for pure appearance-based techniques such as SeqSLAM [3]. Using two different datasets, one road-based and one pedestrian-based, we evaluate the performance of the system on place recognition over reverse traversals of a route with a limited field-of-view camera and no turn-back-and-look behaviours, and compare to existing state-of-the-art techniques and vanilla off-the-shelf features. The results demonstrate significant improvements over the existing state of the art, especially for extreme perceptual challenges that involve both great viewpoint change and environmental appearance change. We also provide experimental analyses of the contributions of the various system components: the use of spatio-temporal sequences, place categorization, and place-centric characteristics as opposed to object-centric semantics.
dc.description.statementofresponsibility: Sourav Garg, Niko Suenderhauf and Michael Milford
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE International Conference on Robotics and Automation ICRA
dc.rights: ©2018 IEEE
dc.source.uri: https://ieeexplore.ieee.org/xpl/conhome/8449910/proceeding
dc.title: Don't look back: Robustifying place categorization for viewpoint- and condition-invariant place recognition
dc.type: Conference paper
dc.contributor.conference: IEEE International Conference on Robotics and Automation (ICRA) (21 May 2018 - 25 May 2018 : Brisbane, Australia)
dc.identifier.doi: 10.1109/ICRA.2018.8461051
dc.publisher.place: Piscataway, NJ
dc.relation.grant: http://purl.org/au-research/grants/arc/CE140100016
dc.relation.grant: http://purl.org/au-research/grants/arc/FT140101229
pubs.publication-status: Published
dc.identifier.orcid: Garg, S. [0000-0001-6068-3307]
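The abstract describes normalizing deep-network descriptors before matching a query place against a reference database. The sketch below is purely illustrative and is not the authors' implementation: it uses random stand-in descriptors, a simple per-descriptor standardization (one common normalization choice), and cosine-similarity nearest-neighbour matching as assumptions.

```python
import numpy as np

# Hypothetical descriptors: one row per place, e.g. flattened activations
# from a higher-order (semantic) network layer. Values are random stand-ins.
rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 512))              # reference traversal
query = reference[42] + 0.1 * rng.normal(size=512)   # noisy revisit of place 42

def normalize(desc):
    """Standardize a descriptor to zero mean and unit variance,
    a simple way to suppress condition-dependent bias (illustrative only)."""
    d = desc - desc.mean(axis=-1, keepdims=True)
    return d / (d.std(axis=-1, keepdims=True) + 1e-8)

ref_n = normalize(reference)
q_n = normalize(query)

# Cosine similarity between the query and every reference place,
# then retrieve the most similar reference index.
sims = ref_n @ q_n / (np.linalg.norm(ref_n, axis=1) * np.linalg.norm(q_n))
best = int(np.argmax(sims))
print(best)  # recovers place 42 for this small perturbation
```

With a small perturbation, the normalized cosine match recovers the correct reference index; the paper's actual pipeline additionally exploits spatio-temporal sequences, which this sketch omits.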
Appears in Collections:Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.