Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/116545
Full metadata record
DC Field | Value | Language
dc.contributor.author | Felix, R. | -
dc.contributor.author | Vijay Kumar, B. | -
dc.contributor.author | Reid, I. | -
dc.contributor.author | Carneiro, G. | -
dc.contributor.editor | Ferrari, V. | -
dc.contributor.editor | Hebert, M. | -
dc.contributor.editor | Sminchisescu, C. | -
dc.contributor.editor | Weiss, Y. | -
dc.date.issued | 2018 | -
dc.identifier.citation | Lecture Notes in Artificial Intelligence, 2018 / Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (ed./s), vol.11210 LNCS, pp.21-37 | -
dc.identifier.isbn | 9783030012304 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.issn | 1611-3349 | -
dc.identifier.uri | http://hdl.handle.net/2440/116545 | -
dc.description.abstract | In generalized zero-shot learning (GZSL), the set of classes is split into seen and unseen classes, where training relies on the semantic features of both the seen and unseen classes and the visual representations of only the seen classes, while testing uses the visual representations of both the seen and unseen classes. Current methods address GZSL by learning a transformation from the visual to the semantic space, exploiting the assumption that the distributions of classes in the semantic and visual spaces are relatively similar. Such methods tend to transform unseen testing visual representations into the semantic features of one of the seen classes rather than those of the correct unseen class, resulting in low GZSL classification accuracy. Recently, generative adversarial networks (GANs) have been explored to synthesize visual representations of the unseen classes from their semantic features; the synthesized representations of the seen and unseen classes are then used to train the GZSL classifier. This approach has been shown to boost GZSL classification accuracy, but one important constraint is missing: there is no guarantee that the synthetic visual representations can generate back their semantic features in a multi-modal cycle-consistent manner. Without this constraint, the synthetic visual representations may represent their semantic features poorly, which means that enforcing it can improve GAN-based approaches. In this paper, we propose to enforce this constraint through a new regularization of the GAN training that forces the generated visual features to reconstruct their original semantic features. Once our model is trained with this multi-modal cycle-consistent semantic compatibility, we can synthesize more representative visual representations for the seen and, more importantly, the unseen classes. Our proposed approach achieves the best GZSL classification results in the field on several publicly available datasets. | -
dc.description.statementofresponsibility | Rafael Felix, B. G. Vijay Kumar, Ian Reid, and Gustavo Carneiro | -
dc.language.iso | en | -
dc.publisher | Springer | -
dc.relation.ispartofseries | Lecture Notes in Computer Science | -
dc.rights | © Springer Nature Switzerland AG 2018 | -
dc.source.uri | http://dx.doi.org/10.1007/978-3-030-01231-1_2 | -
dc.subject | Generalized zero-shot learning; generative adversarial networks; cycle consistency loss | -
dc.title | Multi-modal cycle-consistent generalized zero-shot learning | -
dc.type | Conference paper | -
dc.contributor.conference | 15th European Conference on Computer Vision (ECCV 2018) (8 Sep 2018 - 14 Sep 2018 : Munich) | -
dc.identifier.doi | 10.1007/978-3-030-01231-1_2 | -
dc.relation.grant | http://purl.org/au-research/grants/arc/CE140100016 | -
dc.relation.grant | http://purl.org/au-research/grants/arc/FL130100102 | -
dc.relation.grant | http://purl.org/au-research/grants/arc/DP180103232 | -
pubs.publication-status | Published | -
dc.identifier.orcid | Reid, I. [0000-0001-7790-6423] | -
dc.identifier.orcid | Carneiro, G. [0000-0002-5571-6220] | -
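The multi-modal cycle-consistency regularizer summarized in the abstract (generated visual features must reconstruct their original semantic features) can be sketched as follows. This is a minimal illustration only: the linear maps standing in for the generator G and the semantic-reconstruction network R, and all dimensions, are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: semantic features (e.g. class attribute
# vectors) and synthesized visual representations.
SEM_DIM, VIS_DIM = 4, 6

# Linear stand-ins for the generator G (semantic -> visual) and the
# reconstruction network R (visual -> semantic) used in the cycle.
G = rng.normal(size=(VIS_DIM, SEM_DIM))
R = rng.normal(size=(SEM_DIM, VIS_DIM))

def cycle_consistency_loss(semantic):
    """Synthesize a visual feature from a semantic vector, map it back
    to the semantic space, and penalize the squared reconstruction
    error between the original and reconstructed semantic vectors."""
    visual = G @ semantic   # synthesized visual representation
    recon = R @ visual      # reconstructed semantic features
    return float(np.sum((recon - semantic) ** 2))

# In training, this term would be added (with a weight) to the usual
# adversarial loss so the generator is penalized for visual features
# that cannot reproduce their semantic features.
s = rng.normal(size=SEM_DIM)
regularizer = cycle_consistency_loss(s)
```

In the full method this penalty is minimized jointly with the GAN objective, so the synthesized seen- and unseen-class features stay semantically grounded before they are used to train the GZSL classifier.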
Appears in Collections:Aurora harvest 3
Australian Institute for Machine Learning publications
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.