Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/111348
Type: Conference paper
Title: Multi-attention network for one shot learning
Author: Wang, P.
Liu, L.
Shen, C.
Huang, Z.
van den Hengel, A.
Shen, H.
Citation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, vol.2017-January, pp.6212-6220
Publisher: IEEE
Publisher Place: Online
Issue Date: 2017
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 9781538604588
ISSN: 1063-6919
Conference Name: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (21 Jul 2017 - 26 Jul 2017 : Honolulu, HI)
Statement of Responsibility: Peng Wang, Lingqiao Liu, Chunhua Shen, Zi Huang, Anton van den Hengel, Heng Tao Shen
Abstract: One-shot learning is a challenging problem in which the aim is to recognize a class identified by a single training image. Given the practical importance of one-shot learning, it is surprising that the rich information present in the class tag itself has largely been ignored. Most existing approaches restrict the use of the class tag to finding similar classes and transferring classifiers or metrics learned thereon. We demonstrate here, in contrast, that the class tag can inform one-shot learning as a guide to visual attention on the training image when creating the image representation. This is motivated by the fact that human beings can better interpret a training image if its class tag is understood. Specifically, we design a neural network architecture which takes the semantic embedding of the class tag, generates attention maps, and uses those attention maps to create the image features for one-shot learning. Note that, unlike in other applications, our task requires that the learned attention generator generalizes to novel classes. We show that this can be realized by representing class tags with distributed word embeddings and learning the attention map generator from an auxiliary training set. We also design a multiple-attention scheme to extract richer information from the exemplar image, which leads to a substantial performance improvement. Through comprehensive experiments, we show that the proposed approach outperforms the baseline methods.
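
The mechanism the abstract describes (attention maps generated from the class-tag word embedding and used to pool convolutional features into an image representation, with multiple maps for richer features) can be sketched in a few lines. The sketch below is an illustrative reconstruction, not the authors' released implementation; the module name MultiAttentionEncoder, the embedding and channel dimensions, and the dot-product attention scoring are assumptions made for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiAttentionEncoder(nn.Module):
        """Sketch: generate K attention maps from a class-tag word embedding
        and use them to pool a conv feature map into an image representation.
        Dimensions and scoring function are illustrative assumptions."""

        def __init__(self, embed_dim=300, feat_channels=512, num_maps=2):
            super().__init__()
            self.num_maps = num_maps
            # A single linear layer produces K query vectors (one per
            # attention map), conditioned on the tag embedding.
            self.query = nn.Linear(embed_dim, feat_channels * num_maps)

        def forward(self, conv_feat, tag_embedding):
            # conv_feat: (B, C, H, W) CNN feature map of the exemplar image
            # tag_embedding: (B, E) distributed word embedding of the class tag
            B, C, H, W = conv_feat.shape
            q = self.query(tag_embedding).view(B, self.num_maps, C)  # (B, K, C)
            feat = conv_feat.view(B, C, H * W)                       # (B, C, HW)
            scores = torch.bmm(q, feat)                              # (B, K, HW)
            attn = F.softmax(scores, dim=-1)                         # K attention maps
            # Attention-weighted pooling: one pooled feature vector per map,
            # concatenated into the final image representation.
            pooled = torch.bmm(attn, feat.transpose(1, 2))           # (B, K, C)
            return pooled.reshape(B, -1)                             # (B, K*C)

    # Usage: encode a one-shot exemplar given its class tag embedding.
    enc = MultiAttentionEncoder()
    exemplar_feat = torch.randn(1, 512, 7, 7)  # e.g. from a conv backbone
    tag_emb = torch.randn(1, 300)              # e.g. a word2vec/GloVe vector
    rep = enc(exemplar_feat, tag_emb)          # shape (1, 1024)

Because class tags are represented by distributed word embeddings, the same query layer can produce attention maps for tags never seen during training, which is what lets the attention generator, trained on an auxiliary set, transfer to novel one-shot classes.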
Rights: © 2017 IEEE
DOI: 10.1109/CVPR.2017.658
Grant ID: http://purl.org/au-research/grants/arc/DE170101259
Published version: http://dx.doi.org/10.1109/cvpr.2017.658
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.
