Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137539
Type: Journal article
Title: Piecewise Classifier Mappings: Learning Fine-Grained Learners for Novel Categories with Few Examples
Author: Wei, X.S.
Wang, P.
Liu, L.
Shen, C.
Wu, J.
Citation: IEEE Transactions on Image Processing, 2019; 28(12):6116-6125
Publisher: IEEE
Issue Date: 2019
ISSN: 1057-7149
1941-0042
Statement of Responsibility: Xiu-Shen Wei, Peng Wang, Lingqiao Liu, Chunhua Shen, Jianxin Wu
Abstract: Humans are capable of learning a new fine-grained concept with very little supervision, e.g., a few exemplary images of a bird species, yet our best deep learning systems need hundreds or thousands of labeled examples. In this paper, we try to reduce this gap by studying the fine-grained image recognition problem in a challenging few-shot learning setting, termed few-shot fine-grained recognition (FSFG). The FSFG task requires learning systems to build classifiers for novel fine-grained categories from few examples (only one, or fewer than five). To solve this problem, we propose an end-to-end trainable deep network, which is inspired by the state-of-the-art fine-grained recognition model and is tailored for the FSFG task. Specifically, our network consists of a bilinear feature learning module and a classifier mapping module: the former encodes the discriminative information of an exemplar image into a feature vector, while the latter maps that intermediate feature into the decision boundary of the novel category. The key novelty of our model is a "piecewise mappings" function in the classifier mapping module, which generates the decision boundary by learning a set of more attainable sub-classifiers in a more parameter-economic way. We learn the exemplar-to-classifier mapping on an auxiliary dataset in a meta-learning fashion, and this mapping is expected to generalize to novel categories. Comprehensive experiments on three fine-grained datasets demonstrate that the proposed method achieves superior performance over the competing baselines.
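
Note: to make the piecewise exemplar-to-classifier mapping described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation. It assumes the bilinear exemplar feature is split into equally sized pieces, each piece is mapped by a small learnable head to a fragment of the classifier weight, and the fragments are concatenated into the decision boundary for the novel category. All class names, dimensions, and the two-layer heads are illustrative assumptions.

    import torch
    import torch.nn as nn

    class PiecewiseClassifierMapping(nn.Module):
        """Sketch of a piecewise exemplar-to-classifier mapping (illustrative only)."""
        def __init__(self, feat_dim=4096, num_pieces=16, hidden_dim=128):
            super().__init__()
            assert feat_dim % num_pieces == 0
            self.num_pieces = num_pieces
            piece_dim = feat_dim // num_pieces
            # One small head per piece; each head generates one sub-classifier
            # fragment, which is cheaper than one large feature-to-classifier map.
            self.heads = nn.ModuleList([
                nn.Sequential(nn.Linear(piece_dim, hidden_dim),
                              nn.ReLU(inplace=True),
                              nn.Linear(hidden_dim, piece_dim))
                for _ in range(num_pieces)
            ])

        def forward(self, exemplar_feat):
            # exemplar_feat: (B, feat_dim) bilinear feature of an exemplar image.
            pieces = exemplar_feat.chunk(self.num_pieces, dim=1)
            sub_classifiers = [head(p) for head, p in zip(self.heads, pieces)]
            # Concatenate the sub-classifiers into the full decision boundary.
            return torch.cat(sub_classifiers, dim=1)

    # Usage sketch: score query features against the generated classifier.
    mapper = PiecewiseClassifierMapping(feat_dim=4096, num_pieces=16)
    exemplar = torch.randn(1, 4096)   # feature of the single labeled exemplar
    queries = torch.randn(8, 4096)    # features of query images
    classifier = mapper(exemplar)     # (1, 4096) classifier weights for the novel class
    scores = queries @ classifier.t() # (8, 1) class scores

In a meta-learning setup as outlined in the abstract, the heads would be trained episodically on an auxiliary dataset so that, at test time, a single exemplar of an unseen category yields a usable classifier.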
Keywords: Computer vision; fine-grained image recognition; few-shot learning; learning to learn
Rights: © 2019 IEEE.
DOI: 10.1109/TIP.2019.2924811
Grant ID: http://purl.org/au-research/grants/arc/DE170101259
Published version: http://dx.doi.org/10.1109/tip.2019.2924811
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.

