Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/137568
Full metadata record
DC field: value
dc.contributor.author: Liu, F.
dc.contributor.author: Tian, Y.
dc.contributor.author: Cordeiro, F.R.
dc.contributor.author: Belagiannis, V.
dc.contributor.author: Reid, I.
dc.contributor.author: Carneiro, G.
dc.contributor.editor: Lian, C.
dc.contributor.editor: Cao, X.
dc.contributor.editor: Rekik, I.
dc.contributor.editor: Xu, X.
dc.contributor.editor: Yan, P.
dc.date.issued: 2021
dc.identifier.citation: Lecture Notes in Artificial Intelligence, 2021 / Lian, C., Cao, X., Rekik, I., Xu, X., Yan, P. (ed./s), vol.12966 LNIP, pp.426-436
dc.identifier.isbn: 9783030875886
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://hdl.handle.net/2440/137568
dc.description: This is the 12th in a series of workshops on this topic, held in conjunction with the 24th International Conference on Medical Image Computing & Computer Assisted Intervention (MICCAI 2021)
dc.description.abstract: The training of deep learning models generally requires a large amount of annotated data for effective convergence and generalisation. However, obtaining high-quality annotations is a laborious and expensive process because the labelling must be done by expert radiologists. The study of semi-supervised learning in medical image analysis is therefore of crucial importance, given that it is much less expensive to obtain unlabelled images than images labelled by expert radiologists. Essentially, semi-supervised methods leverage large sets of unlabelled data to enable better training convergence and generalisation than using only the small set of labelled images. In this paper, we propose Self-supervised Mean Teacher for Semi-supervised (S2MTS2) learning, which combines self-supervised mean-teacher pre-training with semi-supervised fine-tuning. The main innovation of S2MTS2 is the self-supervised mean-teacher pre-training based on joint contrastive learning, which uses an infinite number of pairs of positive query and key features to improve the mean-teacher representation. The model is then fine-tuned using the exponential moving average teacher framework trained with semi-supervised learning. We validate S2MTS2 on the multi-label classification problems from Chest X-ray14 and CheXpert, and the multi-class classification problem from ISIC2018, where we show that it outperforms the previous SOTA semi-supervised learning methods by a large margin. Our code will be available upon paper acceptance.
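The abstract above mentions an exponential moving average (EMA) teacher, the core update rule of mean-teacher frameworks. The following is a minimal sketch of that update with hypothetical names (`ema_update`, `alpha`); it is an illustration of the general technique, not the authors' released code, and the decay value is a common default rather than one taken from the paper.

```python
def ema_update(teacher, student, alpha=0.999):
    """Move each teacher weight toward the corresponding student weight
    by a factor (1 - alpha): the standard mean-teacher EMA rule."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

# Toy weights: after one update, each teacher weight shifts slightly
# toward the student weight; over many steps the teacher becomes a
# smoothed (temporally averaged) copy of the student.
teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher)
```

In practice the teacher is never trained by gradient descent; only the student receives gradients, and the teacher tracks it through this averaging, which is what makes its predictions a stable target for the semi-supervised consistency loss.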
dc.description.statementofresponsibility: Fengbei Liu, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian Reid, and Gustavo Carneiro
dc.language.iso: en
dc.publisher: Springer International Publishing
dc.relation.ispartofseries: Lecture Notes in Computer Science; 12966
dc.rights: © Springer Nature Switzerland AG 2021
dc.source.uri: https://link.springer.com/book/10.1007/978-3-030-87589-3
dc.subject: Semi-supervised learning; Chest X-ray; Self-supervised learning; Multi-label classification
dc.title: Self-supervised Mean Teacher for Semi-supervised Chest X-Ray Classification
dc.type: Conference paper
dc.contributor.conference: 12th International Workshop, Machine Learning in Medical Imaging (MLMI) (27 Sep 2021 : Strasbourg, France)
dc.identifier.doi: 10.1007/978-3-030-87589-3_44
dc.publisher.place: Switzerland
dc.relation.grant: http://purl.org/au-research/grants/arc/DP180103232
dc.relation.grant: http://purl.org/au-research/grants/arc/FT190100525
pubs.publication-status: Published
dc.identifier.orcid: Liu, F. [0000-0003-0355-2006]
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
dc.identifier.orcid: Carneiro, G. [0000-0002-5571-6220]
Appears in Collections: Australian Institute for Machine Learning publications; Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.