Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/135671
Type: Conference paper
Title: Using Style-Transfer to Understand Material Classification for Robotic Sorting of Recycled Beverage Containers
Author: McDonnell, M.D.
Moezzi, B.
Brinkworth, R.S.A.
Citation: Proceedings of the Digital Image Computing: Techniques and Applications (DICTA 2019), 2020, pp.1-8
Publisher: IEEE
Publisher Place: Online
Issue Date: 2020
ISBN: 9781728138572
Conference Name: Digital Image Computing: Techniques and Applications (DICTA) (2 Dec 2019 - 4 Dec 2019 : Perth, Australia)
Statement of Responsibility: Mark D. McDonnell, Bahar Moezzi, Russell S. A. Brinkworth
Abstract: Robotic sorting machines are increasingly being investigated for use in recycling centers. We consider the problem of automatically classifying images of recycled beverage containers by material type, i.e. glass, plastic, metal or liquid-packaging board, when the containers are not in their original condition, meaning their shape and size may be deformed, and coloring and packaging labels may be damaged or dirty. We describe a retrofitted computer vision system and deep convolutional neural network classifier designed for this purpose, which enabled a sorting machine's accuracy and speed to reach commercially viable benchmarks. We investigate what was most important for highly accurate container material recognition: shape, size, color, texture, or all of these? To help answer this question, we made use of style-transfer methods from the field of deep learning. We found that removing either texture or shape cues significantly reduced the accuracy of container material classification, while removing color had a minor negative effect. Unlike recent work on generic objects in ImageNet, networks trained to classify by container material type learned better from object shape than texture. Our findings show that commercial sorting of recycled beverage containers by material type at high accuracy is feasible, even when the containers are in poor condition. Furthermore, we reinforce the recent finding that convolutional neural networks can learn predominantly from either texture cues or shape.
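The cue-ablation idea described in the abstract can be illustrated for its simplest case, color removal: collapsing each image to luminance before training leaves shape and texture intact while discarding color information. This is only a minimal sketch of that one ablation (the function name and the use of ITU-R BT.601 luma weights are illustrative assumptions, not details taken from the paper; the texture and shape ablations in the paper use style-transfer methods, which are not reproduced here).

```python
import numpy as np

def remove_color_cue(image: np.ndarray) -> np.ndarray:
    """Illustrative (not from the paper): discard the color cue from an
    RGB image of shape (H, W, 3) by collapsing it to luminance and
    replicating the gray channel, so shape and texture are preserved."""
    # ITU-R BT.601 luma weights for the R, G, B channels
    gray = image @ np.array([0.299, 0.587, 0.114])
    # Replicate luminance across 3 channels so the result still fits a
    # network that expects 3-channel input
    return np.repeat(gray[..., None], 3, axis=-1)
```

A color-ablated training set built this way lets one compare classifier accuracy with and without the color cue, which is the kind of comparison the abstract reports (color removal having only a minor negative effect).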
Rights: ©2019 IEEE
DOI: 10.1109/DICTA47822.2019.8945993
Grant ID: http://purl.org/au-research/grants/arc/DP170104600
Published version: https://ieeexplore.ieee.org/xpl/conhome/8943071/proceeding
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.