A systematic study of the class imbalance problem in convolutional neural networks.

Citation data:

Neural Networks: The Official Journal of the International Neural Network Society, ISSN: 1879-2782, Vol. 106, pp. 249–259

Publication Year:
2018
Captures: 428
  Readers: 428
Mentions: 1
  Q&A Site Mentions: 1
Social Media: 195
  Tweets: 169
  Shares, Likes & Comments: 26
Citations: 5
  Citation Indexes: 5
Repository URL:
http://arxiv.org/abs/1710.05381
PMID:
30092410
DOI:
10.1016/j.neunet.2018.07.011
Author(s):
Buda, Mateusz; Maki, Atsuto; Mazurowski, Maciej A.
Publisher(s):
Elsevier BV
Tags:
Neuroscience; Computer Science; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Machine Learning; Computer Science - Neural and Evolutionary Computing; Statistics - Machine Learning
Article description:
In this study, we systematically investigate the impact of class imbalance on the classification performance of convolutional neural networks (CNNs) and compare frequently used methods of addressing the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. We use three benchmark datasets of increasing complexity, MNIST, CIFAR-10, and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods of addressing it: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is the area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks, since the overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on the results of our experiments, we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; and (v) thresholding should be applied to compensate for prior class probabilities when the overall number of properly classified cases is of interest.
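To make two of the compared methods concrete, the sketch below (not the authors' code; the data and function names are invented for illustration) shows random oversampling to full balance, as recommended in conclusion (iii), and thresholding that divides predicted probabilities by the training-set class priors, as in conclusion (v):

```python
# A minimal sketch, assuming a generic labeled dataset, of two methods
# compared in the paper: random oversampling to full class balance, and
# compensating network outputs for prior class probabilities.
import numpy as np

rng = np.random.default_rng(0)

def oversample_to_balance(X, y):
    """Randomly duplicate minority-class samples until every class
    matches the size of the largest class (eliminating the imbalance)."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        c_idx = np.where(y == c)[0]
        extra = rng.choice(c_idx, size=target - len(c_idx), replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def threshold_by_priors(probs, priors):
    """Divide each predicted class probability by the training-set prior
    before taking the argmax, so frequent classes lose their head start."""
    adjusted = probs / priors
    return adjusted.argmax(axis=1)

# Toy imbalanced data: 90 samples of class 0, 10 of class 1.
X = rng.normal(size=(100, 4))
y = np.array([0] * 90 + [1] * 10)

Xb, yb = oversample_to_balance(X, y)
print(np.bincount(yb))  # both classes now have 90 samples

priors = np.bincount(y) / len(y)    # [0.9, 0.1]
probs = np.array([[0.85, 0.15]])    # hypothetical softmax output for one sample
print(threshold_by_priors(probs, priors))  # prior-corrected prediction: class 1
```

Note how thresholding flips the decision for the minority class: the raw softmax favors class 0 (0.85 vs. 0.15), but after dividing by the priors (0.94 vs. 1.5) class 1 wins, which matters when the overall number of properly classified cases is the target.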