Calibration of Convolutional Neural Networks

Name
Markus Kängsepp
Abstract
Deep neural networks have become increasingly popular and are nowadays used in many practical applications. However, an accurate prediction by itself might not be enough, as in some areas it is also important to know how confident the model is in its prediction. As recently shown, the predictions of deep neural networks, in contrast to shallow ones, are not well calibrated; for example, deep neural networks tend to be over-confident.
In 2017, Guo et al. published the temperature scaling method (Guo et al., 2017) and compared it to other existing confidence calibration methods. Later that year, Kull et al. published the beta calibration method (Kull et al., 2017); however, it was not tested on neural networks. This thesis evaluates beta calibration in the context of convolutional neural networks and, in order to compare the results with other calibration methods, replicates some of the results of Guo et al.
This thesis compares the histogram binning, isotonic regression and temperature scaling methods from Guo et al. and the beta calibration method by Kull et al. on various state-of-the-art convolutional neural networks. In addition to the loss measures used by Guo et al., the Brier score was included. The results are in accordance with the findings of Guo et al.: beta calibration was slightly worse than temperature scaling for most of the models, but in terms of error rate it was slightly better than temperature scaling.
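For illustration only (this is not code from the thesis), the following minimal sketch shows the two ingredients named above: scalar temperature scaling fitted on validation logits by minimizing negative log-likelihood, and the multi-class Brier score. Function names, array shapes and the search bounds for T are assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the true labels after scaling logits by 1/T.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    # Fit a single scalar T > 0 on the validation set (bounds are an assumption).
    res = minimize_scalar(nll, bounds=(0.05, 10.0),
                          args=(val_logits, val_labels), method='bounded')
    return res.x

def brier_score(probs, labels):
    # Mean squared difference between predicted probabilities and one-hot labels.
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

Given validation logits and integer labels, fit_temperature returns T, and softmax(test_logits / T) gives the calibrated test probabilities whose Brier score can then be compared against the uncalibrated ones.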
Graduation Thesis language
English
Graduation Thesis type
Master - Computer Science
Supervisor(s)
Meelis Kull
Defence year
2018