Extended Methods to Handle Classification Biases

Emma Beauxis-Aussalet, Lynda Hardman

Research output: Contribution to conference › Poster › Academic

Abstract

[Poster presentation of the eponymous paper, published in 2017 from my work at CWI] Classifiers can provide counts of items per class, but systematic classification errors yield biases (e.g., if a class is often misclassified as another, its size may be underestimated). To handle classification biases, the statistics and epidemiology domains devised methods for estimating unbiased class sizes (or class probabilities) without identifying which individual items are misclassified. These bias correction methods are applicable to machine learning classifiers, but in some cases yield high result variance and increased biases. We present the applicability and drawbacks of existing methods and extend them with three novel methods. Our Sample-to-Sample method provides accurate confidence intervals for the bias correction results. Our Maximum Determinant method predicts which classifier yields the least result variance. Our Ratio-to-TP method details the error decomposition in classifier outputs (i.e., how many items classified as class Cy truly belong to Cx, for all possible classes) and has properties of interest for applying the Maximum Determinant method. Our methods are demonstrated empirically, and we discuss the need for establishing theory and guidelines for choosing the methods and classifier to apply.
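For context, a minimal sketch of the classic confusion-matrix bias correction that the abstract builds on (not the paper's extended methods): unbiased class counts are estimated by inverting the matrix of misclassification rates measured on a labelled validation set. The rates and counts below are hypothetical, for illustration only.

```python
import numpy as np

# Misclassification rates estimated on a labelled validation set.
# Rows = true class, columns = predicted class; entry [i, j] is
# P(classified as class j | item truly belongs to class i).
misclassification_rates = np.array([
    [0.90, 0.10],   # class Cx: 10% of its items are classified as Cy
    [0.20, 0.80],   # class Cy: 20% of its items are classified as Cx
])

# Raw per-class counts output by the classifier on the unlabelled data.
observed_counts = np.array([520, 480])

# Observed counts satisfy observed ≈ rates.T @ true_counts,
# so solving the linear system corrects the classification bias.
corrected_counts = np.linalg.solve(misclassification_rates.T, observed_counts)
print(corrected_counts)  # bias-corrected estimates of the true class sizes
```

This illustrates why result variance can blow up: when the misclassification matrix is close to singular (classes that the classifier confuses heavily), the inversion amplifies sampling noise, which is the situation the paper's Maximum Determinant method is designed to anticipate.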
Original language: English
Number of pages: 1
Publication status: Published - 2018
Externally published: Yes
Event: BNAIC 30th Annual Conference 2018 - Den Bosch, Netherlands
Duration: 8 Nov 2018 - 9 Nov 2018

Conference

Conference: BNAIC 30th Annual Conference 2018
Abbreviated title: BNAIC 2018
Country: Netherlands
City: Den Bosch
Period: 8/11/18 - 9/11/18
