DIFAIR: Towards Learning Differentiated Image Representations
Abstract
Neural network classifiers are generally trained to differentiate between the same classes during training and testing. When an input image contains a class that was not part of the training set, it should be detected in order to prevent incorrect predictions. The process of detecting "unknown" classes is called Open-Set Recognition (OSR). Given that a neural network extracts a representation (a feature vector) describing an image, its ability to detect the presence of a class in an image, through the recognition of specific features, should also imply the ability to detect the absence of a "known" class, through the absence of those features in the representation. In this article, we present DIFAIR (DIFferentiAted Image Representations), a novel approach aimed at learning a representation that exhibits: (i) class separability, through predefined class positions in the representation space; (ii) the extraction of distinct features, which remain inactive when not present in the image; and (iii) semantic meaning when comparing representations. We propose a distance-based loss function to optimize a network, in a supervised way, to obtain the proposed representation. The evaluation of DIFAIR on OSR shows performance close to that of a similar distance-based method, but below state-of-the-art methods. Finally, we visually inspect the learned representations to identify the limits of our approach and present directions for future improvement. Code and more figures are available at https://github.com/qchristoffel/DIFAIR.
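As a rough illustration of the idea of predefined class positions combined with a distance-based loss, the sketch below pulls each image representation toward a fixed, class-specific anchor in feature space. The anchor layout (disjoint blocks of dimensions per class), the squared Euclidean distance, and the class/dimension counts are assumptions made here for illustration only; the actual DIFAIR loss is defined in the paper and in the linked repository.

```python
# Minimal sketch (not the authors' implementation): a distance-based loss that
# pulls each representation toward a predefined, class-specific anchor position.
# Anchor layout and distance measure are illustrative assumptions.
import torch
import torch.nn as nn


class AnchorDistanceLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, anchor_value: float = 1.0):
        super().__init__()
        # One fixed (non-trainable) anchor per class. A simple choice: each class
        # "owns" a disjoint block of dimensions set to `anchor_value`, so that
        # features associated with absent classes are expected to stay near zero.
        anchors = torch.zeros(num_classes, feat_dim)
        block = feat_dim // num_classes
        for c in range(num_classes):
            anchors[c, c * block:(c + 1) * block] = anchor_value
        self.register_buffer("anchors", anchors)

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance between each representation and the anchor
        # of its ground-truth class, averaged over the batch.
        target_anchors = self.anchors[labels]          # (batch, feat_dim)
        return ((features - target_anchors) ** 2).sum(dim=1).mean()


# Usage: at test time, the distance to every anchor can also serve as a score;
# a large distance to all anchors suggests an unknown class.
loss_fn = AnchorDistanceLoss(num_classes=10, feat_dim=100)
feats = torch.randn(8, 100)                # representations from a backbone
labels = torch.randint(0, 10, (8,))
loss = loss_fn(feats, labels)
```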
Domains
Artificial Intelligence [cs.AI]

| Origin | Publication funded by an institution |
|---|---|