Deep convolutional neural networks have achieved state-of-the-art performance on many vision-related tasks. In this work we investigate how these networks use color information by detecting learned color-sensitive features.
Deep CNNs (such as VGG [1] and AlexNet [2]) exhibit different classification performance depending on whether they are presented with color or grayscale images.
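As a concrete illustration of this comparison (a sketch of ours, not necessarily the exact evaluation protocol used here), the snippet below measures the top-1 accuracy of a pretrained torchvision VGG-16 on color and on luminance-replicated grayscale versions of the same images; the validation directory is a hypothetical placeholder.

```python
# Sketch: compare top-1 accuracy on color vs. grayscale inputs.
# Assumes a recent torchvision; VAL_DIR is a hypothetical ImageFolder path.
import torch
from torchvision import datasets, models, transforms

VAL_DIR = "path/to/val"  # hypothetical placeholder

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
color_tfms = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(), normalize])
# Grayscale(num_output_channels=3) replicates luminance over all three
# channels, so the input shape is unchanged and only chroma is removed.
gray_tfms = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(), normalize])

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def top1(tfms):
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(VAL_DIR, tfms), batch_size=64)
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

print("color    :", top1(color_tfms))
print("grayscale:", top1(gray_tfms))
```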
Color-sensitive units are defined as units whose average activation changes significantly between color and grayscale versions of the same data. In the figure below, we show color sensitivity based on images from the PASCAL VOC [3] dataset.
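A minimal sketch of this definition follows, under assumptions of ours: each unit is summarized by the spatial mean of its feature map, grayscale inputs are obtained by replicating BT.601 luminance, and the choice of layer is arbitrary.

```python
# Sketch: per-unit color sensitivity as the relative change in mean
# activation between color and grayscale versions of the same batch.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
layer = model.features[28]  # last conv layer of VGG-16; an arbitrary choice

acts = []
layer.register_forward_hook(
    lambda m, i, o: acts.append(o.mean(dim=(2, 3))))  # (batch, units)

def to_grayscale(rgb):
    # ITU-R BT.601 luma, replicated across channels; assumes RGB in [0, 1].
    w = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
    return (rgb * w).sum(dim=1, keepdim=True).repeat(1, 3, 1, 1)

@torch.no_grad()
def mean_unit_activation(batch):
    acts.clear()
    model(batch)
    return acts[0].mean(dim=0)  # batch average, one value per unit

def color_sensitivity(color_batch):
    """Units whose activation changes strongly between color and grayscale
    versions of the same images score high; the significance threshold is
    left open here."""
    a_color = mean_unit_activation(color_batch)
    a_gray = mean_unit_activation(to_grayscale(color_batch))
    return (a_color - a_gray).abs() / (a_color.abs() + 1e-8)
```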
The hue specificity of a unit is measured by observing its activation in response to a single-hue (uniformly colored) image as the hue is varied. Co-activation between units and classes is used to identify units that are class-invariant or class-specific.
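The hue sweep can be sketched as follows, again under assumptions of ours: uniformly colored 224x224 inputs at full saturation and value, activations summarized by their spatial mean, and ImageNet normalization omitted for brevity.

```python
# Sketch: record each unit's mean activation on uniformly colored inputs
# sweeping the hue circle; a peaked response indicates hue specificity.
import colorsys
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
layer = model.features[28]  # same arbitrary layer choice as above
acts = []
layer.register_forward_hook(lambda m, i, o: acts.append(o.mean(dim=(2, 3))))

@torch.no_grad()
def hue_response(n_hues=64, size=224, s=1.0, v=1.0):
    """(n_hues, units) matrix of mean activations; inputs are raw RGB in
    [0, 1] (ImageNet normalization omitted for brevity)."""
    rows = []
    for k in range(n_hues):
        r, g, b = colorsys.hsv_to_rgb(k / n_hues, s, v)
        img = torch.tensor([r, g, b]).view(1, 3, 1, 1).expand(1, 3, size, size)
        acts.clear()
        model(img)
        rows.append(acts[0][0])
    return torch.stack(rows)

resp = hue_response()
# One crude hue-specificity score: peak-to-mean ratio across the hue axis.
peakiness = resp.max(dim=0).values / (resp.mean(dim=0).abs() + 1e-8)
```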
[Figure: example units]
[1] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds., pp. 1097–1105. Curran Associates, Inc., 2012.
[3] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes challenge: A retrospective,” International Journal of Computer Vision, vol. 111, no. 1, pp. 98–136, 2015.