Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

Authors: Fel, T., Boissin, T., Boutin, V., Picard, A., Novello, P., Colin, J., Linsley, D., Rousseau, T., Cadène, R., Gardes, L., Serre, T.

External link: https://proceedings.neurips.cc/paper_files/paper/2023/hash/76d2f8e328e1081c22a77ca0fa330ca5-Abstract-Conference.html
Publication: 37th Conference on Neural Information Processing Systems (NeurIPS), vol. 36, pp. 37813–37826, 2023
A preprint was published as arXiv:2306.06805 on June 11, 2023.

Feature visualization has gained substantial popularity, particularly after the influential work by Olah et al. in 2017, which established it as a crucial tool for explainability. However, its widespread adoption has been limited due to a reliance on tricks to generate interpretable images, and corresponding challenges in scaling it to deeper neural networks. Here, we describe MACO, a simple approach to address these shortcomings. The main idea is to generate images by optimizing the phase spectrum while keeping the magnitude constant to ensure that generated explanations lie in the space of natural images. Our approach yields significantly better results – both qualitatively and quantitatively – and unlocks efficient and interpretable feature visualizations for large state-of-the-art neural networks. We also show that our approach exhibits an attribution mechanism allowing us to augment feature visualizations with spatial importance. We validate our method on a novel benchmark for comparing feature methods, and release its visualizations for all classes of the ImageNet dataset on https://serre-lab.github.io/Lens/. Overall, our approach unlocks, for the first time, feature visualizations for large, state-of-the-art deep neural networks without resorting to any parametric prior image model.
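The core idea above, optimizing only the Fourier phase spectrum while holding the magnitude spectrum fixed, can be sketched in a few lines. The snippet below is a minimal toy illustration, not the authors' implementation: it uses a 1/f magnitude prior as a stand-in for a natural-image magnitude spectrum, and a random convolution filter (`target_filter`) as a stand-in for a real network neuron; both choices are assumptions made here for self-containedness.

```python
# Toy sketch of magnitude-constrained visualization: only the phase is learned.
import torch

torch.manual_seed(0)
H = W = 32

# Fixed magnitude spectrum: a 1/f prior, standing in for the natural-image
# magnitude that the method keeps constant (this prior is an assumption here).
fy = torch.fft.fftfreq(H).reshape(-1, 1)
fx = torch.fft.rfftfreq(W).reshape(1, -1)
magnitude = 1.0 / torch.sqrt(fx**2 + fy**2 + 1e-4)

# The phase spectrum is the only optimized parameter.
phase = torch.zeros(H, W // 2 + 1, requires_grad=True)

# Hypothetical objective: mean activation of one random conv filter,
# in place of a real network unit.
target_filter = torch.randn(1, 1, 5, 5)

def to_image(phase):
    # Recombine the fixed magnitude with the learned phase, then
    # map back to pixel space with an inverse real FFT.
    spectrum = magnitude * torch.exp(1j * phase)
    return torch.fft.irfft2(spectrum, s=(H, W))

opt = torch.optim.Adam([phase], lr=0.05)
losses = []
for _ in range(50):
    img = to_image(phase)
    act = torch.nn.functional.conv2d(img[None, None], target_filter)
    loss = -act.mean()  # gradient ascent on the unit's activation
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because the magnitude is never updated, every iterate stays within the family of images sharing that spectrum, which is what constrains the optimization toward natural-looking explanations.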