Pruning for Feature-Preserving Circuits in CNNs

Authors: Chris Hamblin, Talia Konkle, George Alvarez (2022)

Article link: https://arxiv.org/abs/2206.01627

Abstract: Deep convolutional neural networks are a powerful model class for a range of computer vision problems, but it is difficult to interpret the image filtering process they implement, given their sheer size. In this work, we introduce a method for extracting 'feature-preserving circuits' from deep CNNs, leveraging methods from saliency-based neural network pruning. These circuits are modular sub-functions, embedded within the network, containing only a subset of convolutional kernels relevant to a target feature. We compare the efficacy of three saliency criteria for extracting these sparse circuits. Further, we show how 'sub-feature' circuits can be extracted that preserve a feature's responses to particular images, dividing the feature into even sparser filtering processes. We also develop a tool for visualizing 'circuit diagrams', which render the entire image filtering process implemented by circuits in a parsable format.
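
Since the paper frames circuit extraction as a saliency-based kernel pruning problem, the sketch below may help ground the discussion: it scores every convolutional kernel by a SNIP-style |weight x gradient| saliency with respect to one target channel's activation, then keeps the top fraction of kernels as a circuit mask. The model (vgg11), target layer and channel, saliency criterion, and keep fraction are illustrative assumptions, not the authors' exact setup (the paper compares three criteria).

```python
# A minimal, hypothetical sketch of saliency-based kernel scoring for a
# target feature; the model, layer index, target channel, criterion, and
# keep fraction are illustrative choices, not the authors' setup.
import torch
import torchvision.models as models

model = models.vgg11(weights=None).features.eval()   # stand-in CNN
target_layer, target_channel = 9, 5                  # hypothetical target feature
images = torch.randn(8, 3, 224, 224)                 # stand-in image batch

# Forward up to the target layer and take the mean activation of the target
# channel as the scalar "feature" whose circuit we want to preserve.
x = images
for i, layer in enumerate(model):
    x = layer(x)
    if i == target_layer:
        break
feature_score = x[:, target_channel].mean()
feature_score.backward()

# Score each conv kernel by |weight * grad| summed over its spatial extent,
# then keep the top fraction of kernels as the circuit mask.
keep_frac = 0.05
with torch.no_grad():
    kernel_scores = {
        name: (p * p.grad).abs().sum(dim=(2, 3))     # one score per kernel
        for name, p in model.named_parameters()
        if p.dim() == 4 and p.grad is not None       # conv weights: (out, in, kH, kW)
    }
    all_scores = torch.cat([s.flatten() for s in kernel_scores.values()])
    threshold = torch.quantile(all_scores, 1.0 - keep_frac)
    circuit_mask = {name: (s >= threshold).float()   # 1 = kernel kept in the circuit
                    for name, s in kernel_scores.items()}
```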

Presenter: Julien Colin

Date: 2024-06-25 15:00 (CEST)

Online: https://bit.ly/ellis-hcml-rg