Learn about gradient-weighted CAMs that use gradients for generating visualizations.


Gradient-weighted CAM

A gradient-weighted CAM, or Grad-CAM, is another class activation map algorithm. It uses the gradients of the predicted class score flowing into the final convolutional layer to weight the feature maps used to create the activation map.

Let's assume that $\mathcal{F}_l \in \R^{h \times w}$ is the $l^{th}$ feature map obtained from the final convolutional layer ($h$ and $w$ are the height and width of the feature map) and that $\mathcal{F}_l(i,j)$ represents the activation of the $(i,j)^{th}$ neuron in the $l^{th}$ feature map (there are $L$ feature maps in $\mathcal{F}$ in total).

Now we compute the gradient $\mathcal{G}_l(i,j) = \nabla_{\mathcal{F}_l(i,j)} f^{k^*}(X)$ of the prediction score $f^{k^*}(X)$ with respect to the feature $\mathcal{F}_l(i,j)$. The gradient $\mathcal{G}_l \in \R^{h \times w}$ is then average pooled across height and width to obtain the weight of each feature map:

$$\alpha_l = \frac{1}{h \cdot w} \sum_{i=1}^{h} \sum_{j=1}^{w} \mathcal{G}_l(i,j)$$

These weights take the place of the classifier weights in the CAM equation, followed by a ReLU that keeps only the features with a positive influence on the predicted class:

$$\mathcal{M} = \text{ReLU}\left(\sum_{l=1}^{L} \alpha_l \mathcal{F}_l\right)$$
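The weighting step above can be sketched in NumPy. This is a minimal illustration, not a full Grad-CAM pipeline: the feature maps and their gradients are filled with random values here, whereas in practice they would come from a forward pass and a backward pass of the prediction score through the network.

```python
import numpy as np

# Hypothetical inputs: L feature maps of shape (h, w) from the final
# convolutional layer, and the gradients of the predicted class score
# with respect to those feature maps (same shape). Random stand-ins here.
rng = np.random.default_rng(0)
L, h, w = 4, 7, 7
feature_maps = rng.standard_normal((L, h, w))   # F_l
gradients = rng.standard_normal((L, h, w))      # G_l

# Average pool each gradient map over height and width
# to get one weight alpha_l per feature map.
alphas = gradients.mean(axis=(1, 2))            # shape (L,)

# Weighted sum of feature maps, then ReLU to keep positive evidence only.
cam = np.maximum(np.einsum("l,lhw->hw", alphas, feature_maps), 0.0)

print(cam.shape)  # same spatial size as the feature maps: (7, 7)
```

The resulting map has the spatial resolution of the final convolutional layer, so it is typically upsampled to the input image size before being overlaid as a heatmap.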
