Convolutional Neural Networks (CNNs) are the state of the art in computer vision tasks such as image recognition and object detection. However, because of their architectural complexity, the decisions these networks make are often difficult to interpret. Fortunately, several techniques can help us understand CNN decisions; they are generally divided into attribution and visualisation methods. In this interactive talk, we will describe two attribution methods, Class Activation Maps (CAMs) and Layer-wise Relevance Propagation (LRP), and discuss visualisation techniques for convolutional filters. We will demonstrate all of these techniques in Colab notebooks and show how we used them in some of our own projects.
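As a small taste of what the notebooks cover, a plain CAM can be sketched in a few lines of NumPy: for a network that ends in global average pooling followed by a dense layer, the map for a given class is the weighted sum of the final convolutional feature maps, using that class's dense-layer weights. The shapes and random inputs below are purely illustrative, not tied to any particular model.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM as the class-weighted sum of conv feature maps.

    feature_maps: (H, W, C) activations of the last conv layer.
    class_weights: (C,) dense-layer weights for the target class.
    Returns an (H, W) map rescaled to [0, 1].
    """
    # Weighted sum over the channel axis
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)  # keep only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: a 7x7 spatial grid with 4 channels
rng = np.random.default_rng(0)
fmaps = rng.random((7, 7, 4))
weights = rng.random(4)
cam = class_activation_map(fmaps, weights)
print(cam.shape)  # (7, 7)
```

In practice the resulting map is upsampled to the input image size and overlaid as a heatmap, which is exactly what the notebooks walk through.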