Despite their widespread adoption, machine learning models are often considered black boxes. It is frequently argued that if a model performs well, we should trust it and simply ignore the question of why it made a certain prediction. But this argument is short-sighted: understanding a model is crucial whenever decisions are based on its predictions. This talk will highlight the importance of understanding predictions made by machine learning models and give an overview of state-of-the-art methods for opening the black box.
Speaker: Fabian Müller, Head of Data Science, Statworx