Interpretable Machine Learning
Prof. Dr. rer. nat. Marius Lindauer
Exercise supervision:
Background
Students will acquire both the theoretical and practical foundations of interpretable machine learning (iML). To this end, they should internalize the mathematical basics and be able to implement, execute, and evaluate iML approaches. In a final project, the students will independently apply the learned concepts to a new problem.
Topics
The following topics are covered in the lecture:
- GAMs and Rule-based Approaches
- Feature Effects
- Local Explanations
- Shapley Values for Explainability
- Instance-wise Feature Selection
- Gradient-based Feature Attribution
- Actionable Explanation and Resources
- Evaluating Interpretability and Utility
Requirements
We strongly recommend that you know the foundations of
- AI
- Machine Learning
- Deep Learning
in order to attend the course. You should have attended at least one other course on ML and DL in the past.
Literature
- Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.
- Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K. R. (Eds.). (2019). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning (Vol. 11700). Springer Nature.
Dynamics
Both the course and the exercises will be held in English only.