In an era of autonomous cars, drones, and automated medical diagnostics, we need to understand how to interpret the decisions made by machine learning models. With that insight, we can debug models and retrain them in the most efficient way.
This talk is aimed at managers, developers, and data scientists who want to learn how to interpret the decisions made by machine learning models. We explain the difference between white-box and black-box models, the taxonomy of explainable models, and approaches to XAI. Knowing XAI methods is especially useful in any regulated company.
We go through basic methods such as regression, decision trees, and ensemble methods, and end with more complex methods based on neural networks, using a different data set for each example. Finally, we show how model-agnostic methods can interpret these models, and we discuss why interpreting many neural networks remains hard.
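As a taste of the model-agnostic techniques covered in the talk, here is a minimal sketch of permutation importance using scikit-learn; the data set and model are illustrative choices, not necessarily those used in the session.

```python
# Minimal sketch of a model-agnostic XAI method: permutation importance.
# Dataset (iris) and model (random forest) are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because it only needs predictions, the same procedure works unchanged for any fitted model, which is exactly what makes it model-agnostic.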
Dr Karol Przystalski
12:40PM - Day 1