TL;DR: If you are going to explain predictions for a black-box model, you should combine statistical charts with natural language descriptions. This combination is more powerful than SHAP/LIME/PDP/Break Down...continue reading.
XAI (eXplainable Artificial Intelligence) is a fast-growing and super interesting area. Working with complex models generates lots of problems with model validation (on test data performance is great but...continue reading.
I had an amazing weekend in Gdansk thanks to the satRday conference organized by Olgun Aydin, Ania Rybinska and Michal Maj. Together with Hanna Piotrowska we gave a talk, "Machine learning...continue reading.
Do you spend a lot of time on data exploration? If yes, then you will like today’s post about AutoEDA written by Mateusz Staniak. If you ever dreamt of automating...continue reading.
iBreakDown: faster, prettier and more precise explanations for predictive models (with interactions)
LIME and SHAP are two very popular methods for instance-level explanations of machine learning models (XAI). They work nicely for images and text inputs, but share a similar weakness in...continue reading.
DALEX is an R package for visual explanation, exploration, diagnostics and debugging of predictive ML models (aka XAI, eXplainable Artificial Intelligence). It has a bunch of visual explainers for...continue reading.
Written by: Alicja Gosiewska In applied machine learning, there are opinions that we need to choose between interpretability and accuracy. However, in the field of Interpretable Machine Learning, there are...continue reading.
For the last homework before Christmas, I asked my students from DataVisTechniques to create a "Christmas-style" data visualization in R or Python (based on simulated data). Libraries like rbokeh,...continue reading.
The breakDown package explains predictions from black-box models, such as random forest, xgboost, svm or neural networks (it works for lm and glm as well). As...continue reading.