Understanding Machine Learning Model Interpretation
[Figure: a famous example, the "White Guy Problem"]
[Figure: t-SNE visualization]
[Figure: other visualizations]
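For readers who want to reproduce a plot like the t-SNE figure referenced above, the snippet below is a minimal sketch using scikit-learn's TSNE and matplotlib. The bundled digits dataset is an assumption standing in for whatever data the original figure used; any numeric feature matrix works the same way.

```python
# Minimal t-SNE visualization sketch. The digits dataset is an
# illustrative stand-in; substitute your own feature matrix.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()

# Project the 64-dimensional pixel features down to 2D for plotting.
embedding = TSNE(n_components=2, random_state=42).fit_transform(digits.data)

plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target,
            cmap="tab10", s=8)
plt.colorbar(label="digit class")
plt.title("t-SNE projection of the digits dataset")
plt.show()
```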
Just telling the business, “I have a model with 90% accuracy,” is not enough for them to trust the model once it is deployed in the real world. We also need human-interpretable interpretations (HII) of the model’s decision policies, explained in terms of inputs and outputs that are intuitive to a non-expert. Such explanations make it easy to share insights with peers (analysts, managers, data scientists, data engineers); a concrete sketch follows below.
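To make the contrast concrete, the sketch below prints the bare accuracy number and then a first, human-readable layer of explanation: a ranked list of which inputs drive the model's decisions. The Iris data, the random forest, and the use of feature importances are all illustrative assumptions here, not the article's prescribed method; richer HII techniques would go further.

```python
# Illustrative sketch: a headline accuracy number vs. a first
# human-readable layer of explanation. The dataset (Iris) and the
# model (random forest) are assumptions for demonstration only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# What the business usually hears:
print(f"Accuracy: {model.score(X_test, y_test):.0%}")

# A more intuitive view: which input features drive the decisions?
ranked = sorted(zip(iris.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```

Even this simple readout gives an analyst or manager something to reason about beyond a single accuracy figure.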