Tools for Explainability and Transparency

  • This resource presents a unified approach to interpreting model predictions known as SHAP (SHapley Additive exPlanations), which combines the methodologies of local interpretable model-agnostic explanations (LIME), DeepLIFT (https://github.com/kundajelab/deeplift), layer-wise relevance propagation, tree interpreters, QII and Shapley regression and sampling values to deliver a method that its authors claim can be used to explain the prediction of any machine learning model.
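
As an illustration, the sketch below shows a typical use of the shap Python package with a scikit-learn tree ensemble; the regressor and the diabetes dataset are placeholders chosen for the example, not anything prescribed by the resource.

```python
# Hedged sketch, assuming the `shap` package and a scikit-learn random forest
# regressor; dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary of which features drive the model's predictions.
shap.summary_plot(shap_values, X)
```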


  • InterpretML is an open-source toolkit from Microsoft aimed at improving model explainability.
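
A minimal sketch of how InterpretML's glassbox Explainable Boosting Machine might be used, assuming the interpret package; the dataset is a placeholder.

```python
# Hedged sketch, assuming the `interpret` package; the Explainable Boosting
# Machine (EBM) is one of InterpretML's glassbox models. Data is a placeholder.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: per-feature shape functions and importances.
show(ebm.explain_global())
# Local explanation for the first few samples.
show(ebm.explain_local(X.iloc[:5], y[:5]))
```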

  • The PwC report by Oxborough et al., titled ‘Explainable AI: Driving business value through greater understanding’, provides a high-level introduction to the range of techniques available to developers seeking to make their models more explainable. Among the ‘hands-on’ techniques covered are LIME, a model-agnostic approach, and TreeInterpreter, an algorithm-specific method.
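
A minimal sketch of the model-agnostic LIME approach mentioned above, assuming the lime package; the classifier and dataset are placeholders.

```python
# Hedged sketch of LIME on tabular data, assuming the `lime` package;
# model and data are illustrative placeholders.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Explain one prediction by fitting a local surrogate model around it.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())
```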

• Random Forest Explainer (RFEX 2.0), by D. Petkovic, A. Alavi, D. Cai, J. Yang and S. Barlaskar, offers integrated model and novel sample explainability. RFEX 2.0 is designed in a user-centric way, with non-AI experts in mind, emphasising simplicity and familiarity, e.g. by providing a one-page tabular output and measures familiar to most users. RFEX is demonstrated in a case study from the collaboration of Petkovic et al. with the J. Craig Venter Institute (JCVI).
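
RFEX itself is described in the authors' publications rather than distributed as a packaged library; the sketch below is only an illustrative approximation of the kind of one-page tabular random-forest summary it advocates, built with scikit-learn and pandas, and is not the RFEX 2.0 code.

```python
# Illustrative sketch only (not RFEX itself): a compact tabular summary of a
# random forest, in the spirit of a user-centric one-page report.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

summary = (
    pd.DataFrame({
        "feature": X.columns,
        "importance": rf.feature_importances_,
        "mean_if_positive": X[y == 1].mean().values,
        "mean_if_negative": X[y == 0].mean().values,
    })
    .sort_values("importance", ascending=False)
    .head(10)
)
print(f"Cross-validated accuracy: {cross_val_score(rf, X, y, cv=5).mean():.3f}")
print(summary.to_string(index=False))
```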

  • Alibi is an open-source Python library aimed at ML model inspection and interpretation. It focuses on providing the code needed to produce explanations for black-box algorithms. The goals of the library are to provide high-quality reference implementations of black-box ML model explanation algorithms, to define a consistent API for interpretable ML models, to support multiple use cases (e.g. tabular, text and image data classification, regression) and to implement the latest methods for model explanation, concept drift detection, algorithmic bias detection and other forms of ML model monitoring and interpretation.
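
A minimal sketch using one of Alibi's black-box explainers, AnchorTabular, assuming the alibi package; the model and dataset are placeholders.

```python
# Hedged sketch, assuming the `alibi` package; AnchorTabular is one of the
# black-box explainers the library provides. Model and data are placeholders.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(clf.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)

# An anchor is a set of if-then rules that locally "anchors" the prediction.
explanation = explainer.explain(data.data[0])
print(explanation.anchor, explanation.precision, explanation.coverage)
```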

  • DeepLIFT (Deep Learning Important FeaTures) is a method for ‘explaining’ the predictions made by neural networks; it compares each neuron’s activation to a reference activation and assigns contribution scores based on the difference.
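
The sketch below computes DeepLIFT-style attributions with Captum's DeepLift implementation for PyTorch rather than the kundajelab reference code linked above; the small network, input and baseline are placeholders.

```python
# Hedged sketch of DeepLIFT-style attribution via Captum (a third-party
# PyTorch library), not the kundajelab reference implementation.
import torch
import torch.nn as nn
from captum.attr import DeepLift

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

inputs = torch.randn(1, 10)
baseline = torch.zeros(1, 10)  # reference input against which contributions are measured

dl = DeepLift(model)
# Contribution scores of each input feature to the logit of class 1.
attributions = dl.attribute(inputs, baselines=baseline, target=1)
print(attributions)
```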

  • Among the techniques available for understanding neural networks are the visualisation of CNN representations, methods for diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations and middle-to-end learning based on model interpretability. A Python implementation of these techniques can be found here:
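
As one example of visualising CNN representations, the sketch below captures intermediate feature maps from a torchvision ResNet with a forward hook; the model, the chosen layer and the random input are assumptions made purely for illustration.

```python
# Hedged sketch: capturing intermediate CNN feature maps with a forward hook.
# The torchvision model, layer and random input are illustrative choices.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # weights=None avoids a download
activations = {}

def hook(module, inputs, output):
    activations["layer1"] = output.detach()

model.layer1.register_forward_hook(hook)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # placeholder image tensor

# Each channel of the captured tensor can be rendered as a heat map.
print(activations["layer1"].shape)  # e.g. torch.Size([1, 64, 56, 56])
```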

  • An online toolkit providing a range of resources (e.g. codebooks) for use in improving the interpretability of an algorithm. Its authors have created a series of Jupyter notebooks using open-source tools including Python, H2O, XGBoost, GraphViz, Pandas and NumPy to outline practical explanatory techniques for machine learning models and results.
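
A minimal sketch of one practical technique such notebooks typically cover, a partial dependence plot; it uses scikit-learn's gradient boosting and inspection tools rather than the exact toolchain listed above, and the dataset and feature are placeholders.

```python
# Hedged sketch of a partial dependence plot with scikit-learn;
# dataset, model and feature choice are illustrative placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor().fit(X, y)

# How the prediction changes, on average, as body mass index varies.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"])
plt.show()
```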
