Explainable AI for Credit Risk Assessment

  • Khamis Alkhyeli

Student thesis: Master's Thesis

Abstract

This study extends existing interpretable machine learning workflows with a diagnostic step for assessing the trustworthiness of two popular model-agnostic explainers, SHAP and LIME. Specifically, it proposes measuring the stability of each explainer's feature weights across repeated random samples, the discriminative validity of those weights, and the level of agreement between the two methods. Substantial instability, low discriminatory power, or poor agreement between the methods may signal interpretability-quality risks for a given explainer applied to a given dataset. The proposed approach is implemented on five binary classification cases built from credit risk assessment datasets; the use of multiple datasets yields empirical benchmarks for the proposed diagnostic indicators. The traditional explainable AI workflow is extended with a set of functions that evaluate weight stability, discriminating power, and the agreement between SHAP and LIME results. This diagnostic component was implemented in a Shiny web application, both to help the model builder choose a preferred explainer and to warn of possible data quality issues when the two methods produce contradictory results.
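The diagnostic indicators described above can be sketched in a minimal form. The snippet below is illustrative only and does not reproduce the thesis implementation: it assumes each explainer run yields a flat vector of per-feature weights, and it measures agreement as the Jaccard overlap of the top-k features by absolute weight, and stability as the mean per-feature standard deviation across repeated runs. All function names are hypothetical.

```python
def topk_jaccard(weights_a, weights_b, k=3):
    """Agreement indicator: Jaccard overlap of the top-k features
    (ranked by absolute weight) between two explainers' outputs."""
    top_a = set(sorted(range(len(weights_a)), key=lambda i: -abs(weights_a[i]))[:k])
    top_b = set(sorted(range(len(weights_b)), key=lambda i: -abs(weights_b[i]))[:k])
    return len(top_a & top_b) / len(top_a | top_b)


def weight_stability(weight_runs):
    """Stability indicator: mean per-feature standard deviation of the
    weights across repeated random samples (lower = more stable)."""
    n_features = len(weight_runs[0])
    deviations = []
    for j in range(n_features):
        vals = [run[j] for run in weight_runs]
        mean = sum(vals) / len(vals)
        deviations.append((sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5)
    return sum(deviations) / n_features


# Identical top-2 rankings give perfect agreement:
print(topk_jaccard([0.9, 0.5, 0.1], [0.8, 0.6, 0.05], k=2))  # 1.0
# Identical repeated runs give zero instability:
print(weight_stability([[0.4, 0.2], [0.4, 0.2]]))  # 0.0
```

In practice, `weights_a` and `weights_b` would come from SHAP values and LIME coefficients for the same instance, and `weight_runs` from re-running one explainer on repeated random samples of the data.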
Date of Award: Aug 2023
Original language: American English
Supervisor: U Zeyar Aung

Keywords

  • Explainable AI (XAI)
  • SHAP
  • LIME
  • Model-agnostic
  • Credit risk assessment
