Supporting Data Science via Explainable Artificial Intelligence

  • Maria Sahakyan

Student thesis: Doctoral Thesis


Machine learning techniques are gaining increasing attention due to their widespread use across academia and industry. Despite their tremendous success, many such techniques suffer from the 'black-box' problem, which has fueled interest in Explainable Artificial Intelligence (XAI). Although most XAI techniques are designed to explain a model's decision on an instance level, our main research question is whether such techniques can be used beyond that context. More specifically, our goal is to explore the possibility of using XAI techniques not for their originally intended purpose of explaining the outcome of a particular model on a particular instance, but rather for analyzing the data set itself. Despite the numerous survey articles summarizing a wide range of XAI techniques, no survey exists to date that focuses on tabular data, which is surprising given how popular this type of data is. We fill this gap by providing a comprehensive, up-to-date survey of the XAI techniques relevant to tabular data. Our main contribution is the demonstration of different ways in which XAI techniques can be used as generic tools to support data analysts by providing new insights and perspectives on the data set under consideration. Through experimental studies, we identified two XAI techniques that can indeed support analysts and researchers in two ways: (i) identifying and analyzing groups of people who exhibit similar patterns, thereby providing insights that cannot be obtained by conventional methods such as regression analysis; and (ii) improving the performance of generic data analysis tools. In so doing, we demonstrate that XAI can help researchers avoid the misleading conclusions that may arise when relying solely on traditional statistical methods for data analysis.
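The general idea of repurposing instance-level explanations for data-set analysis can be illustrated with a small sketch. This is not the thesis's actual method or data: the synthetic data set, the LIME-style local linear surrogate, and all names below are hypothetical stand-ins, assuming only standard scikit-learn and NumPy APIs. Per-instance explanation vectors are computed for a fitted model, then clustered so that instances the model treats similarly end up in the same group.

```python
# Illustrative sketch (hypothetical data and method, not the thesis's own):
# cluster per-instance explanation vectors to surface groups of instances
# that a black-box model treats according to different patterns.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic tabular data: the outcome depends on feature 0 for one group
# and on feature 1 for the other; feature 2 encodes group membership.
n = 200
group = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group.astype(float)])
y = np.where(group == 0, 4.0 * X[:, 0], 4.0 * X[:, 1])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def local_explanation(x, model, scale=0.3, m=150):
    """LIME-style sketch: fit a linear surrogate to the black-box model's
    predictions on a small perturbed neighborhood around instance x, and
    return the surrogate's coefficients as the explanation vector."""
    Z = x + rng.normal(scale=scale, size=(m, x.size))
    surrogate = LinearRegression().fit(Z, model.predict(Z))
    return surrogate.coef_

# One explanation vector per instance.
E = np.array([local_explanation(x, model) for x in X])

# Clustering the explanation vectors (rather than the raw features)
# groups instances by how the model's decision depends on each feature.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(E)
```

The key design choice is to cluster in explanation space rather than feature space: two instances with very different raw values can share a cluster if the model relies on the same features for both, which is the kind of pattern a plain regression over the pooled data would average away.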
Date of Award: Dec 2020
Original language: American English
Supervisor: U Zeyar Aung