Learning simplified functions to understand

Bruno Apolloni, Ernesto Damiani

Research output: Contribution to journal › Conference article › peer-review

Abstract

We propose a novel approach to post-hoc interpretable machine learning. Faced with a complex phenomenon, rather than fully capturing its mechanisms through a universal learner, albeit one structured in modular building blocks, we train a robust neural network, no matter its complexity, to use as an oracle. We then approximate its behavior via a linear combination of simple, explicit functions of its input. Simplicity is achieved by (i) using marginal functions that map individual inputs to the network output, (ii) realizing these functions as univariate polynomials of low degree, and (iii) involving only a small number of polynomials in the linear combination, whose input is properly granulated. With this contrivance, we handle various real-world learning scenarios arising from the composition of expertise and experimental frameworks, ranging from cooperative training instances to transfer learning. Concise theoretical considerations and comparative numerical experiments further detail and support the proposed approach.
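The surrogate construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a closed-form function stands in for the trained network oracle, the "marginal functions" are low-degree univariate polynomial features of each input, and sparsity in the linear combination is obtained here with a Lasso penalty (one plausible way to keep the number of involved polynomials small).

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical black-box oracle, standing in for a trained robust network.
def oracle(X):
    return X[:, 0] ** 2 + 0.5 * X[:, 1] - 0.2 * X[:, 2] ** 3

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))
y = oracle(X)  # query the oracle on the training inputs

# Marginal features: for each input coordinate, the univariate powers
# x, x^2, ..., x^degree (the constant column from vander is dropped).
degree = 3
features = np.hstack([
    np.vander(X[:, j], degree + 1, increasing=True)[:, 1:]
    for j in range(X.shape[1])
])

# Sparse linear combination: the L1 penalty keeps only a few of the
# candidate univariate polynomial terms in the explicit surrogate.
surrogate = Lasso(alpha=1e-4).fit(features, y)
r2 = surrogate.score(features, y)  # how well the simple model mimics the oracle
```

Since the stand-in oracle is itself additive in low-degree univariate terms, the surrogate recovers it almost exactly; with a genuine neural oracle, the fit quality quantifies how much of its behavior the simplified functions explain.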

Original language: British English
Pages (from-to): 14-28
Number of pages: 15
Journal: CEUR Workshop Proceedings
Volume: 2742
State: Published - 2020
Event: 2020 Italian Workshop on Explainable Artificial Intelligence, XAI.it 2020 - Virtual, Online
Duration: 25 Nov 2020 - 26 Nov 2020

Keywords

  • Compatible explanation
  • Explainable AI
  • Minimum description length
  • Post-hoc Interpretable ML
  • Ridge polynomials
  • Transfer learning

