October 26, 2018
11:00 a.m.
Louvain-la-Neuve
CORE, room b-135
Towards Explainable AI: Significance Tests for Neural Networks
Kay Giesecke, Stanford University
Neural networks underpin many of the best-performing artificial-intelligence systems, including speech recognizers on smartphones or Google's latest automatic translator. The tremendous success of these applications has spurred interest in applying neural networks in a variety of other fields. In finance, for instance, researchers have developed several high-impact applications in risk, investment, and operations management. However, the difficulty of interpreting a neural network model has slowed the implementation of these applications in financial practice, where regulators and other stakeholders insist on model explainability. In this paper, we tackle this issue by developing statistical tests to assess the significance of the input variables of a neural network. We propose a gradient-based test statistic and study its asymptotics using nonparametric techniques. The tests enable one to discern the impact of individual variables on the prediction of a neural network. Experiments using actual mortgage data illustrate their properties.
(with Enguerrand Horel)
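
To give a flavor of the gradient-based approach described in the abstract, the sketch below fits a small feedforward network to synthetic data and computes, for each input variable, the sample average of the squared partial derivative of the prediction. This is only a minimal illustration of the kind of sensitivity measure such a test builds on, not the paper's actual test statistic or its asymptotic theory; the choice of PyTorch, the network architecture, and the synthetic data are all assumptions for the example.

```python
# Illustrative sketch (not the paper's exact statistic): measure the
# sensitivity of a fitted neural network to each input via the sample
# average of squared partial derivatives of the prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: y depends on x1 and x2, while x3 is pure noise.
n = 2000
X = torch.randn(n, 3)
y = (2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * torch.randn(n)).unsqueeze(1)

# Small feedforward network fitted by least squares.
model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

# Gradient-based sensitivity: mean squared partial derivative per input.
X.requires_grad_(True)
pred = model(X)
grads, = torch.autograd.grad(pred.sum(), X)
sensitivity = (grads ** 2).mean(dim=0)
print(sensitivity)  # x1 and x2 should dominate; x3 should be near zero
```

In this toy setup the statistic for the noise variable x3 should be close to zero, while the informative inputs x1 and x2 receive substantially larger values; the paper's contribution is to supply the asymptotic distribution theory that turns such a sensitivity measure into a formal significance test.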