Interpretable, calibrated neural networks for analysis and understanding of inelastic neutron scattering data

J Phys Condens Matter. 2021 Apr 27;33(19). doi: 10.1088/1361-648X/abea1c.

Abstract

Deep neural networks (NNs) provide flexible frameworks for learning data representations and functions relating data to other properties, and are often claimed to achieve 'super-human' performance in inferring relationships between input data and a desired property. In the context of inelastic neutron scattering experiments, however, as in many other scientific scenarios, a number of issues arise: (i) scarcity of labelled experimental data, (ii) lack of uncertainty quantification on results, and (iii) lack of interpretability of the deep NNs. In this work we examine approaches to all three issues. We use simulated data to train a deep NN to distinguish between two possible magnetic exchange models of a half-doped manganite. We apply the recently developed deterministic uncertainty quantification method to provide error estimates for the classification, demonstrating in the process how important realistic representations of instrument resolution in the training data are for obtaining reliable estimates on experimental data. Finally, we use class activation maps to determine which regions of the spectra are most important for the final classification result reached by the network.

Keywords: machine learning; neutron scattering; perovskite.
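
To illustrate the interpretability step described in the abstract, the following is a minimal sketch of a class activation map (CAM) computed for a two-class spectrum classifier. The architecture, layer sizes, and input shape are hypothetical placeholders and are not taken from the paper; the sketch only shows the general CAM construction (a classifier-weight-weighted sum of the last convolutional feature maps after global average pooling), not the authors' actual network or data pipeline.

```python
# Hedged sketch: CAM for a toy CNN acting on a simulated S(Q, E) intensity map.
# All names, shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    """Toy 2D CNN with global average pooling and a linear two-class head."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling over (Q, E)
        self.fc = nn.Linear(32, n_classes)   # linear classifier on pooled features

    def forward(self, x):
        fmaps = self.features(x)                          # (B, 32, H, W)
        logits = self.fc(self.gap(fmaps).flatten(1))      # (B, n_classes)
        return logits, fmaps

def class_activation_map(model, x, target_class):
    """CAM_c = ReLU( sum_k w_k^c * A_k ), with A_k the last conv feature maps."""
    _, fmaps = model(x)
    weights = model.fc.weight[target_class]               # (32,) classifier weights for class c
    cam = torch.einsum('c,bchw->bhw', weights, fmaps)     # weighted sum over channels
    cam = torch.relu(cam)
    cam = cam / (cam.amax(dim=(-2, -1), keepdim=True) + 1e-8)  # normalise to [0, 1]
    return cam

if __name__ == "__main__":
    model = SpectrumClassifier()
    spectrum = torch.randn(1, 1, 64, 64)   # placeholder for a simulated spectrum
    cam = class_activation_map(model, spectrum, target_class=0)
    print(cam.shape)                       # (1, 64, 64) saliency over the spectrum
```

The resulting map highlights which (Q, E) regions drive the chosen class score, which is the kind of per-region attribution the abstract refers to when identifying the spectral regions most important for the classification.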