Neural network for 3D inertial confinement fusion shell reconstruction from single radiographs

Rev Sci Instrum. 2021 Mar 1;92(3):033547. doi: 10.1063/5.0043653.

Abstract

In inertial confinement fusion (ICF), x-ray radiography is a critical diagnostic for measuring implosion dynamics, which contain rich three-dimensional (3D) information. Traditional methods for reconstructing 3D volumes from 2D radiographs, such as filtered backprojection, require radiographs from at least two different angles or lines of sight (LOS). In ICF experiments, the space for diagnostics is limited, and cameras that can operate on fast timescales are expensive to implement, limiting the number of projections that can be acquired. To improve imaging quality despite this limitation, convolutional neural networks (CNNs) have recently been shown to be capable of producing 3D models from visible-light images or from medical x-ray images rendered by volumetric computed tomography. We propose a CNN that reconstructs 3D ICF spherical shells from single radiographs. We also examine the sensitivity of the 3D reconstruction to different illumination models using preprocessing techniques such as pseudo-flatfielding. To address the lack of 3D supervision, we show that training the CNN on synthetic radiographs produced by known simulation methods allows reconstruction of experimental data, provided the experimental data are similar to the synthetic data. We also show that the CNN allows for 3D reconstruction of shells that possess low-mode asymmetries. Further comparisons of the 3D reconstructions with direct multiple-LOS measurements are warranted.
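The abstract cites pseudo-flatfielding as a preprocessing step for reducing sensitivity to the illumination model, but does not describe an implementation. As a hedged illustration only: one common form of pseudo-flatfielding divides each radiograph by a heavily smoothed copy of itself, the smoothed copy serving as an estimate of the slowly varying illumination field. The function names, the separable box-blur estimator, and the smoothing width below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def smooth1d(a, width):
    """Box-blur a 1-D signal (mode='same' zero-pads at the edges)."""
    kernel = np.ones(width) / width
    return np.convolve(a, kernel, mode="same")

def estimate_illumination(img, width=41):
    """Estimate the slowly varying illumination field with a
    separable box blur: smooth down the columns, then along the rows."""
    sm = np.apply_along_axis(smooth1d, 0, img, width)
    sm = np.apply_along_axis(smooth1d, 1, sm, width)
    return sm

def pseudo_flatfield(img, width=41, eps=1e-6):
    """Divide the radiograph by its estimated illumination field,
    flattening large-scale intensity gradients while keeping the
    small-scale shell features."""
    flat = estimate_illumination(img, width)
    return img / np.maximum(flat, eps)

# Synthetic example (an assumption, not experimental data): a mock
# shell shadow under a left-to-right illumination ramp.
y, x = np.mgrid[0:128, 0:128]
illum = 1.0 + 0.5 * x / 127.0                                  # illumination ramp
shell = np.where((x - 64) ** 2 + (y - 64) ** 2 < 900, 0.6, 1.0)  # shell shadow
raw = illum * shell
corrected = pseudo_flatfield(raw, width=41)
```

After correction, the large-scale left-to-right gradient is suppressed, so intensity differences across the image reflect the shell rather than the illumination. The smoothing width is a tuning parameter: it must be large relative to the shell features being preserved but small relative to the illumination variation being removed.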