Initializing photonic feed-forward neural networks using auxiliary tasks

Neural Netw. 2020 Sep:129:103-108. doi: 10.1016/j.neunet.2020.05.024. Epub 2020 Jun 1.

Abstract

Photonics is among the most promising emerging technologies for providing fast and energy-efficient Deep Learning (DL) implementations. Despite their advantages, these photonic DL accelerators also come with certain important limitations. For example, the majority of existing photonic accelerators do not currently support many of the activation functions that are commonly used in DL, such as the ReLU activation function. Instead, sinusoidal and sigmoidal nonlinearities are usually employed, rendering the training process unstable and difficult to tune, mainly due to vanishing gradient phenomena. Thus, photonic DL models usually require careful fine-tuning of all their training hyper-parameters to ensure that the training process proceeds smoothly. Despite recent advances in initialization schemes and optimization algorithms, training photonic DL models remains especially challenging. To overcome these limitations, we propose a novel adaptive initialization method that employs auxiliary tasks to estimate the optimal initialization variance for each layer of a network. The effectiveness of the proposed approach is demonstrated using two different datasets, two recently proposed photonic activation functions, and three different initialization methods. Apart from significantly increasing the stability of the training process, the proposed method can be directly used with any photonic activation function, without requiring any further fine-tuning, as the conducted experiments also demonstrate.
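To make the idea of per-layer adaptive initialization concrete, the following is a minimal sketch, not the authors' actual algorithm: it assumes a squared-sine photonic nonlinearity (photonic_sin) and uses a simple proxy criterion (preserving signal variance through each layer) in place of the paper's auxiliary tasks to select each layer's initialization variance from a set of candidates.

```python
# Hypothetical sketch (not the published method): choose each layer's
# weight-initialization variance by an auxiliary criterion -- here, keeping
# the variance of the layer's output close to the variance of its input,
# so that signals neither vanish nor saturate under a sinusoidal activation.
import numpy as np

def photonic_sin(x):
    """Sinusoidal nonlinearity; squared-sine transfer is an assumption here."""
    return np.sin(x) ** 2

def choose_layer_variance(x, fan_out, candidates, rng):
    """Return the candidate variance whose forward pass best preserves signal variance."""
    best_var, best_gap = None, np.inf
    for var in candidates:
        W = rng.normal(0.0, np.sqrt(var), size=(x.shape[1], fan_out))
        y = photonic_sin(x @ W)
        gap = abs(np.var(y) - np.var(x))   # proxy for an auxiliary-task score
        if gap < best_gap:
            best_var, best_gap = var, gap
    return best_var

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 64))             # dummy input batch
layer_sizes = [64, 32, 16]                 # hypothetical hidden-layer widths
candidates = np.logspace(-3, 0, 10)        # candidate variances per layer

for fan_out in layer_sizes:
    var = choose_layer_variance(x, fan_out, candidates, rng)
    W = rng.normal(0.0, np.sqrt(var), size=(x.shape[1], fan_out))
    x = photonic_sin(x @ W)                # propagate to initialize the next layer
    print(f"fan_out={fan_out:3d}  chosen variance={var:.4f}")
```

In the sketch, the selection is done layer by layer on propagated activations, so each layer's variance is tuned given the layers already initialized; the paper's auxiliary tasks would replace the simple variance-matching score used above.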

Keywords: Neural network initialization; Photonic activation functions; Photonic deep learning.

MeSH terms

  • Deep Learning*
  • Photons*