Redefining Neural Architecture Search of Heterogeneous Multinetwork Models by Characterizing Variation Operators and Model Components

IEEE Trans Neural Netw Learn Syst. 2023 Feb 13:PP. doi: 10.1109/TNNLS.2023.3242877. Online ahead of print.

Abstract

With neural architecture search (NAS) methods gaining ground on manually designed deep neural networks (and doing so more rapidly as model sophistication escalates), the research trend is shifting toward devising different and often increasingly complex NAS search spaces. In this context, designing algorithms that can efficiently explore these search spaces can yield a significant improvement over currently used methods, which, in general, select the structural variation operator at random in the hope of a performance gain. In this article, we investigate the effect of different variation operators in a complex domain: that of multinetwork heterogeneous neural models. These models have an extensive and complex search space of structures, as they require multiple subnetworks within the overall model in order to produce different output types. From this investigation, we extract a set of general guidelines whose application is not limited to that particular type of model and which are useful for determining the direction in which an architecture optimization method can find the largest improvement. To derive these guidelines, we characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, relying on diverse metrics that estimate the quality of their different components.
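The last sentence of the abstract describes characterizing each variation operator by the change it induces in model complexity and performance. As a rough illustration of that idea only, the following is a minimal Python sketch: it assumes a toy architecture representation (a list of layer widths), stand-in param_count and evaluate functions, and hypothetical operators widen_layer and add_layer; none of these names or measures come from the paper itself.

    # Hypothetical sketch: profile each structural variation operator by the
    # change it induces in complexity (parameter count) and performance
    # (a toy validation score). All names here are illustrative assumptions.
    import random
    from statistics import mean

    def param_count(widths, n_inputs=10):
        # Toy complexity measure: parameters of a dense net with these widths.
        total, fan_in = 0, n_inputs
        for w in widths:
            total += fan_in * w + w  # weights + biases of one layer
            fan_in = w
        return total

    def evaluate(widths):
        # Stand-in for the validation performance of the trained architecture;
        # deterministic per architecture so deltas are reproducible.
        rng = random.Random(hash(tuple(widths)))
        return rng.uniform(0.5, 1.0)

    def widen_layer(widths):
        # Variation operator: double the width of one randomly chosen layer.
        i = random.randrange(len(widths))
        return widths[:i] + [widths[i] * 2] + widths[i + 1:]

    def add_layer(widths):
        # Variation operator: append a layer copying the last layer's width.
        return widths + [widths[-1]]

    def characterize(op, model):
        # Apply one operator and return (complexity delta, performance delta).
        mutated = op(model)
        return (param_count(mutated) - param_count(model),
                evaluate(mutated) - evaluate(model))

    if __name__ == "__main__":
        model = [32, 16]
        for op in (widen_layer, add_layer):
            deltas = [characterize(op, model) for _ in range(50)]
            print(op.__name__,
                  "avg delta params:", mean(d for d, _ in deltas),
                  "avg delta score:", round(mean(s for _, s in deltas), 4))

Averaged over many applications, such per-operator profiles indicate which operators tend to grow the model versus which tend to improve it, which is the kind of evidence an architecture optimizer could use instead of picking operators uniformly at random.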