Micro-architecture design exploration template for AutoML case study on SqueezeSEMAuto

Sci Rep. 2023 Jun 30;13(1):10642. doi: 10.1038/s41598-023-37682-0.

Abstract

Convolutional Neural Network (CNN) models are widely used for image recognition tasks in deep learning. Finding the right architecture requires many hand-tuning experiments, which are time-consuming. In this paper, we exploit an AutoML framework that adds exploration of the micro-architecture block and a multi-input option. The proposed adaptation is applied to SqueezeNet, combining Squeeze-and-Excitation (SE) blocks with residual blocks. The experiments consider three search strategies: Random, Hyperband, and Bayesian optimization. Such combinations can lead to solutions with superior accuracy while the model size is monitored. We demonstrate the approach on two benchmarks: the CIFAR-10 and Tsinghua Facial Expression datasets. The searches allow the designer to find architectures with better accuracy than the traditional architectures without hand-tuning effort. For example, on CIFAR-10, the search leads to a SqueezeNet architecture using only 4 fire modules with 59% accuracy. When exploring SE block insertion, a model with well-chosen insertion points can reach an accuracy of 78%, while the traditional SqueezeNet achieves around 50%. For other tasks, such as facial expression recognition, the proposed approach can reach an accuracy of 71% with proper insertion of SE blocks, an appropriate number of fire modules, and adequate input merging, while the traditional model achieves an accuracy under 20%.
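
To make the search space concrete, the sketch below shows how a SqueezeNet-style micro-architecture search with optional SE blocks could be expressed. This is not the paper's released code: it assumes TensorFlow/Keras with the KerasTuner library, and the helper and hyperparameter names (fire_module, se_block, build_model, num_fire_modules, se_after_fire_i) are illustrative choices, not the authors'. The tunable knobs mirror the abstract: the number of fire modules and the SE-block insertion points, driven by a Random, Hyperband, or Bayesian search strategy.

    # Minimal sketch, assuming TensorFlow/Keras + KerasTuner; names are illustrative.
    import keras_tuner as kt
    from tensorflow import keras
    from tensorflow.keras import layers

    def fire_module(x, squeeze_ch, expand_ch):
        # Standard SqueezeNet fire module: 1x1 squeeze followed by 1x1/3x3 expand.
        s = layers.Conv2D(squeeze_ch, 1, activation="relu", padding="same")(x)
        e1 = layers.Conv2D(expand_ch, 1, activation="relu", padding="same")(s)
        e3 = layers.Conv2D(expand_ch, 3, activation="relu", padding="same")(s)
        return layers.Concatenate()([e1, e3])

    def se_block(x, ratio=16):
        # Squeeze-and-Excitation: channel re-weighting via global pooling + two dense layers.
        ch = x.shape[-1]
        w = layers.GlobalAveragePooling2D()(x)
        w = layers.Dense(max(ch // ratio, 4), activation="relu")(w)
        w = layers.Dense(ch, activation="sigmoid")(w)
        return layers.Multiply()([x, layers.Reshape((1, 1, ch))(w)])

    def build_model(hp):
        inputs = keras.Input(shape=(32, 32, 3))  # CIFAR-10-sized input (assumption)
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(inputs)
        # Tunable depth: how many fire modules to stack (the paper reports 4 on CIFAR-10).
        n_fire = hp.Int("num_fire_modules", min_value=2, max_value=8)
        for i in range(n_fire):
            x = fire_module(x, squeeze_ch=16 * (1 + i // 2), expand_ch=64 * (1 + i // 2))
            # Tunable SE-block insertion point after each fire module.
            if hp.Boolean(f"se_after_fire_{i}"):
                x = se_block(x)
            if i % 2 == 1:
                x = layers.MaxPooling2D()(x)
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(10, activation="softmax")(x)
        model = keras.Model(inputs, outputs)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Any of the three strategies from the paper can drive the search, e.g.:
    tuner = kt.Hyperband(build_model, objective="val_accuracy", max_epochs=20,
                         directory="search", project_name="squeeze_se")
    # tuner = kt.BayesianOptimization(build_model, objective="val_accuracy", max_trials=30)
    # tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=30)
    # tuner.search(x_train, y_train, validation_split=0.1)

The search then reports the best-found depth and SE insertion pattern, which is the kind of configuration the abstract compares against the hand-designed SqueezeNet baseline.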

MeSH terms

  • Algorithms
  • Bayes Theorem
  • Benchmarking
  • Facial Recognition*
  • Humans