The development of a Korean food image detection and recognition model for mobile dietary management

Nutr Res Pract. 2019 Dec;13(6):521-528. doi: 10.4162/nrp.2019.13.6.521. Epub 2019 Nov 21.

Abstract

Background/objectives: The aim of this study was to develop a Korean food image detection and recognition model for use on mobile devices to support accurate estimation of dietary intake.

Subjects/methods: We collected food images by taking photographs or by searching for images on the web and built an image dataset for training a recognition model for Korean food. Augmentation techniques were applied to increase the dataset size. The resulting dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 pixels and then randomly divided into training and testing sets at a ratio of 3:1, yielding 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the recognition model and compared its results with those of other networks developed for large-scale image recognition: AlexNet, GoogLeNet, the Very Deep Convolutional Neural Network (VGG), and ResNet. A minimal sketch of the preprocessing pipeline follows.
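The preprocessing steps described above (down-sampling to 150 × 150 pixels and a random 3:1 train/test split) can be illustrated with a short sketch. The study does not publish its code, so the directory layout ("korean_food_dataset" with one subfolder per food group), the use of Pillow and scikit-learn, and the function names below are assumptions for illustration only, not the authors' implementation.

    from pathlib import Path

    from PIL import Image
    from sklearn.model_selection import train_test_split

    IMG_SIZE = (150, 150)  # fixed resolution reported in the abstract

    def load_and_downsample(image_dir):
        """Load every image under image_dir (one subfolder per food group,
        a hypothetical layout) and down-sample it to 150 x 150 pixels."""
        images, labels = [], []
        for class_dir in sorted(Path(image_dir).iterdir()):
            if not class_dir.is_dir():
                continue
            for img_path in class_dir.glob("*.jpg"):
                img = Image.open(img_path).convert("RGB").resize(IMG_SIZE)
                images.append(img)
                labels.append(class_dir.name)  # folder name = food group label
        return images, labels

    images, labels = load_and_downsample("korean_food_dataset")

    # Random 3:1 split into training and test sets (75% / 25%), matching the
    # 69,000 / 23,000 counts reported for the full augmented dataset.
    train_x, test_x, train_y, test_y = train_test_split(
        images, labels, test_size=0.25, random_state=42
    )

The resulting training and test sets correspond to the inputs used to train and evaluate K-foodNet and the comparison networks.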

Results: Our food recognition model, K-foodNet, achieved higher test accuracy (91.3%) and a shorter recognition time (0.4 ms) than the other networks.

Conclusion: The results showed that K-foodNet performed better at detecting and recognizing Korean food than other state-of-the-art models.

Keywords: Food recognition; deep convolutional neural networks (DCNN); dietary assessment; mobile device.