Food Detection and Segmentation from Egocentric Camera Images

Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:2736-2740. doi: 10.1109/EMBC46164.2021.9630823.

Abstract

Tracking an individual's food intake provides useful insight into their eating habits. Technological advances in wearable sensors, such as the automatic capture of food images from wearable cameras, have made food intake tracking efficient and feasible. Accurate food intake monitoring requires an automated food detection technique that can recognize foods in unstaged, real-world images. This work presents a novel food detection and segmentation pipeline that detects the presence of food in images acquired from an egocentric wearable camera and subsequently segments the food regions. An ensemble of YOLOv5 detection networks is trained to detect and localize food items among the other objects present in captured images. The model achieves an overall 80.6% mean average precision across four object classes: Food, Beverage, Screen, and Person. After detection, predicted food objects that were sufficiently sharp were passed on to segmentation. The Normalized Graph Cut algorithm was used to segment the different parts of the food, resulting in an average IoU of 82%.

Clinical relevance: The automatic monitoring of food intake using wearable devices can play a pivotal role in the treatment and prevention of eating disorders, obesity, malnutrition, and other related issues. It can aid in understanding patterns of nutritional intake and support personalized adjustments toward a healthy life.
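Two quantitative steps in the pipeline can be sketched concretely: filtering detected food crops by sharpness before segmentation, and scoring segmentation quality with IoU. The abstract does not specify which sharpness criterion the authors used, so the variance-of-Laplacian focus measure and its threshold below are assumptions for illustration; the IoU definition is the standard intersection-over-union.

```python
import numpy as np
from scipy import ndimage


def is_sharp(gray, threshold=100.0):
    """Rough sharpness test on a grayscale crop (hypothetical criterion).

    Uses the variance of the Laplacian: blurry crops contain little
    high-frequency content, so the Laplacian response is nearly flat.
    The threshold is an assumed, dataset-dependent parameter.
    """
    lap = ndimage.laplace(gray.astype(float))
    return lap.var() > threshold


def iou(mask_a, mask_b):
    """Intersection-over-Union between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0
```

In a full pipeline, `is_sharp` would gate each detected food bounding box before the Normalized Graph Cut step, and `iou` would compare each predicted segment against its ground-truth mask to produce the kind of average IoU the paper reports.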

MeSH terms

  • Algorithms
  • Eating
  • Feeding Behavior
  • Food*
  • Humans
  • Wearable Electronic Devices*