Machine Learning-Based Sensor Data Fusion for Animal Monitoring: Scoping Review

Sensors (Basel). 2023 Jun 20;23(12):5732. doi: 10.3390/s23125732.

Abstract

The development of technologies such as the Internet of Things and artificial intelligence has significantly advanced many fields of study. Animal research is no exception, as these technologies have enabled data collection through various sensing devices. Advanced computer systems equipped with artificial intelligence can process these data, allowing researchers to identify significant behaviors related to illness detection, assessment of the animals' emotional state, and even recognition of individual animal identities. This review covers English-language articles published between 2011 and 2022. A total of 263 articles were retrieved, and after applying the inclusion criteria, only 23 were deemed eligible for analysis. Sensor fusion algorithms were categorized into three levels: Raw or low (26%), Feature or medium (39%), and Decision or high (34%). Most articles focused on posture and activity detection, and the target species across the three fusion levels were primarily cows (32%) and horses (12%). The accelerometer was present at all fusion levels. The findings indicate that the study of sensor fusion applied to animals is still in its early stages and has yet to be fully explored. There is an opportunity to investigate sensor fusion for combining movement data with biometric sensors to develop animal welfare applications. Overall, integrating sensor fusion and machine learning algorithms can provide a deeper understanding of animal behavior and contribute to better animal welfare, production efficiency, and conservation efforts.

Keywords: animal computer interaction; animals; machine learning; sensor; sensor fusion.

Publication types

  • Review

MeSH terms

  • Algorithms
  • Animals
  • Artificial Intelligence*
  • Biometry
  • Cattle
  • Female
  • Horses
  • Machine Learning*
  • Movement

Grants and funding

We acknowledge CONACyT's support of this project under grant CF-2019/2275.