A Comprehensive Data Gathering Network Architecture in Large-Scale Visual Sensor Networks

PLoS One. 2020 Jan 7;15(1):e0226649. doi: 10.1371/journal.pone.0226649. eCollection 2020.

Abstract

The fundamental purpose of Large-Scale Visual Sensor Networks (LVSNs) is to monitor specified events and transmit the detected information back to the sink for data aggregation. In real-world applications, however, the events of interest are usually not uniformly distributed but are frequently detected in certain regions. When events are repeatedly picked up by sensors in the same region, the transmission load of the LVSN becomes unbalanced, potentially causing the energy hole problem. To overcome this problem and extend network lifetime, this paper designs a Comprehensive Visual Data Gathering Network Architecture (CDNA), the first comparatively integrated architecture for LVSNs. In CDNA, a novel α-hull based event location algorithm, derived from the geometric model of the α-hull, is designed to detect the location of an event accurately and efficiently. In addition, a Chi-Square distribution event-driven gradient deployment method is proposed to reduce unbalanced energy consumption and thus alleviate the energy hole problem. Moreover, an energy hole repairing method, comprising an efficient data gathering tree and a movement algorithm, is proposed to ensure efficient transmission and resolve the energy hole problem. Simulations were conducted to examine the performance of the proposed architecture. The results indicate that CDNA outperforms previous algorithms in realistic LVSN environments, including a significant improvement in network lifetime.
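The abstract does not give the details of the Chi-Square distribution event-driven gradient deployment method, but the general idea of a gradient deployment can be sketched as follows: node density is graded so that sensors are concentrated near an event-heavy region and thin out with distance. The sketch below is an illustration under that assumption, not the paper's actual algorithm; the `hotspot` center, the `scale` parameter, and the function names are hypothetical. It draws squared radii from a chi-square distribution with `k` degrees of freedom (via the standard Gamma(k/2, 2) identity) so that most nodes land close to the hotspot while a gradually thinning tail covers the periphery.

```python
import math
import random

def chi_square_sample(k, rng):
    """Draw one chi-square(k) sample using the Gamma(k/2, 2) identity."""
    return rng.gammavariate(k / 2.0, 2.0)

def gradient_deploy(n_nodes, hotspot, k=4, scale=10.0, seed=42):
    """Place n_nodes around `hotspot` with chi-square distributed squared
    radii, so node density decays gradually away from the event-heavy
    region (a generic gradient deployment, not the paper's exact method)."""
    rng = random.Random(seed)
    nodes = []
    for _ in range(n_nodes):
        r = scale * math.sqrt(chi_square_sample(k, rng))  # graded radius
        theta = rng.uniform(0.0, 2.0 * math.pi)           # uniform bearing
        nodes.append((hotspot[0] + r * math.cos(theta),
                      hotspot[1] + r * math.sin(theta)))
    return nodes

nodes = gradient_deploy(500, hotspot=(0.0, 0.0))

# Density should fall off with distance from the hotspot.
near = sum(1 for x, y in nodes if math.hypot(x, y) < 20.0)
far = sum(1 for x, y in nodes if math.hypot(x, y) >= 40.0)
print(near, far)
```

With these parameters, a large majority of nodes fall within two scale units of the hotspot, which is the gradient effect the deployment method exploits: the sensors carrying the heaviest event-reporting load are backed by the densest population of neighbors.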

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Computer Communication Networks / instrumentation*
  • Computer Simulation
  • Electronic Data Processing / instrumentation
  • Electronic Data Processing / methods*
  • Humans
  • Image Interpretation, Computer-Assisted / methods*
  • Models, Theoretical*
  • Pattern Recognition, Automated / methods*
  • Visual Perception
  • Wireless Technology / instrumentation*

Grants and funding

This work was supported by the following grants, all led by Jing Zhang:

  • National Natural Science Foundation of China (Grant No. 61902069); project URL: http://output.nsfc.gov.cn/fundingQuery
  • Natural Science Foundation of Fujian Province of China (Grant No. 2017J05098); project URL: http://xmgl.fjkjt.gov.cn/p_itemsearch.pr.pr_iteminfo_public_index.do?I_ITEMID=73203&I_ITEMTYPEID=4
  • Education Department of Fujian Province science and technology project (Grant No. JZ160461); project URL: http://59.77.139.101/project/project.do?actionType=view&pageModeId=view&bean.id=3639&pageFrom=commonList