Social network extraction and analysis based on multimodal dyadic interaction

Sensors (Basel). 2012;12(2):1702-19. doi: 10.3390/s120201702. Epub 2012 Feb 7.

Abstract

Social interactions are an important component of people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" one person has over the other. The states of the Influence Model encode audio/visual features automatically extracted from our videos using state-of-the-art algorithms. Our results are reported in terms of the accuracy of audio/visual data fusion for speaker segmentation and the centrality measures used to characterize the extracted social network.
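To make the extraction step concrete, the sketch below shows how pairwise influence weights could be assembled into the kind of oriented (directed, weighted) graph the abstract describes and then summarized with centrality measures. It is a minimal illustration, not the paper's implementation: the speaker names and weights are invented, the weights are assumed to have already been estimated (in the paper they come from the Influence Model), and the specific centrality measures shown (weighted in/out-strength, PageRank) are stand-ins for whichever measures the authors report.

```python
import networkx as nx

# Hypothetical influence weights per dyadic conversation:
# (source, target, weight) = how much the source "drives" the target.
# Names and values are illustrative only, not taken from the paper.
dyadic_influence = [
    ("Alice", "Bob", 0.72),
    ("Bob", "Alice", 0.28),
    ("Alice", "Carol", 0.55),
    ("Carol", "Alice", 0.45),
    ("Bob", "Carol", 0.35),
    ("Carol", "Bob", 0.65),
]

# The social network is an oriented graph: one directed, weighted edge
# per influence relation, aggregated over all dyadic interactions.
G = nx.DiGraph()
for src, dst, w in dyadic_influence:
    G.add_edge(src, dst, weight=w)

# Centrality measures characterize the extracted network, e.g. who is
# most "influential" (weighted out-links) vs. most "influenced" (in-links).
out_strength = dict(G.out_degree(weight="weight"))
in_strength = dict(G.in_degree(weight="weight"))
pagerank = nx.pagerank(G, weight="weight")

for person in G.nodes:
    print(f"{person}: out={out_strength[person]:.2f}, "
          f"in={in_strength[person]:.2f}, pagerank={pagerank[person]:.3f}")
```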

Keywords: audio/visual data fusion; influence model; social interaction; social network analysis.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Interpersonal Relations*
  • Multimedia*
  • Social Support*
  • User-Computer Interface*