Citizen Social Lab: A digital platform for human behavior experimentation within a citizen science framework

PLoS One. 2018 Dec 6;13(12):e0207219. doi: 10.1371/journal.pone.0207219. eCollection 2018.

Abstract

Cooperation is one of the behavioral traits that define human beings, yet we are still trying to understand why humans cooperate. Behavioral experiments have long been conducted to shed light on the mechanisms behind cooperation and other behavioral traits. However, most of these experiments take place in laboratories under highly controlled protocols, with limitations in terms of subject pool and decision context that restrict the reproducibility and generalization of the results obtained. In an attempt to overcome these limitations, some experimental approaches have moved human behavior experimentation from laboratories to public spaces, where behaviors occur naturally, and have opened participation to the general public within the citizen science framework. Given the open nature of these environments, it is critical to establish appropriate data collection protocols that maintain the same data quality one can obtain in the laboratory. In this article we introduce Citizen Social Lab, a software platform designed to be used in the wild following citizen science practices. The platform allows researchers to collect data in a more realistic context while maintaining scientific rigor, and it is structured in a modular and scalable way so that it can also easily be adapted for online or brick-and-mortar experimental laboratories. Following citizen science guidelines, the platform is designed to motivate a broader population to participate, and also to promote engagement with and learning about the scientific research process. We also review the main results of the experiments performed with the platform to date, and the set of games that each experiment includes. Finally, we evaluate several properties of the platform, including the heterogeneity of the experimental samples, the satisfaction level of participants, and the technical parameters that demonstrate the robustness of the platform and the quality of the data collected.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Community Participation / methods*
  • Comprehension
  • Cooperative Behavior
  • Data Collection / methods*
  • Decision Making
  • Empirical Research
  • Games, Experimental
  • Human Experimentation
  • Humans
  • Learning
  • Reproducibility of Results
  • Research Design
  • Research Personnel
  • Science / methods
  • Social Behavior*
  • Software

Grants and funding

This work was partially supported by: Ministerio de Economía, Industria y Competitividad (Spain) (http://www.idi.mineco.gob.es/portal/site/MICINN/), through grants FIS2013-47532-C3-1-P (JD), FIS2016-78904-C3-1-P (JD), FIS2013-47532-C3-2-P (JP), and FIS2016-78904-C3-2-P (JP); by Generalitat de Catalunya (Spain) (http://agaur.gencat.cat/en/inici/index.html), through Complexity Lab Barcelona contracts no. 2014 SGR 608 and no. 2016 SGR 1064 (JP) and through Secretaria d'Universitats i Recerca (http://doctoratsindustrials.gencat.cat/en/pages/home) contract no. 2013 DI 49 (JD, JV); and by the European Union Horizon 2020 research and innovation programme (https://ec.europa.eu/programmes/horizon2020/), project STEMForYouth, grant agreement no. 7010577 (JV and JP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.