Development and evaluation of a 3-D virtual pronunciation tutor for children with autism spectrum disorders

PLoS One. 2019 Jan 28;14(1):e0210858. doi: 10.1371/journal.pone.0210858. eCollection 2019.

Abstract

Deficits in speech sound production in some children with autism spectrum disorder (ASD) add to their communication barriers. 3-D virtual environments have been used to improve the communication abilities of these children, but no previous study has examined a 3-D virtual pronunciation tutor designed specifically to train pronunciation in children with ASD. To fill this research gap, the current study developed and evaluated a 3-D virtual tutor that served as a multimodal, real-data-driven speech production tutor presenting both the places and manners of Mandarin articulation. Using an eye-tracking technique (RED 5 Eye Tracker), Experiment 1 objectively measured children's online attention distribution while they learned with our computer-assisted 3-D virtual tutor compared with a real human face (HF) tutor. Eye-tracking results indicated that most participants showed greater interest in the visual speech cues of the 3-D tutor and paid some degree of attention to the additional visual speech information on both articulatory movements and airflow changes. To further compare treatment outcomes, training performance was evaluated in Experiment 2, in which the ASD learners were divided into two groups, one learning from the HF tutor and the other from the 3-D tutor (HF group vs. 3-D group). Both groups improved after computer-based training in the post-intervention test, as rated on a 5-point Likert scale, but the 3-D group showed substantially greater gains in producing Mandarin stop and affricate consonants and apical vowels. We conclude that our 3-D virtual imitation intervention system provides an effective approach to audiovisual pronunciation training for children with ASD.
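The abstract does not describe the scoring procedure in detail; as an illustration only, the sketch below shows one plausible way pre- and post-intervention ratings on a 5-point Likert scale could be summarized per training group (HF vs. 3-D). The group labels, scores, and computation here are hypothetical assumptions, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): summarizing hypothetical
# pre/post pronunciation ratings on a 5-point Likert scale for two groups.
from statistics import mean

# Hypothetical ratings: one (pre, post) pair per child, averaged across raters.
scores = {
    "HF":  [(2.0, 2.8), (1.8, 2.4), (2.4, 3.0)],
    "3-D": [(2.1, 3.6), (1.9, 3.2), (2.3, 3.8)],
}

for group, pairs in scores.items():
    gains = [post - pre for pre, post in pairs]
    pre_mean = mean(pre for pre, _ in pairs)
    post_mean = mean(post for _, post in pairs)
    print(f"{group} group: mean gain = {mean(gains):.2f} points "
          f"(pre {pre_mean:.2f} -> post {post_mean:.2f})")
```

In the study itself, a larger gain for the 3-D group on specific sound classes (stops, affricates, apical vowels) is what the authors report; the code above only illustrates the general pre/post comparison, not their statistical analysis.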

Publication types

  • Evaluation Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention
  • Autism Spectrum Disorder / psychology
  • Autism Spectrum Disorder / therapy*
  • Case-Control Studies
  • Child
  • Child, Preschool
  • China
  • Computer-Assisted Instruction / methods*
  • Eye Movement Measurements
  • Face
  • Humans
  • Imaging, Three-Dimensional
  • Male
  • Phonetics*
  • Speech
  • Speech Therapy / methods*
  • Virtual Reality*

Grants and funding

This work was partly supported by grants from the National Natural Science Foundation of China (NSFC: U1736202, 61771461, 11474300) (http://www.nsfc.gov.cn/) and the Shenzhen Fundamental Research Program (JCYJ20160429184226930, JCYJ20170413161611534) (http://www.szsti.gov.cn/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.