Assessing cognitively complex strategy use in an untrained domain

Top Cogn Sci. 2010 Jan;2(1):127-37. doi: 10.1111/j.1756-8765.2009.01068.x. Epub 2009 Dec 11.

Abstract

Researchers of advanced technologies constantly seek new ways of measuring and adapting to user performance. Appropriately adapting system feedback requires accurate assessments of user performance. Unfortunately, many assessment algorithms must be trained on pre-prepared data sets or corpora to portray user knowledge and behavior with sufficient accuracy. However, when the content targeted by a tutoring system changes with the situation, the assessment algorithms must be general enough to apply to untrained content. Such is the case for Interactive Strategy Training for Active Reading and Thinking (iSTART), an intelligent tutoring system that assesses the cognitive complexity of strategy use while a reader self-explains a text. iSTART is designed so that teachers and researchers may add their own (new) texts to the system. The current paper examines student self-explanations of newly added texts (on which iSTART had not been trained) and evaluates the iSTART assessment algorithm by comparing it to human ratings of the students' self-explanations.
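The evaluation described above rests on comparing an algorithm's scores against human ratings of the same self-explanations. A minimal sketch of that kind of comparison is below; the score values, the 0–3 ordinal scale, and both helper functions are hypothetical illustrations, not the authors' actual data, metrics, or code.

```python
# Illustrative sketch (not iSTART's implementation): comparing an automated
# assessment's scores to human ratings of the same items, using Pearson
# correlation and exact-agreement rate as two common agreement measures.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def exact_agreement(xs, ys):
    """Proportion of items where both raters assign the same score."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Hypothetical ordinal quality ratings (0-3) for ten self-explanations.
algorithm = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
human     = [0, 1, 2, 3, 1, 1, 0, 2, 2, 1]

print(round(pearson_r(algorithm, human), 2))   # prints 0.92
print(exact_agreement(algorithm, human))       # prints 0.8
```

For ordinal scales like this, published work in the area often reports chance-corrected statistics such as weighted kappa alongside raw correlation; the sketch keeps to the two simplest measures for clarity.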

Keywords: Automatic assessment; Empirical validation; Intelligent tutoring systems; Reading strategies.

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adolescent
  • Algorithms*
  • Comprehension*
  • Educational Measurement* / methods
  • Educational Measurement* / standards
  • Educational Technology* / instrumentation
  • Educational Technology* / methods
  • Educational Technology* / standards
  • Feedback, Psychological
  • Humans
  • Reading*