START Conference Manager    

Are Self-Assessments Reliable Indicators of Topic Knowledge?

Michael J. Cole, Xiangmin Zhang, Jingjing Liu, Chang Liu, Nicholas J. Belkin, Ralf Bierig and Jacek Gwizdka

(Submission #285)


Abstract

Self-assessments of topic/task knowledge are generally taken to be reliable. We test this assumption by constructing a concept-based topic representation that permits comparison of participant knowledge with expert knowledge, and then comparing this measure of topic knowledge with participants' judgments of their own topic knowledge. The knowledge domain in this study is genomics, and the knowledge representations are constructed using the MeSH thesaurus terms that index documents judged relevant to a task by expert TREC assessors. We conducted a user study with 40 participants who provided self-assessments of their topic knowledge by answering direct questions about anticipated task difficulty and familiarity with the topic, as well as questions about other mental states associated with topic knowledge, such as the amount learned during the task. The results provide evidence that these self-assessed topic knowledge measures correlate in the expected way with the independently constructed topic knowledge measure, which is normalized against expert topic knowledge. Although the experiment tasks were in a specialized scientific domain, we argue the results provide evidence for the general validity and reliability of direct self-assessment of topic knowledge.

Categories

Program Track:  Track 3 - Information Systems, Interactivity and Design
Submission Type:  Research Paper
