2013 Annual Meeting
Montréal, Québec, Canada | November 1-5, 2013
Vanessa Kitzie, Rutgers University
Erik Choi, Rutgers University
Chirag Shah, Rutgers University
Social question-answering (SQA) allows people to ask questions in natural language and receive answers from others. Research on SQA has focused on the quality of the answers provided, with implications for system-based interventions; few studies assess the quality of the questions asked, or whether a question accurately depicts the asker’s information need. The current study therefore assesses how indicative a question is of the asker’s information need by comparing human assessments of question quality with textual features extracted from the question’s content, in order to determine whether there is a significant relationship between these subjective and objective judgments. Findings indicate not only a significant relationship between human ratings on each question-quality criterion and the extracted textual features, but also that distinct textual features contribute to explaining the variability of each rating. These findings encourage further study of the relationship between the reasons a question may be of poor quality and the textual features that can be extracted from it, which can ultimately inform the design of intervention-based systems that not only assess question quality but also explain why it is poor.
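The abstract's feature-extraction step could be sketched roughly as follows. This is a minimal illustrative sketch only: the specific features shown here (word count, average word length, wh-word presence, question-mark count) are assumptions for demonstration, not the feature set actually used in the study.

```python
import re

def extract_features(question: str) -> dict:
    """Extract simple surface-level textual features from a question.

    These features are hypothetical examples of the kind of objective
    signals that could be compared against human quality ratings.
    """
    words = question.split()
    return {
        # question length in words
        "length": len(words),
        # mean word length in characters (guard against empty input)
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        # 1 if the question opens with a wh-word, else 0
        "has_wh_word": int(bool(re.match(
            r"(?i)\s*(who|what|when|where|why|how)\b", question))),
        # number of question marks in the text
        "question_marks": question.count("?"),
    }

features = extract_features("How do I cite a conference paper in APA style?")
```

In a study like the one described, vectors of such features could then be regressed against human ratings on each quality criterion to see which features explain the variability of each rating.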