Evaluating Success in Search Systems (SIGs Metrics, DL, KM, USE)

Elaine G. Toms, Bernard J. Jansen and Gheorghe Muresan

Sparking Synergies: Bringing Research and Practice Together @ ASIST '05 (ASIS&T 2005)
Westin Charlotte, Charlotte, North Carolina, October 28 - November 2, 2005


Abstract

Success in information retrieval (IR) systems has been an elusive concept for much of the past half century. On the one hand, system success is equated with recall and precision by the hard-core algorithm developers, who mechanically associate success with the match between a query and the retrieved results; the TREC conferences have certainly addressed this view of success. On the other hand, success is associated with relevance, a concept so nebulous that a recent topic search on Web of Science yielded more than 70,000 hits: so much paper and so much verbosity about something that ought by now to be well understood. Nearly 10 years ago Saracevic defined five levels of relevance, which are widely cited. Yet we are no closer to an operational definition of user relevance, no closer to operationalizing its indicators, and thus no closer to precisely determining how to measure the success of the many IR systems developed for internal organizational or external commercial use.

From the initial and seminal studies conducted by Saracevic and Kantor in the 1980s to those conducted both pre-Internet and post-Web, the process for evaluating, analyzing and reporting results from IR system evaluations has remained much the same: have a few subjects, usually students, perform a few searches in a laboratory setting; labour over recordings and surveys; and, months or years later, report the findings, and yes, measure something! Certainly, our evaluation methodologies and metrics are due for review, discussion, debate and update.

This panel will focus on current and emerging methodologies for collecting, analyzing and reporting findings from the evaluation of IR systems, with a view to working toward valid and reliable measures of search success. The panel will examine three key areas in pursuit of this goal:

1) Processes and tools for adding greater diversity to study populations, increasing sample sizes, and simplifying the research study process.
2) Methods for decreasing the lag time between data collection and data analysis.
3) The metrics used in studies of IR systems in context, the challenges they pose, and methods to address those challenges.
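For reference, the two system-oriented measures named above have standard set-based definitions, where R denotes the set of relevant documents in the collection and A the set of documents retrieved for a query:

\[ \mathrm{Precision} = \frac{|R \cap A|}{|A|}, \qquad \mathrm{Recall} = \frac{|R \cap A|}{|R|} \]

Both take values in [0, 1]; precision rewards returning only relevant documents, while recall rewards returning all of them, and it is precisely the gap between these mechanical measures and user-perceived success that motivates this panel.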

Note: We are seeking a fourth panelist from the search engine development community; this panelist is not yet confirmed.



  