ASIS&T 2005 START ConferenceManager    

Metadata Quality and Evaluation (SIG CR)

Presenters: William E. Moen, School of Library and Information Sciences, University of North Texas, wemoen@unt.edu; Jane Greenberg, School of Information and Library Science, University of North Carolina at Chapel Hill, janeg@ils.unc.edu; Elizabeth D. Liddy, Director, Center for Natural Language Processing, School of Information Studies, Syracuse University, liddy@syr.edu; and Marcia Zeng and Bhagirathi Subrahmanyam, School of Library and Information Science, Kent State University, mzeng@kent.edu, bsubrahm@kent.edu.

Sparking Synergies: Bringing Research and Practice Together @ ASIST '05 (ASIS&T 2005)
Westin Charlotte, Charlotte, North Carolina, October 28 - November 2, 2005


Abstract

        Interest in "metadata quality and evaluation" has increased with the growth of digital resource repositories and the continued study of metadata generation. Two fundamental questions drive research in this area: What is quality metadata? How do we measure metadata quality? Quality itself is difficult to define. Guy, Powell, and Day (2004) offer a useful definition, explaining that "high quality metadata supports the functional requirements of the system it is designed to support," and further positing that "quality is about fitness for purpose."

        Researchers have also proposed concrete measures. For example, Moen, Stewart, and McClure (1997) identified 23 criteria through an in-depth comparative analysis of the bibliographic control, metadata, and digital library literature; from this work, accuracy, consistency, completeness, and currency were used in their analysis of Government Information Locator Service (GILS) metadata records. More recently, and building on the work of Moen et al., Bruce and Hillman (2004) drew on the library community's experience to identify measures such as completeness, accuracy, provenance, conformance to expectation, logical consistency, coherence, timeliness, and accessibility. These and other related initiatives are important to furthering our understanding of how metadata quality can be measured.

        This panel addresses the topic of metadata quality and evaluation. Specific topics include: how metadata specialists (library catalogers) utilize the rich content designation available in MARC bibliographic records; methods for comparing metadata generated by resource authors, automatic metadata generation applications, and professional metadata creators, together with the identification of evaluation criteria; and two National Science Foundation National Science Digital Library (NSF-NSDL) projects, one a tripartite evaluation comprising a metadata quality study, an information retrieval study, and a metadata user study, and the other a quality assessment of metadata records contributed by members to the NSDL.


