Annual Meeting Contributed Papers 2009
Metadata quality is a major concern for digital libraries because it directly affects the quality of their services. This paper presents the results of evaluating the metadata quality of the Internet Public Library (IPL). Two evaluation methods were used: a preliminary automatic evaluation and a human evaluation using a survey. The automatic evaluation focused on the completeness of the major IPL metadata fields. The human evaluation asked evaluators to judge the accuracy, completeness, consistency, functionality, and usefulness of the IPL metadata fields. Qualitative feedback from the evaluators provided an in-depth picture of IPL metadata quality. We compared the results of the automatic and human evaluations to determine whether the findings of the two methods differed.