ASIS&T 2006
Whether researchers are developing a taxonomy or thesaurus for an image collection or testing a content-based image retrieval algorithm, a standard test collection is invaluable if their results are to have any wider impact. Standard image test collections are needed to address the disparity in the materials (images) used across image research and to allow results to be compared between studies.
Several efforts are already underway. Prominent among these are the cross-language image retrieval track (ImageCLEF, http://ir.shef.ac.uk/imageclef/) and the Medical Image Retrieval Challenge Evaluation (ImageCLEFmed, http://ir.ohsu.edu/image/), both of which have seen participation from academic and commercial research groups worldwide. ImageCLEF has been running for four years, and researchers have been submitting test and evaluation results based on the ImageCLEFmed test collection for the last three. Both are part of the Cross Language Evaluation Forum (CLEF) campaign (http://clef.iei.pi.cnr.it/), a benchmarking event for multilingual information retrieval held annually since 2000.
The main purpose of this panel is to familiarize the image research community in general, and ASIS&T’s SIG/VIS in particular, with image retrieval evaluations and to encourage researchers to adopt standard image collections in their future research. It is hoped that the panel presentations will serve as catalysts for increased participation in ImageCLEF, ImageCLEFmed, and other benchmarking events, thereby raising the level of standardization in image research and, in turn, helping researchers compare their results with those of similar studies.