Semantic Visual Features in Content-based Video Retrieval
ASIS&T Annual Meeting - 2006 (ASIS&T 2006)
Austin, Texas, November 3-9, 2006
To improve the effectiveness of content-based video retrieval, a novel shot-level video browsing method based on semantic visual features (e.g., car, mountain, and fire) is proposed. In contrast to common temporal neighbor browsing, which lets users navigate the temporal neighbors of a selected sample keyframe to find additional matches, semantic visual feature browsing enables users to navigate among keyframes that share similar features. A pilot user evaluation compared the effectiveness of three video retrieval and browsing systems: a temporal neighbor browsing system, a semantic visual feature browsing system, and a fused browsing system. The initial results indicated that fused browsing was the most effective approach across all retrieval tasks. The semantic visual feature browsing system was more efficient than the other two approaches on all non-visually-centric tasks and matched the effectiveness of temporal neighbor browsing. These findings warrant further development of semantic visual feature browsing, as well as further study of the relationships between browsing methods and users' retrieval tasks.