ASIS&T 2014 Annual Meeting 
Seattle, WA | October 31 - November 5, 2014

High Performance Sound Technologies for Access and Scholarship (HiPSTAS) in the Digital Humanities

Tanya Clement¹, David Tcheng², Loretta Auvil², Tony Borries²
¹University of Texas at Austin, United States of America; ²University of Illinois at Urbana-Champaign

Monday, Nov. 3, 8:30am


Humanists interested in accessing and analyzing spoken-word audio collections currently have few means to use, or to learn how to use, advanced technologies for analyzing sound. The HiPSTAS (High Performance Sound Technologies for Access and Scholarship) project introduces humanists to ARLO (Adaptive Recognition with Layered Optimization), an application developed to perform spectral visualization, matching, classification, and clustering on large sound collections. As this paper will address, the project has yielded three significant results for the computational analysis of spoken-word collections of keen interest to the humanities: (1) an assessment of user requirements; (2) an assessment of the technological infrastructure needed to support a community tool; and (3) preliminary experiments with these advanced resources that demonstrate the efficacy, both in terms of user needs and of the computational resources required, of using machine-learning tools to improve discovery in unprocessed audio collections.
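The abstract does not describe ARLO's internals, but the general approach it names (spectral features plus unsupervised clustering of sound) can be sketched in a few lines. The example below is purely illustrative and assumes nothing about ARLO's actual API: it computes a magnitude spectrogram of a toy signal with NumPy's FFT, then groups spectrogram frames with a tiny k-means, standing in for the kind of clustering used to surface recurring sound types in unprocessed audio.

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram via a simple STFT with a Hann window."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(signal[i:i + frame_size] * window))
        for i in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

def kmeans(features, k=2, iters=20):
    """Tiny k-means over spectrogram frames (deterministic init:
    centers start at evenly spaced frames)."""
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each frame to its nearest center, then re-estimate centers.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "collection": a low tone followed by a high tone, stand-ins
# for two acoustically distinct sound types in a recording.
sr = 8000
t = np.arange(sr) / sr
audio = np.concatenate([np.sin(2 * np.pi * 440 * t),
                        np.sin(2 * np.pi * 2000 * t)])
spec = spectrogram(audio)
labels = kmeans(spec, k=2)
# Frames from the two halves should largely fall into different clusters,
# letting a listener jump to one sound type without auditioning everything.
```

A real workflow on archival recordings would of course involve far larger feature sets, supervised classification, and relevance feedback, but the two-stage shape (spectral representation, then grouping) is the core idea.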