We applied policy capturing and bootstrapping methods to investigate the relevance judgment process, with a particular focus on understanding how judges summarize an overall relevance judgment from five specific aspects of relevance. Our data come from relevance judgments made in the development of the MALACH (Multilingual Access to Large Spoken ArCHives) Speech Retrieval Test Collection. We developed a linear model for each of four relevance judges by regressing his or her overall judgments on the five specific relevance aspects. According to these models, different judges tended to assign different importance weights to different aspects. One of the linear models was applied to seven new judgment sets and predicted the overall judgments in each set with high accuracy.
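The modeling step can be illustrated with a minimal sketch. Assuming each judge's data consist of scores on the five relevance aspects together with that judge's overall judgment for each document (the data below are simulated purely for illustration), the per-judge importance weights come from an ordinary least-squares regression, and bootstrap resampling over documents gives a rough sense of how stable those weights are.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for one judge: each row holds scores on the five
# specific relevance aspects plus that judge's overall relevance judgment.
n_docs = 200
aspects = rng.integers(0, 5, size=(n_docs, 5)).astype(float)   # five aspect scores
true_weights = np.array([0.4, 0.1, 0.2, 0.05, 0.25])            # illustrative only
overall = aspects @ true_weights + rng.normal(0, 0.3, n_docs)   # simulated overall judgment

# Policy capturing: regress the overall judgment on the five aspects
# (ordinary least squares with an intercept term).
X = np.column_stack([np.ones(n_docs), aspects])
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)
print("intercept:", coef[0], "aspect weights:", coef[1:])

# Bootstrapping: resample documents with replacement to gauge how stable
# the estimated importance weights are for this judge.
boot_weights = []
for _ in range(1000):
    idx = rng.integers(0, n_docs, n_docs)
    b, *_ = np.linalg.lstsq(X[idx], overall[idx], rcond=None)
    boot_weights.append(b[1:])
boot_weights = np.array(boot_weights)
print("bootstrap 95% intervals for the aspect weights:")
print(np.percentile(boot_weights, [2.5, 97.5], axis=0))
```

Fitting one such model per judge and comparing the resulting weight vectors is what reveals the differences in how judges weight the aspects; applying a fitted model to a new judgment set simply means computing the predicted overall score from that set's aspect scores.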