In this paper we report preliminary results of a study to develop, and subsequently automate, new metrics for assessing information quality in text documents, particularly news. Through focus group studies, quality judgment experiments, and textual feature extraction and analysis, we generated nine quality aspects and applied them in human assessments. Experts and students participated in quality experiments in which 1000 TREC documents were evaluated by participants at two sites -- Albany and Rutgers. The data showed good inter-judge agreement between judges from the two sites. Principal component analysis revealed that the nine aspects form clusters of "content" and "presentation." An automatic quality predictor was derived from statistical analysis of the association between textual features and human quality judgments.
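The clustering result can be illustrated with a minimal sketch of principal component analysis over per-document aspect ratings. The aspect count (nine) and the two-cluster outcome follow the abstract; the simulated ratings and the two latent factors standing in for "content" and "presentation" are assumptions for illustration, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_docs = 1000  # matches the 1000 TREC documents evaluated in the study

# Simulated ratings: two latent factors (stand-ins for "content" and
# "presentation") generate 5 and 4 correlated aspect columns, respectively.
content = rng.normal(size=(n_docs, 1))
presentation = rng.normal(size=(n_docs, 1))
X = np.hstack(
    [content + 0.1 * rng.normal(size=(n_docs, 1)) for _ in range(5)]
    + [presentation + 0.1 * rng.normal(size=(n_docs, 1)) for _ in range(4)]
)

# PCA via eigendecomposition of the correlation matrix of the nine aspects.
X = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# When two latent factors drive the ratings, the first two principal
# components absorb nearly all the variance, i.e. the aspects cluster.
explained = eigvals / eigvals.sum()
print(f"variance explained by first two components: {explained[:2].sum():.2f}")
```

With strongly correlated aspect groups, the first two components dominate, mirroring the paper's finding that the nine aspects separate into two clusters.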