BULLETIN

of the American Society for Information Science and Technology, Vol. 29, No. 2, December/January 2003



IA Column

On Trust and Users
by Andrew Dillon

Andrew Dillon is dean of GSLIS, University of Texas at Austin, and can be reached by e-mail at adillon@gslis.utexas.edu.

I've found myself thinking a lot about trust recently: trust in computers, trust in users and trust in information architects. At first these were just vague rumblings, a concern that all was not quite right, provoked in part by the comments and responses I noted on various lists where otherwise bright people seemed to lose their balance over certain topics (such as "who can call themselves an IA?"). But it came to a head when I read a blog post by Kevin Marks (http://epeus.blogspot.com/) entitled "Trust people, not computers." Apparently The Economist ran a report on a study from IBM showing that Internet adoption in a culture has less to do with the classic indices of number of telephone lines or years of education and more to do with the cultural norms related to trust in dealing with others. While the notion of trust has received most attention in work on e-commerce, or academic writing on authenticity and authority, such work is highly localized. Here was a broad, sweeping idea about dispositions toward information technology that transcended narrow contexts.

As I write this column I am teaching a class on user cognition and behavior that explores high-level principles governing how users act in information environments. You might think this field has established more than a few such principles over the years. However, the process of teaching, as so often happens, has caused me to question many of the assumptions I have made about the trust we can place in users and the information professionals who claim to design for and serve them. It seems to me that the basic ideas of user-centeredness — that we gain reliable and valid design guidance by asking users what they need and which design options they prefer — are difficult to support on the basis of evidence.

Case in point: my students and I have gathered data indicating that early perceptions of usability seem to be heavily influenced by visual appeal or aesthetics and are not accurate indicators of how well people will be able to use an application. We have data showing that early dislike and poor performance with a system can give way, over time, to mastery and even a preference for the poor system over a better-designed alternative. And did you know that a recent German study of user-centered design processes revealed that greater user participation was related to less satisfaction with the resulting design? Add these up and you begin to see that unquestioning acceptance of users' views as the primary basis for designing information systems is simply naïve.

Does this mean we should exclude users? Of course not! They remain our best source of information on what needs to be built and how well a resource works for its intended audience. But much depends on what we ask of users and our abilities to interpret their responses. We must move beyond the unquestioning acceptance of all user data as a true and accurate representation of what needs to be built. User responses are subject to many forces, not all of them clearly recognized by the users themselves or the designers and evaluators who study them. User-centered design advocates have concentrated to date on gaining acceptance of their methods, and while progress has been made, this has been at the cost of strong research into the value and trust we can place in certain types of user data. How many claims for usability, for example, are based on initial reactions to a system? Far too many, from my reading of the literature, and it now appears that in many circumstances, such initial reactions are poor indicators of actual use.

When we design our information spaces and invoke user-centered methods to test and evaluate them, how much consideration is given to the forces that drive the user response at that specific time? A quick usability test is certainly a good basis for gaining impressions, but it can hardly tell us more. The most popular usability evaluation method in industry, the heuristic evaluation method popularized by web guru Jakob Nielsen, is itself a limited, dare I say flawed, method that does not even employ users but rests on an evaluator inferring what a user would do and think (as if these were facts any evaluator really could infer). For me, this raises a whole other question of trust: that of trusting professionals who claim to serve the interests of users. It would be a good idea for us to take a critical look at the assumptions and analyses underlying the process of user-centered design.

Practicing user-centeredness requires more than asking users for their opinions or their time. It requires us to truly understand the complexities of user behavior and the forces that shape human actions and responses. William Horton remarked at IA 2000 that our designs often reflect our unconscious mistrust of users. It is about time that we raised the basis for such trust or mistrust to the conscious level. I suspect such an examination would cast doubt on many of the assumptions we have made about user-centeredness in this and related fields. Is the field of information architecture up to that challenge?


American Society for Information Science and Technology
8555 16th Street, Suite 850, Silver Spring, Maryland 20910, USA
Tel. 301-495-0900, Fax: 301-495-0810 | E-mail:

Copyright © 2003, American Society for Information Science and Technology