WHERE ARE WE AND WHERE DO WE WANT TO BE: SUMMARIZING THE 1999 ASIS MIDYEAR MEETING
John C. Bertot and Carol A. Hert
Thanks are in order to all the attendees, presenters,
and the ASIS organization for making this an exciting and informative Midyear
meeting. Over the course of the conference, we sensed momentum building
behind evaluation efforts and began to see networking among projects and
people. Below, we briefly distill what we found to be the key themes of
the meeting, and follow that with some suggestions for where we, as a field
and as a professional organization, should be heading in the evaluation
of networked resources.
Themes:
- Substantial work in developing new research models
and frameworks: We heard about numerous projects that incorporate
new evaluation methods, new evaluation frameworks, and the like.
Sufficient empirical data are beginning to accumulate to examine the utility
of these different approaches for the design and management of networked resources.
- The Nike approach: "Just do it." This theme
speaks to the need to begin evaluating now. Given how constantly
systems change, it is impossible to wait for the perfect moment or the
perfect evaluation method; doing some evaluation is better than doing none!
- Various audiences: An evaluation has many audiences.
Are you interested in speaking to system designers, system
funders, decision makers, your users, other evaluators? Be clear,
as each will require different reporting techniques and perhaps different
evaluation approaches. The conference presenters provided examples of how
to speak to the different audiences of an evaluation.
- Different user communities need different assessments:
There is no "one-size-fits-all" evaluation strategy. Assessing a
community network will require a different approach than assessing a corporate
intranet, because the communities differ and the systems have different
goals.
- REMEMBER NON-USERS! Traditionally, we have been
good at assessing systems from the perspective of users. But what about
all the non-users, the potential users? Are they somehow barred from
using the resources? Remember to consider them in evaluation strategies.
- Social/political context for networked resources/systems
matters: Numerous presenters provided specific case evidence for
the context-specific nature of evaluation activities. Understanding how
a resource or system fits into its environment is critical to establishing
evaluation goals.
- Need for evaluation approaches that recognize
this context: Given that the social/political context matters, we are still
struggling for evaluation strategies that explicitly recognize and
incorporate it.
- Move away from description: Evaluation should be
at the service of design, management, and other decisions. Evaluators
need to be clear about what can be done with the results of their evaluations.
- Reconceptualization of the phenomena of interest,
leading to new evaluation approaches: A number of speakers presented
new ways to conceptualize what was being evaluated as a starting place
for new metrics and strategies. There was a general sense that the
traditional metrics of recall and precision are not sufficient given
the nature of networked resources (their standard definitions are given
in the note after this list).
- New metrics: A number of new metrics were suggested.
Trustworthiness seemed important to several speakers, given the nature of
information available on the Web. A user's willingness to use a resource
was also presented as a relevant metric in an environment where people
have a wide variety of resources to choose among.
- A clear need for multi-method approaches: The complexity
of networked resources and their usage indicates that evaluators
should use multiple methods in evaluations. Interviews, usability
tests, transaction logs, policy analyses, and questionnaires were all
discussed (a brief transaction-log sketch follows this list).
- Need for social impact data: The systems under discussion
are changing our world, and as such there is a need for data on the social
and societal impacts of these technologies.
- Policy issues are important components of evaluation:
The policy structure surrounding networked resources is changing rapidly,
and evaluators must be cognizant of how those policies, and changes to
them, affect their evaluation efforts.
- Need for education and training: Evaluation
efforts are hindered by limited training in how to evaluate and how to
use the results of evaluation. Perhaps more importantly, decision makers
also need to be educated to understand how to incorporate evaluation
results into decision making.
- Funding agencies are out there to support evaluation
initiatives: The good news is that Federal funding agencies are willing
to support evaluation activities, both within the context of other projects
and as stand-alone activities.
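A brief reference note on the traditional metrics mentioned above (our gloss, not material from the meeting): recall and precision are defined over a fixed collection with a known set of relevant documents,

\[
\text{recall} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{relevant}|},
\qquad
\text{precision} = \frac{|\text{relevant} \cap \text{retrieved}|}{|\text{retrieved}|}.
\]

Recall's denominator requires enumerating every relevant document, which is exactly what an open, constantly changing networked collection makes impossible; hence the push toward new metrics.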
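To make the multi-method point concrete, here is a minimal sketch of the transaction-log strand of such an evaluation: it tallies daily request counts and distinct client hosts from a web server access log. The Common Log Format, the file name access.log, and the summarize function are illustrative assumptions on our part, not a tool presented at the meeting.

    # Minimal transaction-log summary (Python): one quantitative strand
    # of a multi-method evaluation.
    import re
    from collections import Counter
    from datetime import datetime

    # Common Log Format: host ident authuser [date] "request" status bytes
    LINE = re.compile(
        r'(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4}):[^\]]*\] "[^"]*" (\d{3}) \S+'
    )

    def summarize(log_path):
        """Tally requests per day and distinct client hosts from an access log."""
        requests_per_day = Counter()
        hosts = set()
        with open(log_path) as log:
            for line in log:
                match = LINE.match(line)
                if match is None:
                    continue  # skip malformed lines rather than aborting
                host, day, status = match.groups()
                if status.startswith(("2", "3")):  # count only served requests
                    hosts.add(host)
                    requests_per_day[day] += 1
        return requests_per_day, hosts

    if __name__ == "__main__":
        per_day, hosts = summarize("access.log")  # hypothetical file name
        print(len(hosts), "distinct client hosts")
        for day in sorted(per_day, key=lambda d: datetime.strptime(d, "%d/%b/%Y")):
            print(day, per_day[day])

Counts like these say nothing about why people used the resource or what they gained from it; that is exactly why the interviews, questionnaires, and policy analyses listed above belong in the same evaluation.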
WHERE SHOULD WE BE IN FIVE YEARS?
This conference made great strides in bringing
together a cadre of interested and informed individuals with a stake in
networked resource evaluation. In order to build on what happened at the
meeting, we would like to suggest the following goals for the community
to strive for in the next several years:
- Better sharing of data / a clearinghouse: The
many ongoing projects and evaluations can inform one another; a mechanism
is needed for sharing that information. Additionally,
we need to keep each other informed about what does not work, so that
individuals do not reinvent the wheel with each evaluation. A review
article or bibliography of sources might be useful.
- An internationally based evaluation agenda: Since
evaluations are socially, politically, and contextually embedded, an evaluation
agenda for networked resources must be developed that recognizes the
international scope both of those resources and of the evaluation efforts.
- A broad need for research and funding in this area, and a
follow-up meeting: Keeping our attention focused on evaluation is critical
for maintaining momentum. We need additional research projects (and
thus funding) that can continue to add empirical data and build evaluation
frameworks. Ongoing conversations with funding agencies and awareness
activities (such as further meetings) may help keep attention
focused.
- Methods for integrating technical and social aspects
of evaluation: Some concern was expressed that technical evaluations (such
as transaction log studies) may not be well connected to social evaluations.
Formalizing strategies for that integration may be helpful.
- More clarity on the purposes evaluations serve, and on
the connections between evaluation and actionable outcomes at all levels,
from technical aspects to the social world: We must expand the educational
and training activities associated with evaluation to give evaluators and
decision makers the expertise to use evaluations appropriately. The
development of training packages, syllabi, and the like may be helpful. Each
of us should be an advocate/irritant in our organizations for the role
of evaluation.
7/5/99