ASIS Midyear '98 Proceedings

Collaboration Across Boundaries:
Theories, Strategies, and Technology

Evaluation of Community Networks

Kim Gregson, Charlotte Ford
School of Library & Information Science, Indiana University, Bloomington, Indiana

 

Abstract


We reviewed 14 published evaluations of community networks with an eye to their usefulness to community network developers. We also examined the goals and mission statements of 84 web-based community networks and found no core set of goals applied across all of them, which makes it more difficult to develop general goal-based evaluation measures. Based on our review of the published evaluations and our reading of the evaluation literature, we developed a set of recommendations for future community network evaluations.

INTRODUCTION


Over the past decade, community networks have emerged as attempts to promote a sense of community, encourage civic participation, and provide avenues for the exchange of community information (Schuler, 1996). However, evaluations of these networks have been limited in both number and depth. In this paper, some of the evaluative efforts surrounding community networks are examined and critiqued in light of the literature on evaluation, and recommendations for improving the evaluation of community networks are made. Specifically, evaluations of community networks can be improved by increasing their rigor, using multiple evaluation methods and data sources, and evaluating at different stages of the networks' development, with an eye toward producing actionable results.



THE NATURE OF EVALUATION


Evaluation research is the systematic, data-based assessment of social programs, with the purpose of informing action and improving decision making (Rossi & Freeman, 1993; Patton, 1990). Evaluators of any project or program must determine the purpose of the evaluation they are undertaking (Rutman, 1984). Based on this purpose, they must then decide whom to study, what kind of research design to use, and how data collection will be done (Hernon & McClure, 1990; Patton, 1990). There are usually many choices in each of these categories: multiple stakeholders to consider, quantitative and qualitative research techniques, an abundance of possible sampling strategies. In certain situations, some techniques are more appropriate than others. In the case of community networks, we saw great potential for the use of qualitative evaluation methods, for several reasons. Qualitative evaluation works well for exploratory studies in new fields where there are not yet many hypotheses; it is considered particularly appropriate for evaluating programs that are still developing and for monitoring their progress (Patton, 1990). Qualitative inquiry is naturalistic and inductive; it offers a holistic view of a dynamic situation.

Such methods seem very appropriate for community networks at this point in time, when networks are just getting started, definitions are fuzzy, and hypotheses (and even goals) are not well established. The community-building process is an evolutionary one; developers of electronic communities cannot expect immediate, measurable results (Burgstahler & Swift, n.d.). Flexible, ongoing methods of assessment that consider multiple points of impact could be useful in this kind of environment.

COMMUNITY NETWORKS


Computerized community networks go by many names: public access networks, civic networks, freenets, community networks. They use computers, modems, and communication networks to extend the physical, geographical community into cyberspace (Maslyn, 1996; Wilcox, 1996), linking the residents of that community with one another and with the rest of the world (Beamish, 1995; Wiencko, 1993). Individuals and organizations make information available about and to the community and use the network to communicate. The networks and the information are locally owned, locally based, and locally controlled (Guy, 1996).


While there are different nuances in the definitions employed by various authors, common elements have shaped the definition used in this paper: a "community network" is a geographically based computer network (that is, one rooted in a particular town, county or neighborhood) that provides a forum for community information and communication, and the opportunity to gain access to the Internet, with an emphasis on making these services available to non-profits and other underserved parts of the community.


This definition implies that a community network's identity is bound up with its purpose. The goals of community networks are what set them apart from other computerized information providers. The literature is filled with the idealistic aims of community networks: they are intended to strengthen and revitalize people-based communities, strengthen democracy and civic participation, build community awareness, develop economic opportunities in disadvantaged communities, and provide access to knowledge (Schuler, 1996; Civille, 1993; Heterick, 1997). Such nebulous goals are not easily converted into sets of measurable objectives.



HOW TO EVALUATE A COMMUNITY NETWORK


Several analysts have produced recommendations about how community networks should be evaluated. One theme running through many of these recommendations is the importance of outcomes-based evaluation, focusing on "real benefit for real people... measured against cost and accessibility" (Odasz, 1995). The importance of the goals set out in community network charters as guides to decision and action is a recurrent theme (Morino Institute, 1994; Beamish, 1995; Gygi, 1996; Newman, n.d.; Pigg, 1996). For example, Beamish (1995) lists three specific goals that she believes are common to all community networks. It is her impression that many community networks (at least in 1995) did not have articulated goals or objectives, or perhaps simply did not make them available online. Such goals are necessary to direct a project, monitor it, and focus on priorities. Furthermore, many of the community networks she examined appeared to have given little attention to these three long-term goals. As an example of the kind of evaluation she considers useful, Beamish recommends that community networks find out who is and who is not using their system so that they can further their goal of community building. The Economic Outcomes Evaluation Methodology of the Colorado Rural Telecommunications Project (CRTP, 1996) is a practical example of this goal-oriented approach.

Not all recommendations focus on the goals of the community networks, which seems to create some difficulties in effective evaluation. Doctor & Ankem (1996) developed a taxonomy of information needs and services by which to evaluate community networks, based on user needs identified in the LIS literature. By comparing existing community networks to their taxonomy, they hoped to answer three questions: Whom do the systems serve? What information needs do they try to meet? How effectively do they meet those needs? While useful in providing a picture of the content of existing community networks, the evaluative potential of this framework is unclear. At best, it might provide a basis for comparison and idea generation among community networkers. This theoretical user-needs approach is very different from the goal-based approach other authors suggest.

Some of the recommendations point to the fact that community networks need to evaluate in different ways at different stages of development. Patterson (1997) suggests market research at the initial stages of a network's existence, to determine desirable content and services. The Morino Institute (1994) emphasizes the need to solicit public feedback before, during, and after development to ensure buy-in and relevance to community needs. They suggest that relevance of information will make or break a community network: community networks will thrive if they help facilitate community, and will fail if they see themselves simply as "alternative information providers." Beamish (1995) differentiates between short-term community network goals (sustainability and growth) and loftier long-term goals such as access, public discussion and democratic participation, and community development. At this stage, she recommends that community networks "be measured against the direction and speed of moving toward their goals, rather than the goal itself. Since planning should be an ongoing process, monitoring and evaluation is essential." In other words, when dealing with a nascent phenomenon such as community networks, ongoing monitoring is perhaps more appropriate than applying rigid, post hoc evaluation techniques.

Some authors also recommend including different stakeholders in the evaluative process for community networks. For instance, focused, periodic user surveys are suggested as an appropriate means of measuring satisfaction; anecdotal reports are also valuable (Pigg, 1996). However, user surveys are only one measure of meeting objectives. A stakeholder approach would involve identifying stakeholder objectives, then developing ways to evaluate whether they have been met. The less specific the objective, the more difficult the evaluation. Moreover, open-ended interviews with community network users or stakeholders to determine impacts may be time-consuming and costly. Newman (n.d.) proposes eliciting organizational objectives from community and voluntary groups, and interviewing key users in each organization to find out how they would achieve these objectives without a community network and how clients would benefit. In this model, the focus is on measuring benefits (client benefits and organizational efficiency) in existing community organizations rather than in a harder-to-define community.

In addition to considering multiple stakeholders, Patterson (1997) and the CRTP (1996) encourage a multidisciplinary approach to evaluation. For instance, Patterson's model of evaluation, enacted at the Blacksburg Electronic Village (BEV), consists of four interconnected nodes: design, access, critical mass, and impacts. Methods of evaluation range from focus group interviews and beta testing (in the design node) to mail surveys and transaction logs (access node), user profile questionnaires (critical mass node), and surveys investigating psychological gratification and market research (impacts node). While this kind of interdisciplinary approach is key to obtaining a valid picture of a community network, it works only if the researchers have agreed on a common model beforehand, and all researchers must agree to public dissemination of findings so that they can be used to improve the system (Patterson, 1997).

One aspect of multiple methods, the inclusion of both quantitative and qualitative approaches, is a recurrent theme in the literature. Beamish (1995) believes that both quantitative and qualitative information can be useful in assessing what is going well or poorly, and why. For instance, logbooks can tell how the system is being used, but they do not tell the whole story. Qualitative information (e.g., "stories that people tell about the effect that community networks have on their lives") may be useful in constructing a more complete picture. The complexity and variation of individual use in an online environment make it hard to measure the value of a community network quantitatively. Odasz (1995) suggests that anecdotal evidence of attitudinal changes, progressive learning, and conceptual growth may offer better clues to how people benefit and how greater benefits can be achieved.

There are some problems with applying quantitative evaluative methods to community networks. One problem involves their current stage of development; quantitative types of analysis, such as cost-benefit analysis, may be inappropriate at a stage in which effects cannot be predicted, results cannot be measured accurately, and one cannot put a price or value on the outcome. For instance, Gygi's (1996) quantitative evaluation examples (counting the number of businesses that participate in forums, the number of public terminals, the percentage of the population signed on to the community network, user demographic indicators, and the number of messages sent to government officials) have no clear benchmarks, nor is it evident how effective they are as measures of her stated goals. Nonetheless, collecting and examining this kind of data may give organizers some sense of the range of outcomes that may be expected (Gygi, 1996).

In sum, recommendations for evaluating community networks suggest that the strength of the networks will be in finding their "niche," their unique purpose, and that ultimately, success or failure will depend on this. Not only does this presuppose an understanding of community needs, it requires a clear statement of community network purpose and goals. The evaluative frameworks proposed by Beamish (1995), Gygi (1996), Patterson (1997), and the Colorado Rural Telecommunications Project (1996) are linked to these community network goals. The richest frameworks appear to be those set forth by Patterson (1997) and Beamish (1995), which emphasize multiple approaches to evaluation based on a common model (or set of goals), performed on an ongoing basis, and set up so as to provide feedback to the developing network.



EVALUATION STUDIES OF COMMUNITY NETWORKS


In addition to reviewing recommendations on how to evaluate community networks, we examined 14 actual evaluations, each of which appeared to be a serious, scientific evaluative effort. Few of the evaluations we identified focused on evaluating performance in relation to the established goals of the network. Instead, they focused on quantity of usage (Patrick, 1996; Patrick & Black, 1996a, 1996b; Patrick, Black & Whalen, 1995) or demographic breakdown of usage (Virnoche, 1995; Schalken, 1994), or examined a goal thought to be important by the researcher even if it was not considered important by network developers (Guy, 1996). One reason for this lack of goal-oriented evaluation may be that community networks have not written their goals in terms of measurable, observable outcomes, but rather in more abstract terms (Strickland, 1996).


One evaluation that was goal-oriented (Avis, 1995) compared the goals of two community networks with those commonly ascribed to community networks by researchers. Avis examined these two networks for evidence of some generally accepted benefits of community networks in education, political participation, and community development, and then asked survey questions to see why the goals of the individual networks differed from the common goals. The evaluation of the projects funded by the Colorado Rural Telecommunications Project (CRTP, 1996) was based on the goals established by the funders and focused on how the funding helped local community development through the introduction of technology; goals established by the individual developers were not measured. An evaluation of PEN (Rogers, Collins-Jarvis & Schmitz, 1994), the community network in Santa Monica, California, focused on just two goals, selected by the researchers from the goals set forth by the network developers.

To test the hypothesis that evaluations are hindered by the lack of goals defined in observable and measurable terms, we examined all of the goals and mission statements of 84 community networks in the United States that had a web presence and were listed in Peter Scott's (n.d.) "Freenets and Community Nets" directory (recognized by community network researchers on the Communet listserv as one of the more complete and current listings of community networks as of 1997). It is not a complete list, as it is built on reports sent by network developers to the list maintainer. We looked at the goals and mission statements because they represent the public statement of what the network developers hope to accomplish. It is possible that the developers have goals that are not listed, because they assume that all community networks have these goals.

The list differentiates between community networks (N=66) and freenets (N=18), those networks associated with the National Public Telecomputing Network (NPTN). We maintained this differentiation in our analysis, in case there were specific goals prescribed by the NPTN for its members that the other, more individually organized community networks did not have.

Ten community networks had no goals listed in areas open to the general public. Based on a content analysis, the goals that were listed fell into the following categories. Because each community network typically listed several goals, the sum of each column is greater than the number of community networks.

Goal Category                              FreeNets (N=13)   Other CNs (N=61)
Internet/Computer Access                           6                27
Information                                       15                38
Education/Life Long Learning                       6                13
Communication                                      7                14
Public Involvement/Civic Participation             4                11
Community Building                                 4                14
Economic Development                               3                13
Technical Literacy                                 0                15
Provide Social Services                            0                 3
Miscellaneous                                      2                11
Totals                                            47               159
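To make the tallying procedure concrete, the following is a minimal sketch in Python of how goal statements, once coded into the categories above, can be counted by network type. The network names and code assignments shown are invented for illustration; they are not data from our study.

    # Minimal sketch: tallying coded goal statements by network type.
    # The networks and category assignments below are invented examples;
    # real input would be each network's posted goal statements, hand-coded
    # into the categories shown in the table above.
    from collections import Counter

    coded_goals = [
        ("Example FreeNet", "FreeNets", ["Information", "Communication"]),
        ("Example CN", "Other CNs", ["Internet/Computer Access", "Information"]),
    ]

    tallies = {"FreeNets": Counter(), "Other CNs": Counter()}
    for name, net_type, categories in coded_goals:
        tallies[net_type].update(categories)

    # Print one tally per network type; because a network can list several
    # goals, column sums exceed the number of networks, as in our table.
    for net_type, counter in tallies.items():
        print(net_type)
        for category, count in sorted(counter.items()):
            print(f"  {category}: {count}")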


Within each category there was a variety of similar, but not identical, goals. For instance, the category of providing access included the following goals: provide free or low-cost access (10 sites), give access to the Internet (9 sites), provide public access (7 sites), provide access to schools, libraries, and nonprofits (4 sites), provide universal access (2 sites), and empower by providing access (1 site). The non-NPTN-affiliated community networks had a wider range of goals in this category and were the only ones to mention providing public terminals.

Our findings indicate that there is no core set of goals agreed upon by all community networks, which makes it more difficult to develop general evaluation recommendations. For instance, providing access and information is more often listed as a goal than is increasing political participation or contributing to economic development. Another problem facing researchers is that these goals are written in such a general manner that it will be difficult to identify objective measures for them. Many key terms, such as "community awareness" and "broad base of the community," are not defined.

Evaluators also need to consider that different types of evaluation should be done at different times in a system's development (Rossi & Freeman, 1993). LaPlaza Community Network (in Taos, New Mexico) has planned a future evaluation (Strickland, 1996); however, the plan does not attempt to assess the evaluability of the community network in order to influence network design, but rather focuses on the "how" of the evaluation: surveys and user statistics. It is also possible that more conceptual studies are done informally by the people involved with starting up the community networks, and that their findings are not published as separate reports.

Few categories of stakeholders were included in the evaluations, and few studies looked at multiple stakeholders. Network officials were the focus of interviews and case studies (e.g., Roberts, 1996; CRTP, 1996; Molz, 1994; Guy, 1996; Law & Keitner, 1995). Users were surveyed, in person or online (e.g., Patrick, 1996; Patrick & Black, 1996a, 1996b; Patrick, Black & Whalen, 1995; Schalken, 1994; Rogers, Collins-Jarvis & Schmitz, 1994; Virnoche, 1995; DeSmet, 1995; Strickland, 1996). Where users were studied, convenience sampling often emerged as a problem: for instance, users were surveyed at obligatory orientation sessions (Virnoche, 1995), or evaluation forms were posted online (Schalken, 1994). Such sampling methods cast doubt on the findings. Non-users were not usually considered. There also were no reports of feedback from information providers about how their information was received or whether they planned to continue making it available on the community network.

The majority of the studies that we found were descriptive. Some of the most recent evaluation reports described the workings of specific community networks (e.g., Roberts, 1996; Guy, 1996; Avis, 1995; Law & Keitner, 1995). These evaluations were not focused on accountability, but rather on creating rich descriptions of the workings of these organizations and raising questions for future researchers to consider when evaluating community networks. The CRTP studies (1996) were designed to be ongoing self-reports of progress, with all of the projects using the same criteria and reporting form to allow comparison. These self-reports are extremely positive; few problems or setbacks are described, limiting their usefulness to others considering similar projects. A 1994 review of 24 community networks in the United States (Molz, 1994) identified the range of content available and the goals posted. The author made no attempt to see whether the content was designed to meet any particular goal or whether any of the goals were met.

The researchers did make some critical observations. For instance, Virnoche (1995) pointed out that the Boulder Community Network was not meeting one of its goals: reaching people with few computer skills. The researchers involved in the National Capital FreeNet (NCF) studies observed that much of the content consisted of links that led the user off of the local system, contradicting a goal of focusing on local services for the local community (Patrick, Black & Whalen, 1995). DeSmet's (1995) study of a community information system in a library discovered that 44% of the people were not satisfied with the information that they obtained from the system and that fully half could not find what they were seeking.

The design of the evaluations often limits the kinds of suggestions that researchers can make to network developers. For example, Patrick's 1994 NCF survey did not include non-users or people who had ceased using the system, nor did it ask current users what specifically attracted them to the service. Without that information, system developers cannot make changes with any guarantee of increasing membership.

Another feature of effective evaluations identified previously is the use of multiple data sources, combining qualitative and quantitative data when possible. Several evaluations used multiple methods. Avis (1995) conducted interviews, analyzed documents, and studied usage statistics. DeSmet (1995) analyzed usage logs and conducted online questionnaires. The CRTP reports specifically include quantitative data (number of users, number of new jobs created, cash leveraged with the project investment) and qualitative data such as anecdotal evidence supplied by network developers. Molz (1994) conducted interviews and studied the content of 24 community networks. Law & Keitner (1995) interviewed staff and project directors, observed users at public terminals, and conducted document analysis on reports prepared by networks. Other studies used just one method, primarily surveys or interviews (e.g., Schalken, 1994; Rogers, Collins-Jarvis & Schmitz, 1994; Guy, 1996). Except for the CRTP, none of the studies appeared to include questions asked on other surveys so that results could be compared across community networks. No examples of periodic surveys were discovered (though several were promised).

The descriptive nature of the community network evaluations examined may be due to the fact that community network goals are currently too nebulous and varied to measure effectively. In general, evaluators identified few stakeholders (often only network officials). Users were surveyed in many studies, but rarely was anything resembling a random sample of users attempted, and only once were non-users included in an evaluation. Several studies attempted to draw on multiple data sources. Finally, periodic surveys and surveys permitting cross-network comparison were found to be lacking, possibly because community networks are a recent phenomenon. Taken as a group, the studies paint a picture of an area of evaluation that is in its infancy, with room for both growth and improvement.



RECOMMENDATIONS FOR EVALUATING COMMUNITY NETWORKS


Taking into account the current state of community networks and the literature on their evaluation, the following recommendations emerge. The first three are geared toward improving the exploratory studies designed to increase our understanding of what a community network is, and the next four toward improving the practical, shorter-term evaluations usually done for a more local audience.

a) Exploratory studies of community networks, combining quantitative and qualitative approaches, should be encouraged in order to accumulate a set of rich descriptions from which others can benefit.
Numbers cannot tell the whole story of how community networks are used and what their significance is (Beamish, 1995; Odasz, 1995). At this early stage of their development, qualitative studies are appropriate ways to evaluate community networks. Deep descriptions devoid of judgment or interpretation are important for understanding new phenomena (Patton, 1990). Case studies seem to be a good method for these first studies, because they focus on ongoing events and allow researchers to investigate a series of "how" and "why" questions (Yin, 1989).

A number of descriptive, evaluative case studies of community networks have appeared recently (e.g., Patterson, 1997; Roberts, 1996; Guy, 1996; Avis, 1995; Patrick, 1996). Additional academic research would provide the information needed to begin identifying the important characteristics that define community networks (Beamish, 1995). There are currently no theories specific to community networks. However, there are a variety of theories about how people use technology in non-work settings, how people communicate online, and how people search for information online. Rich, descriptive case studies would enable researchers to see whether community networks might be good subjects for further testing and expanding these theories (Yin, 1989).

b) More stakeholders should be involved in evaluation design and implementation.
A variety of people contribute to the success of a community network: users, board members, information contributors, advertisers, the local media, and others. A research project that aims to describe how well a community network is meeting its goals and objectives (one definition of success) should include information about, and opinions from, these different groups. Guba & Lincoln (1989) emphasize the importance of including multiple stakeholders in evaluative efforts. Many of the evaluations we examined failed to include users in anything but the most cursory, descriptive fashion. If users are to be included, it may be useful for networks to maintain a list of their users (Rogers, Collins-Jarvis & Schmitz, 1994).

c) Triangulation and the utilization of multiple sources of data should be encouraged.
Since there are no standards by which to measure community network success and no theories about what constitutes a successful community network implementation, there are no guidelines about the proper type of evidence to collect. This suggests that researchers should consider as many types of evidence, from as many relevant sources, as they can in order to get the most complete picture possible (Yin, 1989; Patton, 1990; McClure & Lopata, 1996; Harter & Hert, 1997). Another reason for recommending the use of multiple data collection and analysis methods is that these are systems with a variety of goals, which need to be measured differently (McClure & Lopata, 1996; Bertot & McClure, 1996; Moen & McClure, 1997). Data could be collected using surveys, personal interviews, user observations, document review, usage logs, and community demographics; researchers should consider all possible sources of data as they design their research. However, evaluation has to be affordable, and practical limitations of time and money will curb multi-dimensionality in most cases. Decisions about which types of data not to collect should be made with an eye to affordability, but also to the impact that the absence of that data will have on the results.
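As a purely hypothetical illustration of how two such sources might be triangulated, the sketch below cross-checks per-user session counts (as might be derived from usage logs) against satisfaction ratings (as might come from a user survey) for the users who appear in both sources. All names, fields, thresholds, and figures are invented for illustration.

    # Hypothetical sketch of triangulating two data sources: per-user
    # session counts from usage logs and satisfaction ratings from a
    # survey. The data, the 1-5 rating scale, and the 10-session cutoff
    # for "heavy" use are all invented assumptions.
    from statistics import mean

    usage_sessions = {"user1": 42, "user2": 3, "user3": 17}  # from logs
    survey_ratings = {"user1": 4, "user2": 2, "user4": 5}    # 1-5 scale

    # Compare only users who appear in both sources.
    overlap = usage_sessions.keys() & survey_ratings.keys()
    heavy = [u for u in overlap if usage_sessions[u] >= 10]
    light = [u for u in overlap if usage_sessions[u] < 10]

    # Does reported satisfaction line up with observed levels of use?
    for label, group in (("heavy users", heavy), ("light users", light)):
        if group:
            avg = mean(survey_ratings[u] for u in group)
            print(f"{label}: mean satisfaction {avg:.1f} (n={len(group)})")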

d) Evaluations should consider community network goals.
Community networks' goals and objectives are their own declaration of what they intend to achieve and of how they define themselves. Progress toward the goals is as important for the researcher to consider as the number of goals met (Rutman, 1984; Odasz, 1995; Beamish, 1995). Evaluation will be easier if goals are clearly stated. This implies that community networks should have stated goals that they are willing to share with researchers, expressed in measurable and observable terms (Pigg, 1996).

e) Evaluation results should be translatable into action by network developers.
If an evaluation is written for the benefit of the community network, then results should be expressed in terms of actions that the community network can take to improve the network (Rossi & Freeman, 1993; Patton, 1990; Beamish, 1995). Focusing on the goals, which are the stated aims of the network, can provide a framework for the actionable recommendations. The recommendations can be one way for the evaluators to communicate with the system designers (Patterson, 1997) and other stakeholders if the evaluation report is written with these different audiences in mind (Bertot & McClure, 1996; McClure & Lopata, 1996).

f) It may be helpful to coordinate evaluation across community networks.

Community networks may be unique creations, whose characteristics and goals are determined by the community in which they exist. However, our study of community network goals showed that there are a limited number of areas in which community networks create similar goals, such as access to the Internet, access to information, and economic development. Certain kinds of information about how these goals are implemented could be collected across community networks, using some of the same questions and performance measures, so that community networks could then be compared. This kind of planning could enable similar community networks to learn from each other's experiences. There are other basic characteristics of community networks (such as number and types of users and information providers, or usage patterns by time of day and type of user) about which information could be uniformly collected. This would allow researchers to create a richer description of what constitutes a community network. Doctor & Ankem (1996) attempted this with their taxonomy of information needs and services; the Colorado Rural Telecommunications Project (CRTP, 1996) is an excellent example of this kind of evaluation.

g) Ongoing forms of assessment should be built into the community networks' activities.

Community networks are still changing rapidly. The presence of ongoing growth and change indicates a need for ongoing evaluation, using different research methods at different points in the development of the networks (Pigg, 1996; Trochim, 1997). Evaluation should be a process.



CONCLUSION


Community networks are still developing, and their focus for now is on "staying afloat"; but to do so they need to find their niche and build on it, and both tasks involve evaluation. At this point, exploratory studies may be useful in eventually developing measures that can be used in self-assessments, if researchers keep in mind that, to be applied, measures must be easy to administer, easy to interpret, and compatible with time and financial constraints. As Beamish (1995) observed, short-term goals will be those of survival (sustainability, growth); long-term goals will be more philosophical (such as access, public discussion, democratic participation). Many of the evaluations we examined were concerned with the longer-term issues, and perhaps it is too soon to evaluate these. An evaluation cannot be or do everything; it is limited by its purpose. However, community network evaluations can be improved by careful consideration of their own purposes, as well as the purposes of the community networks they are assessing. Perhaps this assessment of the strengths and weaknesses of some community network evaluations and evaluative techniques can help improve the evaluations and thus the networks.

ACKNOWLEDGEMENT

We would like to acknowledge and thank Professor Carol Hert for the assistance and feedback she gave us on this project.

WORKS CITED


Avis, A. (1995). Public Spaces on the Information Highway: The Role of Community Networks. URL: http://www.ucalgary.ca/UofC/faculties/GNST/theses/avis/thesis.html

Beamish, A. (1995). Communities On-Line. Unpublished master's thesis. URL: http://alberti.mit.edu/arch/4.207/anneb/thesis/toc.html

Bertot, J.C., & McClure, C.R. (1996). Development Assessment Techniques for Statewide Electronic Networks. Proceedings of the Association for Information Science, pp. 110-117.

Burgstahler, S. & Swift, C. (n.d.) Enhanced Learning Through Electronic Communities: A Research Review. URL: http://164.116.18.39/research_report.html

Civille, R., Fidelman, M., & Altobello, J. (1993). A National Strategy for Civic Networking: A Vision for Change. URL: gopher://gopher.civic.net:2400/00/ssnational_strat/national_strategy.txt

Colorado Rural Telecommunications Project (1996). CRTP Case Studies. URL: http://bcn.boulder.co.us/aerie/crtp/projects/crtpproj.htm

DeSmet, E. (1995). Evaluation of a Computerised Community Information System Through Transaction Analysis and User Survey. Libri, 45, 36-44.

Doctor, R. & Ankem, K. (1996). An Information Needs and Services Taxonomy for Evaluating Computerized Community Information Systems. Presented at the LaPlaza, NM Conference on Community Networks, 3/4/96. URL: http://www.laplaza.org/cn/local/doctor1.html

Guba, E., & Lincoln, Y. (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage.

Guy, N.K. (1996). Community networks: building real communities in a virtual space. Unpublished master's thesis. URL: http://www.vcn.bc.ca/people/nkg/ma-thesis/

Gygi, K. (1996). Uncovering Best Practices: A Framework for Assessing Outcomes in Community Computer Networking. Presented at the LaPlaza, NM Conference on Community Networks, 3/4/96. URL: http://www.laplaza.org/cn/local/gygi.html

Harter, S.P. & Hert, C.A. (1997). Evaluation of Information Retrieval Systems. Annual Review of Information Science and Technology, 32 [forthcoming].

Hernon, P. & McClure, C.R. (1990). Evaluation and Library Decision Making. Norwood, NJ: Ablex.

Heterick, R. (1997). "Foreword." In Cohill, A. & Kavanaugh, A. (Eds.), Community Networks: Lessons from Blacksburg, Virginia (pp. xiii-xvii). Norwood, MA: Artech House.

Law, S. & Keitner, B. (1995). "Civic networks: social benefits of on-line communities." In Anderson, R.H., et al., Universal Access to E-mail: Feasibility and Societal Implications (Chapter 5). URL: http://www.rand.org/publications/MR/MR650/

Maslyn, B. (1996). The Alex Idea at Two Years: A Work in Progress. URL: http://www.alex.org/news/2-years.html

McClure, C.R., & Lopata, C. (1996). Assessing the Academic Networked Environment: Strategies and Options. Washington, DC: Coalition for Networked Information.

Moen, W., & McClure, C.R. (1997). Multi-Method Approaches for Evaluating Complex Networked Information Services: Report from an Evaluation Study of the GILS. ACM Digital Libraries.

Molz, R.K. (1994). Civic Networking in the United States: A Report by Columbia University Students. Internet Research, 4 (4), 52-62.

Morino Institute (1994). Assessment and Evolution of Community Networking. Presented at the Ties That Bind Apple Computer/Morino Institute Conference on Building Community Computing Networks, 5/5/94. URL: http://www.morino.org/publications/assessment.html

Newman, D.R. (n.d.) How To Evaluate Electronic Community Networks. URL: http://www.qub.ac.uk/mgt/papers/infosoc/blprop.html

Odasz, F. (1995). The Need for Rigorous Evaluation of Community Networking. In Cisler, S. (Ed.), Ties That Bind: Building Community Networks. Cupertino, CA: Apple Computer Corporation Library.

Patrick, A., Black, A., & Whalen, T. (1995). Rich, Young, Male, Dissatisfied Computer Geeks? Demographics and Satisfaction From the National Capital FreeNet. URL: http://debra.dgbt.doc.ca/services-research/survey/demographics/vic.html

Patrick, A. (1996). Services on the Information Highway: Subjective Measures of Use and Importance From the National Capital FreeNet. 4/12/96. URL: http://debra.dgbt.doc.ca/services-research/survey/services/

Patrick, A. & Black, A. (1996a). Implications of Access Methods and Frequency of Use for the National Capital FreeNet. 4/12/96. URL: http://debra.dgbt.doc.ca/services-research/survey/connections/

Patrick, A., & Black, A. (1996b). Losing Sleep and Watching Less TV but Socializing More: Personal and Social Impacts of Using the National Capital FreeNet. 4/12/96. URL: http://debra.dgbt.doc.ca/services-research/survey/impacts/

Patterson, S. (1997). "Evaluating the Blacksburg Electronic Village." In Cohill, A. & Kavanaugh, A. (Eds.). Community Networks: Lessons from Blacksburg, Virginia (pp.55-87). Norwood, MA: Artech House.

Patton, M.Q. (1990). Qualitative Evaluation and Research Methods. Newbury Park, CA: Sage Publications.

Pigg, K. (1996) Sustaining the CIN Through Evaluation, Missouri Express Resource Guide #25. URL: http://outreach.missouri.edu/moexpress/guides/moex25-1.html

Roberts, R. (1996). The Diffusion of Innovation: Dualities of One Electronic Free Community Network. Unpublished dissertation.

Rogers, E., Collins-Jarvis, L. & Schmitz, J. (1994). The PEN Project in Santa Monica: Interactive Communication, Equality, and Political Action. Journal of the American Society for Information Science, 45, 401-410.

Rossi, P.H., & Freeman, H.E. (1993). Evaluation: A Systematic Approach. Newbury Park, CA: Sage Publications.

Rutman, L. (Ed.) (1984). Evaluation Research Methods: A Basic Guide. Beverly Hills, CA: Sage Publications.

Schalken, K., & Topps, P. (1994). The Digital City: A Study into the Backgrounds and Opinions of its Residents. Presented at the Canadian Community Networks Conference, 8/15-17/94. URL: http://www.ncf.carleton.ca/freenet/rootdir/menus/freenet/conferences/com-net94/dcity.txt

Schuler, D. (1996). New Community Networks: Wired for Change. New York: ACM Press.

Scott, P. (n.d.). Freenets and Community Nets, URL: http://www.lights.com/freenet/

Strickland, C. (1996). About LaPlaza: Kellogg Evaluation, URL: http://www.laplaza.org/about_lap/kellogg/kellogg_eval.html

Trochim, W. (1997). Evaluating Websites. URL: http://trochim.human.cornell.edu/webeval/webintro/webintro.htm

Virnoche, M. (1995). Internet Access and Use: Boulder Community Network Users. 3/23/95, URL: http://sobek.colorado.edu/~virnoche/results2.html

Wiencko, J.A., Jr. (1993). The Blacksburg Electronic Village. Internet Research, 3 (2), 31-40.

Wilcox, D. (1996). Community Networks (first appeared in the British Telecommunications Engineering Journal, January, 1996). URL: http://panizzi.shef.ac.uk/community/cns.html

Yin, R.K. (1989). Case Study Research: Design and Methods. Newbury Park, CA: Sage.


Paper presented at the 1998 midyear meeting of the Association for Information Science, May 17-20, 1998, Orlando, Florida.




