The Journal of the American Society for Information Science and Technology (JASIST) received its current title in 2001, after being published since 1950 under two earlier titles. Prior bibliometric analyses of JASIS focused on author and article characteristics and trends and on geographic and keyword distributions. The current study examines citations to articles published from 2001 through 2010, drawing on three major citation databases and on readership counts. Of the 1,459 articles, 14 were among the top-cited in at least one database, and seven appeared both among the top-cited papers and among those with the highest readership counts. The top-cited papers focused on the web, informetrics, link analysis, theory and knowledge management. The most often read papers dealt with the web, theory, link analysis, informetrics and databases. Though not used in this research, alternative metrics such as mentions in social media and on Wikipedia, views and downloads on Slideshare, and tools such as ReaderMeter can complement traditional citation analysis.

citation analysis
information science history
bibliometrics
World Wide Web
link analysis
altmetrics



Special Section

JASIST 2001-2010

by Judit Bar-Ilan

JASIS, the Journal of the American Society for Information Science, changed its name to JASIST, the Journal of the American Society for Information Science and Technology, beginning in 2001 with volume 52. This was the second time the journal changed its name. It started out as American Documentation in 1950 and changed its name to the Journal of the American Society for Information Science in 1970, starting with volume 21.

In this article we provide a short bibliometric characterization of the first 10 JASIST volumes – volumes 52 to 61, covering the years 2001 to 2010. This characterization includes a list of the most highly cited articles published in JASIST, with citation counts that are compared to “readership counts” retrieved from Mendeley, an online reference manager (www.mendeley.com).

Bibliometric analyses of JASIS have been conducted before, with the main emphasis on characteristics of authors. In an article published in 1999, Lipetz [1] studied JASIS authorship during the first five decades of JASIS (and American Documentation) by selecting one volume from each decade. His paper appeared in a special issue of JASIS marking the 50th anniversary of the journal. Another paper, studying the characteristics of JASIS authors between 1970 and 1996, was published by Al-Ghamdi et al. in 1998 [2]. Trends in the first 50 volumes of JASIS were analyzed by Koehler et al. [3] in a study published in 2000 that, in addition to author characteristics, included article characteristics such as the number and type of references and the lengths of papers and titles. More recently, Chua and Yang [4] studied author, co-authorship and keyword distributions for two 10-year periods of JASIST publications.

He and Spink [5] analyzed the geographic distribution of JASIST and Journal of Documentation authors over a 50-year period, between 1950 and 1999, while Wormell [6] studied the geographic distribution of both authors and readers (based on subscriptions) in the mid-1990s, with JASIS among the analyzed journals.

Only a few studies emphasized citations. Nisonger [7] analyzed the position of JASIS in various LIS journal rankings in 1999 and found that citation data were among the most frequently employed criteria for ranking journals in the field. Earlier, Harter and Hooten [8] carried out a study of nine volumes of JASIS that included citation data as well. In a study published in 1999, Cronin and Shaw [9] analyzed citation rates and uncitedness in four LIS journals, including JASIS, while in a recent work Sin [10] studied the effect of international co-authorship on citation counts in six LIS journals.

The aim of the current study is to analyze the citations received by JASIST articles published between 2001 and 2010. It is well known that citation counts depend on the citation database used for data collection, even when all the data are collected at the same time [11, 12]; thus, in this study we collected data from three major citation databases: Thomson Reuters' Web of Science (WoS), Elsevier's Scopus and Google Scholar (GS).

Citations reflect only one aspect of the use of scholarly articles. Not all the articles we read appear in the reference lists of the works we publish, even though they might be influential. This is especially true of readers who are not writers, such as students, librarians, information professionals and others interested in information science. Thus it is of interest to explore the readership of scientific articles. In the past, such data were gathered through library usage studies (see, for example, [13, 14]), but today readership can be explored by analyzing download statistics [15] or by consulting reference managers [16, 17, 18]. In this study we collected readership counts from the reference manager Mendeley and compared them with citation counts retrieved from WoS, Scopus and GS.

Mendeley readership counts are just one example of a set of alternative metrics that can be derived from the web and from Web 2.0 applications [19]. Other examples include citations or mentions of peer-reviewed journal articles on Twitter [20] or in blogs [21, 22]. In addition, mentions on CiteULike, Facebook, Delicious and Wikipedia, as well as views and downloads on Slideshare, can be tracked through the total-impact website (http://total-impact.org). Other tools that allow easy production of altmetric measures include ReaderMeter (http://readermeter.org). Publishers are also interested in alternative measures; for example, PLoS reports Mendeley and CiteULike readership counts for all articles it publishes, in addition to the download and view counts that some other publishers report as well. One reason for the growing interest in alternative metrics is that they can be calculated almost immediately after publication, providing nearly immediate feedback on interest in a specific article, whereas citations in peer-reviewed publications take much longer to accumulate. Eysenbach [20] has shown that the number of early tweets may predict whether an article will be highly cited later on. Correlation with citations is interesting, but the value of alternative metrics is that they provide information on “impact” in different senses that complement citations. As noted above, reading an article and thinking highly of it does not necessarily mean that the reader will cite it in a journal paper.

Data Collection and Analysis
Data were collected from the three citation databases, WoS, Scopus and GS, in April 2012. In WoS, articles, reviews and proceedings papers were selected (editorials, book reviews, letters, biographical items and bibliographies were excluded); similarly, in Scopus, articles, reviews and conference papers were selected. In GS, an extensive search and data cleansing were conducted using the Publish or Perish software in order to identify relevant JASIST articles and record their citation counts. This process resulted in the identification of 1,459 items: 12 were not indexed by Scopus, and one was not indexed by Google Scholar. For each item the basic bibliographic information and citation counts were recorded.
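
As an illustration of this merging step, the following Python sketch deduplicates exports from the three databases on a normalized title key and records each database's citation count per item. The file names and column labels are hypothetical; actual WoS, Scopus and Publish or Perish exports use different layouts.

```python
import csv
import re

def normalize_title(title):
    """Lowercase and collapse punctuation so the same article matches
    across databases despite small formatting differences."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

# Hypothetical export files, one per citation database.
sources = {"wos": "wos_export.csv",
           "scopus": "scopus_export.csv",
           "gs": "pop_export.csv"}

articles = {}  # normalized title -> title plus per-database citation counts

for db, path in sources.items():
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = normalize_title(row["Title"])
            record = articles.setdefault(key, {"title": row["Title"]})
            record[db] = int(row["Cites"])

# Items absent from one database (12 in Scopus, one in GS)
# simply lack that database's key in their record.
print(len(articles), "distinct items identified")
```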

As noted above, we used the online reference manager Mendeley to collect readership information. For an article, the retrieved information includes the number of readers, that is, the number of system users who bookmarked the specific item and included it in their virtual libraries. Figure 1 is a screenshot from Mendeley showing that multiple Mendeley records can be associated with a single item, in which case the readership counts are combined. Out of the 1,459 items, 1,422 (97.5%) were bookmarked on Mendeley. This coverage is especially meaningful because it is the result of an effort by the crowd, or the community, rather than a centralized indexing process like that of the citation databases.
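
A minimal sketch of how readership counts for duplicate records might be combined appears below. The records and reader counts are invented for illustration, and the crude title normalization merely stands in for Mendeley's own record-merging logic, which is not public.

```python
from collections import defaultdict

# Hypothetical duplicate Mendeley records for the same articles;
# reader counts are invented for illustration only.
mendeley_records = [
    {"title": "Information seeking on the Web", "readers": 95},
    {"title": "Information Seeking on the Web.", "readers": 7},
    {"title": "Twitter power: Tweets as electronic word of mouth", "readers": 150},
]

def key(title):
    # Crude normalization: lowercase and drop trailing punctuation.
    return title.lower().rstrip(". ")

# Sum readership across duplicates so each item has one combined count.
readership = defaultdict(int)
for rec in mendeley_records:
    readership[key(rec["title"])] += rec["readers"]

for title, readers in readership.items():
    print(f"{readers:4d}  {title}")
```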

Figure 1. A screenshot from Mendeley (www.mendeley.com).

Results
Table 1 displays the top-cited papers according to WoS, Scopus and GS. For each citation database we tabulated the top 10 items. Because of the differences in citation counts among the databases, this process resulted in a total of 14 papers that were top-cited in at least one of the three citation databases. The rankings based on WoS and Scopus are highly similar, while the GS-based ranking differs somewhat. The number of GS citations is consistently higher than the number of citations reported by WoS or Scopus. An interesting outlier is the paper “Twitter power,” which accrued 233 citations in GS but only 18 in WoS. These top-cited papers can be categorized as web-related (5 papers), informetrics (5), link analysis (2), theory (1) and knowledge management (1).
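
The union of the three top-10 lists can be computed as in the sketch below, which continues the hypothetical data-collection sketch above and reuses its `articles` mapping.

```python
def top_n(articles, db, n=10):
    """Return the titles of the n most-cited items in one database;
    items not indexed there count as zero citations."""
    ranked = sorted(articles.values(),
                    key=lambda a: a.get(db, 0), reverse=True)
    return {a["title"] for a in ranked[:n]}

# Union of the three top-10 lists; on the 2001-2010 JASIST data this
# union contains 14 distinct papers because the rankings largely overlap.
top_cited = (top_n(articles, "wos")
             | top_n(articles, "scopus")
             | top_n(articles, "gs"))
print(len(top_cited), "papers top-cited in at least one database")
```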

Table 1. Top-cited JASIST articles

Table 2 displays the 15 JASIST articles with the highest readership counts. Only seven articles appear in both tables. A striking difference between the two lists is in the topics covered. Whereas informetric topics had considerable representation in the citation-based list, the readership-based list contains no papers in this category apart from the CiteSpace II paper. On the other hand, theory is represented much more strongly in the readership-based list than in the citation-based list. The topics of the top-read papers can be categorized as web-related (7 papers), theory (5), link analysis (1), informetrics (1) and databases (1).

Table 2. Top-read JASIST articles

Conclusion
In this paper we studied the first decade of articles published in JASIST after the journal changed its name from JASIS to JASIST. Articles were ranked both by the number of citations they received and by the number of readers who bookmarked them in the online reference manager Mendeley. Remarkably, almost all of the JASIST articles were bookmarked by at least one reader. Although there are significant correlations between the Mendeley readership counts and the citation counts, the correlations are only around 0.5, indicating that reading and citing are two different scientific activities. An additional point is that we do not actually know why users bookmark articles or whether they actually read them. These issues, as well as the reliability of information retrieved from reference managers, should be explored further.
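
The article does not state which correlation coefficient was used; for skewed citation and readership distributions a rank correlation such as Spearman's rho is common, and it can be computed as in this sketch (the paired per-article counts are invented for illustration):

```python
from scipy.stats import spearmanr

# Hypothetical paired per-article counts.
mendeley_readers = [120, 45, 300, 10, 75, 60, 5]
wos_citations    = [80, 30, 250, 5, 90, 12, 0]

# Spearman's rho compares rankings, so it is robust to the
# heavy skew typical of citation and readership distributions.
rho, p = spearmanr(mendeley_readers, wos_citations)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```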

Resources Mentioned in the Article
[1] Lipetz, B. (1999). Aspects of JASIS authorship through five decades. Journal of the American Society for Information Science, 50(11), 994-1003.

[2] Al-Ghamdi, A., et al. (1998). Authorship in JASIS: A quantitative analysis. Katharine Sharp Review, 6. Retrieved June 26, 2012, from http://mirrored.ukoln.ac.uk/lis-journals/review/review/6/al_ghamdi.pdf

[3] Koehler, W., et al. (2000). A profile of statistics in journal articles: Fifty years of American Documentation and the Journal of the American Society for Information Science. Cybermetrics, 4(1), paper 3. Retrieved June 26, 2012, from www.cybermetrics.info/articles/v4i1p3.pdf

[4] Chua, A. Y. K., & Yang, C. C. (2008). The shift towards multi-disciplinarity in information science. Journal of the American Society for Information Science and Technology, 59(13), 2156-2170.

[5] He, S., & Spink, A. (2002). A comparison of foreign authorship distribution in JASIST and the Journal of Documentation. Journal of the American Society for Information Science and Technology, 53(11), 953-959.

[6] Wormell, I. (1998). Informetric analysis of the international impact of scientific journals: How "international" are the international journals? Journal of Documentation, 54(5), 584-605.

[7] Nisonger, T. E. (1999). JASIS and library and information science rankings: A review and analysis of the last half-century. Journal of the American Society for Information Science, 50(11), 1004-1019.

[8] Harter, S. P., & Hooten, P. A. (1992). Information science and scientists: JASIS, 1972-1990. Journal of the American Society for Information Science, 43(9), 583-593.

[9] Cronin, B., & Shaw, D. (1999). Citation, funding acknowledgement and author nationality relationships in four information science journals. Journal of Documentation, 55(4), 402-408.

[10] Sin, S. J. (2011). International coauthorship and citation impact: A bibliometric study of six LIS journals, 1980-2008. Journal of the American Society for Information Science and Technology, 62(9), 1770-1783.

[11] Meho, L., & Yang, K. (2007). Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology, 58(13), 2105-2125.

[12] Bar-Ilan, J. (2008). Which h-index? – A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257-271.

[13] Scales, P. A. (1976). Citation analyses as indicators of the use of serials: A comparison of ranked title lists produced by citation counting and from use data. Journal of Documentation, 32(1), 17-25.

[14] Line, M. B. (1979). Rank lists based on citations and library uses as indicators of journal usage in individual libraries. Collection Management, 2(4), 313-316.

[15] Kurtz, M. J., & Bollen, J. (2010). Usage bibliometrics. Annual Review of Information Science and Technology, 44, 1-64.

[16] Haustein, S., & Siebenlist, T. (2011). Applying social bookmarking data to evaluate journal usage. Journal of Informetrics, 5(3), 446-457.

[17] Li, X., Thelwall, M., & Giustini, D. (2012). Validating online reference managers for scholarly impact measurement. Scientometrics, 91, 461-471.

[18] Bar-Ilan, J., Haustein, S., Peters, I., Priem, J., Shema, H., & Terliesner, J. (2012). Beyond citations: Scholars’ visibility on the social web. arXiv:1205.5611v1 

[19] Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Altmetrics: A manifesto. Retrieved June 1, 2012, from http://altmetrics.org/manifesto/

[20] Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13(4). Retrieved June 1, 2012, from www.jmir.org/2011/4/e123

[21] Groth, P., & Gurney, T. (2010). Studying scientific discourse on the web using bibliometrics: A chemistry blogging case study. Proceedings of WebSci10: Extending the Frontiers of Society On-Line, April 26-27, 2010, Raleigh, NC. Retrieved June 1, 2012, from http://journal.webscience.org/308/2/websci10_submission_48.pdf

[22] Shema, H., Bar-Ilan, J., & Thelwall, M. (2012). Research blogs and the discussion of scholarly information. PLoS ONE, 7(5), e35869. doi:10.1371/journal.pone.0035869. Retrieved June 1, 2012, from www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0035869


Judit Bar-Ilan is a professor in the department of information science, Bar-Ilan University, Ramat Gan, Israel. She can be reached at Judit.Bar-Ilan<at>biu.ac.il.