From Signtometrics to Scientometrics: A Cautionary Tale of Our Times

  • Cronin, Blaise (School of Informatics & Computing, Indiana University)
  • Received : 2013.11.14
  • Accepted : 2013.11.25
  • Published : 2013.12.31

ABSTRACT

It is but a short journey from citation indexing to citation analysis and thence to evaluative bibliometrics. This paper outlines the path and describes how the time-honored practice of affixing bibliographic references to scholarly articles has paved the way for a culture of accounting to establish itself in contemporary academia.

In 1955 Eugene Garfield published his seminal (he prefers the adjective “primordial” [Garfield, 2009, p. 173]) paper, “Citation indexes for science” in, appropriately enough, the journal Science (Garfield, 1955). His proposed bibliographic tool would allow scientists to more easily and effectively access the proliferating literature of science. The Science Citation Index (SCI) differed from other secondary publication services (e.g., Chemical Abstracts) in that it enabled scientists to chain backwards and forwards in time through the literature, identifying influential papers, and by extension influential authors and ideas, whether inside or outside their home discipline, based on the references authors themselves attached to their papers. Garfield expressed the concept with admirable clarity and succinctness (1955, p. 110): “every time an author makes a reference he is in effect indexing that work from his point of view”. Fast-forward to the present and think for a moment of social tagging, where users rather than professional indexers or automatic indexing software assign index terms/tags to a document. One might thus think of the totality of references attached to an author’s oeuvre as the equivalent of a ‘docsonomy’ (the cluster of tags around any given document). But I digress. In any case, with the advent of the SCI, the humble bibliographic reference had finally come of age. Cinderella, much to everyone’s surprise, would soon be going to the Evaluators’ Ball.

From an historical perspective it is significant that Garfield’s early supporters included a number of eminent scholars, most notably the Nobel Prize-winning geneticist Joshua Lederberg, the sociologist Robert Merton, and the undisputed ‘father of scientometrics’ Derek de Solla Price, the last of whom, a veritable polymath, memorably described how he was “inoculated with Citation Fever” in the 1960s after meeting Garfield at Yale University (Price, 1980, p. vii). The SCI didn’t simply allow scientists to locate potentially relevant research—to reference is to deem relevant—by chaining through the literature; it enabled them to see in general terms whose work was exercising greater or lesser influence on any given epistemic community at any given time. The scholarly journal article’s paratext was gradually moving center stage, a point well grasped by Fuller (2005, p. 131; in this context, see also Cronin, 1995, on the acknowledgment, another paratextual device for bestowing credit), who wryly observed as follows: “Academic texts are usually more interesting for their footnotes than their main argument—that is, for what they consume, rather than what they produce” (italics added). In addition, the SCI allowed historians of science to track the development and diffusion of ideas within and across disciplines and made it possible for sociologists and others to visualize heretofore dimly perceived networks, both national and international, of socio-cognitive interaction and institutional collaboration (Cronin & Atkins, 2000; De Bellis, 2009; Price, 1965; Small, 1973).

Of course, like any system, a citation index is only as good, only as comprehensive, as the data upon which it is based. If your work was brilliant but inexplicably overlooked, or if it happened to receive only delayed recognition (“Sleeping Beauties,” as such papers have been termed by van Raan [2004]), or if you happened to be cited in journals (of which there are many) not covered by the SCI and its sister products, then you were down on your citational luck. Uniquely, though, the SCI provided scientists with what Garfield aptly termed a “clipping service” (1955, p. 109), not only a way of tracking their own visibility within their peer communities but also an admittedly crude means of quantifying the impact of their work. The privileging of that particular function (self-monitoring/self-evaluation) over information retrieval, along with the subsequent reification of (citation) count data by the scientific community at large, was not far off.

It is important, however, not to lose sight of the fact that the Science Citation Index was conceived of originally as a search and retrieval tool; such was its intended purpose, as Garfield himself repeatedly emphasized over the years (Garfield, 1979). The widespread, systematic use of the SCI and its successor products (today embodied in Web of Science [WoS]) for the purposes of impact assessment and bibliometric evaluation came somewhat later (for up-to-date overviews of the many associated reliability, validity, and ethical issues, see Cronin & Sugimoto, 2014a, b). With hindsight that development probably was inevitable. If science is about quantification and measurement, should there not be, one might reasonably ask, a science of science—a guiding metascience—devoted to the measurement of the inputs, processes, outputs and effects, broadly construed, of scientific research? The general sentiment would appear to be ‘yes,’ if the establishment of, to take but a few examples, (a) the journal Scientometrics in 1979, (b) the International Society for Scientometrics and Informetrics in 1993/94, and (c) the Journal of Informetrics in 2007 is anything to go by. Furthermore, if a scientometrician (the hapless Dr. Norman Wilfred in Michael Frayn’s Skios) can be the central character in a critically acclaimed satirical novel, then it’s probably safe to assume that the field has indeed come of age (Frayn, 2012; see also Sharp, 2010, for an indication of growing public interest in and awareness of the application of metrics to the conduct of science).

At the individual level, most researchers and scholars quite naturally want to know what kind of attention (be it positive or negative, holistic, or particularistic) their published work is attracting and in what quarters. What simpler way to do this than by checking to see who has publicly acknowledged one’s work? And what a pleasant way, at the same time, of having one’s ego boosted. Needless to say, it did not take long for the SCI to become the magic mirror on the wall telling us who was ‘the fairest of them all.’ The index’s popularity rose inexorably as online access gradually replaced the use of the unwieldy printed volumes with their microscopic print that we associate with the early days of the SCI. By way of an aside, Google Scholar’s ‘My Citations’ offers a quick and dirty alternative to both Web of Science and Scopus (see Meho & Yang, 2007, for a comparative assessment) for those who need to know how their intellectual stock is faring at any given moment, though caution is warranted (López-Cózar, Robinson-García, & Torres-Salinas, 2014). Bibliographic references could now be tallied with a few keystrokes and their distributions plotted with ease; they were, after all, ‘objective’ in nature, being in effect ‘votes’ (mostly but by no means always positive, there being such a thing as negative citations), to use one of many prevalent metaphors, cast by scientists for other scientists. Before long, reference counts (aggregate endorsements, if you will) were being used routinely to identify, inter alia, high-impact publications, influential authors, and productive institutions, even though authors’ motivations for referencing the work of others were inherently complex and anything but clear (e.g., Brooks, 1985; MacRoberts & MacRoberts, 1989). Validity and reliability concerns notwithstanding, the institutionalization of bibliometric indicators was proving to be irresistible.

At the institutional level, universities were not slow to recognize the practical utility of bibliometrically derived impact measures (e.g., the Journal Impact Factor [JIF], Jorge Hirsch’s [2005] h-index, and most recently the Eigenfactor [West & Vilhena, 2014]) in assessing the performance of academic departments, programs and, indeed, individuals (specifically in the context of promotion and tenure reviews). At the science policy level, national research councils are continually looking for reliable data to inform resource allocation decisions and determine funding priorities, while national governments—the UK’s 2014 Research Excellence Framework (REF), a refinement of the rolling Research Assessment Exercises (RAE) begun in the mid-1980s, is a good illustration of the trend—are increasingly making use of bibliometric indicators, albeit in conjunction with established forms of peer review, in evaluating national research strengths, weaknesses, and opportunities (Sugimoto & Cronin, 2014; Owens, 2013). After all, data don’t lie.
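For readers unfamiliar with the indicators just mentioned, the following minimal Python sketch shows how two of them are conventionally calculated; the citation counts and publication figures it uses are invented purely for illustration and are not drawn from WoS, Scopus, or any other source discussed here.

```python
# A minimal sketch of two of the indicators named above; all figures are invented.

def h_index(citation_counts):
    """Hirsch's h-index: the largest h such that h of the papers have
    at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def two_year_impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year JIF for year Y: citations received in Y by items published in
    Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

print(h_index([10, 8, 5, 4, 3]))        # -> 4
print(two_year_impact_factor(210, 60))  # -> 3.5
```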

Garfield’s idea (a citation index for science) spawned a successful business (the Institute for Scientific Information [ISI], subsequently acquired by Thomson Reuters), the flagship product of which (Web of Science) has become the dataset of choice for use in large-scale and longitudinal research evaluation exercises, though it faces stiff competition in the marketplace from, amongst others, Elsevier’s Scopus. The bibliometric indicators derived from the WoS database are a foundational component of a growing number of institutional ranking and rating systems (e.g., the Leiden Ranking, the Shanghai Ranking). These annual listings of the world’s ‘best universities’ can all too easily influence both public perceptions and, just as important, managerial practice within academia; that is to say, their promulgation has direct, real-world consequences, as universities take note of the variables and weighting mechanisms that determine their overall scores, which, as we shall see, in turn materially affects the behavior of the professoriate and, ultimately, alters the ethos of the academy (Burrows, 2012; Weingart, 2005). In a similar vein, Thomson Reuters’ Journal Citation Reports (JCR) can be used to provide an ‘objective’ evaluation of the world’s leading scientific journals based on an analysis of cited references. Despite widespread recognition of its many shortcomings (e.g., Seglen, 1997; Lozano, Larivière, & Gingras, 2012), the JIF has become a commonly used expression of a scholarly journal’s presumptive quality or influence and as such shapes authors’ submitting behaviors and also the perceptions of academic review bodies. Many in the scientific community are unhappy with the use of bibliometric indicators to assess authors or journals in such fashion, as can be seen in the recent spate of editorial and opinion pieces condemning their inappropriate and ill-informed use (e.g., Brumback, 2008; and see the recent DORA manifesto, the San Francisco Declaration on Research Assessment: http://am.ascb.org/dora/, for a discussion of concerns, criticisms, and potential remedial actions).

With hindsight, it is fascinating to see how a superficially mundane, more or less normatively governed authorial practice—the affixing of bibliographic references to a scholarly text—has, unwittingly, helped create the conditions necessary for a culture of accounting, most compellingly instantiated in the RAE/REF, to take root in the world of universities (Burrows, 2012; Cronin, 2005). To properly understand how this came about we need to look a little more closely at the way in which a reference is transmuted into a citation, and the ramifications of that silent metamorphosis. Essentially, a bibliographic reference is a sign pointing to a specific published work, its referent (or extensional reference). For Small (1978), references can in certain cases function as concept symbols; referencing a particular paper is thus equivalent to invoking a specific concept, method, idea, process, etc. A citation, however, is a different kind of sign, in that while it points at a disembodied paper it is also being pointed to by all those later publications that invoked it, in the context of a citation database such as WoS. A reference can thus be thought of, in directional terms, as recognition given and a citation as recognition received. The reciprocal relationship always existed, of course, but prior to the development of commercial citation indexes its importance was little appreciated. Garfield’s invention altered that; a novel sign system was born.

One of the first to illuminate the subtle distinction between the reference and the citation was Paul Wouters. He described the citation as “the mirror image of the reference” (Wouters, 1999, p. 562) and went on to say—simple but nonetheless insightful—that the purpose of commercial citation databases was “to turn an enormous amount of lists of references upside down” (Wouters, 1998, p. 232—for more on the semiotics of referencing and citing, see Cronin, 2000). This inverting of the reference changes its character, transmuting it from a relatively insignificant paratextual element into a potentially highly significant form of symbolic capital, with which academic reputations are built. At the risk of slipping into hyperbole, the SCI turned the dross of literary convention into career gold: no wonder Wouters spoke of “Garfield as alchemist” (Wouters, 2000, p. 65). Today, many scholars not only track their citation scores as a matter of course but unabashedly include raw citation counts and their h-index on their curricula vitae (CVs), for good measure often adding the JIF alongside the journals in which they have published. The message is simple: I count, therefore I am. The hegemony of the sign is complete: signtometrics has begotten scientometrics—a case of homophones with quite different meanings.
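To make Wouters’ inversion concrete, here is a minimal Python sketch of what “turning lists of references upside down” amounts to computationally; the paper identifiers are hypothetical, and the structure is, of course, a radical simplification of anything a commercial citation database actually does.

```python
# Reference lists map each citing paper to the works it cites (recognition given);
# inverting them yields, for every cited work, the papers that cite it
# (recognition received). All paper identifiers here are hypothetical.

from collections import defaultdict

reference_lists = {
    "Smith2010": ["Garfield1955", "Price1965"],
    "Jones2012": ["Garfield1955", "Small1973"],
    "Lee2013":   ["Garfield1955", "Price1965", "Jones2012"],
}

def invert(references):
    """Build a rudimentary citation index from per-paper reference lists."""
    citation_index = defaultdict(set)
    for citing_paper, cited_works in references.items():
        for cited in cited_works:
            citation_index[cited].add(citing_paper)
    return citation_index

for cited, citers in sorted(invert(reference_lists).items()):
    print(f"{cited}: cited {len(citers)} time(s), by {sorted(citers)}")
```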

Human behavior being what it is, this kind of signaling behavior will soon be widely imitated, and before long the inclusion of such ‘objective’ indicators, along with so-called alternative indicators of social presence and influence (Piwowar & Priem, 2013), will become a badge of honor to be worn on one’s sleeve, or CV: a clear sign of one’s true market value. This, I suspect, is what Day (2014, p. 73) had in mind when he spoke of the “self-commodification of the scholar” in today’s neoliberal society. Indeed, such is the power of peer pressure that even those who are fully cognizant of the limitations of both the h-index and the JIF, and who are by nature disinclined to engage in blatant self-promotion, may find it hard not to follow suit, particularly as assessment bodies, both inside and outside academe, increase their reliance upon standardized metrics of one kind or another. This mutual reinforcement is creating “a regime of permanent self-monitoring” (Wouters, 2014, p. 50) that engenders systematic displacement activity (Osterloh & Frey, 2009).

The emerging culture of accountability within and around academia is directing researchers’ focus away from purely intellectual concerns to extra-scientific considerations such as the career implications of problem choice, the fashionableness or ‘hotness’ of a potential research topic, channel selection for the dissemination of research results, and ways to maximize the attention of one’s peers and thereby one’s citation count (and now also download statistics, since citations are not only lagged but also capture only a portion of total readership [Haustein, 2014]). That, of course, is not to say that scientists and scholars are expected to be shrinking violets, unaccountable to those who fund them, or cavalier in the ways they communicate the findings of their research. Far from it, but these basically second-order considerations should not be allowed to dictate scientists’ research agendas, determine their work styles, or consume a disproportionate amount of their productive time. The inversion of the bibliographic reference is hardly grounds for inverting the time-honored goals of scholarly enquiry. After all, to quote the title of Thomas Sebeok’s (1991) book, a sign is just a sign.

 

ACKNOWLEDGMENT

I am grateful to Cassidy Sugimoto for comments.

REFERENCES

  1. Brooks, T. A. (1985). Private acts and public objects: An investigation of citer motivations. JASIS, 36(4), 223-229. https://doi.org/10.1002/asi.4630360402
  2. Brumback, R. A. (2008). Editorial. Worshipping false idols: The impact factor. Journal of Child Neurology, 23(4), 365-367. https://doi.org/10.1177/0883073808315170
  3. Burrows, R. (2012). Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60(2), 355-372. https://doi.org/10.1111/j.1467-954X.2012.02077.x
  4. Cronin, B. (1995). The scholar's courtesy: The role of acknowledgement in the primary communication process. London: Taylor Graham.
  5. Cronin, B. (2000). Semiotics and evaluative bibliometrics. Journal of Documentation, 56(4), 440-453. https://doi.org/10.1108/EUM0000000007123
  6. Cronin, B. (2005). The hand of science: Academic writing and its rewards. Lanham, MD: Scarecrow Press.
  7. Cronin, B., & Atkins, H. B. (Eds.). (2000). The web of knowledge: a Festschrift in honor of Eugene Garfield. Medford, NJ: Information Today Inc. & The American Society for Information Science.
  8. Cronin, B., & Sugimoto, C. R. (Eds.). (2014a). Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press.
  9. Cronin, B., & Sugimoto, C. R. (Eds.). (2014b). Metrics under the microscope: From citation analysis to academic auditing. Medford, NJ: Information Today Inc. & The Association for Information Science & Technology.
  10. Day, R. E. (2014). "The data-it is Me!" ("Les données-c'est Moi!"). In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press, 67-84.
  11. De Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to cybermetrics. Lanham, MD: Scarecrow Press.
  12. Frayn, M. (2012). Skios. New York: Picador.
  13. Fuller, S. (2005). The intellectual. Cambridge, UK: Icon Books.
  14. Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through association of ideas. Science, 122(3159), 108-111. https://doi.org/10.1126/science.122.3159.108
  15. Garfield, E. (1979). Citation indexing: Its theory and application in science, technology, and the humanities. Philadelphia, PA: ISI Press.
  16. Garfield, E. (2009). From the science of science to Scientometrics: Visualizing the history of science with HistCite software. Journal of Informetrics, 3(3), 173-179. https://doi.org/10.1016/j.joi.2009.03.009
  17. Haustein, S. (2014). Readership metrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press, 327-344.
  18. Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569-16572. https://doi.org/10.1073/pnas.0507655102
  19. López-Cózar, E. D., Robinson-García, N., & Torres-Salinas, D. (2014). The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators. JASIST (in press).
  20. Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the impact factor and papers' citations in the digital age. JASIST, 63(11), 2140-2145. https://doi.org/10.1002/asi.22731
  21. MacRoberts, M. H., & MacRoberts, B. R. (1989). Problems of citation analysis: A critical review. JASIS, 40(5), 342-349. https://doi.org/10.1002/(SICI)1097-4571(198909)40:5<342::AID-ASI7>3.0.CO;2-U
  22. Meho, L. I., & Yang, K. (2007). A new era in citation and bibliometric analyses: Web of Science, Scopus, and Google Scholar. JASIST, 58(13), 2105-2125. https://doi.org/10.1002/asi.20677
  23. Osterloh, M., & Frey, B. S. (2009). Research governance in academia: Are there alternatives to academic rankings? Institute for Empirical Research in Economics, University of Zurich. Working paper no. 423.
  24. Owens, B. (2013, October 16). Research assessments: Judgement day. Nature, 502(7471). Retrieved from: http://www.nature.com/news/research-assessments-judgement-day-1.13950?WT.ec_id=NATURE-20131017
  25. Piwowar, H., & Priem, J. (2013). The power of altmetrics on a CV. Bulletin of the Association for Information Science & Technology, 39(4), 10-13.
  26. Price, D. J. de Solla (1965). Networks of scientific papers. Science, 149(3683), 510-515. https://doi.org/10.1126/science.149.3683.510
  27. Price, D. J. de Solla (1980). Foreword. In E. Garfield, Essays of an information scientist. Vol. 3, 1977-1978. Philadelphia, PA: ISI Press, pp. v-ix.
  28. Sebeok, T. A. (1991). A sign is just a sign. Bloomington, IN: Indiana University Press.
  29. Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314, 498-502. https://doi.org/10.1136/bmj.314.7079.498
  30. Sharp, R. (2010, August 16). In their element: The science of science. The Independent. Retrieved from: http://www.independent.co.uk/news/science/intheir-element-the-science-of-science-2053374.html
  31. Small, H. G. (1973). Co-citation in the scientific literature: A new measure of the relationship between two documents. JASIS, 24(4), 265-269. https://doi.org/10.1002/asi.4630240406
  32. Small, H. G. (1978). Cited documents as concept symbols. Social Studies of Science, 8, 327-340. https://doi.org/10.1177/030631277800800305
  33. Sugimoto, C. R., & Cronin, B. (2014). Accounting for science. In Cronin, B. & Sugimoto, C. R. (Eds.), Metrics under the microscope: From citation analysis to academic auditing. Medford, NJ: Information Today Inc. & The Association for Information Science & Technology (in press).
  34. Van Raan, A. F. J. (2004). Sleeping Beauties in science. Scientometrics, 59(3), 461-466.
  35. Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62(1), 117-131. https://doi.org/10.1007/s11192-005-0007-7
  36. West, J. D., & Vilhena, D. A. (2014). A network approach to scholarly evaluation. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press, 151-165.
  37. Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin, & C. R. Sugimoto (Eds.), Beyond bibliometrics: Metrics-based evaluation of research. Cambridge, MA: MIT Press.
  38. Wouters, P. (2000). Garfield as alchemist. In Cronin, B. & Atkins, H. B. (Eds.), The web of knowledge: a Festschrift in honor of Eugene Garfield. Medford, NJ: Information Today Inc. & The American Society for Information Science, 65-71.
  39. Wouters, P. (1999). Beyond the holy grail: From citation theory to indicator theories. Scientometrics, 44(3), 561-580. https://doi.org/10.1007/BF02458496
  40. Wouters, P. (1998). The signs of science. Scientometrics, 41(1-2), 225-241. https://doi.org/10.1007/BF02457980
