Results 1 - 10 of 10
Turning the tables in citation analysis one more time: Principles for comparing sets of documents
- J. Am. Soc. Inf. Sci. Technol
"... We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are—as a rule— highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its ..."
Abstract
-
Cited by 26 (16 self)
- Add to MetaCart
(Show Context)
We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
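The percentile-rank approach lends itself to a short computational sketch. The class thresholds, weights, and toy counts below are illustrative assumptions (modeled on the common six-class scheme behind an R(6)-style indicator), not the authors' exact implementation:

```python
import bisect

def percentile_rank(citations, reference_counts):
    """Percentile (0-100) of a paper's citation count within a reference set."""
    ref = sorted(reference_counts)
    below = bisect.bisect_left(ref, citations)  # papers cited less often
    return 100.0 * below / len(ref)

# Assumed six-class scheme with weights 1..6 (bottom 50% -> 1, ..., top 1% -> 6);
# the thresholds are an illustration, not necessarily the authors' exact classes.
CLASS_BOUNDS = [50, 75, 90, 95, 99]

def r6_score(paper_counts, reference_counts):
    """Sum of class weights over a set of papers: an R(6)-style indicator."""
    total = 0
    for c in paper_counts:
        p = percentile_rank(c, reference_counts)
        total += bisect.bisect_right(CLASS_BOUNDS, p) + 1  # class weight 1..6
    return total

# Toy example: one PI's papers rated against a field reference set.
reference = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 40]
print(r6_score([2, 8, 40], reference))
```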
Towards a new crown indicator: an empirical analysis
, 2011
"... We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown ..."
Abstract
-
Cited by 10 (3 self)
- Add to MetaCart
We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
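As a rough illustration of the two mechanisms (aggregate-level versus publication-level normalization of field- and year-expected citation rates), consider the sketch below. The function names, toy data, and the use of a single per-paper expected value are assumptions, not the authors' code:

```python
def crown_old(citations, expected):
    """Aggregate-level normalization (a ratio of averages), as in the old crown indicator."""
    return sum(citations) / sum(expected)

def crown_new(citations, expected):
    """Publication-level normalization (an average of ratios), as in the new crown indicator."""
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

# Toy data: per-paper citation counts and field/year expected citation rates.
citations = [10, 2, 0, 25]
expected  = [ 5, 4, 1,  5]
print(crown_old(citations, expected))   # 37 / 15 ≈ 2.47
print(crown_new(citations, expected))   # (2 + 0.5 + 0 + 5) / 4 = 1.875
```

At the level of a large aggregate the two numbers tend to be close; for small sets with skewed expected values, as here, they can diverge noticeably.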
Integrated impact indicators compared with impact factors: An alternative research design with policy implications
- Journal of the American Society for Information Science and Technology
, 2011
"... In bibliometrics, the association of “impact ” with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology diff ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
In bibliometrics, the association of “impact” with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories (“Information Science & Library Science” and “Multidisciplinary Sciences”). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
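A minimal sketch of the integration idea, assuming I3 is computed here simply as the sum of percentile ranks over a document set (the class-based variants would replace the percentile value with a class weight); all names and numbers are illustrative:

```python
def percentile(c, reference):
    """Percentile rank (0-100) of citation count c within the reference set."""
    below = sum(1 for r in reference if r < c)
    return 100.0 * below / len(reference)

def i3(paper_counts, reference):
    """Integrated impact: citation curves are summed (integrated), not averaged."""
    return sum(percentile(c, reference) for c in paper_counts)

# Journal A has the higher average percentile (an IF-like, averaged measure),
# but journal B, with more papers, has the higher integrated impact I3.
reference = list(range(100))
journal_a = [60, 70, 80]
journal_b = [10, 20, 30, 40, 50, 60, 70]
print(i3(journal_a, reference), i3(journal_b, reference))   # 210.0 vs 280.0
```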
A comment to the paper by Waltman et al.
- Scientometrics
, 2011
"... The Author(s) 2011. This article is published with open access at Springerlink.com Abstract In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change their old ‘‘crown’ ’ indicator i ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change their old “crown” indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. Therefore, we comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was neither significantly correlated with the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new “crown” indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
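The correlation analysis summarized above can be outlined in a few lines; the numbers below are placeholders rather than Van Raan's (2006) data, and Spearman's rank correlation is used here only as a plausible stand-in for the test applied:

```python
from scipy.stats import spearmanr

# Hypothetical per-group values standing in for peer-review quality ratings
# and CPP/FCSm scores of research groups.
peer_quality = [3, 4, 5, 4, 2, 5, 3]
cpp_fcsm = [1.1, 0.9, 1.8, 1.3, 1.0, 1.2, 0.8]

rho, p_value = spearmanr(peer_quality, cpp_fcsm)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```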
Another group of papers has discussed the proper manner of compiling field-normalized citation indicators (Leydesdorff and Opthof ...)
"... Abstract In the recent debate on the use of Averages of Ratios (AoR) and Ratios of Averages (RoA) for the compilation of field-normalized citation rates, little evidence has been provided on the different results obtained by the two methods at various levels of aggregation. This paper provides such ..."
Abstract
- Add to MetaCart
In the recent debate on the use of Averages of Ratios (AoR) and Ratios of Averages (RoA) for the compilation of field-normalized citation rates, little evidence has been provided on the different results obtained by the two methods at various levels of aggregation. This paper provides such an empirical analysis at the level of individual researchers, departments, institutions and countries. Two datasets are used: 147,547 papers published between 2000 and 2008 and assigned to 14,379 Canadian university professors affiliated to 508 departments, and all papers indexed in the Web of Science for the same period (N=8,221,926) assigned to all countries and institutions. Although there is a strong relationship between the two measures at each of these levels, a pairwise comparison of AoR and RoA shows that the differences between all the distributions are statistically significant and, thus, that the two methods are not equivalent and do not give the same results. Moreover, the difference between the two measures is strongly influenced by the number of papers published as well as by their impact scores: the difference between AoR and RoA is greater for departments, institutions and countries with low RoA scores. Finally, our results show that RoA relative impact indicators do not add up to unity (as they should by definition) at the level of the reference dataset, whereas the AoR does have that property.
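The AoR/RoA distinction, and the add-up-to-unity property mentioned at the end of the abstract, can be checked on a toy reference dataset. The field assignments and counts below are invented; in this idealized case (each paper in exactly one field, field means taken from the same dataset) both indicators equal one, whereas the paper reports that on real data (overlapping subject categories, fractional counting) RoA loses this property while AoR retains it:

```python
from collections import defaultdict

# Toy reference dataset: (field, citations) for every paper in the database.
papers = [("A", 10), ("A", 2), ("A", 0), ("B", 25), ("B", 5), ("B", 0)]

# Field means computed from the same reference dataset.
by_field = defaultdict(list)
for field, c in papers:
    by_field[field].append(c)
field_mean = {f: sum(cs) / len(cs) for f, cs in by_field.items()}

# Average of Ratios vs. Ratio of Averages over the entire reference dataset.
aor = sum(c / field_mean[f] for f, c in papers) / len(papers)
roa = sum(c for _, c in papers) / sum(field_mean[f] for f, _ in papers)
print(aor, roa)   # 1.0 1.0 in this idealized single-field-per-paper case
```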
Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages
- Journal of Informetrics (Letter to the Editor)
Universality of Performance Indicators based on Citation and Reference
"... iv ..."
(Show Context)
Problems with the SNIP Indicator
"... As is well known, citation practices differ across academic fields, especially between the science, social science and arts and humanities domains. Thus, when using citations as a measure of research impact, whether for journals, individuals or departments/institutions, it is necessary to normalise ..."
Abstract
- Add to MetaCart
As is well known, citation practices differ across academic fields, especially between the science, social science and arts and humanities domains. Thus, when using citations as a measure of research impact, whether for journals, individuals or departments/institutions, it is necessary to normalise the raw data to the general citation potential (Garfield, 1972; Garfield, 1979) of the research area.
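The normalization idea described here, dividing a journal's raw citations per paper by the citation potential of its field, can be sketched as follows. This is a simplified illustration of the principle rather than Moed's actual SNIP algorithm; the proxy used for citation potential (mean reference-list length of citing papers) and all numbers are assumptions:

```python
def citations_per_paper(citation_counts):
    """Raw impact per paper for a journal."""
    return sum(citation_counts) / len(citation_counts)

def citation_potential(citing_reference_counts):
    """Proxy for field citation potential: mean number of references
    in the papers that cite the journal."""
    return sum(citing_reference_counts) / len(citing_reference_counts)

def normalized_impact(citation_counts, citing_reference_counts):
    """Raw impact divided by the field's citation potential (SNIP-style)."""
    return citations_per_paper(citation_counts) / citation_potential(citing_reference_counts)

# Two journals with identical raw impact diverge once the citation potential
# of their fields (short vs. long reference lists) is taken into account.
maths = normalized_impact([4, 2, 6], [12, 15, 18])     # low citation potential
biology = normalized_impact([4, 2, 6], [45, 50, 55])   # high citation potential
print(maths, biology)
```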