Citation statistics

There is an interesting new report, “Citation statistics”, jointly produced by the International Mathematical Union (IMU), the International Council for Industrial and Applied Mathematics (ICIAM), and the Institute of Mathematical Statistics (IMS), on the use and abuse of various citation statistics (such as impact factors and h-indices) as proxies for research quality.  (One of the authors, incidentally, is Peter Taylor from the University of Melbourne, not to be confused with Peter Taylor from the Australian Mathematics Trust.)  The press release for the report is available here. The basic message is that these statistics can supplement expert judgement of the quality of one’s research, but cannot substitute for that judgement; despite being a more “objective” metric, they are subject to various artificial distortions.  (For instance, a typical paper in the life sciences is cited six times more frequently than one in maths or computer science, due to a variety of factors, including the different academic cultures of these disciplines.)
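As a brief aside on what these statistics actually measure: the h-index of an author is defined as the largest number h such that the author has at least h papers with at least h citations each. A minimal sketch of the computation (the function name and example citation counts here are purely illustrative):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have at least h citations each (Hirsch's h-index)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        # The rank-th most-cited paper must itself have >= rank citations.
        if c >= rank:
            h = rank
        else:
            break
    return h

# For example, an author with papers cited [10, 8, 5, 4, 3] times
# has four papers with at least four citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

The same definition makes clear why such indices are hard to compare across fields: since papers in the life sciences accumulate citations several times faster than those in mathematics, the raw value of h carries a field-dependent scale.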

Of course, expert evaluation by someone knowledgeable in the subject matter is a scarce resource, and it is still very tempting to rely on these statistics in the absence of such judgment.  I once was involved in applying for a large Australian grant that was open to all sciences.  One of the reviewers commented that the proposers (who were all mathematicians) had significantly fewer publications than those from competing proposals, particularly those in the life sciences (though my own publication count of 150 or so papers was deemed “acceptable”).   While statistics such as impact factors are intended to remove some of the distortions coming from using raw publication count as a measure of research quality and output, they are still far from perfect, especially when it comes to comparisons across disciplines.  (For the record, our proposal was not funded, though this was probably a result of many other factors than the above comment.)

[Via The Funneled Web and the Australian Mathematical Society.]

7 Responses

  1. A long time ago, while advising me on where, what, and how much to publish, my then-mentor told me this: “some places weigh them, some count them, and some actually read them; choose according to your kind of place”.

    Two of my pet peeves in the matter of citations: people who use impact factors without realizing that they can be, and sometimes are, manipulated; and people who compare impact factors across different subjects.

  2. The UK Government is planning to adopt a research assessment system more heavily reliant on such indicators from now on. I commented on Alexandre Borovik’s blog post about this, and I quote from my comment here:

    The UK Computing Research Committee (UKCRC), which comprises leading computer scientists from academia and industry [in Britain], has also strongly criticized the proposed use of citation indexes for the assessment of research quality and achievements in UK universities. See their report here:

    (Report: 2008-01.pdf)

    It is worth quoting two short sections from this report:

    “It would be incompetent and unprofessional to introduce a citation-based Research Excellence Framework until it has been established that there is an adequately complete, consistent and auditable set of data, available from multiple sources free of any commercial bias, that can be relied on to be kept up to date, that includes citations in journals, conferences, PhD theses, industrial reports and institutional repositories — and that assessments based on citation counts from these sources leads to cost-effective assessment of research quality that does not lead to undesirable changes in the way research is carried out or published or on standards or variety of teaching. We do not see any convincing evidence that these criteria have been met.”


    “Note that UKCRC members include internationally renowned experts in the automated collection, processing, analysis and storage of information – the theories, tools and methods that underlie the proposed bibliometric indicators. Our authoritative view is that the bibliometric indicators are not currently fit for the proposed purpose.”

  3. One thing I liked about this report is its sheer clarity. I wish more people would write like that!

  4. On counts: the South Korean government currently appears to have two major problems: US beef, and a senior presidential aide having the same publication counted twice.

    From the Korea Times:

    “Presidential Chief of Staff Chung Chung-kil is embroiled in allegations that he was involved in the multiple publication of an article he wrote in two different academic journals.

    Chung, a former Seoul National University (SNU) professor, turned in a paper, titled “Lag Effect on the Transformation of Policy and Institutions” to a SNU academic journal in 2003 with his name on it as the first author and his former student the second author.

    According to the Segye Ilbo newspaper, a similar article was submitted to another academic journal published by the Korean Association for Public Administration a year ago, where Jung Joon-keum, Chung’s student who was second author of the 2003 …”

  5. An effect of these statistics: Last year, Yonsei University (ranked no. 2 in South Korea) computed the citation counts, h-indices, etc. of its academic staff. These were then compared with those of Nobel Prize winners. The conclusion was that the university has Nobel-Prize types who just need a few bucks to get them over the hump. Accordingly, they will each receive a few million dollars and be set loose to do their thing. Somewhere, it has been forgotten that this was the kind of pressure that led Hwang to claim that he had cloned just about anything that moved.

  6. There is a news item today on the IMU/ICIAM/IMS citation statistics report in the Australian, and also on the related issue of journal rankings.
