Research Wisdom: Janet Katz
Research Wisdom? How well do these words go together? Research is a constant state of not knowing but seeking to know, while wisdom is a profound knowing gained by years of living.
Speaking of not knowing and asking questions, here is my question:
Should journal impact factor be used to evaluate faculty scholarship? I would answer no to the question and here is why:
As some of you know, one of my personal professional pet peeves is impact factor. (Did you know the term comes from the 14th-century word “peevish,” meaning irritated, and that the “pet” suggests an annoyance you hold close, like a favorite?) Let me provide some background as a foundation for the criticism.
- 1960s: Eugene Garfield created the journal impact factor (JIF) so that libraries could judge which journals to buy
- 1992: Per Seglen found that only about 15% of a journal's articles account for its JIF rating
- The other 85% have below-average citation counts; in other words, that 15% attract so many citations that they give the journal a high JIF
- What was meant to guide librarians' purchasing decisions has come to determine grant funding and promotions
What is impact factor? For a given year:

IF = (# of citations that year to articles the journal published in the previous two years) ÷ (total # of “citable” articles the journal published in those two years)
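The two-year calculation can be sketched in a few lines of Python; the journal and its numbers here are invented purely for illustration:

```python
def impact_factor(citations_to_prior_two_years, citable_articles_prior_two_years):
    """Journal impact factor for one year: citations received that year
    to articles from the previous two years, divided by the number of
    'citable' articles the journal published in those two years."""
    return citations_to_prior_two_years / citable_articles_prior_two_years

# A hypothetical journal: 200 citations this year to the 100 citable
# articles it published over the previous two years.
print(impact_factor(200, 100))  # 2.0
```

Note that this is a journal-level average, which is exactly why a handful of highly cited articles can carry the score for everything else the journal prints.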
There are different ways to look at this number, but what makes me peevish is the idea that what you have to say means more if it is published in one journal rather than another. What about innovation? What about novel ideas? Science is built on the idea that inquiry can and should take many different twists and turns. What if you are one of the few people studying a topic, so that no one cites you simply because no one else is working on the same thing?
There are many who criticize using IF to judge anything; it is essentially a marketing tool for journals. Even the editor of Nature, the journal with one of the highest JIFs, dismisses its use. Many have said that impact factor is a time waster that impedes the progress of science by adding to the congestion of papers trying to get published in the relatively few journals with high IFs.
- Editors can manipulate IF: review articles tend to attract the most citations, so editors may publish more of them
- Editors may return a manuscript asking the author to add more references to articles from their own journal
- It is not discipline-specific: nursing is a good example, where an IF of 1 is very good (Nature's is around 54)
- Only journals indexed in the Thomson Reuters Web of Science are counted, cutting out non-English-language research
- What counts as a citation differs: letters to the editor, self-citations, dissertations, and critiques of an article all count the same as glowing endorsements
The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology, has taken real action to get rid of IF. Check out the website: http://www.ascb.org/dora/
If we must judge by something other than reading the work itself for quality, here are a few possibilities, though neither is wholeheartedly supported.
The h-index summarizes the importance and significance of all of a researcher's work in one number: a researcher has index h if h of their papers have each been cited at least h times. But a new researcher's index may not look as good as that of someone who has been publishing for a long time.
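The h-index (Hirsch, 2005, cited below) has a precise definition: a researcher has index h if h of their papers each have at least h citations. That makes it easy to compute directly; here is a short Python sketch using an invented citation record:

```python
def h_index(citation_counts):
    """Largest h such that the researcher has h papers each cited at
    least h times (Hirsch's index). Input: citations per paper."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th best paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Invented record: six papers cited 25, 8, 5, 4, 3, and 0 times.
print(h_index([25, 8, 5, 4, 3, 0]))  # 4
```

The sketch also shows the fairness problem mentioned above: citations only accumulate with time, so an early-career researcher's h is capped no matter how strong the individual papers are.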
The RCR, or Relative Citation Ratio, was developed just last year by George Santangelo, Director of the NIH Office of Portfolio Analysis.
Its development was based on analyses showing that:
- only 11% of articles with high RCRs were published in journals with a high IF
- about 90% of all really significant scientific breakthroughs were published in journals without a high IF.
RCR is supposed to measure the quality of work against comparable work within a field. It is calculated by dividing the number of citations a paper receives by the average number of citations an article in that field usually receives, plus some other adjustments; it is a complex formula. The number is also benchmarked against all NIH-funded papers. Ludo Waltman, a bibliometrics researcher at Leiden University in the Netherlands, thinks the metric is too complicated for most researchers to use. In addition, nurses, whose funding may come from many sources other than the NIH, may face a roadblock when citations are compared against an NIH benchmark.
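The full RCR involves a co-citation network and regression against NIH benchmarks, but the core idea described above, an article's citation rate relative to its field, can be sketched in a deliberately simplified form. The function name and numbers below are illustrative only, not the NIH's actual computation:

```python
def simplified_rcr(article_citations_per_year, field_citations_per_year):
    """Deliberately simplified stand-in for the Relative Citation Ratio:
    the article's citation rate divided by the expected rate for
    comparable work in its field. The real RCR derives the field rate
    from the article's co-citation network and benchmarks the result
    against NIH-funded papers; this sketch takes the field rate as given."""
    return article_citations_per_year / field_citations_per_year

# An article cited 6 times a year in a field averaging 3 citations a
# year scores 2.0: cited twice as often as comparable work.
print(simplified_rcr(6.0, 3.0))  # 2.0
```

Even in this stripped-down form, the dependence on a field average shows where fields with low citation density, like nursing, could be disadvantaged by the choice of benchmark.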
Essentially, the problem with using impact factor to assess faculty or student performance is that it judges the quality of work by where it is published, not by the content of what is published. The misstep is assuming that a high-IF journal will publish only quality work. Many scientists dispute this; I read that one person called it a lazy way to assess.
Faculty scholarship is partially judged by article output, but it is not the only measure. Teaching, mentoring, products such as data sets and instruments, and project outcomes are alternative measures. There are many variables to consider when assessing research and journal impact factor should not be one of them.
Impact factor was never intended to assess or evaluate an individual's scholarship. As a college, let's be creative about judging rigor and consider all the variables that go into scholarship, including innovation and work outside the norm. There are many examples of well-known and important scientists whose original work was thoroughly rejected by the established arbiters of quality scholarship. Gregor Mendel's groundbreaking work in genetics in the 1800s is one such example: he could not get published. Next time you get a rejection, remember him and think yourself an unrecognized genius instead of a loser. Remember, wisdom comes with recognizing greatness in others and yourself.
- Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences, 102(46). doi: 10.1073/pnas.0507655102
- San Francisco Declaration on Research Assessment (DORA). http://www.ascb.org/dora/
- Frank, M. (2003). Impact factors: arbiter of excellence? Journal of the Medical Library Association, 91(1), 4–6.
- Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7, 291.
- Saha, S., Saint, S., & Christakis, D. A. (2003). Impact factor: a valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42.