January 9, 2013 / Damien Irving

Journal envy (and rankings)

I’ve got a confession to make. I get journal envy. Bad.

It strikes when I’m reading an article from a prestigious journal, or when I’m browsing somebody’s publication list. I think to myself, “my work (published in a lesser journal) is as good as that,” or “how did that article get into such a good journal?” I try to rationalise these publishing injustices with self-talk like, “quality work speaks for itself no matter the journal,” or “the selection of reviewers is so random.” Not surprisingly, this self-talk doesn’t really help. Getting published in top-class journals is the name of the game, so I imagine we all get journal envy to some degree.

For the purposes of this post, the moral of the story is not that I have jealousy issues, but rather that we’ve all (consciously or unconsciously) developed an internal ranking of the journals in our field. Most of the time our own subjective rankings are probably pretty accurate, but when an interesting article pops up in an unfamiliar journal, or when it comes time to decide which journal to publish in, it’s often useful to refer to an objective metric.

The best-known metric for ranking journals is the impact factor, which is used by Thomson Reuters in their annual Journal Citation Reports (if you’re a regular Web of Science user, navigate from the main search page to the ‘additional resources’ tab to browse the latest report). This statistic is essentially the average number of citations received in a given year by articles the journal published over the preceding two years. While it’s certainly a good starting point, it’s generally considered a little too simplistic. For instance, Nature and Science often don’t come out on top on impact factor, despite being pretty much the undisputed best journals.

Inspired by the Google PageRank algorithm, a number of more sophisticated metrics are now available that account for both the number of citations a journal receives and the prestige of the journals those citations come from (among other factors). The two best-known metrics in this category are the Eigenfactor and the SCImago Journal Rank. The journal rankings for both are freely available at their respective websites.
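The core idea behind these PageRank-style metrics can be sketched with a few lines of Python. This is only an illustration of the general recursive principle (a citation from a prestigious journal counts for more), not the actual Eigenfactor or SCImago algorithm, both of which add refinements like excluding self-citations; the journals and citation counts below are invented:

```python
import numpy as np

# Toy citation matrix for three hypothetical journals A, B, C:
# entry [i, j] = citations journal j receives from journal i.
citations = np.array([
    [0, 3, 1],
    [2, 0, 1],
    [4, 2, 0],
], dtype=float)

# Row-normalise so each journal spreads its outgoing citations
# as a probability distribution (as in PageRank).
transition = citations / citations.sum(axis=1, keepdims=True)

# Power iteration with damping: a journal's score depends on the
# scores of the journals citing it, not just the raw citation count.
damping = 0.85
n = len(citations)
scores = np.full(n, 1.0 / n)
for _ in range(100):
    scores = (1 - damping) / n + damping * transition.T @ scores
scores /= scores.sum()  # normalise so the scores sum to 1
```

Journal C ends up ranked highly here not only because it is cited often, but because its citations come from journals that are themselves well cited.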

So that’s journal rankings covered, but how would one go about ranking an individual scientist (or research institution)? The most popular index is the h-index, which attempts to measure both the productivity of a scientist and the impact of their published work. A scientist with an index of h has published h papers, each of which has been cited in other papers at least h times. Numerous variations on the h-index have been proposed since it was introduced in 2005, including the m-, g- and e-indices (for a good summary of these alternatives, see here). Although very useful, these alternatives aren’t as widely used. You can find instructions on how to calculate the h-index using Web of Science or SCOPUS here.
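The h-index definition above translates directly into code. Here is a minimal sketch in Python (the citation counts are made up for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h of the papers
    have each been cited at least h times."""
    # Sort citation counts in descending order, then find the last
    # 1-based rank at which the count is still >= the rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have at
# least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that one blockbuster paper doesn’t move the index much: `h_index([100, 2, 1])` is only 2, which is exactly the productivity-plus-impact trade-off the metric is designed to capture.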

