Blog Archives

How many citations are actually a lot of citations?

In a previous blog post, I suggested to my younger colleagues that while they should not care so much about the impact factor of the journals they publish in (as long as these journals are well read in their respective fields of research), they should care quite a lot about these papers being cited, and cited by others rather than self-cited!

A few months ago, I was listening to the introductory talk for a prestigious award from our national organization when one statement hit me: a physicist with 2000 or more citations is part of the 1% most cited physicists worldwide. There might have been a bit more to that statement, but let’s work with it.


Papers 3 for Mac and iOS now available

I will post more once I get to try them, but version 3 of Papers is now available for download for both Mac OS and iOS. These are paid upgrades.

Journal Impact Factor: why you should not care… too much

I have been publishing scientific manuscripts for the past 22 years. My educated comment with regard to the journal impact factor has always been the same (if you do not know what the JIF is, please have a look at this Wikipedia entry). To first order, you should publish in the most important journals for your field. If their JIF is low, who cares, as long as your work is important to your field and well cited. For example, the scientific discovery of 2012 according to Science (very high JIF) was the publication of the experimental finding of the Higgs boson… in Phys Lett B (low JIF relative to Science)!

Do not forget that from a historical perspective, we are awfully bad at predicting what will be the next important discovery down the road. A number of fundamental discoveries and early engineering feats were dismissed at first. Similarly, there are numerous examples of scientists having had tremendous trouble getting game-changing results published, even those who ended up winning Nobel prizes. Kary Mullis’ PCR work is one of many examples of work that was rejected by journals with top JIF, while applications of this very technique were published in Science and Nature, and the citation counts of these second-generation papers ended up higher than those of the original, award-winning work!

Now, you do not have to agree with this lone scientist’s opinion, but you should certainly have a look at the San Francisco Declaration on Research Assessment, or DORA petition, which is supported by the “big boys” (no discrimination intended). The declaration statement is actually a very interesting read: it covers the historical origin of the JIF (which was not meant for evaluating researchers at all) and further calls for dropping journal-based metrics when assessing scientific productivity for funding and promotion. Over 240 organizations and 6000 individuals have already signed the declaration.

In conclusion, do not lose a good night’s sleep over your favorite journals’ impact factors…

Applying the 80/20 principle to scientific productivity?

The secret [to scientific success] is comprised in three words— Work, Finish, Publish.
— Michael Faraday

One of the things I really like to do when waiting for a connecting flight at a major airport is to spend time at a bookstore. Not too long ago, I came across this book about the 80/20 principle.

[Image: The 80/20 Principle book cover]

It runs just about 200 pages, which makes for a quick read, and it references Pareto. Being involved in computer optimization problems, in particular those involving two or more opposing constraints, the notion of a Pareto front is fresh in my mind. Similarly, the notions that 80% of the work can be achieved with only 20% of the features of a piece of software, that 80% of the riches are held by 20% of the population, or that it takes 80% of the effort to accomplish the most demanding 20% of a project are all well-known applications of the discovery made by Pareto.

The book

The book explains the above principle with examples and also discusses how it applies to business, project management and personal life. As you can expect, it takes about 20% of the book to reach at least 80% (if not more!) of the goals it sets forth 😉

Still, overall an interesting and very fast read.

Can it be applied to science?

Well, a lot of what we do in research is program-based (a collection of projects) and project-based. Therefore, it is always worth the effort to ask yourself why you are undertaking a new project, whether it will contribute significantly to your overall research program, and whether the resources needed to accomplish it are available. It may very well be that you will need to spend an enormous amount of effort (let’s say 80%!) on a given project, such that you will have to halt almost everything else. It had better make sense and pay off!

Can it be applied to analyze scientific productivity?

While reading the book, I was wondering if only a small portion of my research program was really contributing to citations and impact on the field. I decided to take a quick look at this using Google Scholar. GS can track citations and the h-index based on all of your papers, and it takes less than 5 minutes to set up (go over to scholar.google.com and choose “My Citations” at the top right).

I will not provide my absolute numbers here. Still, fair enough, my h-index is such that its value corresponds exactly to 20% of my published papers, i.e. 20% of my published papers contribute to my h-index value. For example, if my h-index were 20, this would mean that 20 papers have 20 or more citations each, and those 20 would also be the 20% most cited among 100 published manuscripts.
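
For readers who want to check this on their own profile, here is a minimal sketch (in Python, using made-up citation counts rather than my actual data) of how the h-index is computed from a list of per-paper citation counts:

    # Hypothetical per-paper citation counts (replace with your own numbers).
    citations = [120, 85, 60, 40, 22, 15, 9, 6, 3, 1]

    def h_index(counts):
        # The h-index is the largest rank h such that the h-th most cited
        # paper has at least h citations.
        ranked = sorted(counts, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    print(h_index(citations))  # prints 7 for the list above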

Next, I looked at the citations of each paper individually. In the figure below, you will find the fraction of total citations as a function of the fraction of manuscripts published.

[Figure: fraction of total citations as a function of the fraction of manuscripts published]

It is quite interesting to see that a small fraction of all papers accounts for the majority of the citations. In my case, 13% of the manuscripts contribute 50% of the citations and 42% contribute 80% of the citations. So yes, the Pareto principle is at play, but…
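
For what it is worth, a curve like the one above is easy to reproduce from your own citation list. Here is a minimal sketch (again in Python, with hypothetical citation counts rather than my actual numbers):

    # Hypothetical per-paper citation counts; sort so the most cited come first.
    citations = [120, 85, 60, 40, 22, 15, 9, 6, 3, 1]

    ranked = sorted(citations, reverse=True)
    total = sum(ranked)

    cumulative = 0
    for i, c in enumerate(ranked, start=1):
        cumulative += c
        frac_papers = i / len(ranked)        # fraction of manuscripts published
        frac_citations = cumulative / total  # fraction of total citations
        print(f"{frac_papers:5.0%} of papers -> {frac_citations:5.0%} of citations")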

Limitations

If you were to ask me about each paper included in the 13% that gathers 50% of the citations, I would reply:

  • Some I knew, as we were preparing them, would be important to the field.
  • Some I thought would be important but are not cited that much.
  • Some I thought were curiosities that would be of interest to only a few, but they ended up as my most cited papers.

I think you get the message…

Conclusion

I can prove anything by statistics except the truth.
— George Canning

Yes, you can make statistics say anything. In the context of a creative process, predicting which creative act (here, which paper) will become a hit is much easier after the fact than before. Therefore, the concept might be interesting for tracking your resources (grant dollars, materials, projects to start, …), but it cannot be used, as expected I guess, to help predict your next creative hit!
