Here I am opening a thread to discuss measures that attempt to gauge the scientific impact of individual papers, or of a scientist's life work, by automated formal metrics.
The ongoing discussion of how to measure the scientific impact of individual papers, and of an individual scientist's professional life work, is timely.
Automated measurement of the scientific importance (science impact) of a person's life work
(or of any individual paper, which is less ambiguous) requires subtle and careful analysis.
The Nobel Prize committee wisely does this in a multi-step procedure (nominations by the community, referees, a committee weighing the votes). To be as good as that, an automated measure would have to account for as many parameters as enter such multi-person weighing, bolstered by expert referees who actually read the content!
Thus human assessment of impact is far more useful, more foolproof, and less susceptible to cheating than a set of parameters computed by counting over a set of papers, references, etc. The closest approximation would be a highly complex multi-parameter measure, tested and adapted over years.
The present paper by J. E. Hirsch, "An index to quantify an individual's scientific research output", in itself a nice piece of work applying a physicist's tools and thinking, proposes one single parameter.
Counter-examples are thus extremely easy to give (e.g. the best Nobel Prize candidate at present is Higgs: he is by now possibly about 65, has written just this one single paper, and would have the Hirsch index of an idiot).
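For concreteness, Hirsch's single parameter is defined as follows: a scientist has index h if h of their papers have each been cited at least h times. A minimal sketch of the computation, with made-up citation counts for illustration:

```python
def h_index(citations):
    """Compute Hirsch's h-index from a list of per-paper citation counts.

    A scientist has index h if h of their papers have >= h citations each.
    """
    counts = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the first h ranked papers all have >= h citations
        else:
            break
    return h

# Hypothetical citation counts for five papers:
print(h_index([10, 8, 5, 4, 3]))  # prints 4

# The single-paper case behind the Higgs counterexample: no matter how
# heavily that one paper is cited, the index can never exceed 1.
print(h_index([5000]))  # prints 1
```

The second call illustrates the objection above: a career consisting of one enormously influential paper is capped at h = 1 by construction.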
Some related papers questioning this specific proposal can be found via Citebase: citing papers.
At present, papers proposing a single index to measure the scientific impact of persons or individual papers appear daily. As said, I think their impact on the science of bibliometrics is and will remain minimal.
But let me know your comments and thoughts. Also, please add further suitable references here.
To most scientists, determining the scientific impact of an individual is not a matter of vital importance. There are, however, exceptions:
+ Merit-based promotions, when decided by persons unfamiliar with the individual;
+ Identifying suitable partners for cross-disciplinary collaborations.
Of course, trying to find a "magic metric" capable of reducing a human's activity to a single number is a futile exercise. Even the most clever schemes, including the h-index, have their flaws, as pointed out by Eberhard Hilf. Still, I am convinced that a set of metrics, including the h-index, is a valuable tool in the above two cases. In retrospect, having access to "digested" bibliometric information about a potential research partner and paying attention to it could have prevented forming a collaboration that ultimately did not work out.
Even though every bibliometric index must fail in some instances, I still consider a set of such indices a valuable tool for judging the scientific impact of an individual, when used in addition to (not as a replacement for) other criteria.
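The "set of indices" idea can be sketched concretely. The following is a minimal illustration, not a serious assessment tool: it computes a few commonly used indices (paper count, total and maximum citations, the h-index, and Google Scholar's i10-index) from a list of per-paper citation counts; a real evaluation would additionally need field normalization, co-author counts, career length, and, above all, human judgment.

```python
def metric_set(citations):
    """Compute a small set of common bibliometric indices from a list of
    per-paper citation counts. Illustrative only: no field normalization,
    no co-author adjustment, no time weighting.
    """
    counts = sorted(citations, reverse=True)  # most-cited paper first
    # h-index: count of ranks r (1-based, descending order) with >= r citations
    h = sum(1 for rank, cites in enumerate(counts, start=1) if cites >= rank)
    return {
        "papers": len(counts),
        "total_citations": sum(counts),
        "max_citations": counts[0] if counts else 0,
        "h_index": h,
        "i10_index": sum(1 for c in counts if c >= 10),  # papers with >= 10 cites
    }

# Hypothetical record of five papers:
print(metric_set([120, 8, 5, 4, 3]))
```

Comparing several numbers side by side at least exposes the one-big-paper profile (high max_citations, low h_index) that any single index would flatten away.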