Goodhart's Law and Science

Science began small and informal but is now performed on an industrial scale. Industrialisation requires quantification and targets, so during the 20th century, funding bodies began to assess scientists based on metrics such as their citation index – the number of times their work has been cited by others [1]. Today, scientific achievement is synonymous with publication in the most cited journals – an aspiring scientist must ‘publish or perish’.
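For concreteness, the h-index cited in [1] is the largest number h such that a scientist has h papers with at least h citations each. A minimal sketch of the calculation (the function name and example citation counts are illustrative, not part of the original essay):

    def h_index(citations):
        # Largest h such that the author has h papers cited at least h times.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4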

Imagine you are trying to maximise your publication rate and citations without any regard for the utility of your work to other scientists or the public. How do you go about it? What are the scientific equivalents of ‘tiny nails’?

Fraud is the obvious answer, and fraud is indeed on the rise [2]. A close second to straight-up fraud is selective publishing – perform many studies and report only the positive results. Here, statistics provide a host of ways to 'massage the data', particularly in fields such as biology and the social sciences, where one's peers often lack mathematical expertise.
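To see why selective publishing works so well, consider the standard multiple-comparisons argument (a sketch with assumed numbers, not figures from the quoted essay): if there is no real effect and each study has a 5% chance of a false positive, running 20 independent studies and reporting only the 'hits' yields at least one publishable 'positive' result about 64% of the time.

    import random

    # Chance of at least one false positive when only significant results are
    # reported, assuming no real effect and a 5% threshold per study.
    # (Illustrative parameters, not taken from the quoted essay.)
    alpha = 0.05
    studies = 20

    print("analytic:", round(1 - (1 - alpha) ** studies, 2))   # ~0.64

    # Quick Monte Carlo check of the same quantity.
    trials = 100_000
    hits = sum(any(random.random() < alpha for _ in range(studies))
               for _ in range(trials))
    print("simulated:", round(hits / trials, 2))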

The majority of scientists, however, are too scrupulous or cautious for the above. How else can you optimise your publication record to maximise citations? One way is to produce 'minimum publishable units' (MPUs) – the smallest quanta of information acceptable to a journal. You can exaggerate the importance and/or novelty of each MPU by exploiting tenuous links to human disease, neglecting relevant prior research, and good old-fashioned hyperbole. You can also exaggerate the certainty of your conclusions by not performing replicate experiments, or experiments that might disprove your hypothesis. Finally, you can increase your citations by citing yourself and your friends whenever possible.

Now, as people go, scientists are a fairly principled bunch, and most try to avoid these practices as much as they can. However, Goodhart's Law is deeply embedded in the system [3]. PhD students must publish, and quickly, to be competitive. Senior scientists are in a constant battle for funding and job security. Even those with tenure employ scientists on short-term contracts, who need publications, and quickly, for their next grant. All of these factors drive the desire to publish ‘tiny nails’. When the private sector finds biological research so profoundly unreliable [4], something is very wrong.

It has been suggested that the scientific literature is 'self-correcting' – that fraud or lax experimentation gets discovered eventually. But science is becoming ever more expensive, and replication is becoming increasingly difficult to perform. Correction may or may not take place, but in the meantime the public's faith in science has been eroded, and with it the benefits that science brings.

[...]

1 – Hirsch index (h-index), the most commonly used measure of scientific output

http://en.wikipedia.org/wiki/H-index

2 – Fraud on the rise

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738

http://www.pnas.org/content/109/42/17028

http://www.nature.com/news/2011/111005/full/478026a.html

3 – Poor practice in science

http://www.nature.com/nature/journal/v483/n7391/full/483509a.html

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010068

4 – Lack of reproducibility of biomedical results

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

from "Tiny Nails"

Quoted on Sat Jun 8th, 2013