The scientific method requires that scientists communicate their findings clearly, accurately, and honestly. Sir Isaac Newton said, “If I have seen further it is by standing on ye shoulders of Giants.” Each insight builds upon the framework of countless insights before it, and upon the observations that vouched for their truth. Insight leads to hypothesis, hypothesis leads to predictions, predictions lead to experiments, observations, and measurements; then we compare the predictions to the empirical data and accept, adjust, or discard the hypothesis. Reporting on this cycle of predict, measure, compare, and adjust is how scientists disseminate knowledge.
Since no one is infallible, modern scientific scholarship relies on evaluation by people with competence similar to that of the original researchers: their peers. The peers review the research report for relevance, quality, and accuracy. This process of peer review helps expose flaws and areas that are unclear or need improvement. The work may be accepted, sent back for revision, or rejected outright. Only when a researcher’s peers are satisfied with the quality of the research is the resulting report said to be peer reviewed. Peer review is the gold standard of scientific quality. But the reviewers are human as well, and in many cases are in competition with the researchers whose work they are judging. It is a testament to the intellectual integrity of the vast majority of researchers that the peer-review system works as well as it has to date.
A very interesting paper (itself peer-reviewed) appeared three decades ago, entitled “Peer-review practices of psychological journals: The fate of published articles, submitted again.” I found the results fascinating.
A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.
The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.
So, what do you think happened?
With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.
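The rates quoted in the abstract are easy to check. A minimal sketch, using only the counts reported above (detection, review, and rejection figures from the abstract):

```python
# Counts as reported in the quoted abstract.
editors_and_reviewers = 38   # total editors and reviewers who handled resubmissions
detections = 3               # resubmissions recognized as previously published
reviewed = 9                 # articles that went on to receive a fresh evaluation
rejected = 8                 # of those, how many were rejected
referees = 18                # referees who evaluated the resubmitted articles
against_publication = 16     # referees recommending rejection

print(f"detection rate: {detections / editors_and_reviewers:.0%}")       # 8%
print(f"rejection rate: {rejected / reviewed:.0%}")                      # 89%
print(f"referees against publication: {against_publication / referees:.0%}")  # 89%
```

So only about one reviewer in twelve noticed the article had already appeared in that very journal, and nearly nine in ten of the re-evaluations ended in rejection.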
There is a lot of food for thought here. A quick check turned up only a single citation of this paper. I think I’ll dig a little deeper and see whether other studies of the peer-review process itself have been done in the decades since.