The case for peer auditing
Since the 17th century, when gentlemen scientists were typically seen as trustworthy sources of truth about humankind and the natural order, it has been a generally accepted tenet that science is based on trust.
This refers to trust between scientists: they build on each other’s data and may question a hypothesis or a conclusion, but not the quality of the scientific method applied or the faithfulness of the report, such as a publication.
But it also refers to the trust of the public in the scientists whom societies support via tax-funded academic systems. Consistently, scientists (in particular in biomedicine) score highest among all professions in ‘trustworthiness’ ratings.
Despite often questioning the trustworthiness of their competitors when chatting over a beer or two, they publicly and vehemently argue against any measure proposed to underpin confidence in their work through any form of scrutiny (e.g. auditing).
Instead, they swiftly invoke Orwellian visions of a ‘science police’ and claim that scrutiny would undermine trust and jeopardize the creativity and ingenuity inherent in the scientific process. I find this quite remarkable.
Why should science be exempt from scrutiny and control, when other areas of public and private life sport numerous checks and balances?
Science may indeed be the only domain in society that is funded by the public and gets away with strictly rejecting accountability.
Why do we trust scientists, but not bankers?
I suspect that this is because of the generally accepted belief that the product of science – knowledge – for example in the form of novel treatments, unequivocally benefits society. Another reason, albeit related to the first, is that the motives of scientists are perceived as noble rather than egoistic.
But what if those assumptions are naive? Scientists, like bankers, pursue a personal agenda and have vested interests. It is true that most scientists do not get rich doing research. But they aspire to recognition, tenure, and sometimes even power (e.g. to decide the fate of fellow scientists and their research as referees or committee members).
Science is rife with conflicts of interest – and not only in the trivial and well-known form of potential monetary rewards that may result from membership on industry advisory boards. A prototypical conflict of interest that most scientists experience, but are usually unaware of, results from the ‘currency’ with which they advance their scientific careers: spectacular findings, high-level publications (measured by journal impact factor), and third-party funding, all three of which are intimately related and mutually exchangeable.
When modern science, and trust in it, was born in 17th-century England, independently wealthy gentlemen scientists were immune to such biases. Western scientists and journal editors frown upon the recently revealed practice of some Chinese universities, which apparently reward their scientists with personal gifts: an article in the journal ‘Neuron’ might be remunerated with a Lexus, one in ‘Nature’ with a Lexus plus $20,000, and so on.
I find this practice cheap. In Germany (and, I trust, in other Western countries as well) we do not give away cars or cash, but we reward scientists with tenure and a retirement plan. Depending on the field of research, a Nature or a Neuron paper will make you a professor.
Such a ‘currency’, easily computed by the impact factor, facilitates the marginalization of the actual content, robustness, reproducibility, or general quality of a piece of scientific work (for ideas about alternative reward systems, see this post).
Hence, scientists may be biased because certain types of results help them accumulate more of the currency that advances their careers. But there are other biases and conflicts of interest at play. One of them is even less obvious than the pursuit of an illustrious career – and maybe so intrinsic to the scientific endeavour that we should not even try to dispose of it: a bias towards the correctness of our own hypotheses.
We do experiments or conduct studies to prove our theories. This deeply rooted confidence in the constructs of our scientific ideas is a prime mover, but it may quite often conflict with the harsh realities of biology. Hence, not all conflicts of interest are intrinsically bad or preventable – but we need to be aware of them, and strive to keep them in check.
So if science is riddled with biases and conflicts of interest, should we trust each other? Indeed, should (or can) we trust ourselves? Certainly not naively, or with what Neuroskeptic calls ‘idealistic trust’: trusting others simply because of who they are, for example ‘scientists’. But it would make sense to trust others if we could be reasonably confident that there are effective disincentives for them to be dishonest.
This is what Neuroskeptic calls ‘pragmatic trust’. Such disincentives could be sanctions by peers, funders, editors, or any other stakeholder in the system. But for this to work, we would need to make sure that good scientific practice is upheld. Because of the multitude of potential biases and questionable incentives, this is only possible through transparency – for example, the sharing of original data – and some form of auditing, as recently suggested in an opinion article in Nature by Mark Yarborough:
“We need to routinely conduct confidential surveys in individual laboratories, institutions and professional societies to assess the openness of communication and the extent to which people feel safe identifying problems in a research setting. Some research institutions, to their great credit, are already conducting these kinds of assessments, but most do not. It is crucial that we start to make them the norm.”
But why do most scientists try to avoid such forms of transparency and inspection like the plague? Why is it not already common practice, as it is for example in randomized controlled clinical trials, where auditing and monitoring are the norm and no major journal would even consider publishing an article without them?
Common arguments against such measures include that they would lead to a pervasive scientific surveillance culture, or that our peers might steal our best ideas. I posit that these apprehensions are misguided, and that quite the opposite is true. We should indeed inspect laboratories, check the congruence of laboratory notebooks and research practice with the corresponding publications, and verify the implementation of methods to prevent bias (such as randomization or blinding) wherever their use was claimed in a publication.
This should be done by peers, in a reciprocal and systematic manner, either at random or periodically. Such a practice needs to be developed, guided by universities, large-scale research collaborations, or professional societies. It would create a culture with the added benefit of fostering much-needed data sharing, exchange of knowledge, and intellectual discourse, and it may help to detect honest errors that distort the interpretation of findings and would go unnoticed without external scrutiny.
Some scientists have argued that this approach is doomed because it would be systematically undermined by dishonest research groups auditing each other. This is a remarkable argument, as it implies widespread criminal intent among colleagues while at the same time arguing against checks and balances in the system. It is certainly true that auditing or other forms of control cannot prevent misconduct altogether. But that is not the point. Rather, the goal is to increase (some might say restore) the credibility of, and confidence in, the work of well-meaning scientists, and to reduce the waste resulting from the biased results of methodologically questionable experiments or studies.