chasing significance

An interesting study from the journal Addiction on studies:

ABSTRACT
Background and Aims
The low reproducibility of findings within the scientific literature is a growing concern. This may be due to many findings being false positives which, in turn, can misdirect research effort and waste money.

Methods
We review factors that may contribute to poor study reproducibility and an excess of ‘significant’ findings within the published literature. Specifically, we consider the influence of current incentive structures and the impact of these on research practices.

Results
The prevalence of false positives within the literature may be attributable to a number of questionable research practices, ranging from the relatively innocent and minor (e.g. unplanned post-hoc tests) to the calculated and serious (e.g. fabrication of data). These practices may be driven by current incentive structures (e.g. pressure to publish), alongside the preferential emphasis placed by journals on novelty over veracity. There are a number of potential solutions to poor reproducibility, such as new publishing formats that emphasize the research question and study design, rather than the results obtained. This has the potential to minimize significance chasing and non-publication of null findings.

Conclusions
Significance chasing, questionable research practices and poor study reproducibility are the unfortunate consequence of a ‘publish or perish’ culture and a preference among journals for novel findings. It is likely that top–down change implemented by those with the ability to modify current incentive structure (e.g. funders and journals) will be required to address problems of poor reproducibility.

They offer an interesting solution:

Journals such as Cortex and Drug and Alcohol Dependence have introduced new manuscript submission formats that place the emphasis on the research question and study design, rather than the results obtained. Manuscripts (essentially protocols, containing the introduction, hypotheses, methods, analysis plan and sample size justification) are reviewed before data collection takes place, and judged on whether the results will be informative regardless of how they ultimately turn out. If acceptance-in-principle is offered, then the authors can conduct their study safe in the knowledge that, as long as they adhere to their plans, their results will eventually be published.

Deciding on whether or not to publish the results of a study before the results are known offers several important advantages. First, it ensures that publication depends on the importance of the research question being addressed, and the appropriateness of the methods chosen, rather than novelty and P-values. Secondly, it minimizes research practices that inflate the likelihood of false positives (e.g. ‘significance chasing’), given the requirement to adhere to pre-declared methods. Thirdly, the requirement for a priori power calculation to justify the sample size minimizes problems of low statistical power.
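To make the last point concrete, here is a minimal sketch of what an a priori power calculation might look like, using statsmodels' TTestIndPower for a two-group comparison. The effect size, significance level, and power target below are illustrative assumptions, not values from the paper:

```python
# A priori power calculation: how many participants per group would be needed
# to detect an assumed effect with an independent-samples t-test?
# The effect size, alpha, and power below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assume a medium standardized effect size (Cohen's d = 0.5),
# a 5% two-sided significance level, and a target power of 80%.
n_per_group = analysis.solve_power(
    effect_size=0.5,
    alpha=0.05,
    power=0.8,
    ratio=1.0,
    alternative="two-sided",
)

print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64 per group
```

Declaring numbers like these in the protocol, before any data are collected, is what allows reviewers to judge whether the study will be informative regardless of how the results turn out.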