MV Stats News
Bringing multivariate data analysis and data visualization to your breakfast table
Vol. 1, Number 2, September 23, 2011
Today's topic: The statistical error that just keeps on coming
Phil Chalmers, Staff Reporter
Resource
• An online article entitled 'The statistical error that just keeps on coming' by Ben Goldacre, inspired by Nieuwenhuis et al. (2011)
• Points out that articles in prestigious neuroscience journals too often analyze the 'differences of differences' between groups incorrectly
• http://www.guardian.co.uk/commentisfree/2011/sep/09/bad-science-research-error
Example
• Cellular firing frequency is measured in two groups of mice (controls and mutants), and either a drug or a placebo is administered within each group (simulated in the sketch below)
• The mutant group that received the drug shows a 30% average decrease in firing rate (p < .05), while the controls show only a 15% drop (p > .05)
• Conclusion: mutant mice respond differently to the drug than controls do
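A minimal Python sketch of this hypothetical setup (the original slides contain no code, and every number here, the per-condition sample size, means, and SDs, is invented purely for illustration, chosen to echo the 30%/15% story). The later sketches reuse these four arrays.

```python
# Hypothetical simulation of the mouse experiment described above.
# All numbers (n, means, SDs) are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 15  # cells per genotype-by-treatment condition (assumed)

# Percent change in firing rate from baseline (negative = decrease)
control_placebo = rng.normal(loc=0.0,   scale=20.0, size=n)
control_drug    = rng.normal(loc=-15.0, scale=20.0, size=n)
mutant_placebo  = rng.normal(loc=0.0,   scale=20.0, size=n)
mutant_drug     = rng.normal(loc=-30.0, scale=20.0, size=n)
```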
Graphically
• [Figure: mean firing-rate change by group and treatment; the drug effect in mutants is annotated p < .05 and the drug effect in controls p > .05]
Main Points
• But is this 'difference of differences' really statistically significant? (i.e., is 30% - 15% = 15% a real and detectable difference?) A direct test is sketched after this list
• What if both groups' effects were significant, at p < .01 and p < .05, respectively? Would the conclusion change (and should it)?
• Nieuwenhuis et al. (2011) reviewed 513 papers published in five neuroscience journals over two years. There were 157 articles that could have made this analysis mistake, and 79 of them did
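Continuing the simulated data from the first sketch, the 'is 15% a real difference?' question can be tested directly by comparing the two drug effects to each other rather than each to zero. This is a normal-approximation sketch (a Welch-style t reference would be slightly more exact):

```python
import numpy as np
from scipy import stats

# Each group's drug effect (mean difference) and its standard error.
d_mut  = mutant_drug.mean()  - mutant_placebo.mean()
d_con  = control_drug.mean() - control_placebo.mean()
se_mut = np.sqrt(mutant_drug.var(ddof=1) / n + mutant_placebo.var(ddof=1) / n)
se_con = np.sqrt(control_drug.var(ddof=1) / n + control_placebo.var(ddof=1) / n)

# Test the difference of differences itself (cf. Nieuwenhuis et al., 2011).
z = (d_mut - d_con) / np.sqrt(se_mut**2 + se_con**2)
p = 2 * stats.norm.sf(abs(z))  # two-sided
print(f"difference of differences: z = {z:.2f}, p = {p:.3f}")
```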
Breaking It Down
• How do we statistically test whether the drug affects the mutant group differently than the control group?
• Multiple independent t-tests using one group at a time (i.e., select only the control group data and test the drug effect, then the mutant group) and compare these results? See the sketch below
• Unfortunately, this approach only seems clear-cut in the fortunate circumstance where the mutant effect is significant and the control effect is not
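On the simulated data, the per-group approach looks like this; the sketch reproduces the flawed reasoning rather than recommending it:

```python
from scipy import stats

# One independent-samples t-test within each genotype.
t_mut, p_mut = stats.ttest_ind(mutant_drug, mutant_placebo)
t_con, p_con = stats.ttest_ind(control_drug, control_placebo)
print(f"mutant:  t = {t_mut:.2f}, p = {p_mut:.3f}")
print(f"control: t = {t_con:.2f}, p = {p_con:.3f}")
# 'Significant in mutants, not in controls' never tests whether the
# two drug effects differ from each other, which is the error at issue.
```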
Alternative: Interaction Contrasts
• Create all unique 2x2 analysis combinations and run a 2-way ANOVA on each to determine whether, and where, interactions occur
• Our example is the simple case, since the original ANOVA would already include the interaction term; simply inspect a means table (or plot) to see how the interaction takes place (sketch below)
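In the 2x2 case the interaction contrast is simply the interaction term of the factorial ANOVA. A sketch using statsmodels on the simulated data above (the slides name no software, so the package choice is mine):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data frame built from the four simulated arrays above.
df = pd.DataFrame({
    "change":    np.concatenate([control_placebo, control_drug,
                                 mutant_placebo, mutant_drug]),
    "genotype":  ["control"] * (2 * n) + ["mutant"] * (2 * n),
    "treatment": (["placebo"] * n + ["drug"] * n) * 2,
})

# The genotype:treatment row of the ANOVA table is the interaction
# contrast, i.e., the test of the 'difference of differences'.
model = smf.ols("change ~ genotype * treatment", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```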
My Thoughts
• In my experience, implementing and interpreting ICs still aren't emphasized to undergraduates, or even to many graduate students, in psychology
• In our example the mutants reacted to the drug (p < .05) while the controls did not (p > .05), but does failing to report the interaction effect render this finding meaningless? I think not, but a 2x2 IC would make interpretation easier in less obvious cases, wouldn't require 'gluing' pieces of information together, and would be more statistically consistent
• On the other hand, many ICs are required when designs grow larger (e.g., a 4x4 ANOVA design), which may be equally difficult to interpret; a quick count follows below
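The quick count mentioned in the last bullet: the number of unique 2x2 interaction contrasts in an a x b design is the number of ways to pick two levels of each factor (the function name below is mine):

```python
from math import comb

def n_interaction_contrasts(a: int, b: int) -> int:
    # Choose 2 of the a levels of one factor and 2 of the b levels
    # of the other; each choice defines one 2x2 sub-table.
    return comb(a, 2) * comb(b, 2)

print(n_interaction_contrasts(2, 2))  # 1:  our drug example
print(n_interaction_contrasts(4, 4))  # 36: the 4x4 design above
```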
Conclusion
• Is the online article exaggerated? Certainly. Nieuwenhuis et al. (2011) present the problem better; their point boils down to researchers forgoing ICs in favor of simple main effects
• We are better off using interaction contrasts than simple-effects analyses and segregated t-tests
• This applies equally well to MANOVA analyses when IV interactions arise
References
• Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). Erroneous analyses of interactions in neuroscience: A problem of significance. Nature Neuroscience, 14(9), 1105-1107.