Sunday, October 01, 2006

1.96 is (still) the magic number

I stumbled across a neat blog today, "Trade Diversion" by Jonathan Dingel, which provides "Commentary on development, globalization, and trade". The posts have good academic content.

A recent post relates to an issue that should be familiar to all applied econometricians - the magic 1.96 t-stat for statistical significance and the bias in economics against publishing "non-results". We have (possibly) fallen foul of this bias ourselves in the past.
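
Before going further, it is worth recalling where 1.96 comes from: it is the two-sided 5% critical value of the standard normal distribution, the hurdle a t-statistic must clear (in large samples) for p < .05. A minimal sketch in Python, using scipy purely for illustration:

```python
from scipy.stats import norm

# Two-sided 5% test: 2.5% of probability mass in each tail
# of the standard normal distribution.
critical_value = norm.ppf(1 - 0.05 / 2)
print(f"critical value: {critical_value:.4f}")  # ~1.9600

# A t-stat of 1.90 falls just short of the goal line:
t_stat = 1.90
p_value = 2 * norm.sf(abs(t_stat))  # two-sided p-value
print(f"p-value for t = {t_stat}: {p_value:.4f}")  # ~0.0574, not significant at 5%
```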

This is an age-old problem, but it is worth highlighting again. Note that the post below relates to political science, which of course is different to economics - is one group more honest than the other? Do econometric techniques differ greatly?

Below is the post in full; a small simulation of the selection effect it describes follows it. We have blogrolled "Trade Diversion".

1.96 is the magic number

---------------------------------------------------------------

Deirdre McCloskey has long emphasized the warping effects of the "statistical significance" hurdle to publication in economics. Alan Gerber and Neil Malhotra survey the top two journals in political science to produce this finding:

[Figure from the paper not reproduced here: the distribution of reported test statistics in published articles around the .05 significance threshold.]

There are plenty of publications with findings that are barely statistically significant and a noticeable absence of papers that fall just short of the goal line. Figure 2a is more damning.

What's the implication? Andrew Gelman thinks it shows why hypothesis testing is problematic. Kevin Drum says it demonstrates massaging of data. The authors say:

The goal of this paper is to raise awareness of publication bias in political science. We have found that many more results are published just over the p=.05 threshold than below it, implying a certain amount of bias in parameter estimates. Our results suggest that as reviewers, editors, and researchers, political scientists appear to be far too conscious of the .05 significance level, and that this might cause important distortions in how knowledge advances in political science.

Full paper here.
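
As a postscript of our own: the pattern in the authors' figure is easy to reproduce with a toy simulation. The sketch below (Python with numpy) assumes a stylised world in which every significant result is published while insignificant ones are published only 10% of the time - the 10% rate and the distribution of z-statistics are our assumptions, not anything taken from the paper. Counting published results in narrow bands on either side of 1.96, caliper-test style, shows the jump at the threshold, and the mean published estimate overshoots the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylised world (our assumptions, not the paper's): each study's
# z-statistic is drawn around a modest true effect of 1.0.
n_studies = 100_000
z = rng.normal(loc=1.0, scale=1.0, size=n_studies)

# Selection rule: significant results are always published; insignificant
# results are published with (assumed) probability 0.10.
significant = np.abs(z) > 1.96
published = significant | (rng.random(n_studies) < 0.10)

# Caliper-style check: narrow bands just below and just above 1.96
# would hold roughly equal counts absent selection.
just_below = np.sum(published & (z > 1.76) & (z <= 1.96))
just_above = np.sum(published & (z > 1.96) & (z <= 2.16))
print(f"published z in (1.76, 1.96]: {just_below}")
print(f"published z in (1.96, 2.16]: {just_above}")  # several times larger

# The same selection inflates published effect sizes:
print(f"mean z, all studies:       {z.mean():.2f}")             # ~1.0 (the truth)
print(f"mean z, published studies: {z[published].mean():.2f}")  # noticeably larger
```

Nothing in this mechanism is specific to political science: any literature that treats 1.96 as a publication hurdle should expect the same discontinuity and the same upward bias in published estimates.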