Following hard on the heels of the previous post, which looked at t-statistics and the prevalence of 1.96s, comes another topical issue that empirical economists need to face: the question of replication.
In recent work with a PhD student of ours we have had to face this very problem. No matter how hard we try to replicate a previous study's results, with a view to extending or improving on that work, we cannot obtain the same results.
This article in the New Economist provides an excellent overview of the issue and the problems the profession faces.
For the full post see replication.
--------------------
Does empirical economics follow the scientific method?
Some activities of economists are valued more than others in the academic pecking order. Research trumps teaching. Journal articles often have more impact than books (unless you are as well known as Mankiw or Krugman and can corner the 'principles' textbook market). As John Quiggin remarks:
...books are a high-effort, low-payoff exercise for economists, unless you have something you really need to say at book length.
But surely the lowest of the low must be attempting to replicate others' work? If you get the same result, journals certainly won't want to publish it - and even if you don't, there's no guarantee they will (some journals won't touch replications at all). Yet we all know that published results are often not as robust as authors like to portray them. Results can vary widely depending on the period chosen, the dummies used, the regression method, the treatment of outliers, and so on. And documentation is often surprisingly sloppy.
If you were to venture down the replication path, though, what are your chances? Not that high, according to a new paper by B. D. McCullough, Kerry Anne McGeary and Teresa D. Harrison of Drexel University: Do Economics Journal Archives Promote Replicable Research? (Hat tip: Craig Newmark.)
All the long-standing archives at economics journals do not facilitate the reproduction of published results. The data-only archives at Journal of Business and Economic Statistics and Economic Journal fail in part because most authors do not contribute data. Results published in the FRB St. Louis Review can rarely be reproduced using the data/code in the journal archive. Recently-created archives at top journals should avoid the mistakes of their predecessors. We categorize reasons for archives’ failures and identify successful policies.
The authors attempted to replicate the results from 138 empirical articles in the Federal Reserve Bank of St. Louis Review. Of the 138 articles, 29 employed software the authors did not have or used proprietary data, and were excluded. Of the remaining 109, they were able to replicate only 9: a lousy 8 per cent. In 2001 the National Research Council wrote that:
The ability to replicate a study is typically the gold standard by which the reliability of scientific claims are judged.
Judging by these results, economics would not meet even a bronze standard, let alone gold. Many economists believe that most published articles are replicable, and hence that the issue is not a matter for concern. They are wrong.
------------------------
There is more if you follow the link. The conclusion is that empirical economists need to embrace the idea of archiving and sharing data and code. This may take some time.
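The arithmetic behind that "lousy 8 per cent" is worth making explicit. Here is a minimal sketch in Python, using only the counts quoted above; the variable names are my own, purely for illustration:

    # Replication rate from the McCullough, McGeary and Harrison counts quoted above.
    total_articles = 138   # empirical articles sampled from the FRB St. Louis Review
    excluded = 29          # unavailable software or proprietary data
    replicated = 9         # results the authors could actually reproduce

    attempted = total_articles - excluded   # 109 replication attempts
    rate = replicated / attempted           # roughly 0.083

    print(f"{replicated} of {attempted} replicated: {rate:.1%}")
    # Output: 9 of 109 replicated: 8.3%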