Meta-analysis and publication bias: How well does the FAT-PET-PEESE procedure work? A replication study of Alinaghi & Reed (Research Synthesis Methods, 2018).
by Sanghyun Hong
A meta-analysis is a tool for aggregating estimates of a similar “effect” across many studies. Publication bias is the phenomenon whereby the published literature is selectively sampled in favor of studies with statistically significant results and/or estimates that satisfy preconceived expectations. A popular procedure for conducting meta-analyses in the presence of publication bias is the FAT-PET-PEESE (FPP) procedure. In a recent paper published in Research Synthesis Methods, Alinaghi and Reed (2018; henceforth AR), using Monte Carlo simulations, report that the FPP procedure does not work well in “realistic” data environments where true effects differ both across and within studies. AR’s findings are important because the FPP approach is dominant in the economics meta-analysis literature. I replicate their results and discover two mistakes, which I subsequently correct. The first mistake is in a descriptive statistics table, which misrepresents the simulated datasets. The second is in the fixed-effects estimation, which produces erroneous estimated effects and Type I error rates. I further extend their analysis by making their simulation environment even more realistic. Despite producing somewhat different results, my replications generally confirm AR’s conclusion that the FPP procedure is unreliable in realistic data environments.
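For readers unfamiliar with the procedure, the FPP logic can be sketched as follows. The FAT (funnel asymmetry test) regresses estimated effects on their standard errors, with precision weights, and tests whether the slope on the standard error is zero (no publication bias). The PET (precision effect test) tests whether the intercept of that same regression is zero (no genuine effect). If the PET rejects, the PEESE step replaces the standard error with its square and takes that intercept as the corrected effect estimate. The sketch below is my own illustration, not AR’s code; the function name, the 1.96 cutoff, and the synthetic-data setup are assumptions made for exposition.

```python
import numpy as np

def fpp(effects, ses):
    """Minimal FAT-PET-PEESE sketch on a vector of effect estimates
    and their standard errors. Returns test statistics and the
    FPP-corrected effect estimate."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2  # precision weights (inverse variance)

    def wls(X, y, w):
        # Weighted least squares via the normal equations (X'WX)b = X'Wy.
        XtW = X.T * w
        b = np.linalg.solve(XtW @ X, XtW @ y)
        resid = y - X @ b
        s2 = (w * resid**2).sum() / (len(y) - X.shape[1])
        cov = s2 * np.linalg.inv(XtW @ X)
        return b, np.sqrt(np.diag(cov))

    # FAT-PET regression: effect_i = b0 + b1 * SE_i + error
    X_pet = np.column_stack([np.ones_like(ses), ses])
    b_pet, se_pet = wls(X_pet, effects, w)
    t_fat = b_pet[1] / se_pet[1]  # FAT: H0 of no publication bias
    t_pet = b_pet[0] / se_pet[0]  # PET: H0 of no genuine effect

    # PEESE regression: effect_i = b0 + b1 * SE_i^2 + error
    X_peese = np.column_stack([np.ones_like(ses), ses**2])
    b_peese, _ = wls(X_peese, effects, w)

    # Conditional estimator: use the PEESE intercept when the PET
    # rejects at roughly the 5% level, otherwise the PET intercept.
    corrected = b_peese[0] if abs(t_pet) > 1.96 else b_pet[0]
    return {"t_FAT": t_fat, "t_PET": t_pet,
            "beta_PET": b_pet[0], "beta_PEESE": b_peese[0],
            "corrected": corrected}
```

On simulated data with a single true effect and no selection, the corrected estimate recovers the true effect; AR’s point is that this reliability breaks down once true effects are heterogeneous across and within studies.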