If we adopt the conventional 5% level of statistical significance and the conventional 80% power level as well, then the ‘true effect’ needs to be at least 2.8 standard errors from zero for a study to reliably discriminate it from zero. The value 2.8 is the sum of 1.96, the usual critical value for a two-sided 5% significance level, and 0.84, the standard normal value that makes a 20/80% split in its cumulative distribution. Hence, for a study to have adequate power, its standard error needs to be smaller than the absolute value of the underlying effect divided by 2.8. We make use of this relationship to survey adequate power in economics.
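The 2.8 rule above can be checked with a short calculation. The sketch below uses only the standard normal distribution; the `power` function and its illustrative inputs are mine, not from the survey.

```python
from statistics import NormalDist

norm = NormalDist()

def power(true_effect, se, alpha=0.05):
    """Approximate power of a two-sided z-test of H0: effect = 0."""
    z_crit = norm.inv_cdf(1 - alpha / 2)      # 1.96 for alpha = 0.05
    z_effect = abs(true_effect) / se
    # Probability the estimate lands in the rejection region.
    return (1 - norm.cdf(z_crit - z_effect)) + norm.cdf(-z_crit - z_effect)

# 2.8 = 1.96 (5% two-sided significance) + 0.84 (the 20/80 split).
z_needed = norm.inv_cdf(0.975) + norm.inv_cdf(0.80)
print(round(z_needed, 2))                          # → 2.8

# A study whose SE just meets |effect| / 2.8 has roughly 80% power.
print(round(power(true_effect=2.8, se=1.0), 2))    # → 0.8
```

The converse reading is the one used in the survey: given an estimate of the true effect, any reported standard error larger than |effect| / 2.8 implies less than 80% power.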
All that remains to calculate power are the value of the standard error and an estimate of the ‘true’ effect. Because our survey of empirical economics produced 64,076 effect size estimates and their associated standard errors from 159 meta-analyses…, we have much information from which to work.
Simple weighted or unweighted averages of all reported estimates do much to eliminate sampling error and random misspecification bias, because the average number of estimates per meta-analysis in our survey is 403 (median = 191).
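A minimal sketch of the two averages mentioned above, using inverse-variance weights for the weighted case (a standard choice; the survey may weight differently). The numbers are fabricated for illustration, not survey data.

```python
# Illustrative estimates and standard errors from one hypothetical literature.
estimates = [0.12, 0.35, 0.08, 0.22, 0.15]
ses       = [0.10, 0.20, 0.05, 0.15, 0.08]

unweighted = sum(estimates) / len(estimates)

# Inverse-variance weights: more precise estimates count for more,
# which also shrinks the averaged-out sampling error.
weights = [1 / se**2 for se in ses]
weighted = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

print(round(unweighted, 3))   # → 0.184
print(round(weighted, 3))     # → 0.118
```

With roughly 400 estimates per meta-analysis rather than five, the sampling error in such averages becomes small, which is the point made in the text.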
Several statistical methods have been developed to identify and accommodate potential publication and related reporting biases, and others have been proposed to detect and evaluate the extent of p-hacking. With information from 159 meta-analyses, these statistical methods can be used to approximate the genuine empirical effect or, at the least, to filter out some of the selection bias should it be present in a given area of research.
Table 1 reports the percentage of empirical economics findings that have ‘adequate power’, defined by the widely accepted convention that power is adequate if it is 80% or higher. It is clear that most of empirical economics is underpowered…. Half of the areas of economics have approximately 10% or fewer of their estimates with adequate power.