The recent paper “Fifty Shades of QE: Conflicts of Interest in Economic Research” by Fabo et al. investigates possible biases in economic research on the effectiveness of quantitative easing (QE). Much of the research on this subject is conducted by economists who work at the very central banks that design and implement unconventional monetary policies. Comparing their results with those obtained by academic economists may shed some light on the drivers of scientific consensus.
Fabo et al. collected the estimated effects of QE on GDP and inflation in the UK, the USA and the Euro Area from 54 studies published between 2010 and 2018. The 116 authors of these studies are either academics or central bank economists.
To begin with, let us consider the aggregate data. As regards GDP, a QE shock standardised to 1% of GDP is on average expected to generate a 0.24% rise in GDP at its peak, which drops to 0.14% at the end of the period. As to inflation, a standardised QE shock is on average expected to generate a 0.19% rise, which falls to 0.12% at the end of the period.
Interestingly, the estimated effects seem to depend heavily on the affiliation of the researchers. Indeed, the paper’s regressions suggest that the presence of a central bank researcher among the authors is in itself associated with about three quarters of the average effect on GDP, while full central bank authorship is expected to push the effect on inflation well beyond the estimated average.
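The headline arithmetic can be sketched in a few lines. This is a back-of-the-envelope reading that simply combines the 0.24% average peak GDP effect and the three-quarters share reported above; it is illustrative only and does not reproduce the paper’s regression.

```python
# Back-of-the-envelope reading of the headline figures (illustrative only;
# these are the numbers reported above, not the paper's regression output).
avg_peak_gdp_effect = 0.24   # average % peak rise in GDP per QE shock of 1% of GDP
central_bank_share = 3 / 4   # share of that effect linked to central bank authorship

attributed = avg_peak_gdp_effect * central_bank_share
print(f"~{attributed:.2f} of the {avg_peak_gdp_effect}% average peak effect "
      "is associated with central bank authorship alone")
```

In other words, roughly 0.18 of the 0.24 percentage points of the average peak effect is associated with the mere presence of a central bank author.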
The paper also examines the results obtained by authors affiliated with specific central banks, expressed as deviations from the academic estimates. For example, Bundesbank economists’ estimates of the peak effect on GDP are 0.082 percentage points below the academic ones, while other central bank researchers’ estimates are 0.142 points above them. As regards inflation, Bundesbank affiliation implies a standardised peak effect 0.094 points above the academic one; other Euro Area affiliations imply a +0.088-point deviation; and affiliation with non-Euro Area central banks implies a +0.141-point deviation. The Bundesbank’s divergence from the other central banks is thus greater in terms of GDP (its estimates fall below the academic ones) than in terms of inflation (where its estimate lies close to those of the other central banks).
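Since all of these figures are deviations from the academic baseline, a minimal sketch using only the numbers reported above (with the academic baseline normalised to zero) makes the Bundesbank divergence easy to see:

```python
# Peak-effect deviations from the academic baseline (normalised to 0),
# in percentage points, as reported by Fabo et al.; illustrative only.
gdp_peak_deviation = {
    "Bundesbank": -0.082,
    "other central banks": +0.142,
}

inflation_peak_deviation = {
    "Bundesbank": +0.094,
    "other Euro Area central banks": +0.088,
    "non-Euro Area central banks": +0.141,
}

# On GDP the Bundesbank sits below the academic baseline while the other
# central banks sit above it; on inflation every group sits above it.
for group, dev in sorted(gdp_peak_deviation.items(), key=lambda kv: kv[1]):
    side = "below" if dev < 0 else "above"
    print(f"GDP, {group}: {dev:+.3f} pp ({side} the academic baseline)")
```

The sign flip on GDP (negative for the Bundesbank, positive for everyone else) is what makes the Bundesbank divergence stand out.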
The article by Fabo et al. offers various explanations for these results. The sample may be too small (e.g., just two “Bundesbank” papers are included). Moreover, journals are mainly interested in papers that find significant effects; if so, the literature could be biased, since it would be missing meaningful null results. This may interact with the “publish or perish” problem (researchers have to prop up their reputation by constantly adding to their publication count).
A researcher may also be trying to please their superiors in order to advance their career. In this respect, the paper finds that, statistically, the larger the published QE effect on GDP, the better the author’s subsequent career progression. Since the same relationship does not hold for inflation, however, career concerns may not be the whole story. Moreover, since the Bundesbank has always been very critical of unconventional monetary policies, a Bundesbank researcher might have an incentive to demonstrate that QE is relatively ineffective. Given that the same mechanism may apply in reverse to researchers employed at central banks favouring QE, academics should fall somewhere between these two positions. Indeed, the paper shows that the academics’ GDP estimates lie between the two extremes characterising the central bank-affiliated authors. The academic estimates on inflation, however, are the lowest of all.
A different view claims that “by releasing a study supportive of this policy, [researchers] could potentially enhance the policy effectiveness”. This can explain why “central banks tend to get it wrong” when forecasting the dynamics of GDP and inflation: “either the underlying model is garbage, or central banks deliberately aim at driving public expectations in order to provoke given economic responses”. In fact, managing expectations is part of monetary policy. Fabo and his colleagues state that “the involvement of bank management in the production of bank research extends far beyond that of university management in academic research”. Of course, such a use of “scientific publication” may cast a shadow on the reliability of results.
The paper also finds that central bank researchers are more likely to use a specific estimation technique: the Dynamic Stochastic General Equilibrium (DSGE) model. Fabo et al. write that “one weakness of DSGE models is the fragility of their parameter estimates across empirical studies […] a user aiming for a particular outcome can pull on a number of levers to get closer to that outcome”. In short, the final results seem to depend heavily on some of the parameters the researcher sets before running the estimation. This weakness has long been highlighted, among others, by Nobel laureate Paul Romer (his amusing paper here). Within limits, it is then possible to “drive” the model toward certain outcomes, helping a central bank economist – possibly one “senior enough to have participated in the formation of the bank’s policy” – to produce a “scientific though biased” assessment. All this arguably feeds into the current discussion of the so-called replication crisis in the social sciences.
The paper offers a final explanation of the central bank bias: “central banks have superior information about their own products, exceptionally strong expertise in the subject matter”. That is to say: more knowledge, stronger results. This hypothesis clashes with the “Bundesbank divergence”, unless one believes that a certain central bank has “more superior” information. For example, if the stronger QE effect on GDP is caused by “superior information”, we should conclude that central banks are better informed than academics, who are in turn better informed than the researchers at the Bundesbank. That does not make much sense, and the hypothesis also clashes with the observation that “central banks tend to get it wrong” (as mentioned above).
Of course, one can also conclude that economic research is not biased, and that neither researchers nor institutions manipulate the data to suit their goals and preferences. If so, we must conclude that economics is not a hard science, and that analysing the data is not enough to explain reality.
Photo by Fabian Kurz