Perhaps no discipline is as deep in crisis mode as macroeconomic business cycle theory. I am a layman with regard to the competing theories; I have largely been educated via economists’ blogs, podcasts, and several scholarly articles. But I do know a thing or two about theory choice, and the foundations behind it.
How do we decide whether a theory is correct? The short answer is we don’t. Theory choice operates a bit differently in, say, physics or biology, so I’ll stick to theory choice in economics, which is still largely based on a philosophy of science framework. The most influential economic methodologists were Karl Popper and Milton Friedman, despite the fact that neither of their proposed theories is worth jack squat.
Even though it has been rejected over and over again for half a century, Karl Popper’s framework is still the basic way that people think about scientific theories being “correct” or “incorrect” (those are the words I’ve seen thrown around; I usually take them to mean “true” or “false,” though there is even disagreement about that in philosophy of science!). Here’s how it works: a scientist freely proposes a theory. She checks the theory against the evidence. If the theory corresponds to the evidence, she says “Yippee!” and goes on her merry way. If it doesn’t fit the evidence, that counts against the theory—the scientist must make revisions to the theory, or at least to some of its auxiliary assumptions.
To be frank, everyone agrees that economics doesn’t operate this way. Economists get to “play tennis with the net down.” They get to say “Yippee!” when their theories are correct, but when they aren’t (which is often the case), they can say, “It’s a complex world” and chalk their theory’s failure up to the ridiculous number of causal factors involved in economic phenomena.
What I mentioned above is empirical adequacy, namely, how well a theory fits the data. To be sure, achieving a high degree of empirical adequacy is a relatively easy task. Economists who construct models are able to match them to historical data sets without much pain. However, for every data set, there are infinitely many theoretical models that can sufficiently explain the data. Prediction, in this sense, while easy, shouldn’t be taken lightly. Indeed, there’s no good reason why predictions about the future should be privileged over predictions about the past, e.g., fitting historical data sets—except perhaps that predictions about the future cannot possibly be a so-called “fudged fit.”
But, no worries. It turns out there are other criteria we can use for theory choice. Simplicity and predictive power are two such criteria. (For reasons I shall not delve into here, these two go hand-in-hand.) Simplicity is Ockham’s razor: don’t multiply entities beyond necessity. The reason is that complex models more easily accommodate data because they have more adjustable parameters, which makes “fudged fit” more likely. The classic example is Ptolemy’s theory of planetary motion, a model that could accommodate any alignment of the planets and stars, but only because it had many adjustable parameters (so-called epicycles). Predictive power refers to a theory’s ability to make “observable” predictions by which one could test the theory. Again, I won’t go into the reasons, but simple theories turn out to have more predictive power, and they are also more testable (they make more predictions!).
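If you want to see the “fudged fit” point in miniature, here’s a toy sketch in Python (my own illustration, not anything from the economists or philosophers mentioned): fit the same noisy “historical” data with a simple one-parameter trend and with a many-parameter polynomial, the algebraic analogue of piling on epicycles, and compare how each does on new data.

```python
# Toy illustration of "fudged fit": a model with many adjustable parameters
# can match any historical data set, yet it predicts new data badly.
import numpy as np

rng = np.random.default_rng(0)

# "Historical data": a simple linear trend plus noise.
x_hist = np.linspace(0, 1, 10)
y_hist = 2.0 * x_hist + rng.normal(scale=0.1, size=x_hist.size)

# Simple theory: a straight line (two coefficients).
# Complex theory: a degree-9 polynomial (ten coefficients -- "epicycles").
simple_model = np.polyfit(x_hist, y_hist, deg=1)
complex_model = np.polyfit(x_hist, y_hist, deg=9)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial model on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# In-sample: the complex model fits the historical data better --
# with ten coefficients and ten points it can pass through every one.
fit_simple = mse(simple_model, x_hist, y_hist)
fit_complex = mse(complex_model, x_hist, y_hist)

# Out-of-sample: draw fresh data from the same process, extending
# slightly beyond the historical range, and the complex model's
# "predictions" fall apart while the simple one holds up.
x_new = np.linspace(0, 1.2, 50)
y_new = 2.0 * x_new + rng.normal(scale=0.1, size=x_new.size)
pred_simple = mse(simple_model, x_new, y_new)
pred_complex = mse(complex_model, x_new, y_new)
```

The complex model “accommodates” the history perfectly and then blows up on anything new, which is exactly the sense in which empirical adequacy alone is cheap and predictive power is the harder, more telling test.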
Woopdeedoo! What does it all mean, Basil? Well, Austrian business cycle theory postulates that booms and busts are created by artificially low interest rates, and that whatever economic downturn we experience is deserved, because, well, we didn’t really deserve all that economic growth during the boom. The theory has been criticized by Keynes (naturally), Kaldor, Friedman, Krugman, and yes, even Bryan Caplan! But it still has many defenders after the recent downturn, and a lot of what ABCT suggests is agreed upon by economists (e.g., the assertion that low interest rates created a housing bubble). Where most economists disagree with the Austrians is in holding that monetary policy in particular (Keynesians would add fiscal policy) can play a substantial role in softening the blow of an economic downturn. This logarithmic graph (via Yglesias) is good evidence of this (in my view, anyway).
The main problem with Austrian economics in general, as I’ve said previously, is not that it’s not empirically adequate (it fits basically anything that could possibly happen), but that it doesn’t have ANY PREDICTIVE POWER. Indeed, it throws the mere thought of using quantitative models out the window, and as a result makes empirical adequacy a proverbial piece of cake. But who cares? Empirical adequacy is not our only concern in theory choice, and Austrian theory is pretty weak sauce on the other measures of theoretical adequacy.