Why The "Friedman Thermostat" Analogy Should Be Uncomfortable For The Mainstream
Nick Rowe’s article on Milton Friedman’s Thermostat has popped up in online conversation. For those of you unfamiliar with it, it is about inferring statistical relationships between inflation and interest rates. Although I agree that it is possible to do some silly correlation analyses between those variables1, if we think about the analogy more carefully, it points to concerns with the mainstream approach.
My Restatement of the Analogy
Imagine that I were teaching you some version of "introduction to engineering mathematics," and for some insane reason I gave you and your project partners access to a typical Canadian house with an oil furnace in the winter. I want your team to develop empirical formulae for the determination of the interior temperature near the thermostat. For simplicity (and to avoid an awkwardness for my argument that I discuss in the appendix), let us assume that the furnace works in a straight on/off fashion: it fires up to 100% heat output, runs, then shuts off completely.
We can then imagine that Nick Rowe submitted this response (from the linked article).
If a house has a good thermostat, we should observe a strong negative correlation between the amount of oil burned in the furnace (M), and the outside temperature (V). But we should observe no correlation between the amount of oil burned in the furnace (M) and the inside temperature (P). And we should observe no correlation between the outside temperature (V) and the inside temperature (P).
An econometrician, observing the data, concludes that the amount of oil burned had no effect on the inside temperature. Neither did the outside temperature. The only effect of burning oil seemed to be that it reduced the outside temperature. An increase in M will cause a decline in V, and have no effect on P.
This response would not make me happy if I graded it. To be fair to Nick, he is setting this response up as coming from a dim econometrician. But even as a straw man, it contains a massive logical problem, and his statements about the alleged correlations are in fact incorrect or misleading. We Canadians put furnaces inside houses for a reason, and any engineering undergraduate should know what that reason is.
Canadian outdoor winter temperatures range from uncomfortable to lethal. Any house other than a “passive house” will lose heat and eventually the indoor temperature will start to approach the outdoor temperature. (Unless there are gaping holes in the building envelope, there will be some heat trapped, so it would remain somewhat warmer. It also protects from wind chill, which is the real safety issue, not the absolute temperature.)
For a furnace that swaps between "fully on" and "off" modes, assuming it is possible for the thermostat to work as expected, we have two regimes. For now, we assume that there are no other major factors driving heat flow: it is night, there is no wood-burning stove, electricity consumption is limited, and there are no cattle providing body heat in the basement.
When the furnace is off, temperatures generally decay towards some steady state value that largely depends on the outside temperature and wind velocity/direction (since building envelopes in practice leak).
When the furnace is on, the temperature converges towards what is expected to be an uncomfortably warm level (depending on the sizing of the furnace, the outdoor temperature, insulation, etc.).
Despite Nick Rowe’s assertions to the contrary, if we select the correct time scale, we do see a “correlation” between temperature change and instantaneous fuel usage. (This “correlation” would break down if the sun/wood stove/cattle are providing enough heat to independently raise the temperature, or outdoor temperatures are not constant, etc.)
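To make the two regimes (and the time-scale point) concrete, here is a minimal simulation sketch. It is not taken from Rowe's article; it is a one-node thermal model with made-up parameter values (the time constant, the furnace gain, the deadband) chosen purely for illustration.

```python
import numpy as np

# Minimal one-node thermal model of a house with an on/off furnace.
# All parameter values are illustrative, not calibrated to a real house.
def simulate_house(hours=48.0, dt=0.01, T_out=-20.0, T_set=20.0,
                   deadband=0.5, tau=10.0, furnace_gain=50.0):
    """Return time, indoor temperature, and furnace state (0 or 1).

    dT/dt = (T_out - T)/tau + (furnace_gain/tau)*state, so the
    furnace-off steady state is T_out and the furnace-on steady
    state is T_out + furnace_gain (uncomfortably warm).
    """
    n = int(hours / dt)
    t = np.arange(n) * dt
    T = np.empty(n)
    state = np.zeros(n)
    T[0] = T_set
    on = False
    for i in range(1, n):
        # Simple thermostat with a deadband around the setpoint.
        if T[i - 1] < T_set - deadband:
            on = True
        elif T[i - 1] > T_set + deadband:
            on = False
        state[i] = 1.0 if on else 0.0
        dT = (T_out - T[i - 1]) / tau + (furnace_gain / tau) * state[i]
        T[i] = T[i - 1] + dT * dt
    return t, T, state

t, T, state = simulate_house()
# At a short time scale, instantaneous fuel use lines up with the
# *change* in indoor temperature...
dT = np.diff(T)
print("corr(furnace on, change in T):", np.corrcoef(state[1:], dT)[0, 1])
# ...while the *level* of indoor temperature barely moves, so its
# correlation with fuel use is weak: the thermostat is doing its job.
print("corr(furnace on, level of T): ", np.corrcoef(state, T)[0, 1])
```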
Nick Rowe's statement that "we should observe no correlation between the amount of oil burned in the furnace (M) and the inside temperature (P)" can only be true if M is monthly or seasonal oil consumption. But even there, if we took a bunch of similar new houses in a neighbourhood with the same north/south orientation, total energy consumption (we need to factor in heating from electricity consumption within the house, not just the furnace) would in fact be correlated with the average thermostat settings. Middle-aged dads grumble at family members to turn down the thermostat and wear a sweater for good economic reasons.
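And for that cross-sectional claim, a toy sketch along the same lines (again, the numbers are made up: the heat-loss coefficient, the season length, and the noise level are all arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neighbourhood of similar houses: only the thermostat
# setting and some idiosyncratic noise (occupancy, door openings) differ.
n_houses = 40
thermostat = rng.uniform(17.0, 23.0, n_houses)   # setpoints in Celsius
T_out_avg = -10.0                                # average outdoor temperature
ua = 0.25                                        # heat-loss coefficient (arbitrary units)
hours = 24 * 120                                 # a 120-day heating season

# Seasonal energy use is roughly UA * (indoor - outdoor) * time, plus noise.
energy = (ua * (thermostat - T_out_avg) * hours
          * (1.0 + 0.05 * rng.standard_normal(n_houses)))

print("corr(thermostat setting, seasonal energy use):",
      np.corrcoef(thermostat, energy)[0, 1])
```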
You Need to Have An Idea How the System Works
If you were just handed the raw data from the house without any explanation of what they were, it would likely be difficult to pick out a sensible relationship between the variables. You would need some knowledge of the system dynamics to realise that the outdoor temperature, wind speed/direction, and other heating sources would need to be tracked and quantified in order to get a model that goes beyond "when the furnace turns on, room temperature generally goes up."
In other words, you cannot just use statistical analysis to do magic; you need some fundamental analysis of the system as well.
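As a sketch of what that means in practice, compare a naive regression of the temperature change on fuel use alone against one that also includes the indoor/outdoor differential. The data below are synthetic and the coefficients are invented; the only point is that the better specification requires already knowing that the outdoor temperature belongs in the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly data: the indoor temperature change depends on the
# furnace output AND on heat loss to a varying outdoor temperature.
n = 2000
furnace = rng.integers(0, 2, n).astype(float)     # furnace on/off each hour
T_out = (-15.0 + 15.0 * np.sin(np.arange(n) * 2.0 * np.pi / 24.0)
         + rng.standard_normal(n))
T_in = 20.0 + 0.3 * rng.standard_normal(n)        # thermostat holds it near 20
dT = 0.1 * (T_out - T_in) + 2.0 * furnace + 0.2 * rng.standard_normal(n)

def r_squared(columns, y):
    """Ordinary least squares R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print("fuel only:             R^2 =", round(r_squared([furnace], dT), 3))
print("fuel + indoor/outdoor: R^2 =",
      round(r_squared([furnace, T_out - T_in], dT), 3))
```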
Back to Interest Rates
The true lesson from this analogy is that in order to do statistical tests on the relationships between the variables of a system, you need at least some idea what the dynamics of the system are supposed to be. Realistically, you need a model of system dynamics that you can test against the data. Although there are attempts to use mathematical magic to infer system models solely based on observed data, my bias is that this is not going to replace fundamental analysis.
The reason why correlating the policy rate with inflation is silly is that we know something about the dynamics of the system. The policy rate is a variable set by a small clique in a boardroom, and the setting is somewhat arbitrary. They certainly believe that there is more than one possible option to take at most meetings. Conversely, the inflation rate is some sort of weighted average of price changes made by entities across the whole economy. The correlation just tells us about the psychology (reaction function) of central bankers.
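A toy illustration of that last point (entirely synthetic numbers, and a cartoon reaction function of my own invention): generate an inflation series, have a "central bank" set the policy rate as a lagged response to it, and a strong correlation appears even though the rate has no effect on inflation by construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy monthly inflation process (exogenous here by construction).
n = 300
inflation = np.empty(n)
inflation[0] = 2.0
for t in range(1, n):
    inflation[t] = 0.95 * inflation[t - 1] + 0.1 + 0.3 * rng.standard_normal()

# Cartoon reaction function: the bank reacts to inflation observed a
# few months earlier. Nothing about the rate affecting inflation is
# built into this toy model.
lag = 3
policy_rate = np.full(n, 2.0)
policy_rate[lag:] = 1.0 + 1.5 * inflation[:-lag]

# The high correlation recovers the reaction function, not any causal
# effect of the rate on prices.
print("corr(policy rate, inflation):",
      np.corrcoef(policy_rate, inflation)[0, 1])
```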
In order to do sensible statistical analysis of the effect of interest rates, you need a quantitative theory/model to test. But it has to be an actual quantitative theory. For example, saying that "lags are long and variable" can only be described as pseudo-scientific hand-waving to excuse model failure.
The problem is that there is an infinite number of potential models in which interest rates appear. We cannot test them all within the finite lifespan of the solar system. All we can do is look at proposed models (or families of models that can be captured as a group).
I will then run through some basic classes of models, in increasing order of complexity.
The Policy Rate Matters
These models are probably what a lot of people have in mind when thinking about the central bank, and they were the earliest ones tested. (I assume the original Friedman analogy referred to these attempts.) The basic concept is that if the nominal policy rate is above some "trigger" value, then inflation and/or GDP growth will fall (with some lag). The policy rate is a single-valued lever which drives the economy up or down.
(Given that most conventional economists think in terms of a real “trigger value” for the policy rate, we need to make the appropriate inflation adjustment to get the nominal trigger value.)
You can then test the predictions of this framework against actual policy rate data and observed inflation/growth data. Note that the existence of an inflation target does not matter: you are testing what actually happens at a sensible frequency for business cycle dynamics (monthly, possibly quarterly).
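A sketch of what such a test could look like, under my own naive reading of the framework (the function name, the regression specification, and the monthly frequency are my choices, not a standard test from the literature):

```python
import numpy as np

def test_trigger_model(real_policy_rate, inflation, trigger, lag=12):
    """Naive falsification test of the 'trigger rate' story.

    Regress the change in inflation over the next `lag` months on the
    gap between the real policy rate and the assumed trigger value.
    The story predicts a negative slope that is stable across
    sub-samples. Inputs are monthly numpy arrays of equal length.
    """
    gap = real_policy_rate[:-lag] - trigger
    d_inflation = inflation[lag:] - inflation[:-lag]
    slope, intercept = np.polyfit(gap, d_inflation, 1)
    correlation = np.corrcoef(gap, d_inflation)[0, 1]
    return slope, correlation

# Illustrative call with made-up random-walk series; real data would
# replace these.
rng = np.random.default_rng(3)
rate = 2.0 + 0.1 * rng.standard_normal(240).cumsum()
infl = 2.0 + 0.1 * rng.standard_normal(240).cumsum()
print(test_trigger_model(rate, infl, trigger=1.0))
```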
Without any knowledge of that literature, I think it is safe to summarise it by saying that no stable relationship was discovered. I base that assessment on the existence of the r* literature, as r* is an attempt to measure the trigger level for the policy rate. The r* model outputs tell us that the estimated r* is moving so fast that we do not have a reliable guidepost for policy. By construction, the estimate fits the historical data, but going forward, it tends to move if the real policy rate is moving. (If the economy enters a steady state, the predicted r* converges to the actual rate.) Until there is a model that predicts r* based on other variables available in real time, it cannot be subjected to falsification tests.
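To see why I am unimpressed, consider the following caricature of an r* estimator (to be clear, this is my own strawman for illustration, not any published model): just a smoothed version of the observed real policy rate. By construction it chases the actual rate, and so is only pinned down after the fact.

```python
import numpy as np

def naive_r_star(real_rate, smoothing=0.02):
    """Caricature r* estimate: an exponentially weighted average of the
    observed real policy rate. Published models are far more elaborate,
    but this captures the property discussed above: the estimate drifts
    whenever the real policy rate drifts.
    """
    r_star = np.empty_like(real_rate)
    r_star[0] = real_rate[0]
    for t in range(1, len(real_rate)):
        r_star[t] = (1.0 - smoothing) * r_star[t - 1] + smoothing * real_rate[t]
    return r_star

# Example: a real policy rate that trends down drags the "estimate"
# down with it (with a lag). The input series is made up.
real_rate = np.linspace(3.0, 0.0, 240)   # hypothetical 20-year monthly decline
print(naive_r_star(real_rate)[[0, 119, 239]])
```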
Expectations
Given that treating the policy rate as a single-valued policy lever is a dead end, we need to retreat into more complex model structures. The theoretical assumptions of neoclassical economics push towards "expectations" to augment/replace the spot policy rate.
Within a standard modern neoclassical model with “Real Business Cycle” roots, “expected values” are easy to work with. All entities in the model are aware of an alleged “equilibrium” that determines the probability distribution of forward prices of everything (including interest rates and the price level) out to infinity. You just read off the expected values of the price distributions.
Back here in the real world, we do not have forward prices of everything (never mind the full probability distribution). And the forward prices that exist are generally believed to have biases. We can use surveys, but then we do not know how representative or biased those surveys are.
One can be bloody-minded and try to use observed conventional and inflation-linked bond pricing to determine "risk neutral" expectations for interest rates and inflation. The immediate problem is how to feed this information into a statistical test. A continuous yield curve on a single day contains a lot of information. (Theoretically an infinite amount, but really closer to the number of instruments used in the fitting.) The simplest way to compress this information is to just use a single yield, typically the 10-year.
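For inflation, the standard single-number compression is the breakeven inflation rate: the spread between a conventional yield and the real yield on an inflation-linked bond of the same maturity. A one-line sketch (ignoring risk premia, liquidity effects, and indexation-lag details, which is part of why these measures are believed to be biased):

```python
def breakeven_inflation(nominal_yield, real_yield):
    """Approximate market-implied ('risk neutral') inflation rate from a
    conventional bond yield and the real yield of an inflation-linked
    bond at the same maturity. Ignores risk premia, liquidity effects,
    and indexation-lag details, all of which matter in practice.
    """
    return nominal_yield - real_yield

# Example: a 10-year nominal yield of 4.0% and a 10-year real yield of
# 1.5% imply a breakeven of roughly 2.5%.
print(breakeven_inflation(0.040, 0.015))
```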
Although most countries have a decent data set of 10-year conventional bond yields, inflation-linked data generally only appear in the era after inflation dynamics had become uninteresting (until 2020, anyway). So one either needs to use surveys, or use models to make expectations up (which creates model selection problems).
Since there are a lot of ways of compressing market-based expectations into a series, the jury remains out on whether we can find one that works. I am in the camp that we will run into the same problem as r* for any variation of "expected rates."
Reaction Function Changes Matter
One of the distinctive features of modern DSGE macro is the argument that changes to the central bank's reaction function matter for outcomes. That is, the outcome of a single rate meeting does not matter; rather, what matters is what it says about the future behaviour of the central bank. This effect is certainly going to be true in DSGE models by construction (although we run into the calendar time versus forward time gap). However, I am uncertain how this is operationalised in the real world.
The only measurable quantities related to "reaction functions" that I can point to are observed yield curves. One can interpret the yield curve as the implied reaction function output based on the "expected" economic outcomes that are embedded in the yield curve. (This expected economic outcome reflects what market participants forecast, not the central bank's forecast.) Changes to the reaction function would show up as changes to the shape of the forward curves.
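For readers who want something concrete: the shape of the forward curve can be backed out mechanically from a fitted zero-coupon (spot) curve. The sketch below assumes annual compounding and annual spacing purely to keep the arithmetic obvious; real curve fitting is messier.

```python
import numpy as np

def implied_forward_rates(spot_yields):
    """One-period forward rates implied by a zero-coupon spot curve,
    assuming annual compounding. spot_yields[i] is the yield (as a
    decimal) for a maturity of (i + 1) years.

    (1 + f_n) = (1 + s_n)**n / (1 + s_{n-1})**(n - 1)
    """
    s = np.asarray(spot_yields, dtype=float)
    n = np.arange(1, len(s) + 1)
    growth = (1.0 + s) ** n            # cumulative gross return to each maturity
    forwards = np.empty_like(s)
    forwards[0] = s[0]
    forwards[1:] = growth[1:] / growth[:-1] - 1.0
    return forwards

# Illustrative upward-sloping curve: the forwards rise faster than the
# spot yields, which is how "expected" future tightening shows up.
print(implied_forward_rates([0.030, 0.033, 0.036, 0.038]))
```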
I have not followed this area enough to comment further (beyond my concerns about the non-measurability of the reaction function).
Expectations Fairy
The final and hardest to test theories involve the dreaded Expectations Fairy. The central bank allegedly determines outcomes almost instantaneously by changing its target. Although this sounds crazy, it is actually how DSGE models are supposed to work (outcomes are driven by expected values at equilibrium, and the current central bank target drives the equilibrium).
Most neoclassicals skate over the awkward implication of this, but the “Market Monetarists” pushed this idea to its limit in their writings in the 2010s. I do not want to put any words into Nick Rowe’s mouth, but I think it is safe to say that the arguments in his linked article have Market Monetarist ideas embedded in them.
In the early 2010s, it was possible to point to the apparent success of inflation targeting: on average, most central banks had hit their targets from the inception of formal targeting until that era. At present, there are a lot of holes in that theory: Japan's persistent miss, the undershoots of targets in the 2010s, and then the pandemic experience. Although I imagine that the Market Monetarists can find a way to explain this, I remain skeptical.
In any event, if one believes that inflation will always hit the inflation target because of expectations, Nick Rowe's claims about the non-existence of correlations make sense. The problem is that we can easily reject that claim: inflation certainly missed target, and so we can do analysis on the misses.
Concluding Remarks
The policy rate is not set at random, nor is it the result of "economic forces." At each meeting, it is set by a small group of human beings who follow a set of beliefs about interest rates. If they set it at the "wrong" level, it is very hard to find examples of there being immediate negative consequences, so they do have freedom of action. (If there is a pre-existing crisis, "wrong" levels of interest rates can generate reactions.) Given the content of Economics 101 teaching, we cannot be surprised that the policy rate follows the inflation rate with a lag. Given that inflation is normally thought of as a lagging economic variable, we end up with interest rates often exhibiting the same cyclical behaviour as other variables. (Although this is not universal; the Japanese policy rate was remarkably non-cyclical.) If everything is pro-cyclical, deciphering statistical relationships between the variables is inherently difficult. You need policymakers to reject Economics 101 and move interest rates counter-cyclically to get interesting data for testing.
The funny thing about this topic is that using interest rates to control inflation2 was the main ideological plank of neoclassical economics from 1975 to 2010 (at least), yet there is a remarkable inability to give a quantitative demonstration that can convince outsiders of the effectiveness of the policy. Engineers can generate quantitative guidelines for furnace sizing that stand up to outside scrutiny; the same cannot be said for changes to the level of interest rates.
Appendix: Non-Saturating Heating
It is possible to get closer to Nick Rowe's ideas about correlation if we look at what happens when we have heaters whose heat output is adjusted continuously between zero and some upper limit that is not normally hit. (The furnace I described only operated at either the lower or upper limit.) It would be easy to rig up such a system with baseboard heaters, although I am not sure about the wisdom of the approach.
If the control law in the thermostat were properly tuned3, we could end up in a situation where the temperature hits a steady state at the target temperature — the heater output would be calibrated to match the heat loss. Although we would have sensor noise, we might get runs of near-constant interior temperature.
We can then do the following exercise: identify each period of near constant temperature, and just record the average temperature (which is supposed to equal the thermostat setting) and the heater average power consumption setting. If we look at those pairs of numbers over time, we would see that the power consumption would (most likely) vary.
Ah ha, we get the claimed data distribution — indoor temperature constant at target, with varying power inputs.
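A minimal sketch of that exercise (my own toy model: a one-node house, a proportional-integral control law as per the footnote on tuning, and made-up parameters). The same thermostat setting with two different outdoor temperatures gives the same steady state indoor temperature but different power consumption.

```python
import numpy as np

def steady_state_pair(T_out, T_set=20.0, tau=10.0, heater_gain=50.0,
                      max_power=1.0, kp=2.0, ki=0.5, dt=0.01, hours=200.0):
    """Simulate a continuously modulating heater under a PI control law
    and return the average (indoor temperature, heater power) over the
    last stretch of the run. Power is normalised to [0, 1]; all
    parameter values are illustrative.
    """
    n = int(hours / dt)
    T = T_set                # start at target for simplicity
    integral = 0.0
    temps, powers = [], []
    for _ in range(n):
        error = T_set - T
        integral += error * dt
        power = float(np.clip(kp * error + ki * integral, 0.0, max_power))
        # One-node house: heat loss to the outside plus heater input.
        dT = (T_out - T) / tau + (heater_gain / tau) * power
        T += dT * dt
        temps.append(T)
        powers.append(power)
    tail = n // 5            # average over the last 20% ("steady state")
    return np.mean(temps[-tail:]), np.mean(powers[-tail:])

# Same thermostat setting, two different outdoor temperatures: the
# recorded temperature is about 20 both times, but the power differs.
print(steady_state_pair(T_out=-5.0))
print(steady_state_pair(T_out=-20.0))
```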
The problem is that we achieved this result by deliberately obliterating the time axis. If we examine the time series during the "steady state" intervals, they are both constants (within the limits of noise), and constants are correlated. Instead, we would do our model estimation based on the rest of the data set, when the interior temperature has moved away from target and the heater is acting to bring it back. Given that we do not embed time machines in thermostats, we know that there will be deviations from target that allow this model fitting (i.e., there is no "expectations fairy" that causes the temperature to remain on target at all times).
Meanwhile, the fact that the steady state power consumption changes over time is not surprising, given that we know that houses face different heating needs over time. If room temperature depended solely on power consumption, nobody would put on/off switches on furnaces. In other words, we knew in advance that the steady state power consumption would change. We need a model (one that looks at other factors that affect room temperature: the differential with the outside temperature, wind, other heat sources) if we want a way of predicting the steady state power consumption. Correspondingly, we need a model to predict "steady state" interest rates if we follow conventional beliefs.
1. The reason to do a correlation analysis between the policy rate and inflation is that it is guaranteed to generate indignant replies from mainstream economists.
2. I am lumping the use of the money supply to control inflation in with interest rates, given that they were linked in practice, and not everyone entirely bought the Monetarist story.
3. At the minimum, one would most likely need a full PID controller to eliminate the steady state offset.