Large-Scale Econometric Models: Do They Have a Future?

Here is an intriguing question: How is the Large Hadron Collider like the National Institute Global Economic Model? Read on!

It was a great pleasure to organize a session on econometric models for the Royal Economic Society Conference at the University of Sussex. In my new role as Research Director at the National Institute of Economic and Social Research (NIESR), I have inherited responsibility for research and development of the National Institute Global Economic Model, NiGEM, the preeminent model of the world economy. As you might expect, given my role at NIESR, my answer to the question posed in this session is a resounding yes!


For the session at Sussex, in addition to my own presentation, I assembled three outstanding speakers: Tony Garratt from the Warwick Business School (WBS), Marco Del Negro of the Federal Reserve Bank of New York, and Garry Young, Director of Macroeconomic Modelling and Forecasting at NIESR.

Tony kicked off the session with a description of the work he’s been engaged in at WBS along with his co-authors Ana Galvao and James Mitchell. The University of Warwick is collaborating with the National Institute of Economic and Social Research in a partnership that gives Warwick graduate students access to the expertise of the applied economists at NIESR, while NIESR gains from the academic expertise of Warwick economists. As part of that partnership, the WBS team have agreed to publish their forecasts each quarter in the National Institute Review as a benchmark against which to measure the performance of the NiGEM team. Tony gave us a fascinating account of what the WBS team does!

Their approach is reduced-form and eclectic. WBS maintains a stable of more than twenty-five models that are averaged with weights updated in real time according to past forecast performance. Tony showed us how the WBS forecasts had performed in the past relative to those of the Bank of England and the Bank of England’s Survey of External Forecasters. He described different ways of evaluating forecasts, comparing both point forecasts and density forecasts for output growth and inflation. Perhaps the most interesting result, for me, was that judgmental forecasts often outperform econometric models at short horizons.
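The details of the WBS system are theirs, of course, but the general idea of performance-weighted forecast combination is easy to sketch. The fragment below is a generic illustration of the technique, not the WBS methodology: each model’s current forecast is weighted by the inverse of its mean squared error over a rolling window of past forecast errors. The function name and the toy numbers are my own.

```python
import numpy as np

def combine_forecasts(forecasts, past_errors, window=8):
    """Weight each model's forecast by the inverse of its mean squared
    forecast error over the most recent `window` periods."""
    recent = past_errors[:, -window:]      # last `window` errors for each model
    mse = (recent ** 2).mean(axis=1)       # rolling mean squared error
    weights = 1.0 / mse                    # reward models with small past errors
    weights /= weights.sum()               # normalise the weights to sum to one
    return weights @ forecasts             # weighted average of current forecasts

# Toy example: three models forecasting next-quarter GDP growth (per cent)
rng = np.random.default_rng(0)
forecasts = np.array([1.8, 2.1, 1.5])
past_errors = rng.normal(size=(3, 20)) * np.array([0.2, 0.5, 0.8])[:, None]
print(combine_forecasts(forecasts, past_errors))
```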

Tony’s talk was followed by Marco Del Negro from the New York Fed, who described the behaviour of a medium-scale Dynamic Stochastic General Equilibrium (DSGE) model that the NY Fed has been running since 2008. DSGE models have received quite a bit of bad press lately as a result of the failure of almost all of the experts to predict the 2008 financial crisis. Marco gave a spirited defence of DSGE models by showing us the forecast performance of the NY Fed’s DSGE model from 2008 to the present. The model is written in a relatively new computer language, Julia. The code is open source, blindingly fast and widely used in research publications in leading journals. For the MATLAB users out there: perhaps it’s time to switch?

In the third presentation of the day, we were treated to an entertaining interlude when the projection equipment malfunctioned and Garry Young ad-libbed for ten minutes with a cricketing anecdote. When he resumed, Garry gave us an account of the use of NiGEM to forecast the effects of Brexit. NiGEM has more than 5,000 equations, covers 60 countries and is widely used by central banks and national treasuries around the world for scenario analysis. It has a lot more in common with the NY Fed’s DSGE model than most people realize.

In the final presentation of the day, I tied the three earlier talks together by recounting the history of econometric modelling, beginning with Klein Model 1 in the 1940s and ending with the NY Fed’s DSGE model and NIESR’s NiGEM. For me, the main story is continuity. With the publication of Robert Lucas’s celebrated critique of econometric modelling in 1976, large-scale models disappeared from the halls of academia. But they never disappeared from central banks, treasuries and research institutes where, as Garry reminded us, they have been used as story-telling devices for more than fifty years.

The version of NiGEM we work with today has come a long way from the backward-looking equations of Klein Model 1. It has been lovingly tended and developed by the distinguished teams of researchers who have passed through the National Institute over the years. Past NIESR researchers include among their number some of the leading applied economists and applied econometricians in the UK, and the model they developed incorporates state-of-the-art features, including the ability to add forward-looking elements and rational expectations in solution scenarios.

Large-scale econometric models are here to stay. Policy makers use models like NiGEM to analyse policy alternatives, and that is unlikely to change soon. In my presentation I argued for a closer dialogue between economic theorists and applied economists, similar to the dialogue that currently exists between theoretical physicists and applied physicists. I argued that NiGEM, located at NIESR, is to economics as the Large Hadron Collider (LHC), located at CERN, is to physics. Just as physicists use the LHC to test new theories of subatomic particles, so economists should use NiGEM to test new theories of macroeconomics. I hope to put that idea into practice at the National Institute in the future.

In a separate presentation at the Royal Economic Society Conference this year, I discussed work I am engaged in with a research team at UCLA, in which we have developed a new theory of belief formation. It is an example of the kind of theory we hope to test using NiGEM as a laboratory.

According to Forbes, the operating budget of the Large Hadron Collider is approximately one billion US dollars a year. NiGEM is funded entirely from subscriptions, and its operating budget is well south of half a million US dollars. Funding agencies take note: we could make some pretty cool improvements for a billion a year.

What Does it Mean to Have Rational Expectations?

This is a follow-up to my ergodicity post from last week. Both posts are inspired by conversations I had with my Co-Hub-Leader Jean-Philippe Bouchaud (of the Rebuilding Macroeconomics hub ‘Why Are Economies Unstable?’) on the role of the ergodicity assumption in science. Content warning: this post is more technical than most of mine, and I make no apology for that. It is a technical subject.

Figure 1: The Tent Map

I became interested in chaos theory in the early 1980s when I attended a conference in Paris organized by Jean-Michel Grandmont. Jean-Michel had been working on non-linear cycle theories, as had I, and the conference was an opportunity to explore the idea that plain-vanilla general equilibrium models with rational agents, each of whom held rational expectations, might generate complicated dynamic paths for observable variables. As I pointed out here, many of us at the conference were persuaded by the work of Buzz Brock, who argued that even if the economic data live on a complicated non-linear attracting set, we do not have enough data to establish that fact.

The simplest example of a complicated non-linear attracting set is the tent map displayed in Figure 1. The map F(x) (plotted as the red triangle) maps the interval [0,1] into itself. It has a steady state at 0 and a second steady state at XS, but both steady states are unstable: trajectories that start close to either one move away from it. However, all paths that start in [0,1] stay there. The tent map is a perpetual motion machine.
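For readers who would like to see this instability on a screen as well as in a figure, here is a minimal sketch in Python. It assumes the standard slope-2 tent map, whose interior steady state XS sits at 2/3; two trajectories that start a hair’s breadth apart are indistinguishable at first and then separate completely within a few dozen iterations.

```python
def tent(x):
    """Standard slope-2 tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

x, y = 0.3, 0.3 + 1e-10   # two starting points differing by one part in ten billion
for t in range(1, 41):
    x, y = tent(x), tent(y)
    if t % 10 == 0:
        print(f"t = {t:2d}   x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.2e}")
```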

While these facts are interesting, my eventual response was: so what? If you generate random numbers on a computer, those numbers are produced by more sophisticated versions of the tent map. If we lived in a world where the shocks to GDP were generated by a chaotic deterministic system, that should not influence our behaviour. It would simply explain the origin of what we treat as random variables. Data generated by the tent map behave predictably: they obey statistical laws. If there is a small degree of uncertainty about the value of x at date 1, that uncertainty is magnified the further you move into the future. In the limit, x(T) behaves like a draw from an invariant distribution, and the best guess of where you would expect to see x(T), for large T, is the mean of x with respect to that invariant distribution.
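That statistical regularity is easy to check numerically. One caveat, flagged in the comments below: iterating the exact slope-2 tent map in binary floating point strips a bit from the state at every step and collapses to zero within about fifty iterations, so this sketch re-injects a tiny random perturbation at each step. With that fix, the running sample mean settles near 0.5, the mean of the uniform invariant distribution of the slope-2 tent map.

```python
import numpy as np

rng = np.random.default_rng(1)

def tent(x):
    """Standard slope-2 tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

x, total = 0.3, 0.0
for t in range(1, 200_001):
    # The exact map degenerates to 0 in floating point, so perturb it slightly.
    x = min(1.0, tent(x) + rng.uniform(0.0, 1e-9))
    total += x
    if t in (1_000, 10_000, 100_000, 200_000):
        print(f"T = {t:7d}   sample mean = {total / t:.4f}")
```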

Jean-Philippe introduced me to the work of Philip Anderson, a Nobel Laureate in physics who worked on solid-state physics and wrote a series of illuminating articles on phenomena known as spin glasses. Without getting too far into the details, the basic idea is that for a large class of physical phenomena, it is not just the state variables describing the world that are random; the probabilities that those variables will occupy any particular state are themselves random.

Here is a question for all of you out there who have thought about these ideas. Imagine that you are offered a sequence of gambles in which you may bet on the outcome of a coin toss, where the coin comes up heads with probability p(t) and tails with probability 1-p(t), for t = 1, 2, …, and where p(t) is generated by the tent map. Suppose we assign the value 0 to heads and 1 to tails. I conjecture that the sample mean of the resulting random variable, which takes the value 0 with probability p(t) and 1 with probability 1-p(t), does not converge to a number as the number of tosses T grows. If the world is like this, and I believe there is a sense in which financial market data are very much like this: What Does it Mean to Have Rational Expectations?
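For anyone who wants to experiment before answering, here is a minimal sketch of the thought experiment. It generates p(t) with the tent map (perturbed slightly, for the floating-point reason noted above), draws the 0/1 outcomes, and prints the running sample mean as T grows. I leave the interpretation of the output, and the question itself, to the reader.

```python
import numpy as np

rng = np.random.default_rng(2)

def tent(x):
    """Standard slope-2 tent map on [0, 1]."""
    return 2 * x if x < 0.5 else 2 * (1 - x)

p, total = 0.3, 0
for t in range(1, 100_001):
    p = min(1.0, tent(p) + rng.uniform(0.0, 1e-9))   # p(t) generated by the tent map
    outcome = 0 if rng.random() < p else 1           # heads = 0 w.p. p(t), tails = 1 otherwise
    total += outcome
    if t in (100, 1_000, 10_000, 100_000):
        print(f"T = {t:7d}   running sample mean = {total / t:.4f}")
```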

The Household Fallacy

My new working paper, joint with Pawel Zabczyk, is out now as an NBER working paper, a CEPR discussion paper and a NIESR discussion paper. Here is the abstract: 


We refer to the idea that government must ‘tighten its belt’ as a necessary policy response to higher indebtedness as the household fallacy. We provide a reason to be skeptical of this claim that holds even if the economy always operates at full employment and all markets clear. Our argument rests on the fact that, in an overlapping-generations (OLG) model, changes in government debt cause changes in the real interest rate that redistribute the burden of repayment across generations. We do not rely on the assumption that the equilibrium is dynamically inefficient, and our argument holds in a version of the OLG model where the real interest rate is always positive.


Figure 1 will be helpful if you know something about difference equations. It illustrates the dynamics of debt adjustment in a model with two generations where preferences are relatively standard. The picture shows a case where the interest rate is positive and where the government does not need to actively balance its budget. Unlike some examples where this happens, we are not relying on the idea that forward-looking agents select the only equilibrium that uniquely pins down the price level and prevents debt from exploding. In other words, we do not appeal to what the literature refers to as the Fiscal Theory of the Price Level (FTPL). We claim that this situation is not a crazy way to think about the world; it is a generic and common property of a large class of overlapping-generations models. That fact is an embarrassment for the FTPL, since it implies that, in monetary models whose dynamics are described by Figure 1, the FTPL is incapable of selecting a unique equilibrium. Stay tuned for two more papers coming soon on this topic, with more realistic preference and endowment structures.
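For readers who like to see the mechanics, here is a purely illustrative one-equation sketch of the kind of debt dynamics that a figure like Figure 1 depicts. It is not the model in the paper: the budget identity is standard, but writing the real interest rate as a function r(b) of the debt stock is a stand-in for the general-equilibrium response that the OLG model actually delivers.

```latex
% Stylized debt dynamics with a fixed primary surplus s (illustrative only):
\[
  b_{t+1} \;=\; f(b_t) \;\equiv\; \bigl(1 + r(b_t)\bigr)\, b_t \;-\; s .
\]
% A steady state b^{*} = f(b^{*}) is locally stable when
\[
  \bigl|\, f'(b^{*}) \,\bigr| \;=\; \bigl|\, 1 + r(b^{*}) + r'(b^{*})\, b^{*} \,\bigr| \;<\; 1 .
\]
% Because r depends on the debt stock, this condition can hold even when
% r(b^{*}) > 0, so debt paths that start near b^{*} converge without any
% active adjustment of the surplus.
```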

Let’s also be clear about what we are NOT saying. We do not claim that governments face no constraints: they do. In our model, the government runs, on average, a primary surplus as a fraction of potential GDP, just as it does in the real world. What we claim is that the government does not need to actively alter the fiscal surplus in response to booms or recessions.

Ergodicity

Last Thursday, Jean-Philippe Bouchaud and I held the first meeting of the Rebuilding Macroeconomics project’s Instability Hub. There were roughly twenty people, some attending in person and some joining through Zoom. As expected, we heard an eclectic mix of ideas, some better presented or better formed than others. The overarching theme that emerged from the meeting is that macroeconomics needs to deal with the issue of non-ergodicity.

Image from Jean-Philippe's Talk on Equilibrium

Ergodicity is a technical term used by statisticians to capture the idea that we can learn something about the future by looking at the past. The idea is essential to our use of probability models to forecast the future, and it is the failure of economic systems to display this property that makes our forecasts so fragile.
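A minimal numerical illustration of the distinction, using my own textbook examples rather than anything presented at the meeting: for a stationary AR(1) process the time average computed from any one long history agrees with the cross-sectional (ensemble) average, so the past is informative about the future; for a random walk the time averages of different histories scatter widely and settle on nothing.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 200, 10_000                     # number of histories, length of each history
shocks = rng.normal(size=(N, T))

# Ergodic example: stationary AR(1), x_t = 0.9 * x_{t-1} + e_t
ar1 = np.zeros((N, T))
for t in range(1, T):
    ar1[:, t] = 0.9 * ar1[:, t - 1] + shocks[:, t]

# Non-ergodic example: random walk, y_t = y_{t-1} + e_t
walk = shocks.cumsum(axis=1)

for name, x in [("stationary AR(1)", ar1), ("random walk", walk)]:
    time_avgs = x[:3].mean(axis=1)     # time averages of three separate histories
    ensemble_avg = x[:, -1].mean()     # average across all histories at the final date
    print(f"{name:16s}  time averages: {np.round(time_avgs, 2)}  "
          f"ensemble average at T: {ensemble_avg:.2f}")
```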

This idea emerged in several different ways at the meeting. First, the post-Keynesians point to the fact that the mainstream has not satisfactorily incorporated Keynes’ ideas on probability theory. Second, psychologists point out that the von Neumann-Morgenstern model of expected utility is a very bad characterization of human action in uncertain situations. Third, the agent-based modellers and the econophysicists are perplexed that anyone would imagine ergodicity to be a good characterization of the social world when it was abandoned in the physical sciences decades ago. So how do we make progress?

One possible avenue is to change our model of human behaviour: stop assuming that people are expected utility maximizers and assume instead that they act in ways that we will learn about from the behavioural psychologists. While it is certainly possible that an approach of that kind might be productive, I am skeptical. Even if one takes a reductionist approach to science, it is not true that our knowledge of the world can be reconstructed from the bottom up. At each level of aggregation, natural scientists have learned that they must use new theories to understand emergent properties that arise from the interactions of constituent parts. Just as chemistry is more than aggregate physics, so we should expect macroeconomics to be more than aggregate microeconomics.

Agent-based modellers have gone some way in this direction, but they have not gone far enough. ABMs are similar to the macro models we were constructing in the 1950s in the sense that the behaviours of the agents in these models are reflexive and crude. They are more sophisticated than 1950s aggregate macroeconomics in the sense that there are at least multiple agents, each with possibly different behaviours. But they are, at present, incapable of capturing the kinds of announcement effects that we know are characteristic of real-world data. In the real world, an announced future tax increase will bite immediately. In the current generation of ABMs it will not.

Is it possible to construct a macroeconomic theory that allows for more sophisticated individual behaviours but does not preserve the constraints of the representative-agent approach? I believe so, and the mainstream is already moving in that direction by incorporating heterogeneous agents into simple DSGE models. What the mainstream is missing is that the future behaviour of prices and quantities may not be governed by stationary probability laws, even if the fundamentals of the economy are governed by such laws.

Some have argued that the social world, like the weather, is obviously governed by chaotic processes: the so-called butterfly effect. What I have learned from my discussions with the applied mathematicians and physicists who attended our meeting is that the natural world is far less predictable than that. It is not just the variables themselves that evolve in non-linear, chaotic ways; it is the probabilities that govern this evolution.

So where do we go from here? First, there is a convergence already happening between DSGE modellers working with heterogeneous agents and the ABM community; these groups need to talk to each other more. Second, a theme that emerges in my own work is that heterogeneous-agent models are replete with multiple equilibria. In multiple-equilibrium environments, there is not much to be gained from a more sophisticated view of what we mean by ‘rational’ beliefs about the future: subjective beliefs are themselves fundamentals.

Finally, what I have learned from talking with smart people from many fields is that words mean different things to different groups. We all need to take some time to learn each other’s languages and to be a little more humble in the perception that our own tribe is the unique repository of all useful knowledge.


Confidence and Crashes


The Dow dropped 4.6% on Monday, February 5th. At 1,175 points, this was the biggest one-day point drop on record. The markets regained some ground on Tuesday and, as of this writing, we have simply given back the gains accumulated since the beginning of January. But we are not yet out of the woods. If the markets continue their precipitous decline, there is real cause for concern.

The vagaries of the market are caused by the animal spirits of market participants. They have little or nothing to do with the ability of the economy to efficiently produce value. Most market participants buy and sell stocks not because they see value in the underlying companies: They buy and sell stocks because they believe that future market participants will be willing to pay more or less for the same shares. There is, after all, a sucker born every day.

But although the market does not reflect social value, it does reflect economic value. My research has shown that the ups and downs of the stock market are followed by ups and downs in employment, and I have provided a theory to explain why. When we feel wealthy, we are wealthy. When we feel rich, we buy more goods and services, employment increases, and unemployment falls. There is a causal mechanism from market psychology to tangible economic outcomes.

Normally, the Fed and other central banks around the world would react to a market crash by lowering the interest rate. The cause for concern arises from the fact that they have no room to react to a market drop in the traditional manner, because interest rates in the US, the UK, Europe and Japan are at historically low levels. We may be approaching a crisis of the kind I warned of in my book, Prosperity for All. The solution, as I argue there, is for the Fed to put a floor (and a ceiling) on movements in the S&P by actively buying and selling the market.