Real business cycle theory and the high school Olympics

I have lost count of the number of times I have heard students and faculty repeat, in seminars, the idea that “all models are wrong”. This aphorism, attributed to George Box, is the battle cry of the Minnesota calibrator, a breed of macroeconomist inspired by Ed Prescott, one of the most important and influential economists of the last century.

All models are wrong... all models are wrong...

Of course all models are wrong. That is trivially true: it is the definition of a model. But the cry has been used for three decades to poke fun at attempts to use serious econometric methods to analyze time series data. Time series methods were inconvenient to the nascent Real Business Cycle Program that Ed pioneered because the models that he favored were, and still are, overwhelmingly rejected by the facts. That is inconvenient.

Ed’s response was pure genius. If the model and the data are in conflict, the data must be wrong. Time series econometrics, according to Ed, was crushing the acorn before it had time to grow into a tree. His response was not only to reformulate the theory, but also to reformulate the way in which that theory was to be judged. In a puff of calibrator’s smoke, the history of time series econometrics was relegated to the dustbin of history to take its place alongside alchemy, the ether, and the theory of phlogiston.

How did Ed achieve this remarkable feat of prestidigitation? First, he argued that we should focus on a small subset of the properties of the data. Since the work of Ragnar Frisch, economists have recognized that economic time series can be modeled as linear difference equations, hit by random shocks. These time series move together in different ways at different frequencies.

For example, consumption, investment and GDP are all growing over time. The low frequency movement in these series is called the trend. Ed argued that the trends in time series are a nuisance if we are interested in understanding business cycles and he proposed to remove them with a filter. Roughly speaking, he plotted a smooth curve through each individual series and subtracted that curve, leaving only the wiggles. Importantly, Ed’s approach removes a different trend from each series and the trends are discarded when evaluating the success of the theory.
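For readers who want to see the mechanics, here is a bare-bones sketch of that kind of filter in Python. It solves the standard Hodrick-Prescott first-order condition in pure Python; the series and the smoothing parameter below are my illustrative choices, not data from anyone's paper.

```python
import math

def hp_filter(y, lam=1600.0):
    """Return (trend, cycle): the trend tau solves (I + lam*K'K) tau = y,
    where K is the second-difference operator."""
    n = len(y)
    # Build A = I + lam * K'K using the three nonzeros of each row of K.
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n - 2):
        stencil = [(i, 1.0), (i + 1, -2.0), (i + 2, 1.0)]
        for j, a in stencil:
            for k, b in stencil:
                A[j][k] += lam * a * b
    # Solve A tau = y by Gaussian elimination with partial pivoting.
    b = list(y)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][k] * tau[k] for k in range(r + 1, n))
        tau[r] = s / A[r][r]
    return tau, [yi - ti for yi, ti in zip(y, tau)]

# A trending series with a cyclical wiggle: the filter hands the smooth
# growth to 'trend' and the sine wave to 'cycle'.
series = [0.5 * t + math.sin(t / 3.0) for t in range(40)]
trend, cycle = hp_filter(series)
```

The larger the smoothing parameter, the stiffer the trend and the more of the series is classified as wiggles; that single, discretionary parameter choice is doing a lot of silent work.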

After removing trends, Ed was left with the wiggles. He proposed that we should evaluate our economic theories of business cycles by how well they explain co-movements among the wiggles. When his theory failed to clear the 8ft bar of the Olympic high jump, he lowered the bar to 5ft and persuaded us all that leaping over this high school bar was a success.

Keynesians protested. But they did not protest loudly enough and ultimately it became common, even among serious econometricians, to filter their data with the eponymous Hodrick-Prescott filter.

Ed’s argument was that business cycles are all about the co-movements that occur among employment, GDP, consumption and investment at frequencies of 4 to 8 years. These movements describe deviations of a market economy from its natural rate of unemployment that, according to Ed, are caused by the substitution of labor effort by households between times of plenty and times of famine. A recession, according to this theory, is what Modigliani famously referred to as a ‘sudden attack of contagious laziness’.

The Keynesians disagreed. They argued that whatever causes a recession, low employment  persists because of ‘frictions’ that prevent wages and prices from adjusting to their correct levels. The Keynesian view was guided by Samuelson’s neoclassical synthesis which accepted the idea that business cycles are fluctuations around a unique classical steady state.

By accepting the neo-classical synthesis, Keynesian economists had agreed to play by real business cycle rules. They accepted that the economy is a self-stabilizing system that, left to itself, would gravitate back to the unique natural rate of unemployment. And for this reason, they filtered the data and set the bar at the high school level.

Keynesian economics is not about the wiggles. It is about permanent long-run shifts in the equilibrium unemployment rate caused by changes in the animal spirits of participants in the asset markets. By filtering the data, we remove the possibility of evaluating a model which predicts that shifts in aggregate demand cause permanent shifts in unemployment. We have given up the game before it starts by allowing the other side to shift the goal posts.

We don't have to play by Ed's rules. We can use the methods developed by Rob Engle and Clive Granger, as Giovanni Nicolò and I have done here. Once we allow aggregate demand to permanently influence the unemployment rate, the data do not look kindly on either real business cycle models or on the new-Keynesian approach. It's time to get serious about macroeconomic science and put back the Olympic bar.
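For the curious, the Engle-Granger idea can be sketched in a few lines. This is a toy version with simulated data (the series, the coefficient of 2, and the crude AR(1) stationarity check are my illustrative choices, not the test Giovanni and I actually ran): when two series share a common stochastic trend, regressing one on the other leaves residuals that are stationary rather than a random walk.

```python
import random

random.seed(0)
T = 2000

# x is a random walk: the common stochastic trend.
x = [0.0]
for _ in range(T - 1):
    x.append(x[-1] + random.gauss(0, 1))

# y is cointegrated with x: it shares x's trend plus stationary noise.
y = [2.0 * xi + random.gauss(0, 1) for xi in x]

# Step 1: OLS of y on x (with intercept), in closed form.
mx, my = sum(x) / T, sum(y) / T
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
       sum((xi - mx) ** 2 for xi in x)
alpha = my - beta * mx
resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]

# Step 2: the residuals should be stationary. A real application runs an
# ADF test; here a simple AR(1) coefficient stands in. A random walk would
# give a value near 1, stationary noise a value near 0.
rho = sum(resid[t] * resid[t - 1] for t in range(1, T)) / \
      sum(r ** 2 for r in resid)
```

The slope estimate lands very close to the true value of 2 and the residual AR(1) coefficient near 0, which is the signature of cointegration; if the residuals had instead behaved like a random walk, the levels regression would be spurious.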

John, Paul and Say's Law

I've followed, with a great deal of interest, the debate between John Cochrane and Paul Krugman. I have a lot in common with both of them.

I agree with Paul that, for the most part, the IS-LM model provides the right answer to policy questions. I agree with John, that we have learned a lot since 1955, when Paul Samuelson invented the Neo-classical synthesis.

But there were a couple of ideas in the General Theory that have been buried by MIT macro. The first, and most important, is that high unemployment is an equilibrium. Repeat after me. E-Q-U-I-L-I-B-R-I-U-M. The second is that animal spirits are an independent causal factor that determines which equilibrium the private economy will select.

Let me ask a simple question that you should feel free to answer. And do please also try to guess the PK and JC answers. (To answer this question, you will need to arm yourself with a knowledge of the textbook IS-LM model. A good introduction would be Greg Mankiw's textbook or, the book I learned from, the intermediate text by Dornbusch, Fischer and Startz.)
Figure 1

Start from a world with no inflation, and no expected inflation. Suppose that the IS curve intersects the LM curve at a position where the interest rate on T-Bills is 5% and unemployment is 4.4%. This is pretty much where we were in December of 2006, as depicted in Figure 1. (Yes, I know the zero expected inflation assumption is not quite right, but that's a bell and a whistle.)

If you measure Real GDP (equal to real income) by dividing nominal GDP by the money wage (as I do here and as Keynes advised in the GT) you may plot the stationary value of GDP in wage units on the horizontal axis. That will be at 95% of its maximum value. (Incidentally, this is not synonymous with GDP deflated by the price level, even in a one-good economy).

Now suppose that the IS curve shifts to the left as a consequence of a crash in house prices, caused by a loss of confidence that prices will continue to go up. Suppose that there is no corrective fiscal action and that the Fed lowers the interest rate as GDP collapses.

The textbook theory says that we will slowly track down the LM curve, and as people lose their jobs, demand will fall, supply will fall, and we will end up at a new lower level equilibrium. The key word here is equilibrium. This is the prediction of the Hicks-Hansen model before it was polluted by Samuelson’s neoclassical synthesis. That's pretty much what happened between December of 2006 and September of 2008, and that's what I show in Figure 2.
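To make the experiment concrete, here is a back-of-the-envelope linear IS-LM computation. Every coefficient is invented for illustration; the point is only that a leftward IS shift moves the economy down the LM curve to a new equilibrium with both lower output and a lower interest rate.

```python
# Linear IS-LM toy model. IS: r = a - b*Y.  LM: r = c + d*Y.
# All coefficients are made up to roughly match the numbers in the text.

def equilibrium(a, b, c, d):
    """Return (Y, r) where the IS and LM curves cross."""
    Y = (a - c) / (b + d)
    r = c + d * Y
    return Y, r

# Initial position, roughly the December 2006 picture: Y measured in
# wage units as a percent of its maximum, r the T-Bill rate in percent.
Y0, r0 = equilibrium(a=24.0, b=0.2, c=-14.0, d=0.2)   # Y0 = 95, r0 = 5

# The IS curve shifts left (the intercept a falls) after the house
# price crash; the Fed holds the LM curve fixed.
Y1, r1 = equilibrium(a=20.0, b=0.2, c=-14.0, d=0.2)
```

With these invented numbers the economy slides from (Y = 95, r = 5%) to a point with lower output and a lower interest rate, exactly the track down the LM curve that the textbook story describes.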

Figure 2:
Now suppose that the initial drop in asset values gets much, much worse as the crash in house values hits the balance sheets of financial institutions. Initially, those institutions were bailed out by the Treasury but after the collapse of Lehman Brothers, US financial institutions were left to fend for themselves. The situation, post Lehman Brothers, is depicted in Figure 3.
Figure 3:
Ok. Here’s the question.

If the Fed keeps the interest rate at zero, and IF ANIMAL SPIRITS REMAIN PESSIMISTIC: What will happen to GDP, the interest rate, inflation and real wages, once we have reached the new lower steady state (Y3 on Figure 3)? My answer is nothing. Repeat after me. N-O-T-H-I-N-G.

Long before it became fashionable, I made the distinction between old Keynesian and new Keynesian economics. Using my definition, old Keynesians would assert that there are many steady state unemployment rates. In contrast, new Keynesians view high unemployment as a disequilibrium caused by sticky prices. They agree with John that there is a unique natural rate of unemployment, determined by supply side factors, and that the private economy is gravitating towards that rate. They disagree on the speed of adjustment and on the role of government in achieving that adjustment.

It ain't so. There is no natural rate of unemployment in the sense that Friedman used that term. But by accepting some version of the Neo-classical synthesis, both John and Paul are agreeing that Say’s Law holds in the long run. Supply creates its own demand. By accepting Samuelson’s interpretation of the GT, Paul is playing in John’s backyard.

If we don't accept the MIT worldview, how do we reconcile Keynesian economics with Walras? My answer, explained here, is that multiple equilibria arise as a result of missing factor markets. I explain WHY there can be multiple equilibria where there is no incentive for firms to change wages and prices. High unemployment, in the absence of a recovery in animal spirits, is an equilibrium in the sense in which physicists use this term. This is not rocket science. But you do have to read my work, rather than assume you know what it says, in order to get this point.

Risk and Return in the Bond Markets

This is the second post to advertise the work of a UCLA graduate student who is looking for a job this year. My first post introduced Sangyup Choi who is working on uncertainty shocks in emerging markets. This post introduces Chan Mang who is working on the implications of term structure models for the foreign exchange market.

Chan Mang
Chan Mang graduated from UCLA two years ago. In 2012 he was awarded a post-doc position at the prestigious National University of Singapore and last year he worked in the private sector. Chan's research builds on the widely cited bond pricing model developed by John Cochrane and Monika Piazzesi.
Finance economists seek to explain the term structure of bond prices. Why do long bonds typically earn a higher yield than short bonds, and how do the yields for bonds of different maturities move over time? A graph of these yields as a function of maturity is called the yield curve.

Figure 1 is a graph of the yield curve, taken from the Treasury website, for December 14th, 2014.

Figure 1:
The graph shows that one month treasuries are currently paying an interest rate of zero. However, longer maturity treasuries have higher yields that increase monotonically with maturity, and thirty year bonds are currently paying 3%. This pattern is not time invariant and there have been periods when the yield curve has been flat, or even downward sloping, over some regions.

Figure 2 compares the yield curve from December 2014, with that from February 2006. Back in 2006 the yield on six month treasuries was over 4.6%, but the yield on five year securities was lower at 4.5%. When long yields are lower than short yields, the yield curve is said to be inverted, and historically, an inverted yield curve has been the harbinger of a recession.
Figure 2
Back to the main story. Finance theorists explain the yield curve with what they call 'factor models'. They look at the evolution of yield curves over time and they seek common components that help to explain how all of the yields move over time. Cochrane and Piazzesi developed the state of the art factor model to understand these phenomena.
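Here is a stripped-down illustration of the factor idea (not the Cochrane-Piazzesi model itself, and with simulated rather than Treasury data): if yields at all maturities load on a common 'level' factor, then even something as crude as the cross-sectional average of yields recovers that factor almost perfectly.

```python
import random

random.seed(1)
T, maturities = 500, [1, 2, 5, 10, 30]

# Simulate a persistent common 'level' factor.
level = [0.0]
for _ in range(T - 1):
    level.append(0.95 * level[-1] + random.gauss(0, 0.2))

# Yield at maturity m: a maturity-specific intercept, a common loading
# on the level factor, and a little idiosyncratic noise (all invented).
yields = {m: [0.02 + 0.001 * m + 0.01 * level[t] + random.gauss(0, 0.0005)
              for t in range(T)] for m in maturities}

# Candidate factor estimate: the cross-sectional average of yields.
avg = [sum(yields[m][t] for m in maturities) / len(maturities)
       for t in range(T)]

def corr(u, v):
    """Pearson correlation of two equal-length series."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var_u = sum((a - mu) ** 2 for a in u)
    var_v = sum((b - mv) ** 2 for b in v)
    return cov / (var_u * var_v) ** 0.5

rho = corr(avg, level)   # the average tracks the true factor closely
```

Real factor models extract several components (level, slope, curvature) rather than a single average, but the logic is the same: a handful of common factors account for almost all of the movement in the whole curve.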

Enter Chan Mang. In Chan's words

In my work, I show that the affine term structure model of Cochrane and Piazzesi (2008) has inconsistent predictions when I compare different financial markets. ... [because] the additional information in the term structure ... generates an implausible amount of predictability in exchange rates and currency excess returns. 
[I find that] ... it is either the case that bond excess returns or currency returns are predictable, but not both at the same time.
To understand these features of the data, Chan is developing a theoretical model that connects the Cochrane-Piazzesi explanation with what monetary policy makers think they are doing.

Repeat After Me: The Quantity of Labor Demanded is Not Always Equal to the Quantity Supplied

I've been teaching a class on intermediate macroeconomics this quarter. Increasingly, over the past twenty years or more, intermediate macro classes at UCLA (and at many other top schools) have focused almost exclusively on economic growth. That reflected a bias in the profession, initiated by Finn Kydland and Ed Prescott, who persuaded macroeconomists to use the Ramsey growth model as a paradigm for business cycle theory. According to this Real Business Cycle view of the world, we should think about consumption, investment and employment 'as if' they were the optimal choices of a single representative agent with superhuman perception of the probabilities of future events.

Although there were benefits to thinking more rigorously about inter-temporal choice, the RBC program as a whole led several generations of the brightest minds in the profession to stop thinking about the problem of economic fluctuations and to focus instead on economic growth. Kydland and Prescott assumed that labor is a commodity like any other and that any worker can quickly find a job at the market wage. In my view, the introduction of the shared belief that the labor market clears in every period, was a huge misstep for the science of macroeconomics that will take a long time to correct.

In my intermediate macroeconomics class, I am teaching business cycle theory from the perspective of Keynesian macroeconomics but I am grounding old Keynesian concepts in the theory of labor market search, based on my recent books (2010a, 2010b) and articles (2011, 2012, 2013a, 2013b).  I am going to use this blog to explain some insights that undergraduates can easily absorb that are adapted from my understanding of Keynes' General Theory. Today's post is about measuring employment.  In later posts, I will take up the challenge of constructing a theory to explain unemployment.

Ever since Robert Lucas introduced the idea of continuous labor market clearing, the idea that it may be useful to talk of something called 'involuntary unemployment' has been scoffed at by the academic chattering classes. It's time to fight back. The concept of 'involuntary unemployment' does not describe a loose notion that characterizes the sloppy work of heterodox economists from the dark side. It is a useful category that describes a group of workers who have difficulty finding jobs at existing market prices. 



The idea that the labor market is well described by a model in which a market wage adjusts to equate the quantity of labor demanded with the quantity supplied bears little resemblance to anything we see in the real world. What makes me so confident of that claim?
Figure 1: Average Weekly Hours and the Unemployment Rate
 (c) Roger E. A. Farmer

Employment varies over time for three reasons. First, the average number of hours worked fluctuates. Second, people enter and leave the labor force. And third, those people who are in the labor force flow into and out of unemployment. Figure 1 (taken from my 2013 Bank of England article) plots data from 1964 through 2012 on average weekly hours and the unemployment rate. The blue line, measured on the right scale, is average weekly hours. The pink line, on the left scale, is the unemployment rate. The grey shaded areas are NBER recessions.

The facts are clear. Although hours do fall during recessions, the movements in hours are swamped by movements in the unemployment rate. Consider, for example, the 2008 recession. Average weekly hours fell from 34 to 33. The unemployment rate, in contrast, increased from 4% to 10%.  

The main story in the data on average weekly hours is that they declined from 39 hours per week in 1964 to 34 hours per week in 2012. As American workers got richer they collectively chose to take a larger share of their wages in the form of leisure.  These movements are important if our goal is to understand long term trends: they do not tell us much about recessions.

What about the participation rate? Recently, there has been a great deal of angst amongst policy makers who are asking if the fall in the participation rate that occurred during the 2008 recession was cyclical or structural. Figure 2 sheds some light on that question. The graph demonstrates that there is no clear tendency for participation rates to drop in recessions. For example, participation was higher at the end of the 1973 recession than at the beginning, and in a number of other post-war recessions it has remained flat. As with average weekly hours, this figure shows that movements in hours during recessions are almost entirely caused by movements in the unemployment rate.
Figure 2: Participation and the Unemployment Rate
(c) Roger E. A. Farmer
So what does cause the participation rate to vary over time? I look at Figure 2 and I see a parabola. Participation went up from 1960 to 2000 as women entered the labor force. It started to fall again in 2000 as the baby boomer generation began to retire. These secular trends swamp business cycle movements in the participation rate and they are largely explained by sociology and by demographics.

What do I take away from these data? There are three reasons why employment fluctuates over time. People vary the average number of hours worked per week. Households send more or fewer members to look for a job. And those people looking for jobs find it more or less difficult to find one. The first two reasons for fluctuating employment could perhaps be modeled as the smooth functioning of a market in which the demand and supply of labor respond to changes in market prices. I cannot see any simple way to model unemployment fluctuations as the operation of a competitive market for labor in the usual sense in which economists use that term.

Repeat after me: the quantity of labor demanded is not always equal to the quantity supplied.

The Impact of Financial Market Volatility on Emerging Market Economies

Early in the New Year, economists from all over the world will congregate in Boston for the 2015 annual meetings of the American Economic Association. The main purpose of these meetings is to interview new Ph.D. candidates for potential jobs as academics and in the public and private sectors as research and/or policy economists.

Sangyup Choi
As an academic economist at UCLA, my job includes teaching undergraduates, carrying out economic research for publication in books and journals and (my favorite part) training new Ph.D. economists. Teaching graduate students is a rewarding experience for an academic as we get to watch our students progress from undergraduates to colleagues. What begins as a teaching experience in year 1 ends up as a learning experience in year 5.

Today's blog features my student, Sangyup (Sam) Choi, who is working on  the impact of financial market volatility on emerging market economies.  My colleague Aaron Tornell and I are Sam's principal advisors.

Sam is studying the VIX and its impact on economic activity. This has been a hot topic amongst macroeconomists ever since Nick Bloom showed, in a paper published in Econometrica, that shocks to uncertainty are a causal factor in U.S. recessions. What, you ask, is the VIX?


The VIX is an index of volatility that goes up when traders are less certain about the future. In his Econometrica paper, Nick showed that shocks to the VIX are an independent causal factor that helps to predict future U.S. output. Here is a graph of the VIX for the period 2000 to 2014.
Figure 1: The VIX from 2000 to 2014
In a paper published last year in Economics Letters, Sam showed that Nick’s results are sensitive to the period of study. The VIX does predict future output in data from 1950 through 1982, but that result goes away after 1983. In the largest recession in post-war history, the VIX jumped by a factor of four (see Figure 1), yet it did not have a significant independent impact on the U.S. economy once other explanatory variables have been accounted for. That in itself is surprising. But it gets better.
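The sample-split exercise behind that kind of finding is conceptually simple: run the same predictive regression on two subsamples and compare the slopes. Here is a schematic version with simulated data (the numbers are mine, chosen so that the relationship holds only in the first subsample; they are not Sam's estimates).

```python
import random

random.seed(2)

def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# A volatility series, and output growth that responds to last period's
# volatility only in the first half of the sample (purely illustrative).
vol = [abs(random.gauss(0, 1)) for _ in range(400)]
growth = [(-0.5 * vol[t - 1] if t < 200 else 0.0) + random.gauss(0, 0.3)
          for t in range(1, 400)]

# Regress growth[t] on vol[t-1] in each subsample.
b_early = slope(vol[0:199], growth[0:199])     # near the true -0.5
b_late = slope(vol[200:399], growth[200:399])  # near zero
```

The early-sample slope is strongly negative while the late-sample slope is indistinguishable from zero, which is exactly the pattern of predictability appearing in one period of study and vanishing in another.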


In his most recent work, Sam has looked at the impact of the VIX on emerging market economies. He finds that although shocks to the VIX do not have much impact on US output, they do have a noticeable impact on the output of emerging market economies. Figure 2 presents the evidence for that claim.
Figure 2: The Impact of a VIX shock on the U.S. & Korea (c) Sangyup Choi
In Sam’s own words:
My job market paper, entitled “The real impact of VIX shocks on emerging market economies: flight to quality mechanism,” starts from an observation that fluctuations in the VIX have had a much larger impact on emerging market economies than they have had on the US economy for the last two decades.  This finding is puzzling as the VIX measures U.S. stock market volatility. 
To understand why an increase in the VIX has a much larger impact on output fluctuations in emerging market economies than on the U.S. economy, I build a small open economy model with credit market imperfections. The model incorporates a portfolio decision by international investors and an increase in the VIX makes these investors withdraw their funds from emerging markets. 
In Sam’s theory, VIX shocks, in a world of integrated capital markets, cause investors to pull their money from emerging markets. Because emerging market economies have poorly developed credit markets, the sudden outflow of cash causes domestic firms to cut back on production and lay off workers.

Back to Sam…
[In my dissertation] … I build a theoretical model that helps understand my empirical findings.

… I confirm the prediction of my model by estimating structural Vector Autoregressions using data from 18 emerging market economies between 1994 and 2013. … I find that VIX shocks are followed by a statistically significant increase in the real interest rate, a fall in domestic credit, and a real currency depreciation. In contrast, in the U.S. economy VIX shocks are followed by a (statistically insignificant) fall in the real interest rate, an increase in domestic credit, and a real currency appreciation.

…the new empirical findings from my two papers expand our understanding of the importance of uncertainty shocks. When combined with credit market imperfections, the VIX serves as a real-time indicator of risk to emerging market economies.
Here is a link to the online appendix to Sam's paper, which has a dazzling array of evidence to support his claim.

Sam is a terrific economist and will make a great colleague. Hire him! You won't be disappointed.