Macroeconomics: Religion or Science?

Writing in 1999 in a widely cited paper, “The Science of Monetary Policy”, three leading economists, Richard Clarida, Jordi Galí and Mark Gertler (CGG), made the case that monetary policy is a science. Although there is some truth to that claim, CGG could equally well have titled their paper “Macroeconomics: Religion or Science?”

Science and religion are strange bedfellows. Science dates from the Enlightenment. Religion has been around since the dawn of history. Science is supported by rationalism. Religion is supported by dogma. Science is challenged by experiment. Religion is codified by scholars and protected by a priesthood. Macroeconomics has aspects of both.

Macroeconomists build theories codified by systems of equations. We use those equations to explain patterns in economic data. Unlike researchers in experimental sciences such as chemistry and physics, macroeconomists cannot easily experiment. That does not mean that we cannot challenge existing theories, but it makes the task much harder. Like astronomers waiting for the next supernova to explode, macroeconomists must wait for big recessions or large bouts of stagflation to help us sort one theory from another.

The inability to experiment is more serious than most macroeconomists realise. When CGG wrote their paper on monetary policy, they put forward a New Keynesian (NK) theory codified by three equations that they used to explain GDP, the interest rate and inflation. The NK equations are widely used today by policy makers in every major central bank to help guide policy. What if those equations are wrong?
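For readers who have not seen them, here is a stylized textbook version of the three equations; the notation is mine and the details differ across papers, but the structure is the one CGG use:

$$ x_t = \mathbb{E}_t x_{t+1} - \sigma\,(i_t - \mathbb{E}_t \pi_{t+1}) \qquad \text{(IS curve)} $$

$$ \pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t \qquad \text{(NK Phillips curve)} $$

$$ i_t = \phi_\pi\, \pi_t + \phi_x\, x_t \qquad \text{(policy rule)} $$

Here $x_t$ is the output gap, $\pi_t$ is inflation, $i_t$ is the nominal interest rate, and $\sigma$, $\beta$, $\kappa$, $\phi_\pi$ and $\phi_x$ are parameters.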

Economists select one theory over another using a statistical procedure called maximum likelihood. We say that theory A is better than theory B if the data we observe have a higher probability of being generated by A than by B. In research with my co-author Andreas Beyer of the European Central Bank (Beyer and Farmer 2008), we showed how to produce theories that cannot be distinguished in this way. If you come up with theory A to explain data set X, our procedure will produce another, theory B, that assigns exactly the same probability to the observed data as theory A.
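To fix ideas, here is a minimal sketch of likelihood-based model selection in Python. The two candidate models and all parameter values are hypothetical illustrations; this is emphatically not the Beyer-Farmer construction, whose point is that a suitably built theory B would produce exactly the same likelihood as theory A, so the comparison below would end in a tie.

```python
import numpy as np
from scipy.stats import norm

# Simulate data from a hypothetical "theory A": an AR(1) process.
rng = np.random.default_rng(0)
x = np.empty(200)
x[0] = 0.0
for t in range(1, 200):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=1.0)

def loglik_ar1(x, rho, sigma):
    """Conditional log-likelihood of the data under an AR(1) model."""
    resid = x[1:] - rho * x[:-1]
    return norm.logpdf(resid, scale=sigma).sum()

def loglik_iid(x, sigma):
    """Log-likelihood under an i.i.d. Gaussian model ("theory B")."""
    return norm.logpdf(x[1:], scale=sigma).sum()

# Maximum likelihood selects the theory that assigns the observed
# data the higher probability.
print("log L under theory A:", loglik_ar1(x, 0.9, 1.0))
print("log L under theory B:", loglik_iid(x, x.std()))
```

In this toy example theory A wins, because the data really were generated by an AR(1). The Beyer-Farmer result says the criterion can be made powerless: for any theory A there is a theory B that fits the data equally well.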

It gets worse. We provide an example of this problem in which theory A and theory B deliver contradictory policy conclusions. The only way to tell them apart would be for a policy maker to experiment by changing the way they react to economic data. The Bank of England could, for example, raise the Bank Rate while, at the same time, the Federal Open Market Committee lowers the US Federal Funds Rate.

Macroeconomists can explain past data relatively well. But we are not very good at explaining new events and our theories are always evolving. In that sense, economics is a science. The way that our models are allowed to evolve is controlled by a group of high-priests who maintain doctrinal purity. In that sense, economics is a religion. The religious aspect is important during normal times, when we have not recently experienced a big event. At other times, after we observe an economic supernova, the grip of the high-priests becomes counterproductive and it is a fertile time to explore ideas that the priesthood considers heretical. Now is one of those times.

Why the MPC will and should raise interest rates

Simon Wren-Lewis has a very nice post on why the MPC should not raise interest rates on Thursday, and there is much in what he says that I agree with. But the Bank has been signaling a rate rise for some time now and, if it fails to deliver on Thursday, the credibility of the MPC will be greatly diminished.

Simon argues from a conventional New Keynesian macroeconomic framework in which labour market tightness triggers wage inflation through a Phillips curve. That framework, as I argued here, is discredited.

Here is what I said in August of 2016 as the Fed was about to embark on a rate tightening cycle. I have substituted 'MPC' for 'Fed' in places. The reference to Friedman's optimum quantity of money can be found here and the link to Prosperity for All (now published) is here.

Conventional New-Keynesian macroeconomists assert that, to increase the inflation rate, the [MPC] must first lower the interest rate. A lower interest rate, so the argument goes, increases aggregate demand and raises wages and prices. As economic activity picks up, the [MPC] can raise the interest rate without fear of generating a recession. Some economists advocate that the Fed should raise the interest rate to meet the inflation target, a position that, for reasons that escape me, has been labelled neo-Fisherianism on the blogosphere.... My body of work, written over the past several years (see my book Prosperity for All), explains how to raise the interest rate without simultaneously triggering a recession and, I suppose, that makes me a ‘neo-Fisherian’.

... the [MPC] should raise the interest rate on reserves and the [repo rate on overnight loans] simultaneously, thereby keeping the opportunity cost of holding money at zero and enacting Milton Friedman’s prescription for the optimal quantity of money. In addition, the [MPC] should be given the authority to buy and sell an exchange traded fund (ETF) over a broad stock portfolio with the goal of achieving an unemployment target. This is an argument I have been making for some time, but it is becoming more relevant as it becomes apparent that the world does not work in the way the New Keynesians claim.

The argument I made in August of 2016 applies equally to the MPC decision this coming Thursday. Raising the Bank Rate in an environment where the Bank pays interest on reserves is not the same as raising the Bank Rate in an environment where the interest rate on reserves is zero. The opportunity cost of credit is the difference between these two rates and, when they are equal, holding assets in the repo market is pretty much equivalent to parking reserves with the Bank. Raising both rates simultaneously will have little or no effect on the cost of credit.
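A back-of-the-envelope illustration, with hypothetical numbers:

$$ \text{cost of credit} \;\approx\; i_{\text{repo}} - i_{\text{reserves}}. $$

If both rates are 0.25%, the spread is zero; raise both to 0.50% and the spread is still zero, leaving the cost of credit unchanged. Only raising the repo rate while holding the reserve rate fixed widens the spread.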

If reserves and repos were the only assets, that would be the end of the story. But it doesn't end there, since over half of outstanding regulated mortgages are currently on fixed rates. Banks and building societies have been lending long and borrowing short, and that business will be squeezed as rates increase. There will be some impact on aggregate demand, albeit a much smaller one than if the repo rate were raised and the reserve rate left unchanged. But the effect of a rate rise on demand can potentially be offset by using the Bank's considerable off-balance-sheet asset holdings to support asset prices in the event of a crash.

So should the MPC raise rates? I believe so. Indeed, if the MPC wants to hit the inflation target, it will have to raise rates eventually. The only question is whether a rate rise on Thursday is in some sense premature. In my view it is not. The Bank has been signaling a rate rise for months and the markets expect one to occur. That is why the MPC will, and should, raise interest rates. Failure to act now would damage future Bank communication and prolong the current period of stagnation.

How to Fix the Curse of the Five

I recently came across this video link to a session held at the 2017 ASSA meetings on the ‘Curse of the Top Five’. The session was organised by Jim Heckman and featured a panel discussion with Heckman, George Akerlof, Angus Deaton, Drew Fudenberg and Lars Hansen. I’m going to concentrate here on the presentations by Heckman and Akerlof.

Heckman made several points in a talk informed by a series of fascinating slides that you can find linked here. He pointed out that, although many top economists publish important, highly cited papers outside the top five journals, publication in the top five has become increasingly important in promotion and tenure decisions, a point I also made here.

Why is that a bad thing? One of the most insidious aspects of the curse of the five is that it concentrates power in the hands of a small group of insiders, which makes it much harder for new ideas to emerge. Figure 11, taken from Heckman’s talk, shows a density plot of the number of years served by editors at four of the top five journals. The QJE, a journal dominated by Harvard, is an outlier with slow turnover in editorial control. But the influence of the other top journals is also pervasive, and entry to the club depends on success as judged by its established members.

A friend of mine, a senior academic at a top business school, related the following story, which encapsulates much that is wrong with the current system. A junior colleague, coming up for tenure, was waiting for a decision from the AER. In a departmental discussion, the point was made that hir tenure decision would be contingent on whether the paper was accepted there. As my friend remarked: why would we delegate our tenure decision to the editor of the AER?

George Akerlof made five recommendations, all of which I agree with:

1. Editors should take more responsibility for decisions by overruling referees more often.
2. We should revert to a situation where referees are advisors, rather than the current situation where they often get to rewrite the paper.
3. We should work to diminish the role of top-five publications in tenure decisions.
4. We should ‘shame’ deans who act as top-five bean counters.
5. We must broaden the scope of areas deemed intellectually acceptable for admission as a tenured member of our tribe.

I have two recommendations of my own for possible ways to fix the curse of the five.

First, those of us with influence on granting agencies should recommend that more than five journals be given equal weight when ranking research. In the UK, the research output of academic departments is assessed on a regular basis, and referees are given guidelines that encourage them to give more weight to articles published in the top five journals. That guidance should be changed: referees should instead be advised to give equal weight to a broader base of fifteen or twenty journals, selected, for example, by RePEc rankings.

Second, when junior faculty come up for promotion, they should be judged on their best three articles, where the three articles are self-selected and, in some cases, might be replaced by a book. The current system gives junior scholars an incentive to publish large numbers of derivative works, many of which contribute little or nothing to the social good.

When I first moved to UCLA in the late 1980s, the senior faculty would read the work of our junior colleagues and make tenure decisions based on the content of their research papers. Slowly, over the years, it became more common to rely on the decisions of others by placing weight on where papers were published as opposed to their content.

I am encouraged by the positive message that arose from the ASSA panel. As the profession grows and journal space becomes more valuable, it is time to broaden the set of journals we judge to be the gatekeepers of knowledge. We should trust our own judgement and carefully read the work of our colleagues. That, I believe, is the right way to fix the curse of the five.

Reflections on My Interview with Cloud Yip: Part 2

Cloud Yip is running a series of interviews under the title of “Where is the General Theory of the 21st Century” and I was privileged to be included in that series. Last week I put up my first post about the interview. This week’s post is the second in a series where I expand on my answers to Cloud. Here, I discuss my views on rational expectations and I talk about a new version of search theory, Keynesian Search Theory, that underpins my joint papers with Giovanni Nicolò on “Keynesian Economics without the Phillips Curve” and with Konstantin Platonov, “Animal Spirits in a Monetary Model”. The paper with Konstantin uses Keynesian Search Theory to provide an updated version of the IS-LM model which we call the IS-LM-NAC model. The paper with Giovanni estimates a version of this model on U.S. data and demonstrates that it provides a better way of explaining data than the failed Phillips curve. 

For at least seven years I have been arguing, in my books, academic articles and op-eds, that the Phillips curve is broken and that there is a better alternative, which I call the belief function. I presented this work at a conference in New York in honour of Edmund Phelps, where the paper was discussed by Olivier Blanchard. I’m pleased to see that the importance of this topic is now being widely recognised and that my Phillips curve scepticism has become mainstream.

Here is what I said on the topic in a previous blog post...

Policy makers at central banks have been puzzled by the fact that inflation is weak even though the unemployment rate is low and the economy is operating at or close to capacity. Their puzzlement arises from the fact that they are looking at data through the lens of the New Keynesian (NK) model in which the connection between the unemployment rate and the inflation rate is driven by the Phillips curve…
…The research programme we are engaged in should be of interest to policy makers in central banks and treasuries throughout the world who are increasingly realising that the Phillips curve is broken. In Keynesian Economics Without the Phillips Curve, we have shown how to replace the Phillips curve with the belief function, an alternative theory of the connection between unemployment and inflation that better explains the facts. 

That leads me to the main focus of today’s post: What’s wrong with rational expectations and how is that connected with my replacement for the Phillips Curve? Over to Cloud…

Q: What is your view on the role of the rational expectations approach in macroeconomics?

“F: The classical reformulation of macroeconomics developed by Lucas and Prescott required a radical rethinking of expectations. In the Keynesian models of the 1950s, expectations were determined by a separate equation called adaptive expectations. In those models, beliefs about future prices might differ from the prices that were eventually realized. Because of that, the models needed another equation to explain how beliefs, or expectations, were determined.
Lucas, writing in 1972, removed the adaptive expectations equation and argued that beliefs are not independent; they are endogenous and must be explained within the model. Because the world is random, prices aren’t always equal to what people expect them to be, and Lucas introduced the idea of rational expectations into macroeconomics. Instead of adding an equation, adaptive expectations, to determine beliefs, he closed his model by requiring that beliefs be right on average. He argued that people cannot be fooled in the long run and that we can model beliefs, or expectations, as probability distributions that coincide with the distributions of the actual realizations.
That all sounds very sensible, but it only makes sense in models where there is a unique equilibrium. Even in the model that Lucas wrote down in 1972, there were multiple equilibria. For me, the existence of multiple equilibria is not a problem. It is an opportunity.” 
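In symbols, using standard textbook notation rather than Lucas's own: adaptive expectations adds a separate updating equation for the expected price $p^e$,

$$ p^e_{t+1} = p^e_t + \lambda\,(p_t - p^e_t), \qquad 0 < \lambda \le 1, $$

whereas rational expectations closes the model by setting

$$ p^e_{t+1} = \mathbb{E}\left[\,p_{t+1} \mid \Omega_t\,\right], $$

so that forecast errors $p_{t+1} - p^e_{t+1}$ average zero and cannot be predicted from the information set $\Omega_t$.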

I discussed the role of rational expectations in a world of animal spirits in a 2014 blog post linked here. When I describe multiplicity as an opportunity, I mean that it opens the possibility of marrying psychology with economics in a new and interesting way. If economic models have multiple possible equilibria, we can model how stories are transmitted through social networks to explain which equilibrium occurs in practice. Economists are good at building models of the macro economy. Psychologists are good at understanding the spread of beliefs. There are clear gains from collaborative research, which was the topic of the conference I helped organize at the Bank of England in July of 2017.

I have been working on models of multiple equilibria since the early 1980s, but my early work on this topic dealt with models in which there is a unique steady state and the economy is self-stabilizing. In my survey paper on Endogenous Business Cycles, I described these as first-generation models of endogenous fluctuations and contrasted them with second-generation models in which there is a continuum of steady-state equilibria. To explain why there may be many steady-state equilibria, I developed a version of search theory that I call Keynesian search theory. That is the topic Cloud asked me about next. Back to Cloud…

Q: What is the "Keynesian search model" that you are advocating in your book “Prosperity for All”? How is it different from the mainstream search model that you refer to as classical search theory?

“The Keynesian search model is a variant of what I call classical search models. By classical, I mean the work that evolved from Peter Diamond, Dale Mortensen and Chris Pissarides. In the classical search model, there is a unique equilibrium in the labour market pinned down by the bargaining power of workers relative to firms. In the Keynesian search model, there is a continuum of equilibria and the equilibrium that occurs is selected by aggregate demand, just as in the Keynesian models of the 1950s.
The Keynesian search model maintains Keynes's idea, which I think is important, that beliefs are fundamental. Animal spirits, confidence and self-fulfilling beliefs can influence outcomes. In every single equilibrium of the Keynesian search model, there is no incentive for either firms or workers to change their behaviour. The reason has nothing to do with sticky prices; it has to do with the fact that factor markets are incomplete.
The search model has a search technology, separate from the production technology, that moves people from home to jobs. That technology has two inputs: the searching time of workers and the searching time of the recruiting department of a firm. Because there are two inputs, for the market to function well there must be two prices: one for the searching time of workers and another for the searching time of recruiters.
You could imagine a recruiting firm that offers to purchase, from the worker, the right to find him or her a job and, from the company, the right to fill its vacancy. This market would operate a little like a dating website: the firm would take the two searching parties, match them, and sell the match back to the worker-firm pair.
We do not see the market working in that way, largely because of moral hazard. If I am unemployed and you are paying me to be unemployed, I do not see why I would ever accept a job. As a consequence of the failure of that market, search externalities can support equilibria with any level of unemployment.
My Keynesian search model solves the problem of understanding Keynes's General Theory in a way that is different from the sticky price approach that Samuelson initiated and that continues to be perpetuated by New Keynesian economists today.”
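To make the argument concrete, here is a sketch using the standard Cobb-Douglas matching function from the Diamond-Mortensen-Pissarides literature (the functional form is illustrative):

$$ m_t = B\, u_t^{\alpha}\, v_t^{1-\alpha}, \qquad 0 < \alpha < 1, $$

where $m_t$ is the number of matches formed, $u_t$ is the searching time of unemployed workers and $v_t$ is the recruiting time of firms. Two inputs call for two prices, but only one, the wage, exists, so one equation is missing. Classical search theory fills the gap with a bargaining-power parameter; Keynesian search theory leaves it open and lets aggregate demand select among the continuum of equilibria.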

Next week, I will talk about why economists should stop pretending that unemployment is voluntary. It’s time to reintroduce the term, ‘involuntary unemployment’. Stay tuned!

My Interview with Cloud Yip: Part 1

A couple of months ago, I had the pleasure of speaking with Cloud Yip. Cloud is running a series of interviews under the title of “Where is the General Theory of the 21st Century” and I was privileged to be included in that series. The interview was published in its entirety a couple of weeks ago but, because it is quite long, I will be serialising it on my blog over the next few weeks.

In his series, Cloud asks prominent macroeconomists: “Why haven’t economists come up with a new General Theory after the Great Recession?” Those of you who have been following my blog will not be surprised by my answer. The theory of macroeconomics described in my book Prosperity for All makes fundamental changes to the dominant paradigm, and it leads to fundamentally different policy conclusions from either the classical or the New Keynesian alternatives.

Q: Do you think that there have been "revolutionary" changes in macroeconomics since the Great Recession?

F: Yes and no. In my own work, I have made some major changes to macroeconomics. I will leave it to others to decide if they are revolutionary. But in my view, most macroeconomists are carrying on with business as usual. And that is discouraging because macroeconomics needs to change.
The dominant paradigm before the Great Recession was New Keynesian economics. That paradigm is widely perceived to have failed in two key dimensions: it didn’t include a financial sector, and it had no role for unemployment. New Keynesian economists have tried to fix the NK model by adding these features, and there have been some notable contributions. But for the most part, attempts to fix the NK model are akin to rearranging the deckchairs on the Titanic.
Economics is not an experimental science. As a consequence, people frequently pursue avenues of research that are simply mistaken.
In my view, economics took a wrong path in the 1950s. Back in 1927, Pigou published a book called "Industrial Fluctuations". It is a very rich verbal theory of the causes of business cycles. According to Pigou, there are six distinct causes: what we would now call productivity shocks; monetary disturbances; sunspot shocks, that is, shocks to business confidence; agricultural disturbances; changes in tastes; and news shocks.
Then in 1929, there was the stock market crash, and in 1936, Keynes wrote the General Theory. The General Theory was a revolutionary change in the way we think about the world. It was revolutionary because, instead of thinking of the economic system in a capitalist economy as self-stabilizing, Keynes's vision was of a dysfunctional world in which high unemployment can persist for a very long time.
A few years ago, I wrote a book called "How the Economy Works". In it I described two metaphors. The first, due to Ragnar Frisch and in the spirit of Pigou's book, is that the economy is like a rocking horse hit repeatedly and randomly by a kid with a club. The movement of the rocking horse is caused partly by the blows of the club and partly by the internal dynamics of the rocker. We have modelled this system for decades using linear stochastic difference equations.
In my book, I provide a different metaphor to capture Keynes' insight that the economy can get stuck in an equilibrium with high unemployment. I call that metaphor the "windy-boat model". The economy is not like a rocking horse; it is like a sailboat on the ocean with a broken rudder. When the wind blows the boat, instead of always returning to the same point, the boat can become stranded a long way from a safe harbour. 
In the language of equilibrium theory, Frisch's analogy leads to a model with a unique steady-state equilibrium: the rocking horse always comes to rest at the same point. In the windy-boat model, which is, I think, the essence of the General Theory, the economy can get stuck with high unemployment for an extended period.
In the immediate aftermath of the Great Depression in the 1940s and 1950s, the economic model we were using was based on ideas from the General Theory. Then in 1955, Samuelson wrote the third edition of his introductory textbook, in which he introduced the concept of the neo-classical synthesis.
In Samuelson's view, a view that has dominated the discipline since 1955, the economy is classical in the long run but Keynesian in the short run. Samuelson defined the short-run as the period over which prices don't adjust. He defined the long-run as the period over which the economy has had enough time to return to a classical full-employment equilibrium. According to the neo-classical synthesis, the economy is temporarily away from the "social planning optimum", but only temporarily.
In 1982, with the birth of Real Business Cycle Theory (RBC), economists gave up on Keynesian economics and we returned to the ideas of Pigou. Real Business Cycle theory formalized Pigou's model of the economy, but instead of the rich verbal theory of Industrial Fluctuations, RBC theorists constructed complicated mathematical models. And because the mathematics was complicated, the models were very simple and, initially, driven by a single productivity shock. In the period from 1982 up through 2008, most macroeconomists were engaged in a research program that was, essentially, adding the shocks back to Pigou's vision of the rocking horse model.
What happened in 2008 and in the aftermath of the Great Recession has caused, or should cause, us to rethink the entire enterprise of macroeconomics. In my work, I have formalized the main ideas of Keynes' General Theory. These ideas are vastly different from those that preceded Keynes and they are very different from the ideas that have guided macroeconomics since the 1980s. Keynes argued that there are multiple steady-state equilibria and that the economy can get stuck in an equilibrium with high, persistent involuntary unemployment. That is the idea I have formalized.
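One way to put the two metaphors of this answer into symbols. The rocking horse is a stable linear stochastic difference equation,

$$ x_{t+1} = A\,x_t + \varepsilon_{t+1}, $$

in which the eigenvalues of $A$ lie strictly inside the unit circle, so the effect of any shock $\varepsilon$ dies out and the system returns to its unique steady state. One stylized way to capture the windy boat is to let a root equal one: a shock then displaces the system permanently, and where the boat comes to rest depends on the whole history of the winds.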

Q: Why, in your view, did economists, in the 1980s, give up on Keynesian economics?

F: The General Theory was incomplete. It was incomplete because it eliminated the idea of the labour supply curve but didn’t replace it with any convincing alternative.
Keynes argued that the economy is on the labour demand curve, but he threw away the labour supply curve and replaced it with the idea of involuntary unemployment. That was always somewhat unsatisfactory theoretically. Involuntary unemployment is open to a number of criticisms. For example, why don’t firms offer to employ unemployed workers at lower wages, when involuntarily unemployed workers would willingly accept them? This is a theoretical problem that was left hanging in the General Theory.
The other issue is that the General Theory never provided a theory of what determines the price level. Hicks and Hansen, who interpreted the General Theory, considered it to be a short-term theory in which prices are temporarily fixed. Around the time that Samuelson was writing the third edition of his textbook, a New Zealander, William Phillips, published the article "The Relation between Unemployment and the Rate of Change of Money Wage Rates in the United Kingdom, 1861-1957". This empirical article demonstrated that there had been a stable relationship between wage inflation and unemployment in nearly a century of UK data. That relationship has been known ever since as the Phillips curve.
Samuelson used the Phillips curve to bring together the short run and the long run. He saw it as a wage-adjustment equation that explained how excess demand pressure causes wages to rise. As wages and prices changed, the economy would return to its long-run steady state. The problem with that explanation is that, almost as soon as Phillips had written the article, the Phillips curve disappeared. There hasn't been a stable Phillips curve in the data of any advanced economy that I know of since the mid-1960s.
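The relationship Phillips documented, in its simplest reduced form (my notation; Phillips's own fitted curve was nonlinear), is

$$ \Delta w_t = \alpha - \beta\, u_t, \qquad \beta > 0, $$

where $\Delta w_t$ is wage inflation and $u_t$ is the unemployment rate: low unemployment goes together with fast wage growth. It is the instability of $\alpha$ and $\beta$ after the mid-1960s that broke the curve.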

Giovanni Nicolò and I recently wrote a paper (Farmer and Nicolò 2017) that replaces the Phillips Curve with an alternative equation, the Belief Function, which I introduced in my 1993 book, The Macroeconomics of Self-fulfilling Prophecies. Giovanni and I showed in our paper that a three-equation model closed with the Belief Function instead of the Phillips Curve provides a much better fit to US data. We find that a Bayesian economist who placed equal weight on both theories before confronting them with the data would find overwhelming evidence that the Belief Function is the better approach.
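The "equal weight" thought experiment is a standard posterior-odds calculation. Writing $M_{BF}$ for the belief-function model, $M_{PC}$ for the Phillips-curve model and $X$ for the data,

$$ \frac{P(M_{BF} \mid X)}{P(M_{PC} \mid X)} \;=\; \frac{P(X \mid M_{BF})}{P(X \mid M_{PC})} \times \frac{P(M_{BF})}{P(M_{PC})}, $$

so with equal prior weight, $P(M_{BF}) = P(M_{PC})$, the posterior odds equal the Bayes factor, the ratio of the two models' marginal likelihoods. "Overwhelming evidence" corresponds to a very large Bayes factor in favour of the Belief Function.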

Next week I will continue this serialisation of my interview with Cloud, and among other things, I will discuss my views on rational expectations.