Tax Reform: A Proposal for the Chancellor

Given the upcoming autumn budget, I have a proposal for the Chancellor to consider. Replace taxes on dividends, capital gains and inheritance with a tax on wealth. Currently these three taxes combined raise £41b in revenue. A 1.2% wealth tax on those with net wealth greater than £700,000 would raise approximately this amount with £2b to spare to help pay down the deficit. A 2% wealth tax would raise £72b and give the Chancellor breathing room to lower taxes on wage income or to provide much needed additional resources for our nurses, firefighters and police men and women.

Chart 1: Source, Office for Budget Responsibility and Author's Calculations (c) Roger Farmer 2017

In 2017-2018 the Office for Budget Responsibility (OBR) expects that tax revenues will be equal to 37% of national income. Chart 1 breaks this down by whether those revenues came from taxes on labour, capital or sales.

To construct this chart, I took the major sources of revenue from the OBR’s tax by tax – spend by spend data and allocated each major revenue source to either labour, capital or sales taxes. Sales taxes include VAT and duties on petrol, alcohol and tobacco.<1>

The UK generates approximately £1.9 trillion in income, of which 2/3 is paid as wages and 1/3 goes to the owners of capital in the form of profits, rents, dividends and capital gains. If tax revenues were raised in proportion to these two sources of income, we would expect taxes on capital to raise roughly half as much as taxes on labour. In reality, taxes on labour income account for 61% of tax revenues and taxes on capital for only 8%.
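The size of this gap can be made concrete with a quick back-of-the-envelope check. The following sketch is my own arithmetic, using only the figures quoted above:

```python
# Income shares quoted above: roughly £1.9 trillion of UK income,
# two-thirds paid as wages, one-third to the owners of capital.
total_income = 1.9e12
labour_income = total_income * 2 / 3
capital_income = total_income * 1 / 3

# If tax revenue were proportional to income, capital taxes would
# raise half as much as labour taxes:
proportional_ratio = capital_income / labour_income

# Actual revenue shares quoted above: 61% labour, 8% capital.
actual_ratio = 0.08 / 0.61

print(round(proportional_ratio, 2), round(actual_ratio, 2))  # 0.5 0.13
```

On these figures, capital is taxed at roughly a quarter of the rate that proportionality would imply.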

To put my proposal for a wealth tax in context, let me revisit some of the principles of the public finances. For at least two hundred years, there has been a public consensus in favour of progressive taxation. In other words, a rich person should not only pay a higher amount in taxes; that person should also pay a higher percentage of his or her income in taxes. That principle is encoded into the current system of income taxation whereby those earning less than £11,500 pay nothing and those earning more than £11,500 pay income tax at a graduated rate. That rate increases in steps with the highest rate currently set at 45%.

So far so good. But there is a growing consensus that the very wealthy are not paying their fair share. The offshore schemes unearthed in the recent Paradise Papers scandal may all be legal. But to many, they do not seem just. It is relatively easy to raise taxes from wage income. It is much harder to raise taxes from capital income.

Suppose that you own shares in a company that pays little or no dividends. The company nevertheless makes a healthy profit that it reinvests. Because you receive no dividends, you pay no income tax. But because the company is profitable, its shares keep appreciating and, when you sell those shares, you will make a capital gain. Currently, the maximum tax you will pay on that capital gain is 20%.  The very rich earn little of their income from wages and thus the top marginal rate of 45% never applies.

The Office for National Statistics estimates UK wealth to be roughly £9 trillion, of which £3.5 trillion is in the form of factories and machines and £5.5 trillion is in the form of property. The Institute for Fiscal Studies (IFS) estimates that 40% of that wealth is held by the top 5% of wealth owners and 20% is held by the top 1%. At an annual return of 5%, £9 trillion generates a flow of income of roughly £450 billion. 40% of £450 billion is £180 billion. Assuming that the top five percent of wealth owners are in the 45% tax bracket, one could argue that the Chancellor should be receiving £81 billion in revenue from those individuals. My proposal is more modest.
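The chain of figures in this paragraph can be laid out explicitly. This is a sketch of my own; the 5% yield is the rate implied by dividing £450 billion by £9 trillion:

```python
wealth = 9e12                    # ONS estimate of total UK wealth
top5_share = 0.40                # IFS estimate: share held by the top 5%
annual_yield = 450e9 / wealth    # implied return of 5%

income_flow = wealth * annual_yield       # roughly £450 billion per year
top5_income = income_flow * top5_share    # £180 billion
tax_at_45pct = top5_income * 0.45         # £81 billion

print(round(top5_income / 1e9), round(tax_at_45pct / 1e9))  # 180 81
```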

According to my calculations, a wealth tax of 1.2% levied on net assets greater than £700,000 would raise approximately £43b in revenue, enough to replace existing taxes on capital and still have £2b to spare. I chose a threshold of £700,000 because the IFS estimates that it is the cut-off for the top 5% of the UK wealth distribution. A person with net assets of £700k is at the 95th percentile and his or her assets would, under this plan, be tax free.  There is, of course, no need for a new proposed tax to be revenue neutral. A tax of 2% on net wealth greater than £700k would raise £72 billion and allow the Chancellor to lower the top rate of income tax on earned income or to fund important spending programmes such as defence, education or health care.
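My reading of this revenue calculation is that the tax falls on the entire net wealth of those above the £700,000 threshold (roughly the £3.6 trillion held by the top 5%), not just on the excess above the threshold. Under that assumption, the quoted figures reproduce almost exactly:

```python
uk_wealth = 9e12                  # ONS estimate
top5_wealth = 0.40 * uk_wealth    # about £3.6 trillion held by the top 5%

revenue_low = 0.012 * top5_wealth    # 1.2% wealth tax
revenue_high = 0.020 * top5_wealth   # 2% wealth tax

print(round(revenue_low / 1e9, 1), round(revenue_high / 1e9, 1))  # 43.2 72.0
```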

Any major tax reform is likely to have unforeseen consequences and this plan is no exception. One major change from existing policy is that the wealth tax I am proposing would apply not only to wealth held in the form of factories and machines, but also to wealth held in the form of residential property. Currently, those with net assets of £700,000 or more own property worth £2.2 trillion (40% of £5.5 trillion) for which they are receiving no income. It is likely that a wealth tax would provide incentives to convert housing wealth into income earning assets. That would put downward pressure on property prices and increase investment in productive capital, thereby making houses more affordable and increasing labour productivity at a single stroke.

The British people are tired of austerity and there is a growing concern that our nurses, firefighters and police men and women deserve better. Mr Hammond, I respectfully ask that you consider my proposal for tax reform. It might just provide a simple and popular way to balance the books. 


<1> The three major sources of government revenue are pay as you earn (PAYE) income taxes (20%), National Insurance (NI) contributions (17.5%) and VAT (16.9%). Together, these three taxes make up 54% of all Treasury receipts. To arrive at the labour tax figure I added two-thirds of self-employment income taxes and two-thirds of corporation tax. To arrive at the capital tax figure I added one-third of self-employment income taxes, one-third of corporation tax and all of capital gains and inheritance taxes. The two-thirds, one-third division is based on the rough calculation that two-thirds of national income goes to labour and one-third to capital.
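The allocation rule described in this footnote can be written down as a short routine. The PAYE, NI and VAT shares come from the text; the remaining shares are hypothetical placeholders, included only to show the two-thirds/one-third split at work:

```python
# Shares of total Treasury receipts, in percent.
# PAYE, NI and VAT are from the footnote; the others are HYPOTHETICAL
# placeholders for illustration only.
receipts = {
    "PAYE": 20.0,
    "NI": 17.5,
    "VAT": 16.9,
    "self_employment": 4.0,   # hypothetical
    "corporation": 8.0,       # hypothetical
    "capital_gains": 1.5,     # hypothetical
    "inheritance": 0.7,       # hypothetical
}

# Self-employment and corporation tax are split two-thirds to labour,
# one-third to capital, mirroring the shares of national income.
mixed = receipts["self_employment"] + receipts["corporation"]

labour = receipts["PAYE"] + receipts["NI"] + (2 / 3) * mixed
capital = (1 / 3) * mixed + receipts["capital_gains"] + receipts["inheritance"]
sales = receipts["VAT"]  # plus duties on petrol, alcohol and tobacco

print(round(labour, 1), round(capital, 1), round(sales, 1))
```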

Macroeconomics: Religion or Science?


Writing in 1999 in a widely cited paper, “The Science of Monetary Policy”, three leading economists, Richard Clarida, Jordi Galí and Mark Gertler (CGG), made the case that monetary policy is a science. Although there is some truth to that claim, CGG could equally well have titled their paper “Macroeconomics: Religion or Science?”

Science and religion are strange bedfellows. Science dates from the Enlightenment. Religion has been around since the dawn of history. Science is supported by rationalism. Religion is supported by dogma. Science is challenged by experiment. Religion is codified by scholars and protected by a priesthood. Macroeconomics has aspects of both.

Macroeconomists build theories codified by systems of equations. We use those equations to explain patterns in economic data. Unlike researchers in experimental sciences such as chemistry and physics, macroeconomists cannot easily experiment. That does not mean that we cannot challenge existing theories, but it makes it much harder. Like astronomers waiting for the next supernova to explode, macroeconomists must wait for big recessions or large bouts of stagflation to help us sort one theory from another.

The inability to experiment is more serious than most macroeconomists realise. When CGG wrote their paper on monetary policy, they put forward a New Keynesian (NK) theory, codified by three equations that they used to explain GDP, the interest rate and inflation. The NK equations are widely used today by policy makers in every major central bank to help guide policy. What if those equations are wrong?

Economists select one theory over another using a statistical procedure called maximum likelihood. We say that theory A is better than theory B if the data we observe have a higher probability of being generated by A than by B. In research with my co-author Andreas Beyer of the European Central Bank (Beyer and Farmer 2008), we showed how to produce theories that cannot be distinguished in this way. If you come up with theory A to explain data set X, our procedure will produce another theory, B, that has exactly the same probability of having generated the observed data as theory A.
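To illustrate the selection criterion, here is a toy likelihood comparison of two candidate AR(1) "theories" of the same series. This is my own illustrative example, not Beyer and Farmer's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from "theory A": y_t = 0.9 * y_{t-1} + e_t
T = 500
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + rng.normal()

def ar1_loglik(y, rho, sigma=1.0):
    """Conditional Gaussian log-likelihood of an AR(1) with coefficient rho."""
    resid = y[1:] - rho * y[:-1]
    n = len(resid)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * (resid @ resid) / sigma**2

loglik_A = ar1_loglik(y, rho=0.9)  # theory A
loglik_B = ar1_loglik(y, rho=0.5)  # theory B

# Maximum likelihood selects theory A, which assigns the observed
# data higher probability:
print(loglik_A > loglik_B)  # True
```

Beyer and Farmer's point is that their procedure delivers a theory B for which this comparison is exactly a tie, so the data alone cannot adjudicate between the two.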

It gets worse. We provide an example of this problem in which theory A and theory B deliver contradictory policy conclusions. The only way to tell them apart would be for a policy maker to experiment by changing the way they react to economic data. The Bank of England could, for example, raise the Bank Rate while, at the same time, the Federal Open Market Committee lowers the US Federal Funds Rate.

Macroeconomists can explain past data relatively well. But we are not very good at explaining new events and our theories are always evolving. In that sense, economics is a science. The way that our models are allowed to evolve is controlled by a group of high-priests who maintain doctrinal purity. In that sense, economics is a religion. The religious aspect is important during normal times, when we have not recently experienced a big event. At other times, after we observe an economic supernova, the grip of the high-priests becomes counterproductive and it is a fertile time to explore ideas that the priesthood considers heretical. Now is one of those times.

Why the MPC will and should raise interest rates


Simon Wren-Lewis has a very nice post on why the MPC should not raise interest rates on Thursday and there is much in it that I agree with. But the Bank has been signaling a rate rise for some time now and if it fails to deliver on Thursday, the credibility of the MPC will be greatly diminished.

Simon argues from a conventional New-Keynesian macroeconomic framework in which labour market tightness triggers wage inflation through a Phillips curve. That, as I argued here, is a discredited framework.

Here is what I said in August of 2016 as the Fed was about to embark on a rate tightening cycle. I have substituted 'MPC' for 'Fed' in places. The reference to Friedman's optimum quantity of money can be found here and the link to Prosperity for All (now published) is here.

Conventional New-Keynesian macroeconomists assert that, to increase the inflation rate, the [MPC] must first lower the interest rate. A lower interest rate, so the argument goes, increases aggregate demand and raises wages and prices. As economic activity picks up, the [MPC] can raise the interest rate without fear of generating a recession. Some economists advocate that the Fed should raise the interest rate to meet the inflation target, a position that, for reasons that escape me, has been labelled neo-Fisherianism on the blogosphere .... My body of work, written over the past several years (see my book Prosperity for All), explains how to raise the interest rate without simultaneously triggering a recession and, I suppose, that makes me a ‘neo-Fisherian’.

... the [MPC] should raise the interest rate on reserves and the [repo rate on overnight loans] simultaneously, thereby keeping the opportunity cost of holding money at zero and enacting Milton Friedman’s prescription for the optimal quantity of money. In addition, the [MPC] should be given the authority to buy and sell an exchange traded fund (ETF) over a broad stock portfolio with the goal of achieving an unemployment target. This is an argument I have been making for some time but it is becoming more relevant as it becomes apparent that the world does not work in the way the New-Keynesians claim.

The argument I made in August of 2016 applies equally to the MPC decision this coming Thursday. Raising the Bank Rate in an environment where the Bank pays interest on reserves is not the same as raising the Bank Rate in an environment where the interest rate on reserves is zero. The opportunity cost of credit is the difference between these two rates and, when they are equal, holding assets in the repo market is pretty much equivalent to parking reserves with the Bank. Raising both rates simultaneously will have little or no effect on the cost of credit.
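The point about the opportunity cost of credit can be stated in two lines. The rates below are hypothetical illustrative numbers, not the Bank's actual rates:

```python
reserve_rate = 0.25  # interest paid on reserves, percent (hypothetical)
bank_rate = 0.25     # repo / Bank Rate, percent (hypothetical)

# The opportunity cost of parking reserves rather than lending in the
# repo market is the spread between the two rates:
spread_before = bank_rate - reserve_rate

# Raise both rates together by 25 basis points:
spread_after = (bank_rate + 0.25) - (reserve_rate + 0.25)

print(spread_before, spread_after)  # 0.0 0.0
```

With the spread unchanged, the cost of credit is unchanged, which is why a simultaneous rise in both rates need not be contractionary.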

If reserves and repos were the only assets, that would be the end of the story. But it doesn't end there since over half of outstanding regulated mortgages are currently on fixed rates. Banks and building societies have been lending long and borrowing short and that business will be squeezed as rates increase. There will be some impact on aggregate demand, albeit a much smaller one than if the repo rate were raised and the reserve rate left unchanged. But the effect on demand of a rate rise can potentially be offset by using the Bank's considerable off balance sheet asset holdings to step in and support asset prices, should a crash occur.

So should the MPC raise rates? I believe so.  Indeed, if the MPC wants to hit the inflation target they have to raise rates eventually. The only question is whether a rate rise on Thursday is in some sense premature. In my view it is not. The Bank has been signaling a rate rise for months and the markets expect one to occur. This is why the MPC will and should raise interest rates. Failure to act now will be damaging for future Bank communication and it will prolong the current period of stagnation.

How to Fix the Curse of the Five

I recently came across this video link to a session held at the 2017 ASSA meetings on the ‘Curse of the Top Five’. The session was organised by Jim Heckman and involves a panel discussion with participation by Heckman, George Akerlof, Angus Deaton, Drew Fudenberg and Lars Hansen. I’m going to concentrate here on the presentations by Heckman and Akerlof.


Heckman made several points in a talk informed by a series of fascinating slides that you can find linked here. He pointed out that, although many top economists publish important highly cited papers outside the top five journals, the influence of the top five is increasingly important in promotion and tenure decisions, a point I also made here.

Why is that a bad thing? One of the most insidious aspects of the curse of the five is that it concentrates power in the hands of a small group of insiders and that makes it much harder for new ideas to emerge. Figure 11, taken from Heckman’s talk, illustrates a density plot of the number of years served by editors in four of the top five journals. The QJE, a journal dominated by Harvard, is an outlier with slow turnover in editorial control. But the influence of the other top journals is also pervasive and entry to the club depends on success determined by its established members.

A friend of mine, a senior academic at a top business school, related the following story, which encapsulates much that is wrong with the current system. A junior colleague, coming up for tenure, was waiting for a decision from the AER. In a departmental discussion, the point was made that hir tenure decision would be contingent on whether the paper was accepted there. As my friend remarked: why would we delegate our tenure decision to the editor of the AER?

George Akerlof has five recommendations, all of which I agree with. 1. Editors should take more responsibility for decisions by overruling referees more often. 2. We should revert to a situation where referees are advisors rather than the current situation where they often get to rewrite the paper. 3. We should work to diminish the role of top-five publications in tenure decisions. 4. We should ‘shame’ deans who act as top-five bean counters. And 5. We must broaden the scope of areas that we deem intellectually acceptable for admission as a tenured member of our tribe.

I have two recommendations of my own for possible ways to fix the curse of the five.

First, those of us with influence on granting agencies should recommend that more than five journals be given equal weight when ranking research. In the UK, the research output of academic departments is assessed on a regular basis and referees are given guidelines in which they are encouraged to give more weight to articles published in the top five journals. That guidance should be revised: referees should instead be advised to weight a broader base of fifteen or twenty journals, selected, for example, by RePEc rankings.

Second, when junior faculty come up for promotion they should be judged on their best three articles, self-selected and, in some cases, replaced by a book. The current system provides incentives for junior scholars to publish large numbers of derivative works, many of which contribute little or nothing to the social good.

When I first moved to UCLA in the late 1980s, the senior faculty would read the work of our junior colleagues and make tenure decisions based on the content of their research papers. Slowly, over the years, it became more common to rely on the decisions of others by placing weight on where papers were published as opposed to their content.

I am encouraged by the positive message that arose from the ASSA panel. As the profession grows and journal space becomes more valuable, it is time to broaden the scope of those journals we judge to be the gatekeepers of knowledge. We should trust our own judgement and carefully read the work of our colleagues. That, I believe, is the right way to fix the curse of the five.

Reflections on My Interview with Cloud Yip: Part 2


Cloud Yip is running a series of interviews under the title of “Where is the General Theory of the 21st Century” and I was privileged to be included in that series. Last week I put up my first post about the interview. This week’s post is the second in a series where I expand on my answers to Cloud. Here, I discuss my views on rational expectations and I talk about a new version of search theory, Keynesian Search Theory, that underpins my joint papers with Giovanni Nicolò on “Keynesian Economics without the Phillips Curve” and with Konstantin Platonov, “Animal Spirits in a Monetary Model”. The paper with Konstantin uses Keynesian Search Theory to provide an updated version of the IS-LM model which we call the IS-LM-NAC model. The paper with Giovanni estimates a version of this model on U.S. data and demonstrates that it provides a better way of explaining data than the failed Phillips curve. 

I have been arguing in my books, academic articles and op-eds for at least seven years that the Phillips curve is broken and that there is a better alternative that I call the belief function. I presented this work at a conference in New York in honour of Edmund Phelps, where the paper was discussed by Olivier Blanchard. I’m pleased to see that the importance of this topic is now being widely recognised and that my Phillips Curve scepticism has become mainstream.

Here is what I said on the topic in a previous blog post...

Policy makers at central banks have been puzzled by the fact that inflation is weak even though the unemployment rate is low and the economy is operating at or close to capacity. Their puzzlement arises from the fact that they are looking at data through the lens of the New Keynesian (NK) model in which the connection between the unemployment rate and the inflation rate is driven by the Phillips curve…
…The research programme we are engaged in should be of interest to policy makers in central banks and treasuries throughout the world who are increasingly realising that the Phillips curve is broken. In Keynesian Economics Without the Phillips Curve, we have shown how to replace the Phillips curve with the belief function, an alternative theory of the connection between unemployment and inflation that better explains the facts. 

That leads me to the main focus of today’s post: What’s wrong with rational expectations and how is that connected with my replacement for the Phillips Curve? Over to Cloud…

Q: What is your view on the role of the rational expectations approach in macroeconomics?

“F: The classical reformulation of macroeconomics developed by Lucas and Prescott required a radical reformulation of expectations. In the Keynesian model of the 1950s, expectations were determined by a separate equation called adaptive expectations. In the Keynesian model, beliefs about future prices might be different from the realizations of those prices. Because of that, those models needed another equation to explain how beliefs or expectations were determined.
Lucas, writing in 1972, removed the adaptive expectations equation and he argued that beliefs are not independent; they are endogenous and must be explained within the model. He argued the world is random. As a consequence of randomness, prices aren’t always equal to what people expect them to be and he introduced the idea of rational expectations into macroeconomics. Instead of adding an equation, adaptive expectations, to determine beliefs, Lucas closed his model by arguing that beliefs should be right on average. He argued that people wouldn't be expected to be fooled in the long run, and that we can model beliefs or expectations as probability distributions that coincide with the distribution of the actual realizations.
That all sounds very sensible, but it only makes sense in models where there is a unique equilibrium. Even in the model that Lucas wrote down in 1972, there were multiple equilibria. For me, the existence of multiple equilibria is not a problem. It is an opportunity.” 

I discussed the role of rational expectations in a world of animal spirits in a 2014 blog linked here. When I describe multiplicity as an opportunity, I mean that it opens the possibility to marry psychology with economics in a new and interesting way. If economic models have multiple possible equilibria, we can model how stories are transmitted through social networks to explain which equilibrium occurs in practice. Economists are good at building models of the macro economy. Psychologists are good at understanding the spread of beliefs. There are clearly gains from collaborative research, which was the topic of the conference I helped organize at the Bank of England in July of 2017.

I have been working on models of multiple equilibria since the early 1980s but my early work on this topic dealt with models where there is a unique steady state and the economy is self-stabilizing. In my survey paper on Endogenous Business Cycles I described these models as first-generation models of endogenous fluctuations and I contrasted them with second-generation models in which there is a continuum of steady state equilibria. To explain why there may be many steady state equilibria, I developed a version of search theory that I call Keynesian search theory. That is the topic that Cloud asked me about next.  Back to Cloud…

Q: What is the "Keynesian search model" that you are advocating in your book “Prosperity for All”? How is it different from the mainstream search model that you refer to as classical search theory?

“The Keynesian search model is a variant of what I call classical search models. By classical, I mean the work that evolved from Peter Diamond, Dale Mortensen and Chris Pissarides. In the classical search model, there is a unique equilibrium in the labour market pinned down by the bargaining power of workers relative to firms. In the Keynesian search model, there is a continuum of equilibria and the equilibrium that occurs is selected by aggregate demand, just as in the Keynesian models of the 1950s.
The Keynesian Search Model maintains Keynes' idea, which I think is important, that beliefs are fundamental. Animal spirits, confidence and self-fulfilling beliefs can influence outcomes. In every single equilibrium of the Keynesian search model there is no incentive for either firms or workers to change their behaviours. The reason has nothing to do with sticky prices; it has to do with the fact that there are incomplete factor markets.
The search model has a search technology, separate from the production technology, that moves people from home to jobs. That technology has two inputs; the searching time of workers and the searching time of the recruiting department of a firm. Because there are two inputs, for the market to function well, there must be two prices. One price for the searching time for workers and another for the searching time for recruiters.
You could imagine a recruiting firm which would offer to purchase the right to find an unemployed worker a job and offer to buy the right to fill the vacancy of the company. This market would operate a little bit like a dating website, where the firm would take the two searching parties, match them and sell the match back to the worker-firm pair.
We do not see the market working in that way, largely because there are moral hazard issues. If I am unemployed and you are paying me to be unemployed, I do not see why I would ever accept a job. As a consequence of the failure of that market, there are search externalities that can support equilibria with any level of unemployment.
My Keynesian search model solves the problem of understanding Keynes's General Theory in a way that is different from the sticky price approach that Samuelson initiated and that continues to be perpetuated by New Keynesian economists today.”
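The "search technology with two inputs" described in the interview is conventionally written as a matching function. Here is a minimal sketch in the standard Diamond–Mortensen–Pissarides Cobb–Douglas form, as an illustration of the idea rather than Farmer's own specification:

```python
def matches(worker_search, recruiter_search, efficiency=0.5, alpha=0.5):
    """Cobb-Douglas matching function: meetings produced from the
    search time of workers and the search time of recruiters."""
    return efficiency * worker_search**alpha * recruiter_search**(1 - alpha)

# Two inputs means a well-functioning market would need two prices,
# one for each kind of search time; absent them, many activity levels
# can be consistent with equilibrium.
m = matches(100.0, 64.0)
print(m)  # 40.0
```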

Next week, I will talk about why economists should stop pretending that unemployment is voluntary. It’s time to reintroduce the term, ‘involuntary unemployment’. Stay tuned!