# Stephen Williamson: New Monetarist Economics

Disclaimer: Any opinions expressed, potshots taken, or scientific views articulated are mine, and need not represent the opinions, potshots, or scientific views of the Federal Reserve Bank of St. Louis, or the Federal Reserve System.

## Friday, August 26, 2016

### We are All Neo-Fisherites

I was looking at this piece by Mark Thoma on increasing the inflation target. In some discussions of this issue, people seem to have a hard time getting to the core of the argument, but Mark does not. He has a good discussion of the Fisher effect, and his concluding paragraph is:

> The Fed did not have enough room to cut interest rates before hitting the zero lower bound when the recession hit. Raising the target inflation rate, which would increase average interest rates and give the Fed more space for rate cuts, is something the Fed ought to seriously consider.

He's not quite as blunt as I might like, but he's saying that, if a central bank wants to hit a higher inflation target, it has to set nominal interest rates higher, on average. So, in the course of transitioning to a higher inflation target, the central bank must, at some point, raise nominal interest rates in order to produce higher inflation. But then, it must be true that, if the central bank has an inflation target of x%, and inflation is persistently y%, where y < x, then the central bank must raise its nominal interest rate target.

## Monday, August 22, 2016

### Danger!! Crazy Neo-Fisherians on the Loose!!

Not sure how I missed this Narayana post, but better late than never. I may not be Jacques Derrida, but a little deconstruction can be good fun. Here goes.

Opening paragraph:

> Some economists argue that the Federal Reserve should take a highly unconventional approach to ending a long period of below-target inflation: Instead of keeping interest rates low to spur economic activity and push up prices, it should raise rates.

Clicking on "argue" takes you to my St. Louis Fed *Regional Economist* piece on neo-Fisherism. This is about as low-tech an elucidation of these ideas as I've been able to muster - it's got one equation, two figures, and 3,000 and some words. In any case, apparently I'm "some economists." John Cochrane is also well out of the Fisherian closet, and we have certainly received some sympathy from others (to whom the idea is obvious - as it should be), but neo-Fisherism is hardly a movement. As you can see, particularly in Narayana's post, there's plenty of hostile resistance.

The last sentence in the quote offers you a false choice. That choice is *either* a world of low interest rates, which *obviously* spurs economic activity and pushes up prices, *or* the alternative: increases in interest rates which, the reader would naturally assume, would give us the opposite - less economic activity and lower inflation. The basic neo-Fisherian idea is that this is not the choice we're faced with. Let's put aside the issue of how monetary policy affects real economic activity, and focus on inflation. Neo-Fisherism says that conventional central banking wisdom is wrong. A lower nominal interest rate pushes inflation down, and no one should be surprised if an extended period of low nominal interest rates produces low inflation. Indeed, that's consistent with what we're seeing in the world right now.

Let's move on. Two paragraphs later, we have this claim:

> Neo-Fisherites believe that modern economies are self-stabilizing.

I've been staring at that sentence for several minutes now, and I'm still not sure what it means, so I don't think I could "believe" it. But let's give this a try. In conventional Econ 101 macroeconomics, students are typically told that, in the short run, wages and prices are sticky, and there is a role for short-run "stabilization" policy, which corrects for the short-run inefficiencies caused by stickiness. The Econ 101 story is that, in the long run, prices and wages are perfectly flexible, and the inefficiencies go away. So, the standard story people are giving undergraduates is that modern economies are indeed self-stabilizing, but "in the long run we're all dead," as Keynes said. What's this have to do with neo-Fisherism? Nothing. Indeed, most conventional models have neo-Fisherian properties, whether those models have a role for short-term government intervention or not. In this post I worked through the neo-Fisherian characteristics of Narayana's favorite model, which is certainly not "self-stabilizing" in the short run. Bottom line: Basic neo-Fisherism is agnostic about the role for government intervention. It just says: Here's how to control inflation. You've been doing it wrong.
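In equation form, the neo-Fisherian point is just the long-run Fisher relation. A minimal statement, in standard notation of my own choosing, not anything taken from Narayana's post:

```latex
% Fisher relation: nominal rate = real rate + anticipated inflation
i = r + \pi^e
% With r pinned down by real factors in the long run, and expectations
% adjusting to the policy rule, a permanently pegged nominal rate pins
% down long-run inflation:
\pi = i - r
```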

Narayana says that neo-Fisherism leads in the direction of "unusual policy recommendations."

> Suppose, for example, the long-run equilibrium real rate is 2 percent. Neo-Fisherites would predict that if the Fed holds nominal rates at 0.5 percent for too long, people's inflation forecasts will eventually have to turn negative -- to minus 1.5 -- to get the real rate back to 2. Conversely, if the Fed raises its rate target to 4 percent and keeps it there, inflation expectations will rise to 2 percent. Because such expectations tend to be self-fulfilling, the result will be precisely the amount of inflation that the Fed is seeking to generate.

That doesn't describe "unusual policy recommendations" - it's actually the prediction of a host of standard monetary models. For example, there is a class of representative agent monetary models (money in the utility function, cash in advance, for example) in which, if the subjective discount rate is .02 in a stationary environment, then the long run real interest rate is indeed 2%. In those models, it is certainly the case that a sustained nominal interest rate of 0.5%, supported by open market operations, transfers, whatever, must ultimately induce deflation equal to -1.5%. Indeed, those models also tell us that deflation at -2% would be optimal - that's the Friedman rule. So, Narayana's thought experiment is not controversial in macroeconomics - that's the prediction of baseline monetary models. Things can get more interesting with fundamental models of money that build up a role for asset exchange from first principles - e.g. overlapping generations models from back in the day, or Lagos-Wright constructs. New Keynesians seem to like taking the money out of models altogether, in "cashless" frameworks. NK models and fundamental models of money typically have many equilibria, which presents some other problems. Multiple equilibria can also be a feature of cash-in-advance models. But, as I discuss in this post and this one, multiple equilibria need not be a serious problem for monetary policy, as we can design policies that give us unique equilibria - with Fisherian properties.
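A back-of-the-envelope check of that thought experiment, assuming nothing beyond the steady-state Fisher relation (1 + i) = (1 + r)(1 + pi) - a sketch of the arithmetic, not anyone's full model:

```python
def steady_state_inflation(i, r):
    """Inflation implied by a pegged nominal rate i when the long-run
    real rate is r, from the Fisher relation (1 + i) = (1 + r)(1 + pi)."""
    return (1 + i) / (1 + r) - 1

# Peg at 0.5% with a 2% real rate: deflation of about 1.5%
print(round(steady_state_inflation(0.005, 0.02), 4))  # -0.0147
# Peg at 4% with a 2% real rate: inflation of about 2%
print(round(steady_state_inflation(0.04, 0.02), 4))   # 0.0196
```

The exact figures differ from the quote's -1.5 and +2 only by the usual linear approximation error in adding rates rather than compounding them.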

But, from Narayana's point of view, standard macroeconomics is not standard - it's crazy and dangerous. His claim:

> Traditional economic models, by contrast, predict the opposite. If the central bank raises rates and credibly commits to keeping them high, people and businesses become less willing to borrow money to invest and spend. This undermines demand for goods and services, putting downward pressure on employment and prices. As a result, the economy can plunge into a deflationary spiral of the kind that afflicted the U.S. in the early 1930s.

What "traditional models" could he be talking about? This can't be some textbook IS/LM/Phillips curve construct, as he's discussing a dynamic process, and the textbook model is static. The only "tradition" I know of is a persistent narrative that you can find if you Google "deflationary spiral." Here's what Wikipedia says:

> The Great Depression was regarded by some as a deflationary spiral. A deflationary spiral is the modern macroeconomic version of the general glut controversy of the 19th century. Another related idea is Irving Fisher's theory that excess debt can cause a continuing deflation. Whether deflationary spirals can actually occur is controversial, with its possibility being disputed by freshwater economists (including the Chicago school of economics) and Austrian School economists.

The closest thing to actual economic theory supporting the idea is Irving Fisher's debt-deflation paper from 1933. That's just another narrative - you won't find an equation or any data in Fisher's paper.

The best I can come up with in terms of a genuine theory of deflationary spirals is what can happen in Narayana's NK model if inflation expectations are sufficiently sticky, and initial inflation expectations are sufficiently low - basically, people have to start off expecting a lot of deflation. Further, in order to support a "deflationary spiral," i.e. sustained deflation, conventional asset pricing tells us that there has to be sustained negative growth in output. But if there's a lower bound on output, which is natural in this type of environment, then the deflationary spiral isn't an equilibrium. Conclusion: Narayana has things turned around. Traditional macroeconomics gives us a long-run Fisher effect. Deflationary spirals are not part of any "traditional" (i.e. serious) macroeconomic theory.

On the empirical front, the "deflationary spiral ... that afflicted the U.S. in the 1930s" looks like this: There's a body of macroeconomic history that ascribes that deflationary episode to the workings of the gold standard. Indeed, the deflation stops at about the time the U.S. goes off the gold standard. Not sure why we're using a gold standard episode to think about how monetary policy works in the current context. In modern economies, I have no knowledge of an instance of anything consistent with Narayana's "traditional model" in which increases in nominal interest rates by the central bank cause a "deflationary spiral" (if you know of one, please let me know). But, when "deflation" enters the conversation, some people will mention Japan. Here's the CPI *level* for Japan for the last 20 years:

This is one of my favorite examples. We wouldn't really call that a "deflationary spiral," as the magnitude of the deflation isn't high at any time, and it's not sustained. Over 20 years, average inflation is about zero. Further, since mid-1995, the Bank of Japan's nominal policy interest rate has been close to zero, and recently the BOJ has thrown everything but the kitchen sink (except, of course, a higher policy rate) at this economy in an attempt to generate inflation at 2% per year - to no avail. Note in particular that the blip in inflation in 2014 can be attributed almost entirely to the direct effect of an increase in the consumption tax of 3 percentage points.

So, that's an instance in which a form of "traditional" macroeconomics doesn't work. That traditional macroeconomics is textbook IS-LM/Phillips curve with fixed inflation expectations. In that world, a low nominal interest rate makes output go up, and inflation goes up through a Phillips curve effect. A standard claim in world central banking circles is that a low nominal interest rate, sustained for a long enough time, will surely make inflation go up. I don't know about you, but if I want to catch a bus, and I go down to the bus stop and find someone who has been waiting for the bus for twenty years, my best guess is that sitting down in the bus shelter with that person has little chance of making a bus appear any time soon.

Next, in Narayana's post, he shows a time series plot of the fed funds rate and a breakeven rate. To be thorough, I'll include other breakeven rates, and focus on the post-2010 period, as the earlier information isn't relevant. Narayana says:

> The Fed held the nominal interest rate near zero from late 2008 until late 2015 -- a policy that, according to Neo-Fisherites, should have driven inflation expectations into negative territory. Yet they stayed roughly the same for most of that period. They started to slide downward only after the Fed began to tighten policy in May 2013 by signaling that it would pull back on the bond purchases known as quantitative easing. Also, the recent modest increase in the nominal rate has not led to a commensurate increase in inflation expectations.

"According to Neo-Fisherites?" No way! Any good neo-Fisherite understands something about low real interest rates, and what might cause them to be low. Here's me thinking about it in 2010. If there are forces pushing down the real interest rate, we'll tend to get more inflation with a low nominal interest rate than we might have expected if we were thinking the long run real rate was 2%, for example. So, by 2013, yours truly, neo-Fisherite, was certainly not surprised to be seeing the breakeven rates in the last chart.

But how should we interpret the movements in the breakeven rates in the chart? On one hand, breakeven rates have to be taken with a grain of salt as measures of inflation expectations. They can reflect changes in the relative liquidity premia on nominal Treasury bonds and TIPS; they measure breakeven rates for CPI inflation, not the Fed's preferred PCE inflation measure; when inflation falls below zero, the inflation compensation on TIPS is zero; and there are risk premia to worry about. On the other hand, what else can we do? There are alternative market-based measures of inflation expectations, but it's not clear they are any better than what I've shown in the chart.

So, suppose we take the breakeven measures in the chart seriously. The 5-year and 10-year breakevens can be interpreted as predictions of average inflation over the next 5 years, and the next 10 years, respectively. The five year/five year forward rate can be interpreted as the average inflation rate anticipated over a five-year period that is 5 to 10 years from today. Given that the interest rate Narayana is focused on here is the overnight fed funds rate, what matters for these market inflation expectation measures is the course of monetary policy for up to the next 10 years - in principle, the structure of the Fed's policy rule over that whole period. There are plenty of other things that matter as well - world events, shocks to the economy, and how those events and shocks matter for the Fed's policy rule. Narayana seems to think that the Fed "tightened" in May 2013, but I remember that episode - the "taper tantrum" - as a prelude to a period in which the public perception of the future course of interest rate hikes was constantly being revised down. A downward path for long-term inflation expectations seems to me consistent with a neo-Fisherian view of the world, with the market putting increasing weight on the possibility that nominal interest rates and inflation will remain persistently low.
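For completeness, the arithmetic tying the three measures together: the 5-year/5-year forward rate is just whatever makes the 5-year and 10-year breakevens mutually consistent. A sketch with made-up numbers, assuming annual compounding:

```python
def forward_5y5y(b5, b10):
    """Average inflation over years 5-10 implied by the 5- and 10-year
    breakevens, from (1 + b10)^10 = (1 + b5)^5 * (1 + f)^5."""
    return ((1 + b10) ** 10 / (1 + b5) ** 5) ** (1 / 5) - 1

# Hypothetical breakevens: 1.2% at 5 years, 1.5% at 10 years
f = forward_5y5y(0.012, 0.015)
print(round(f, 4))  # about 0.018, i.e. roughly 1.8% over years 5-10
```

So a 10-year breakeven above the 5-year breakeven means the market prices higher average inflation in the second five years than the first.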

Narayana finishes off in true hyperbolic fashion by raising the twin specters of the Great Depression and Great Recession:

> I, too, once believed that the horrific events of the early 1930s, when economic output fell by a quarter and prices by even more, could not recur in a modern capitalist economy like the U.S. Then 2008 happened, and we all learned where a religious belief in the self-correcting nature of markets can lead us. If we want stability, we have to choose the right policies. Raising rates in the face of low inflation is not one of them.

I hope you understand by now that I think: (i) economics is about science, not religion; and (ii) neo-Fisherism has nothing to do with the "self-correcting nature of markets." Do I think that Narayana's policy prescriptions are crazy and dangerous? Absolutely not. If he's right, which I think he's not, then good for him. If he's wrong, and his policies get implemented, what harm gets done? Inflation stays low, and central banks may proceed to demonstrate, through experimentation, that unconventional policies don't do much. Or maybe we find some that actually work. Who knows? We would really be in danger if the people who think of high inflation as a cure-all figure out how to produce it. But I don't think that will happen.

## Sunday, July 31, 2016

### Multiple Equilibria, Installment #2

The goal in this post is to provide some more illumination with respect to Narayana's note, and my previous post. As well, if I could eliminate Nick Rowe's confusion, that would be great.

Question: Does Narayana have a point? Answer: Nah.

The problem at hand is one of multiple equilibria. Sometimes multiple equilibrium models are used in an attempt to explain real-world phenomena. That's Roger Farmer's approach - maybe we're stuck in bad, suboptimal states because of self-fulfilling low expectations. Sometimes policy rules can lead to multiple equilibria in the models we study. That's considered problematic: to analyze policy in a coherent fashion, we would like a unique mapping from policy rules to outcomes, so that the optimal policy problem we're solving is well-specified. That's the problem that comes up in New Keynesian models, but it's certainly not unique to that class of models, as we'll show in the example below.

For people who work in monetary economics, multiple equilibria are ubiquitous. In any model that builds up a role for valued fiat money from first principles, there is always an equilibrium in which money is not valued - if people believe that money will not have value at any date in the future, it will never have value. Fiat money has no intrinsic payoffs, so if people believe that others will not accept it in exchange, they will not accept it either - valued money is supported as an equilibrium because everyone has the self-confirming faith that it will always be valued. So in models of fiat money, there is an equilibrium in which money is not valued, and typically many equilibria in which it is.

One old workhorse of monetary economics is Samuelson's overlapping generations model. The specific example I'm going to use comes from Costas Azariadis's 1981 paper. Time is indexed by *t = 1,2,3,...*, and at *t = 0* there are some old people endowed with *M(0)* units of money. In each period there are *N* two-period-lived people who work when they are young and consume when they are old. Each has preferences

(1) *U[c(t+1),n(t)] = u[c(t+1)] - v[n(t)],*

where *c* is consumption and *n* is labor supply. One unit of labor input produces one unit of consumption good. In equilibrium, the young work, purchase money from the old in exchange for goods, and then sell the money for goods when they're old. The government can inject money each period through lump sum transfers to the old. The money stock in period *t* is *M(t).* Assume preferences have standard properties: *u* is strictly concave and *v* is strictly convex, etc.

In equilibrium, everyone optimizes, and markets clear. There can be plenty of equilibria, including sunspot equilibria and cycles (see Azariadis's paper), but we'll focus on the deterministic ones. In general, we summarize equilibria as sequences {n(t)} that solve the difference equation

(2) *[M(t)/M(t+1)]n(t+1)u'[n(t+1)] - n(t)v'[n(t)] = 0,*

with

(3) *p(t) = n(t)/M(t),*

where *p(t)* is the price of money - the inverse of the price level.

Here's an example. Let *M(t) = 1* for all time, and assume *u* has constant relative risk aversion *a*, with *v* just *n* to the power *b.* Here, *a > 0* and *b > 1.* Then, if we write the difference equation (2) in logs (don't know how to deal with exponents in html), we get

(4) *ln[n(t+1)] = [b/(1-a)]ln[n(t)]*

So, if *a < 1,* then (4) looks like this [first chart], and if *a > 1,* it looks like this [second chart].

In either case, there are two steady states: (i) n = 0, where money has no value forever, and nothing gets produced. You can't see that in the second picture, but it's an equilibrium nevertheless. (ii) n = 1. The second steady state is the quantity-theoretic equilibrium. The money growth rate is zero, the inflation rate is zero, the growth rate in output is zero, and the velocity of money is constant forever. But there are also other equilibria, depending on parameters.
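The dynamics of (4) are easy to check numerically. Here's a small sketch (parameter values are mine, purely for illustration): iterating forward with *a < 1*, any *n(0) < 1* collapses toward the worthless-money steady state, while iterating backward from an arbitrary terminal value pulls the path toward *n = 1*.

```python
import math

def iterate_log_map(n_start, a, b, steps, backward=False):
    """Iterate equation (4), ln n(t+1) = [b/(1-a)] ln n(t),
    forward or backward from n_start."""
    slope = b / (1 - a)
    if backward:
        slope = 1 / slope  # backward: ln n(t) = [(1-a)/b] ln n(t+1)
    ln_n = math.log(n_start)
    for _ in range(steps):
        ln_n = slope * ln_n
    return math.exp(ln_n)

# a = 0.5, b = 2: slope b/(1-a) = 4, so n = 1 is unstable forward.
# Starting at n(0) = 0.9, forward iteration heads to the n = 0
# (worthless money) steady state -- a hyperinflationary path:
print(iterate_log_map(0.9, a=0.5, b=2.0, steps=10))  # essentially 0
# Backward, the map contracts ln n by 1/4 per step, so an arbitrary
# terminal value is pulled toward the n = 1 steady state:
print(iterate_log_map(0.2, a=0.5, b=2.0, steps=40, backward=True))
```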

First, suppose

*a < 1.*In the first chart, there are many equilibria with*0 < n(0) < 1*which all converge in the limit to*n = 0*. These are hyperinflationary equilibria for which the inflation rate increases over time without bound. There are also many equilibria with*n(0) > 1*for which*n(t)*grows over time without bound. These are hyperdeflationary equilibria, for which the inflation rate falls over time without bound.So, those are all the equilibria for that case (I think there are no cyclical or sunspot equilibria either - see Costas's paper). What would Narayana's note say about this? He's interested in the limiting equilibria of finite horizon economies. If we looked for such equilibria here the search is not difficult. Suppose we fix the horizon at length

*T*, where *T* is finite. Then *p(t) = 0* for all *t*. No one would want to hold money in any period, because it has no value in the final period. So, the only finite-horizon equilibrium is *n = 0* for any *T*, so if I take the limit I get *n = 0*. So, Narayana's claim that a limiting equilibrium of the finite horizon economy is an equilibrium in the infinite horizon economy is correct, but we only found one equilibrium by this approach - the one where money has no value.

We could take a broader view, however. Take the infinite horizon economy, fix *p(T)*, solve the difference equation (4) backward, then let *T* go to infinity. In this case, the difference equation is stable backward. So, this picks out two equilibria, *n = 0* and *n = 1*. That's an equilibrium selection device which, if we took it seriously, would permit us to ignore all the non-steady-state equilibria that converge to *n = 0* in the limit. But that approach shouldn't fill us with confidence. By conventional criteria, in this case *n = 0* is "stable" and *n = 1* is "unstable."

Next, consider the case *1 < a < 1 + b*. In this case, the slope of the difference equation in the second figure is not too steep at *n = 1*. In addition to the two steady states, there are now many equilibria with *n(0) > 0* that converge in the limit to *n = 1*. Again, literally following Narayana's advice gives one equilibrium, *n = 0*, but if we follow our other limiting approach, the difference equation is unstable backward, and there are three limiting equilibria: (i) *n = 0*; (ii) *n = 1*; (iii) a two-cycle {..., 0, inf, 0, inf, ...}. So that's an example for which Narayana's claim is not correct, as the two-cycle is not an equilibrium of the infinite horizon economy, since *n = 0* is a steady state.

One problem with the model I've specified is that it permits, under some conditions, hyperdeflations in which output grows without bound. A simple fix for that is to put an upper bound on labor supply, keeping preferences as we've specified them. That will kill off all the hyperdeflationary equilibria, as well as the limiting two-cycle we get by the Narayana method. Then, the Narayana method, taken literally, gives us one equilibrium: *n = 0*. The Narayana method, taken liberally, gives us two equilibria: *n = 0* and *n = 1*.

Note that Narayana's NK model is misspecified in a similar way (see my previous blog post). Given his Phillips curve, he finds equilibria for which *i = inf* and *i = -inf*. But in the first such equilibrium, output is rising at an infinite rate, and in the second it is falling at an infinite rate. An upper bound on labor supply would put an upper bound on output, and kill the first equilibrium. As well, in Narayana's model, the Phillips curve is derived by assuming that a fraction of firms charge last period's average price. So, if *i = -inf* the sticky-price firms sell no output, but the flexible-price firms have to sell some output. This puts a lower bound on output, which kills off the *i = -inf* equilibrium. Thus, by the liberal Narayana method, there is only one equilibrium in his NK model - the Fisherian one.

What about the literal Narayana method in his NK model? Here we have a problem. In spite of the fact that this is a cashless model, nominal bonds are traded as claims to money. But in a finite horizon model, the value of money must be zero in the final period, and thus in all periods. So the price of nominal bonds is zero. Thus, we can't even start discussing the usual NK approach, which assumes that the central bank can set the price of a nominal bond. The central bank is stuck with a price of zero.
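The backward-versus-forward selection logic is easy to see in a toy computation. The map below, n(t+1) = n(t)^a with a > 1, is a hypothetical stand-in of my own, not the post's actual difference equation, but it shares the relevant features: steady states at n = 0 and n = 1, with forward iteration converging to the nonmonetary steady state and backward iteration picking out n = 1.

```python
# Toy illustration (my stand-in map, not the post's model): n(t+1) = n(t)**a,
# with a > 1. Steady states are n = 0 and n = 1; n = 1 is unstable going
# forward in time and stable going backward, as in the discussion above.

def forward(n0, a=1.5, periods=50):
    """Iterate n(t+1) = n(t)**a forward in time from n(0) = n0."""
    n = n0
    for _ in range(periods):
        n = n ** a
    return n

def backward(nT, a=1.5, periods=50):
    """Iterate the map backward in time: n(t) = n(t+1)**(1/a)."""
    n = nT
    for _ in range(periods):
        n = n ** (1.0 / a)
    return n

print(forward(0.9))   # forward from n(0) < 1: converges to 0
print(backward(0.3))  # backward from any positive terminal n: converges to 1
print(backward(5.0))  # ... from above as well
```

Forward iteration from any n(0) in (0, 1) lands on the nonmonetary steady state, while solving backward from an arbitrary terminal condition selects n = 1 - the two selection devices contrasted above.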

Of course, we can wave our hands at this point, and claim that, in a finite horizon monetary model, the price of money is pegged in the last period through fiscal intervention. But that would be a different model, and we might ask why the fiscal authority doesn't do that intervention in every period - then we're done. The central bank should abandon its assigned job and hand it over to the fiscal authority.

Here's something interesting. In line with my previous blog post, there is an optimal monetary policy in this model that kills off indeterminacy. It looks like this:

*M(t+1)/M(t) = {n(t+1)u'[n(t+1)]}/{n\*v'[n(t)]}*, where *n\** solves *u'(n\*) = v'(n\*)*. In equilibrium, the money supply is constant, and the policy rule specifies out-of-equilibrium actions that eliminate the indeterminacy.

Question: Does Narayana have a point? Answer: Nah.

## Monday, July 18, 2016

### More Neo-Fisher

What follows is an attempt to make sense of Narayana's note on Neo-Fisherism. That discussion will lead into comments on a paper by George Evans and Bruce McGough.


Start with basics. What are Neo-Fisherite ideas anyway? Narayana says

...in the absence of shocks, the equilibrium inflation rate should be constant if the nominal interest rate is pegged forever. The Fisher equation then implies that the inflation rate should move one for one with the nominal interest rate. This logic is sometimes referred to as “neo-Fisherian”.

I would actually call these New Keynesian (NK) claims. For example, in "Interest and Prices," Mike Woodford takes pains to address the concern, which came out of the previous macro literature, that nominal interest rate pegs are unstable. Woodford's claim is that a Taylor rule that conforms to the Taylor principle (a greater than one-for-one increase in the nominal interest rate in response to an increase in inflation) will imply determinacy. That is, if there are no shocks, then the nominal interest rate is pegged at a constant forever, and the inflation rate is a constant - the inflation target. Further, in the basic NK model, if Woodford's claim is correct then, in the absence of shocks, if the central bank wants to increase its inflation target, the nominal interest rate should increase one-for-one with the increase in the inflation target, and actual inflation will respond accordingly. Under basic NK logic, this behavior is supported by promises to increase the nominal interest rate in response to higher inflation - and this inflation never materializes in equilibrium.

But, whatever we think Neo-Fisherite or New Keynesian ideas are, Narayana is making a particular argument in his note, and we want to get to the bottom of it. I don't think the analogy part is particularly helpful though. There are two problems considered in Narayana's note. One is an asset pricing problem, and the other has to do with the properties of a particular NK model. As far as I can tell, the extent of the commonality is that solving each problem can involve geometric series. Otherwise, understanding one problem won't help you much with the other.

The asset pricing problem looks like a trick question you might give to unwitting PhD students on a prelim exam. The equilibrium one-period real interest rate is negative and constant forever, and we're asked to price an asset that pays out a constant real amount each period forever. Question: Solve for the steady state price of the asset. Answer: Dummy, there is no steady state price for the asset. Since a rational economic agent in this world values future payoffs more than current payoffs, if we compute the present value of the payoffs, it will be infinite.
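The divergence is just a geometric series with ratio greater than one. A minimal sketch, with an illustrative payoff and rate of my choosing:

```python
# With a constant real rate r < 0, the present value of a constant payoff c,
# sum over t of c / (1 + r)**t, has terms that *grow* with t, so the sum
# diverges as the horizon lengthens -- no finite steady-state price exists.

def present_value(c, r, horizon):
    """Present value of a payoff c per period over a finite horizon."""
    return sum(c / (1.0 + r) ** t for t in range(1, horizon + 1))

# Illustrative numbers: r = -2% per period. Lengthening the horizon makes
# the "price" blow up rather than settle down:
for T in (50, 100, 200):
    print(T, present_value(1.0, -0.02, T))
```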

Well, so what? On to the second problem. Narayana uses a version of the standard NK model. We're in a world with certainty - no shocks. I'll change the notation so I don't have to use Greek letters. From standard asset pricing, and assuming constant relative risk aversion utility, we can take logs and get equation (1). Here, *y* is the output gap (the difference between actual output and efficient output), *i* is the inflation rate, *R* is the nominal interest rate, and *r* is the subjective discount rate (or the "natural real interest rate"). The second equation is a Phillips curve, equation (2). This is the only difference from standard NK, as the Phillips curve doesn't have a term in anticipated inflation. This makes the solution easy, but I don't think it otherwise changes the basic mechanics.

In general, we can solve to get the difference equation (3). Then, an equilibrium involves finding a sequence of inflation rates that solves the difference equation (3) given some sequence of nominal interest rates, or some policy rule governing the central bank's choice of the nominal interest rate each period.
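To make the difference equation concrete, suppose (my parameterization, not necessarily Narayana's exact one) that equation (1) is y(t) = y(t+1) - s[R(t) - i(t+1) - r] and the Phillips curve (2) is i(t) = k·y(t). Substituting gives a version of (3), i(t) = (1 + ks)·i(t+1) - ks·[R(t) - r], which can be simulated directly:

```python
# Sketch under assumed functional forms (my choices, not necessarily the
# post's exact equations): Euler equation y(t) = y(t+1) - s*(R(t) - i(t+1) - r)
# and static Phillips curve i(t) = k*y(t), which combine into
#     i(t) = (1 + k*s) * i(t+1) - k*s * (R(t) - r).
# Under a permanent peg R(t) = R, iterating forward in time,
#     i(t+1) = (i(t) + k*s*(R - r)) / (1 + k*s),
# is a contraction, so inflation converges to R - r from any i(0).

def peg_path(i0, R, r=0.02, k=0.5, s=1.0, periods=200):
    """Inflation path under a permanent interest rate peg at R."""
    path = [i0]
    for _ in range(periods):
        path.append((path[-1] + k * s * (R - r)) / (1.0 + k * s))
    return path

# Any initial inflation rate converges to the Fisherian value R - r:
for i0 in (-0.05, 0.0, 0.10):
    print(round(peg_path(i0, R=0.04)[-1], 6))  # each ~ 0.02 = R - r
```

In this parameterization the peg is stable solved forward and unstable solved backward, which is the property the next paragraphs turn on.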

So, suppose that the nominal interest rate is a constant *R* forever, and suppose that, in period *T*, the inflation rate is *i(T)*. Then, we can solve the difference equation (3) forward to get equation (4). Similarly, we can solve (3) backward to get equation (5). So, for any real number *i(T)*, equations (4) and (5) describe an equilibrium. Thus, there is a whole continuum of equilibria, indexed by *i(T)*. In equation (4), the second term on the right-hand side converges to zero as *n* goes to infinity, for any *i(T)*. Thus, all equilibria converge in the limit to an inflation rate of *R - r*. That's the long-run Fisher relation. In equation (5), the second term does not converge as *n* goes to infinity, i.e. as time runs backward to minus infinity. If *i(T) < R - r*, then inflation runs off to minus infinity as time runs backward, and if *i(T) > R - r*, then inflation runs off to infinity as time runs backward. This is typical of course - we have a difference equation that's stable if we solve it forward, and unstable if we solve it backward. Note that one equilibrium is *i(t) = R - r* in every period.

What Narayana does is to take equation (5), and let *T* go to infinity, so he's only looking at the backward solution. As should be clear, I hope, that's not describing all the equilibria. By any conventional notion of what we mean by convergence and stability, the nominal interest rate peg is stable, and all the equilibria converge in the limit to *R - r*. The Fisher relation holds in the long run. As a practical implication of this, I've heard many people argue that, if the central bank holds its nominal interest rate at zero, then surely inflation will eventually rise to the 2% inflation target. Well, they can't be thinking about this model then. In any equilibrium with *R = 0* forever and with inflation initially lower than some inflation target i*, inflation either falls to *-r* in the limit, or rises to *-r* in the limit. If -r < i*, the central bank will never achieve its target by staying at zero.

But, with a nominal interest rate pegged at some value forever, we have an indeterminacy problem - there exists a plethora of equilibria. This makes it hard to make statements about what happens when the interest rate goes up or down. For example, it's certainly correct that, if we set *T = 0* in equation (4), and think of time running from zero to infinity, solving the difference equation (3) forward, then given *i(0)*, the inflation rate will be higher along the whole equilibrium path if *R* rises. But *i(0)* is not predetermined - it's not an initial condition, it's endogenous, and the first step in only one equilibrium path. Who is to say that economic agents don't treat *R* as a signal and jump to another equilibrium path? We might also be tempted to set i(0) = R* - r, then solve for the equilibrium path given R = R**, and think of that as describing the effects of an increase in the nominal interest rate from R* to R**, since an inflation rate of R* - r is the long-run inflation rate when R = R*. Though that's suggestive, it's not precise, due to the indeterminacy problem.

So what to do about that? If we follow the usual NK approach, we would specify a Taylor rule, equation (6). In equation (6), the Taylor principle is *d > 1*, and Mike Woodford says that gives us determinacy. But what he means by that is local determinacy - that is, determinacy in a neighborhood of the inflation target i*. This model is simple enough, though, that it's easy to look at global determinacy - or indeterminacy, in this case. From equations (3) and (6), we get the difference equation (7). Plotting (7) as *D*, the kink in the difference equation is where the nominal interest rate hits the zero lower bound (for low inflation rates). *A* is the desired steady state where the central bank hits its inflation target, and *B* is the undesired steady state in which the inflation rate is *-r* and the nominal interest rate is zero. *A* is an equilibrium, but it's unstable - there are many equilibria that converge in the limit to *B*. We won't discuss equilibria in which inflation increases without bound, as the model needs to be fixed a bit so that those make sense, but that's possible in a slightly modified model. These are well-known results - the Taylor principle has "perils," i.e. it yields indeterminacy, and there are many equilibria in which the central bank falls short of its inflation target forever - not great.

So, we might look for other policy rules that are better behaved. Here's one, written in two parts, (8) and (9); it implies a difference equation that we can solve for the equilibrium. The first part of the rule, (8), acts to offset effects of future inflation on current inflation, thus killing off equilibrium paths that would imply current inflation above target. (8) is only an off-equilibrium threat. The second part of the rule, (9), acts to bring inflation back to target next period. The equilibrium result is that inflation can be lower than the target in period 0, but the central bank hits its target in every future period. Further, note that the rule is neo-Fisherian, in more than one way. First, the central bank reacts to low inflation by increasing the nominal interest rate above its long-run level, temporarily. Second, the equilibrium satisfies the properties in the quote at the beginning of this post. After period 0, the nominal interest rate is constant forever, and inflation is constant. If the inflation target increases, then the nominal interest rate increases one-for-one in periods 1, 2, 3, ... Narayana says those are Neo-Fisherian properties, and I stated above that I thought these were claims made of standard NK models under the Taylor principle. Seemingly, these are deemed by some people to be good properties of a monetary policy rule.
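The perils are easy to see numerically. The sketch below uses an assumed parameterization of my own (Euler equation y(t) = y(t+1) - s[R(t) - i(t+1) - r] and static Phillips curve i(t) = k·y(t), so inflation dynamics satisfy i(t) = (1 + ks)·i(t+1) - ks·[R(t) - r]), with a Taylor rule truncated at the zero lower bound:

```python
def taylor_path(i0, i_star=0.02, r=0.02, k=0.5, s=1.0, d=1.5, periods=100):
    """Inflation dynamics when the nominal rate follows a Taylor rule with a
    zero lower bound: R(t) = max(0, r + i* + d*(i(t) - i*)). The inflation
    dynamics i(t) = (1 + k*s)*i(t+1) - k*s*(R(t) - r) are an assumed
    parameterization, not necessarily the post's exact equation (7)."""
    i = i0
    for _ in range(periods):
        R = max(0.0, r + i_star + d * (i - i_star))
        i = (i + k * s * (R - r)) / (1.0 + k * s)
    return i

# Starting exactly at the target, the economy stays there (steady state A) ...
print(round(taylor_path(0.02), 6))
# ... but starting even slightly below target, inflation slides away from A
# and converges to the unintended steady state B, where i = -r and R = 0:
print(round(taylor_path(0.019), 6))
```

With d > 1, the target steady state A repels nearby paths, which drift to the zero-lower-bound steady state B - the indeterminacy described above.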

What Narayana seems to be getting at is that stickiness in expectations matters. In the example he gives in his note, fixed expectations in the infinite future can have very large effects today. You can see that in equation (5), for example, if we fix *i(T)* and solve backward. Indeed, it seems that conventional central banking wisdom comes from considering expectations as fixed, as is common practice in some undergraduate IS-LM/Phillips curve constructs. Take equation (1), fix all future variables, and an increase in the current nominal interest rate makes output and inflation go down. Indeed, sticky expectations is what George Evans and Bruce McGough have in mind. Here's their claim:

Following the Great Recession, many countries have experienced repeated periods with realized and expected inflation below target levels set by policymakers. Should policy respond to this by keeping interest rates near zero for a longer period or, in line with neo-Fisherian reasoning, by increasing the interest rate to the steady-state level corresponding to the target inflation rate? We have shown that neo-Fisherian policies, in which interest rates are set according to a peg, impart unavoidable instability. In contrast, a temporary peg at low interest rates, followed by later imposition of the Taylor rule around the target inflation rate, provides a natural return to normalcy, restoring inflation to its target and the economy to its steady state.

We can actually check this out in Narayana's model. Following Evans-McGough (E-M), we'll assume a form of adaptive expectations. Let *e(t+1)* denote the expected rate of inflation in period *t+1* possessed by economic agents in period *t*, and assume the updating rule (11). So, *h* determines the degree of stickiness in inflation expectations - there is less expectational inertia as *h* increases. Using (1), (2), and (11), we can solve for current inflation and expected inflation for next period, given the current nominal interest rate and expected inflation as of last period; equation (12) gives current inflation.

How this dynamic system behaves depends on parameters. To see some possibilities, consider extreme cases. If *h = 0*, this is the fixed-expectations case - expectations are so sticky that economic agents never learn. Letting *e* denote fixed inflation expectations, we get the undergrad IS-LM/P-curve model: if you want inflation to go up, reduce the nominal interest rate. The other extreme is *h = 1*, which is essentially rear-view-mirror myopia - economic agents expect inflation next period to be what it was this period. That's extreme Neo-Fisherism: if you want inflation to go up by 1%, increase the nominal interest rate by 1%.

The question is, what happens for intermediate values of *h*? There are three cases: sticky expectations, medium-sticky expectations, and not-so-sticky expectations. The sticky-expectations case gives the results that E-M are looking for. If the central banker follows a Taylor rule then, if inflation expectations are sufficiently low, the central banker goes to the zero lower bound, inflation increases, the Taylor rule eventually kicks in, and inflation converges in the limit to the inflation target i*. But, with medium-sticky or not-so-sticky expectations, from (12), increases in the nominal interest rate increase inflation. Further, if expectations are not-so-sticky, there are Taylor rule perils. If *d > 1*, then there always exist equilibria converging to the zero lower bound with *i = -r* in the limit. In those equilibria the central bank undershoots its inflation target forever.

Under no circumstances is the standard Taylor rule with *d > 1* well-behaved. At best, if inflation is initially below target, the inflation target is only achieved in the limit, and at worst the central banker gets stuck at the zero lower bound forever. But, there are other rules. Here's one: rule (19). Under this rule, the central banker hits the inflation target every period, provided initial inflation expectations are not too far below the inflation target. In the worst case, the central banker spends a finite number of periods at the zero lower bound when inflation expectations are too low. But, if inflation expectations are medium-sticky or not-so-sticky, the period at the zero lower bound exhibits inflation *above* the inflation target - i.e. a period at the zero lower bound can serve to bring inflation down.

The critical value for inflation expectations is *e\**. That is, under the rule (19), the central banker goes to the zero lower bound if inflation expectations fall below *e\**. Note that *e\** is decreasing in *h* and goes to minus infinity as *h* goes to 1. As expectations become less sticky, the zero lower bound kicks in only for extreme anticipated deflations.

In their paper, E-M say:

As we have shown, the adaptive learning viewpoint argues forcefully against the neo-Fisherian view and in support of the standard view.

As I hope I've made clear, that's overstated. I take the "standard view" to be (i) staying at the zero lower bound will eventually make inflation go up; (ii) a standard Taylor rule is the best the central bank can do. In Narayana's model, under adaptive learning, (i) is only correct under some parameter configurations - actual inflation and expected inflation both have to be sufficiently sticky. Further, (ii) is never correct.
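To see how the interest rate effect flips with h, here is a sketch under assumed functional forms of my own (not E-M's or Narayana's exact model): a static Phillips curve i = k·y, a log-linear Euler equation evaluated at expected inflation e(t+1), and the adaptive rule e(t+1) = h·i(t) + (1 - h)·e(t). Solving the temporary equilibrium for current inflation gives i(t) = [(1 + ks)(1 - h)·e(t) - ks·(R(t) - r)] / [1 - h·(1 + ks)]:

```python
def inflation_today(e, R, h, r=0.02, k=0.5, s=1.0):
    """Temporary-equilibrium inflation given last period's expectation e, the
    nominal rate R, and the expectations gain h (h = 0: fixed expectations;
    h = 1: agents expect last period's inflation). Assumed parameterization,
    for illustration only; undefined at h = 1/(1 + k*s), where the
    denominator changes sign."""
    return ((1.0 + k * s) * (1.0 - h) * e - k * s * (R - r)) / (1.0 - h * (1.0 + k * s))

# Very sticky expectations (small h): raising R lowers inflation, the
# undergrad IS-LM/P-curve sign.
print(inflation_today(0.02, 0.05, h=0.1) - inflation_today(0.02, 0.04, h=0.1))
# Not-so-sticky expectations (h = 1): raising R raises inflation one-for-one,
# the extreme Neo-Fisherian sign.
print(inflation_today(0.02, 0.05, h=1.0) - inflation_today(0.02, 0.04, h=1.0))
```

In this parameterization the sign of the interest rate effect flips at h = 1/(1 + ks): stickier expectations give the conventional sign, less sticky expectations give the Fisherian sign, in line with the three cases above.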

## Tuesday, June 21, 2016

### Attitude Adjustment

For this post, note the disclaimer at the top of the page. I'm just speaking for myself here, and my views do not necessarily reflect those of the St. Louis Fed, the Federal Reserve System, or the Board of Governors.


This is a reply to Narayana's recent Bloomberg post, which is a comment on this St. Louis Fed memo.

First, Narayana says that Jim Bullard thinks that

... the economy is so weak that a mere quarter-percentage-point increase would be enough for the foreseeable future.

I don't think the memo actually characterizes the economy as "weak" - it's not a pessimistic view of the world as, for example, Larry Summers or Robert Gordon might see it. As I noted in this post, one would not characterize the labor market as "weak." It's in fact tight, by conventional measures that we can trust. The view in the St. Louis Fed memo is that growth in real GDP, at 2% per annum, is likely to remain lower than the pre-financial crisis trend for the foreseeable future - i.e. "weaker" than we've been accustomed to. But "so weak" is language that is too pessimistic. And there remains the possibility that this will turn around.

Second, Narayana says:

Bullard’s rationale focuses on productivity...

That's not correct. The memo mentions low productivity growth, but a key part of the argument is in terms of low real rates of interest. According to conventional asset pricing and growth theory, low productivity growth leads to low consumption growth, which leads to low real rates of interest. But that effect alone does not seem to be strong enough to explain the fall in real interest rates in the world that has occurred over about the last 30 years. There is another effect that we could characterize as a liquidity premium effect, which could arise, for example, from a shortage of safe assets. I've studied that in some of my own work, for example in this paper with David Andolfatto. In recent history, the financial crisis, sovereign debt problems, and changes in banking regulation have contributed to the safe asset shortage, which increases the prices of safe assets and lowers their yields. This problem is particularly acute for U.S. government debt. A key point is that a low return on government debt need not coexist with low returns on capital - see the work by Gomme, Ravikumar, and Rupert cited in the memo.

Third, Narayana thinks that:

Bullard uses a somewhat obscure measure of inflation developed by the Dallas Fed, rather than the Fed’s preferred measure, which is well below 2 percent and is expected to remain there for the next two to three years.

"Obscure," of course, is in the eye of the beholder. Let's look at some inflation measures. The first measure is raw PCE inflation - that's the Fed's preferred measure, as specified here. The second is PCE inflation after stripping out food and energy prices - that's a standard "core" measure. The third is the Dallas Fed's trimmed mean measure. Trimmed mean inflation doesn't take a stand on what prices are most volatile, in that it strips out the most volatile prices as determined by the data - it "trims" and then takes the mean, and we calculate the rate of growth of the resulting index. One can of course argue about the wisdom of stripping volatile prices out of inflation measures - there are smart people who come down on different sides of this issue. One could, for example, make a case that core measures of inflation give us some notion of where raw PCE inflation is going. For example, in mid-2014, before oil prices fell dramatically, all three measures in the chart were about the same, i.e. about 1.7%. So, by Fisherian logic, if the real interest rate persists at its level in mid-2014, then an increase in the nominal interest rate of 50 basis points would make inflation about right - perhaps even above target. Personally, I think we don't use Fisherian logic enough.
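For concreteness, here is a toy version of the trimming calculation. The actual Dallas Fed measure trims asymmetrically and weights components by expenditure share; this equal-weighted, symmetric version (with made-up numbers) only illustrates the idea:

```python
# Toy trimmed mean: rank the component price changes, drop the top and
# bottom tails, and average what remains. (The real measure is
# expenditure-weighted and trims asymmetrically; this is just a sketch.)

def trimmed_mean(price_changes, trim=0.2):
    """Mean of price changes after dropping the top and bottom `trim` share."""
    ranked = sorted(price_changes)
    cut = int(len(ranked) * trim)
    kept = ranked[cut:len(ranked) - cut]
    return sum(kept) / len(kept)

# One volatile component (think energy) drags the raw mean far below the
# central tendency of the other price changes, but gets trimmed away:
changes = [1.8, 2.0, 1.9, 2.1, 1.7, 2.2, -15.0, 1.9, 2.0, 3.5]
print(round(sum(changes) / len(changes), 3))  # raw mean, pulled down
print(round(trimmed_mean(changes), 3))        # close to the central tendency
```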

Finally, Narayana says:

...the risk of excess inflation is relatively manageable.

That's a point made in the memo. The forecast reflects a view that Phillips curve effects are unimportant, and thus an excessive burst in inflation is not anticipated.

Here's a question for Narayana: Why, if a goal is to have "capacity to lower rates" in the event of "say, global financial instability," does he want rates reduced now?

### Should We Think of Confidence as Exogenous?

I don't always agree with Roger Farmer, but I admire his independence. Roger doesn't like to be bound by the constraints of particular research groups, and typically won't accept the assumptions decreed by some New Keynesians, Monetarists, New Fisherites, or whoever. Farmer is a Farmerite. But, Roger falls into a habit common to others who call themselves Keynesians, which is to describe what he does in terms of some older paradigm. The first time I saw Roger do this was in 1994, when he gave this paper at a Carnegie-Rochester conference. The paper was about quantitative work on a class of models which were one step removed from neoclassical growth models. Such models, with unique equilibrium and exogenous stochastic productivity shocks, had been used extensively by real business cycle (RBC) proponents, but Roger's work (and that of other people, including Jess Benhabib) was aimed at studying indeterminacy and endogenous fluctuations. The indeterminacy in Roger's work came from increasing returns to scale in aggregate production. Sufficient increasing returns, he showed, permitted sunspot equilibria, and those equilibria could look much like the stochastic equilibria in RBC models. That seemed promising, and potentially opened up a role for economic policy aimed at dealing with indeterminacy. Old Keynesian economics says we should offset exogenous shocks with fiscal and monetary policy; baseline RBC theory says such stabilization policy is a waste of time. But with indeterminacy, policy is much more complicated - theoretically, we can construct policies that eliminate particular equilibria through off-equilibrium promises. In equilibrium, we wouldn't actually observe how the policymaker was doing his or her job. While promising, this approach introduced some challenges. How do we deal econometrically with indeterminacy? How would we know if real-world policymakers had actually figured out this problem and were solving it?

Though teaching and entertaining ourselves has a lot to recommend it, most economists are interested in persuading other people of the usefulness of their ideas. Though I haven't had a lot of experience with dissemination of ideas in other professions, I think economists are probably extreme in terms of how we work out ideas in public. Seminars and conferences can be combative. We have fun arguing with each other, to the point where the uninitiated find us scary. And all economists know it's an uphill battle to get people to understand what we're doing, let alone to have them think that we've come up with the greatest thing since indoor plumbing. There's an art to convincing people that there are elements of things they know in our ideas. That's intuition - making the idea self-evident, without making it seem trivial, and hence unpublishable (horrors).

So, what does this have to do with Roger, indeterminacy, and 1994? In the talk I heard at CMU in 1994, to make his paper understandable Roger used words like "demand and supply shocks," "labor supply and demand curves," and, particularly, "animal spirits." Given that language, one would think that the elements of the model came from the General Theory.

Roger was hardly the first macroeconomist who made use of language from the General Theory, or Hicksian IS-LM, or post-Hicksian static AS-AD, to provide intuition for ideas they thought might appeal to people schooled in those traditions. Peter Diamond did it in 1982 – “aggregate demand” was in the title of the paper in which Diamond constructed a model with search and increasing returns in the matching function. That model could give rise to multiple steady states – equilibria with high output and low "unemployment" could coexist with equilibria with low output and high unemployment. If you knew some combination of one-sided search models, the Phelps volume, or had seen work by Dale Mortensen and Chris Pissarides on two-sided search, you could get it. People like Peter Howitt, Ken Burdett, and John Kennan could get it, because they were Northwestern students and had been in contact with Mortensen. But an IS-LM Keynesian wouldn’t get it. For those people, using the words “aggregate demand” is a dog whistle – a message that everything is OK. “Don’t worry, we’re not doing anything that you would object to.”

New Keynesians took some of these lessons in presentation to heart, and went far beyond dog whistles. A New Keynesian model is basically a neoclassical growth model with exogenous aggregate shocks, and with sticky prices in the context of price-setting monopolistically-competitive firms - and with something we could think of as monetary policy. Again, Keynes would not have the foggiest idea what this was about, but in some incarnations (three-equation reduced form), this was dressed up in a language that had been taught to undergraduates for about thirty years prior to the advent of New Keynesian frameworks in the late 1990s – the language of “aggregate demand,” “IS curves,” and “Phillips curves.”

New Keynesian economics was no less radical than what Lucas, Prescott, and others were up to in the 1970s and 1980s, but Lucas and Prescott were very in-your-face about what they did. That’s honest, and refreshing, but getting in the faces of powerful people can get you in trouble. I think Mike Woodford learned from that. Better to calm the powerful people who might have a hard time understanding you – get them on your side, and give them the impression that they get it. If Woodford had been in-your-face like Lucas and Prescott, he would probably have the reputation that, perhaps surprisingly, Lucas and Prescott still enjoy among some Cambridge (MA) educated people of my generation. For some, Lucas and Prescott are put in a class with the low life of society – Ponzi schemers, used car salespeople, and other hucksters. Not by the Nobel committee, fortunately.

But, there’s a downside to being non-confrontational. Woodford’s work, and the work of people who extended it, and did quantitative work in that paradigm, is technical – no less technical than the work of Lucas, Sargent, Wallace, Prescott, etc., from which it came. Not everyone is going to be able to do it, and not everyone will get it if it is presented in all its glory. But the dog whistles, and other more explicit appeals to defunct paradigms - or ones that should be - makes some people think that they get it. And when they think they get it, they think that the defunct paradigms are actually OK. And, if the person that thinks he or she gets it is making policy decisions, we’re all in trouble.

Why are we in trouble? Here’s an example. I could know a lot more math and econometrics than I do, and I’ve got plenty of limitations, as we all do. But I’ve had a lot of opportunities to learn firsthand from some of the best people in the profession – Rao Aiyagari, Mark Gertler, Art Goldberger, John Geweke, Chuck Wilson, Mike Rothschild, Bob Lucas, Ed Prescott, Larry Christiano, Narayana Kocherlakota, etc., etc. But I couldn’t get NK models when I first saw them. What’s this monetary model with no money in it? Where’s that Phillips curve come from? What the heck is that central bank doing without any assets and liabilities? I had to read Woodford’s book (and we know that Woodford isn’t stingy with words), listen to a lot of presentations, read some more papers, and work stuff out for myself, before I could come close to thinking I was getting it. So, trust me, if you hear the words “IS curve,” “Phillips curve,” “aggregate demand,” and “central bank,” and think you’ve got NK, you’re way off.

Way off? How? In this post, I wrote about a simplified NK model, and its implications. Some people seem to think that NK models with rational expectations tell us that, if a central bank increases its nominal interest rate target, then inflation will go down. But, in my post, I showed that there are several ways in which that is false. NK models in fact have Fisherian properties – or Neo-Fisherian properties, if you like. Fortunately, there are some people who agree with me, including John Cochrane and Rupert and Sustek. But, in spite of the fact that you can demonstrate how conventional macroeconomic models have Neo-Fisherian properties – analytically and quantitatively – and cite empirical evidence to back it up, the majority of people who work in the NK tradition don’t believe it, and neither do most policymakers. Part of this has to do with the fact that there indeed exists a model from which one could conclude that an increase in the central bank’s nominal interest rate target will decrease inflation. That model is a static IS-LM model with a Phillips curve and fixed (i.e. exogenous) inflation expectations. That’s the model that many (indeed likely the majority) of central bankers understand. And you can forgive them for thinking that’s roughly the same thing as a full-blown NK model, because that’s what they were told by the NK people. Now you can see the danger of non-confrontation – the policymakers with the power may not get it, though they are under the illusion that they do.

I know I’m taking a circuitous route to discussing Roger’s new paper, but we’re getting there. A few years ago, when Roger started thinking about these ideas and putting the ideas in blog posts, I wrote down a little model to help me understand what he was doing. Not wanting to let that effort go to waste, I expanded on it to the point where I could argue I was doing something new, and submitted it to a journal. AEJ-Macro rejected it (an unjust decision, as I’m sure all your rejections are too), but I managed to convince the JMCB to take it. [And now I'm recognizing some of my errors - note that "Keynesian" is in the title.] Here’s the idea. In his earlier work Roger had studied a type of macroeconomic indeterminacy that is very different from the multiple equilibrium models most of us are used to. In search and matching models we typically have to deal with situations in which two economic agents have to divide the surplus from exchange. There is abundant theory to bring to bear here - generalized Nash bargaining, Kalai bargaining, Rubinstein bargaining, etc. - but if we're to be honest with ourselves, we have to admit that we really don't know much about how people will divide the surplus in exchange. That idea has been exploited in monetary theory - for example by Hu, Kennan, and Wallace. Once we accept the idea that there is indeterminacy in how the surplus from exchange is split, we can think about artificial worlds with multiple equilibria. In my paper, I first showed a simple version of Roger's idea. Output is produced by workers and producers, and there is a population of people who can choose to be either, but not both. Each individual in this world chooses an occupation (worker or producer), they go through a matching process where workers are matched with producers (there's a matching function). Some get matched, some do not, and when there is a match output gets produced and the worker and producer split the proceeds and consume. 
In equilibrium there are always some unmatched workers (unemployment) and unmatched producers (unfilled vacancies). There is a continuum of equilibria indexed by the wage in a match. A high wage is associated with a high unemployment rate. That's because, in equilibrium, everyone has to be indifferent between becoming a producer and becoming a worker. If the wage is high, an individual receives high surplus as a worker and low surplus as a producer. Therefore, it must be easier in equilibrium to find a match as a producer than as a worker - the unemployment rate must be high and the vacancy rate low.

What I did was to extend the idea by working this out in a monetary economy - for me, a Lagos-Wright economy where money was necessary to purchase goods. Then, I could think about monetary (and fiscal) policy, and how policymakers could achieve optimality. As in the indeterminacy literature, this required thinking about how policy rules could kill off bad equilibria.

On to Roger's new paper. He also wants to flesh out his ideas in a monetary economy, and there's a lot in there, including quantitative work. As in Roger's previous work, and my interpretation of it, there are multiple steady states, with high wage/high unemployment steady states. As it's a monetary economy (overlapping generations), there are also multiple dynamic equilibria, and Roger explores that. So, that all seems interesting. But I'm having trouble with two things. The first is Roger's "belief function." In Roger's words:

Second complaint: This goes back to my lengthy discussion above. Roger's paper has "animal spirits" in the title, it cites the

Maybe you think this is all harmless, but it gets in the way of understanding, and I think Roger's goal is to be understood. Describe a bear as if it's a chicken, and you're going to confuse and mislead people. And they may make bad policy decisions as a result. Better to get in our faces with your ideas, and bear the consequences.

Though teaching and entertaining ourselves has a lot to recommend it, most economists are interested in persuading other people of the usefulness of their ideas. Though I haven't had a lot of experience with dissemination of ideas in other professions, I think economists are probably extreme in terms of how we work out ideas in public. Seminars and conferences can be combative. We have fun arguing with each other, to the point where the uninitiated find us scary. And all economists know it's an uphill battle to get people to understand what we're doing, let alone to have them think that we've come up with the greatest thing since indoor plumbing. There's an art to convincing people that there are elements of things they know in our ideas. That's intution - making the idea self-evident, without making it seem trivial, and hence unpublishable (horrors).

So, what does this have to do with Roger, indeterminacy, and 1994? In the talk I heard at CMU in 1994, to make his paper understandable Roger used words like "demand and supply shocks," "labor supply and demand curves," and, particularly, "animal spirits." Given that language, one would think that the elements of the model came from the

*General Theory* and textbook AS-AD models. But that was certainly *not* the case. The elements of the model were: (i) the neoclassical growth model, which most of the people in the room would have understood; (ii) increasing returns to scale which, again, was common currency for most in the room; (iii) sunspot equilibria, which were first studied in the late 1970s by Cass and Shell. This particular conference was in part about indeterminacy, so there were people there - Russ Cooper, Mike Woodford, Rao Aiyagari, for example - who understood the concept well, and could construct sunspot equilibria if you asked them to. But there were other people in the room - Allan Meltzer, for example - who would have had no clue. And having Roger tell the non-initiated that his paper was actually about AS-AD and animal spirits would not actually help anyone understand what he was doing. If Roger had just delivered his indeterminacy paper in unadulterated form, no undergraduate versed in IS-LM and AS-AD would have drawn any connection, and if Keynes had been in the room he would not have seen any similarity between his work and Roger's ideas. But once Roger said "animal spirits," Keynes would have thought, "Oh, now I get it." He would have left the conference with the impression that Roger was just validating the General Theory in a more technical context. And he would have been seriously misled.

Roger was hardly the first macroeconomist to use language from the General Theory, Hicksian IS-LM, or post-Hicksian static AS-AD to provide intuition for ideas that might appeal to people schooled in those traditions. Peter Diamond did it in 1982 – "aggregate demand" was in the title of the paper in which Diamond constructed a model with search and increasing returns in the matching function. That model could give rise to multiple steady states – equilibria with high output and low "unemployment" could coexist with equilibria with low output and high unemployment.
If you knew some combination of one-sided search models, the Phelps volume, or work by Dale Mortensen and Chris Pissarides on two-sided search, you could get it. People like Peter Howitt, Ken Burdett, and John Kennan could get it, because they were Northwestern students and had been in contact with Mortensen. But an IS-LM Keynesian wouldn’t get it. For those people, using the words “aggregate demand” is a dog whistle – a message that everything is OK. “Don’t worry, we’re not doing anything that you would object to.”

New Keynesians took some of these lessons in presentation to heart, and went far beyond dog whistles. A New Keynesian model is basically a neoclassical growth model with exogenous aggregate shocks, and with sticky prices in the context of price-setting monopolistically-competitive firms - and with something we could think of as monetary policy. Again, Keynes would not have the foggiest idea what this was about, but in some incarnations (the three-equation reduced form), this was dressed up in a language that had been taught to undergraduates for about thirty years prior to the advent of New Keynesian frameworks in the late 1990s – the language of “aggregate demand,” “IS curves,” and “Phillips curves.”

New Keynesian economics was no less radical than what Lucas, Prescott, and others were up to in the 1970s and 1980s, but Lucas and Prescott were very in-your-face about what they did. That’s honest, and refreshing, but getting in the faces of powerful people can get you in trouble. I think Mike Woodford learned from that. Better to calm the powerful people who might have a hard time understanding you – get them on your side, and give them the impression that they get it. If Woodford had been in-your-face like Lucas and Prescott, he would probably have the reputation that, perhaps surprisingly, Lucas and Prescott still enjoy among some Cambridge (MA) educated people of my generation. For some, Lucas and Prescott are put in a class with the low life of society – Ponzi schemers, used car salespeople, and other hucksters. Not by the Nobel committee, fortunately.

But, there’s a downside to being non-confrontational. Woodford’s work, and the work of people who extended it and did quantitative work in that paradigm, is technical – no less technical than the work of Lucas, Sargent, Wallace, Prescott, etc., from which it came. Not everyone is going to be able to do it, and not everyone will get it if it is presented in all its glory. But the dog whistles, and other more explicit appeals to defunct paradigms - or ones that should be - make some people think that they get it. And when they think they get it, they think that the defunct paradigms are actually OK. And, if the person who thinks he or she gets it is making policy decisions, we’re all in trouble.

Why are we in trouble? Here’s an example. I could know a lot more math and econometrics than I do, and I’ve got plenty of limitations, as we all do. But I’ve had a lot of opportunities to learn firsthand from some of the best people in the profession – Rao Aiyagari, Mark Gertler, Art Goldberger, John Geweke, Chuck Wilson, Mike Rothschild, Bob Lucas, Ed Prescott, Larry Christiano, Narayana Kocherlakota, etc., etc. But I couldn’t get NK models when I first saw them. What’s this monetary model with no money in it? Where’s that Phillips curve come from? What the heck is that central bank doing without any assets and liabilities? I had to read Woodford’s book (and we know that Woodford isn’t stingy with words), listen to a lot of presentations, read some more papers, and work stuff out for myself, before I could come close to thinking I was getting it. So, trust me, if you hear the words “IS curve,” “Phillips curve,” “aggregate demand,” and “central bank,” and think you’ve got NK, you’re way off.

Way off? How? In this post, I wrote about a simplified NK model, and its implications. Some people seem to think that NK models with rational expectations tell us that, if a central bank increases its nominal interest rate target, then inflation will go down. But, in my post, I showed that there are several ways in which that is false. NK models in fact have Fisherian properties – or Neo-Fisherian properties, if you like. Fortunately, there are some people who agree with me, including John Cochrane and Rupert and Sustek. But, in spite of the fact that you can demonstrate how conventional macroeconomic models have Neo-Fisherian properties – analytically and quantitatively – and cite empirical evidence to back it up, the majority of people who work in the NK tradition don’t believe it, and neither do most policymakers. Part of this has to do with the fact that there indeed exists a model from which one could conclude that an increase in the central bank’s nominal interest rate target will decrease inflation. That model is a static IS-LM model with a Phillips curve and fixed (i.e. exogenous) inflation expectations. That’s the model that many (indeed likely the majority) of central bankers understand. And you can forgive them for thinking that’s roughly the same thing as a full-blown NK model, because that’s what they were told by the NK people. Now you can see the danger of non-confrontation – the policymakers with the power may not get it, though they are under the illusion that they do.
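To see what "Fisherian properties" means in the simplest possible terms, here is a deliberately stripped-down sketch of my own (not the full NK model, and the numbers are illustrative): with flexible prices, a constant real rate, and rational expectations, the Fisher relation forces long-run inflation to move one-for-one with a permanently pegged nominal rate.

```python
# Stripped-down Fisherian illustration (my sketch, illustrative numbers):
# with a constant real rate r and perfect foresight, the Fisher relation
# i_t = r + pi_{t+1} means a permanent nominal-rate peg pins down
# long-run inflation at i - r. A permanent rate HIKE raises inflation.
r = 0.02                              # constant real interest rate
i_path = [0.03] * 5 + [0.05] * 15     # permanent nominal rate hike at t = 5

# Inflation implied period by period by the Fisher relation:
pi_path = [i - r for i in i_path]

print(round(pi_path[0], 4))    # inflation before the hike: 0.01
print(round(pi_path[-1], 4))   # long-run inflation after the hike: 0.03
```

In a full NK model there are short-run sticky-price dynamics layered on top of this, but the long-run Fisherian logic survives, which is the point at issue.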

I know I’m taking a circuitous route to discussing Roger’s new paper, but we’re getting there. A few years ago, when Roger started thinking about these ideas and putting the ideas in blog posts, I wrote down a little model to help me understand what he was doing. Not wanting to let that effort go to waste, I expanded on it to the point where I could argue I was doing something new, and submitted it to a journal. AEJ-Macro rejected it (an unjust decision, as I’m sure all your rejections are too), but I managed to convince the JMCB to take it. [And now I'm recognizing some of my errors - note that "Keynesian" is in the title.] Here’s the idea. In his earlier work Roger had studied a type of macroeconomic indeterminacy that is very different from the multiple equilibrium models most of us are used to. In search and matching models we typically have to deal with situations in which two economic agents have to divide the surplus from exchange. There is abundant theory to bring to bear here - generalized Nash bargaining, Kalai bargaining, Rubinstein bargaining, etc. - but if we're to be honest with ourselves, we have to admit that we really don't know much about how people will divide the surplus in exchange. That idea has been exploited in monetary theory - for example by Hu, Kennan, and Wallace. Once we accept the idea that there is indeterminacy in how the surplus from exchange is split, we can think about artificial worlds with multiple equilibria. In my paper, I first showed a simple version of Roger's idea. Output is produced by workers and producers, and there is a population of people who can choose to be either, but not both. Each individual in this world chooses an occupation (worker or producer) and then goes through a matching process in which workers are matched with producers (there's a matching function). Some get matched, some do not, and when there is a match, output gets produced and the worker and producer split the proceeds and consume.
In equilibrium there are always some unmatched workers (unemployment) and unmatched producers (unfilled vacancies). There is a continuum of equilibria indexed by the wage in a match. A high wage is associated with a high unemployment rate. That's because, in equilibrium, everyone has to be indifferent between becoming a producer and becoming a worker. If the wage is high, an individual receives high surplus as a worker and low surplus as a producer. Therefore, it must be easier in equilibrium to find a match as a producer than as a worker - the unemployment rate must be high and the vacancy rate low.
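The continuum-of-equilibria logic can be illustrated with a toy calculation. The matching function, the parameter values, and the functional form below are my own stylized choices, not the specification in either paper: a fraction n of the population becomes workers, 1 - n producers, matches are m = A√(n(1 - n)), a matched pair splits a unit surplus with the worker getting wage w, and the indifference condition w(m/n) = (1 - w)(m/(1 - n)) implies n = w.

```python
import math

def equilibrium(w, A=0.5):
    """Toy sketch of the indifference logic (stylized, not the actual model).
    A is small enough that match probabilities stay below 1 for these wages."""
    n = w                          # occupational choice implied by indifference
    m = A * math.sqrt(n * (1 - n))
    unemployment = 1 - m / n       # unmatched workers
    vacancies = 1 - m / (1 - n)    # unmatched producers
    return unemployment, vacancies

for w in (0.3, 0.5, 0.7):
    u, v = equilibrium(w)
    print(f"wage {w}: unemployment rate {u:.2f}, vacancy rate {v:.2f}")
```

Each wage indexes a different equilibrium: a higher wage draws more people into the worker side, so workers match less easily and producers more easily - unemployment rises and vacancies fall, exactly the pattern described above.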

What I did was to extend the idea by working this out in a monetary economy - for me, a Lagos-Wright economy where money was necessary to purchase goods. Then, I could think about monetary (and fiscal) policy, and how policymakers could achieve optimality. As in the indeterminacy literature, this required thinking about how policy rules could kill off bad equilibria.

On to Roger's new paper. He also wants to flesh out his ideas in a monetary economy, and there's a lot in there, including quantitative work. As in Roger's previous work, and my interpretation of it, there are multiple steady states, with high wage/high unemployment steady states. As it's a monetary economy (overlapping generations), there are also multiple dynamic equilibria, and Roger explores that. So, that all seems interesting. But I'm having trouble with two things. The first is Roger's "belief function." In Roger's words:

To close our model, we assume that equilibrium is selected by ‘animal spirits’ and we model that idea by introducing a belief function as in Farmer (1993, 2002, 2012b). We treat the belief function as a fundamental with the same methodological status as preferences and endowments and we study the implications of that assumption for the ability of monetary policy to influence inflation, output and unemployment.

So, a lot of people have done work on indeterminacy, and I have never run across a "belief function" that someone wants me to think is going to deliver beliefs exogenously. In Roger's model, the belief function is actually an equilibrium selection device, imposed by the modeler. The model tells us there are multiple equilibria, and that's all it has to say. "Beliefs," as we typically understand them, are in fact endogenous in Roger's model. And calling them exogenous does not accomplish anything, as far as I can tell, other than to get people confused, or cause them to raise objections, as I'm doing now.

Second complaint: This goes back to my lengthy discussion above. Roger's paper has "animal spirits" in the title, it cites the *General Theory*, and the words "aggregate demand" show up 7 times in the paper. Roger also sometimes comes up with passages like this:

Our model provides a microfoundation for the textbook Keynesian cross, in which the equilibrium level of output is determined by aggregate demand. Our labor market structure explains why firms are willing to produce any quantity of goods demanded, and our assumption that beliefs are fundamental determines aggregate demand.

And this:

Although our work is superficially similar to the IS-LM model and its modern New Keynesian variants; there are significant differences. By grounding the aggregate supply function in the theory of search and, more importantly, by dropping the Nash bargaining assumption, we arrive at a theory where preferences, technology and endowments are not sufficient to uniquely select an equilibrium.

In how many ways are these silly statements? This model is related to the Keynesian cross and IS-LM as chickens are related to bears. The genesis of Roger's framework is Paul Samuelson's overlapping generations model, work on indeterminacy in monetary versions of that model (some of which you can find in the Minneapolis conference volume), and the search and matching literature. NK models are not "variants" of IS-LM models - they are entirely different beasts. It's not "aggregate demand" that is determining anything in Roger's model - there are multiple equilibria, and that's all.

Maybe you think this is all harmless, but it gets in the way of understanding, and I think Roger's goal is to be understood. Describe a bear as if it's a chicken, and you're going to confuse and mislead people. And they may make bad policy decisions as a result. Better to get in our faces with your ideas, and bear the consequences.

## Friday, June 17, 2016

### Dazed and Confused?

In October 2015, after a September payroll employment estimate of 142,000 new jobs, described as "grim" and "dismal" in the media, I wrote this blog post, arguing that we might well see less employment growth in the future. That conclusion came from simple labor force arithmetic. With the working-age population (ages 15-64) growing at a low rate of about 0.5%, if the labor force participation rate failed to increase and the unemployment rate stopped falling, payroll employment could grow at most by 60,000 per month, as I saw it last October.
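The labor force arithmetic is simple enough to write down. The round numbers below are my own illustrative assumptions, not figures from the original post:

```python
# Back-of-the-envelope version of the labor force arithmetic above,
# using round numbers of my own (illustrative assumptions).
payroll_employment = 142e6   # approximate payroll employment, fall 2015
pop_growth = 0.005           # annual working-age population growth rate

# With a flat participation rate and a flat unemployment rate,
# employment can grow no faster than the working-age population:
monthly_gain = payroll_employment * pop_growth / 12
print(f"about {monthly_gain:,.0f} jobs per month")
```

That comes out at roughly 60,000 jobs per month, which is the ceiling described above.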

After the last employment report, which included an estimate of a monthly increase of 38,000 in payroll employment, some people were "shocked," apparently. Let's take a look at a wider array of labor market data, and see whether they should be panicking.

If you have been following employment reports in the United States for a while, you might wonder why the establishment survey numbers are always reported in terms of the monthly change in seasonally-adjusted employment. After all, we typically like to report inflation as year-over-year percentage changes in the price level, or real GDP as quarterly percentage changes in a number that has been converted to an annual rate. So, suppose we look at year-over-year percentage changes in payroll employment. That wouldn't quite make your cat climb the curtains: employment growth rates were above 2% for a short time in early 2015, and the growth rate has fallen since, but we're back to growth rates close to what we saw in 2013-2014.

What's happening with unemployment and vacancies? The unemployment rate is currently at 4.7%, only 0.3 percentage points higher than its most recent cyclical low of 4.4% in May 2007, and the vacancy rate (JOLTS job openings rate) has been no higher since JOLTS came into being more than 15 years ago. Thus, by the standard measure we would use in labor search models (ratio of vacancies to unemployment), this job market is very tight.

If we break down the unemployment rate by duration of unemployment, we get more information: In this chart, I've taken the number of unemployed for a particular duration, and expressed this as a percentage of the labor force. If you add the four quantities, you get the total unemployment rate. Here, it's useful again to compare the May 2016 numbers with May 2007. In May 2007, the unemployment rates for less than 5 weeks, 5 to 14 weeks, 15-26 weeks, and 27 weeks and over were 1.6%, 1.4%, 0.7%, and 0.7%, respectively. In May 2016 they were 1.4%, 1.4%, 0.7%, and 1.2%, respectively. So, middle-duration unemployment currently looks the same as in May 2007, but there are fewer very-short-term unemployed, and more long-term unemployed. But long-term unemployment continues to fall, with a significant decline in the last report.

Some people have looked at the low employment/population ratio and falling participation rate, and argued that this reflects a persistent inefficiency. So, for example, if you thought that a large number of "involuntarily" unemployed had dropped out of the labor force and were only waiting for the right job openings to materialize, you might have thought that increases in labor force participation earlier this year were consistent with such a phenomenon. But the best description of the data now seems to be that labor force participation leveled off as of mid-2015. Given the behavior of unemployment and vacancies in the previous two charts, and the fact that labor force participation has not been cyclically sensitive historically, the drop in labor force participation appears to be a secular phenomenon, and it is highly unlikely that this process will reverse itself. Thus, it seems wrongheaded to argue that some persistent wage and price stickiness is responsible for the low employment/population ratio and low participation rate. There is something to explain in the last chart, all right (for example, Canada and Great Britain, with similar demographics, have not experienced the same decline in labor force participation), and this may have some connection to policies in the fiscal realm, but it's hard to make the case that there is some alternative monetary policy that can make labor force participation go up.

Another key piece of labor market information comes from the CPS measures of flows among the three labor force states - employed (E), unemployed (U), and not in the labor force (N). We'll plot these as percentage rates, relative to the stock of people in the source state.

First, the E state: the rate of transition from E to U is close to its lowest value since 1990, but the transition rate from E to N is relatively high. This is consistent with the view that the decline in labor force participation is a long-run phenomenon. People are not leaving E, suffering a period of U, and then going to N - they're going directly from E to N.

Next, the U state: the total rate at which people are exiting the U state is lower than average and, while before the last recession the exit rate to E was higher than the exit rate to N, these rates are currently about the same. This seems consistent with the fact that the unemployment pool currently has a mix that tilts more toward the long-term unemployed. These people have a higher probability than the rest of the unemployed of going to state N rather than E.

Finally, for state N: the rates at which people are leaving state N for both states E and U are relatively low. Thus, labor force participation has declined both because of a high inflow (from both E and U) and a low outflow. But the low outflow rate from N to U (in fact, the lowest since 1990) also reflects the tight labor market, in that a person leaving state N is much more likely to end up in state E rather than U (though no more likely, apparently, than was the case historically, on average).
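The connection between these flow rates and the stocks can be made concrete with a small Markov-chain example. The transition rates below are hypothetical round numbers of my own, not the actual CPS estimates discussed above; the point is only that the three stocks settle at the stationary distribution implied by the flows.

```python
# Hypothetical monthly transition probabilities among E, U, N
# (illustrative numbers, not the actual CPS flow estimates).
P = [
    [0.96, 0.01, 0.03],  # from E: stay in E, E -> U, E -> N
    [0.25, 0.55, 0.20],  # from U: U -> E, stay in U, U -> N
    [0.04, 0.02, 0.94],  # from N: N -> E, N -> U, stay in N
]

dist = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):  # iterate to the stationary distribution
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

e, u, n = dist
print(f"E {e:.3f}  U {u:.3f}  N {n:.3f}")
print(f"participation rate {e + u:.3f}, unemployment rate {u / (e + u):.3f}")
```

With these (made-up) rates, a high E-to-N flow and low N-to-E and N-to-U flows deliver a large stationary N share - the mechanism behind the secular decline in participation described above.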

The last thing we should look at is productivity. In this context, a useful measure is the ratio of real GDP to payroll employment. By this measure, average labor productivity took a large jump during 2009, but since early 2010 it has been roughly flat. There has been some discussion as to whether productivity growth measures are biased downward. Chad Syverson, for example, argues that there is no evidence of bias in measures of output per hour worked. So, if we take the productivity growth measures at face value, this is indeed something to be shocked and concerned about.

To summarize:

1. The recent month's slowdown in payroll employment growth should not be taken as a sign of an upcoming recession. The labor market, by conventional measures, is very tight.

2. The best forecast seems to be that, barring some unanticipated aggregate shock, labor force participation will stay where it is for the next year, while the unemployment rate could move lower, to the 4.2%-4.5% range, given that the fraction of long-term unemployed in the unemployment pool is still relatively high.

3. Given an annual growth rate of about 0.5% in the working age population, and supposing a drop of 0.2-0.5 percentage points in the unemployment rate over the next year, with half the reduction in unemployment involving transitions to employment, payroll employment can only grow at about 80,000 per month over the next year, assuming a stable labor force participation rate. Thus, if we add the striking Verizon workers (about 35,000) to the current increase in payroll employment, that's about what we'll be seeing for the next year. Don't be shocked and concerned. It is what it is.

4. Given recent productivity growth, and the prospects for employment growth, output growth is going to be low. I'll say 1.0%-2.0%. And that's if nothing extraordinary happens.

5. Though we can expect poor performance - low output and employment growth - relative to post-WWII time series for the United States, there is nothing currently in sight that represents an inefficiency that monetary policy could correct. That is, we should expect the labor market to remain tight, by conventional measures.
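The arithmetic behind point 3 can be sketched with round numbers. The stock figures below are my own illustrative assumptions, not the exact inputs used above:

```python
# Rough arithmetic behind point 3 (round numbers, illustrative only).
payroll_employment = 143e6    # approximate payroll employment
labor_force = 157e6           # approximate labor force
wap_growth = 0.005            # annual working-age population growth
unemp_drop = 0.0035           # midpoint of the 0.2-0.5 pp decline
share_to_E = 0.5              # half of the drop is U -> E transitions

# Growth from the working-age population, with flat participation:
baseline = payroll_employment * wap_growth / 12
# Additional employment from the falling unemployment rate:
from_unemployment = labor_force * unemp_drop * share_to_E / 12
total = baseline + from_unemployment
print(f"about {total:,.0f} jobs per month")
```

That lands in the ballpark of 80,000 jobs per month, consistent with the figure in point 3.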

After the last employment report, which included an estimate of a monthly increase of 38,000 in payroll employment, some people were "shocked," apparently. Let's take a look at a wider array of labor market data, and see whether they should be panicking.

If you have been following employment reports in the United States for a while, you might wonder why the establishment survey numbers are always reported in terms of the monthly change in seasonally-adjusted employment. After all, we typically like to report inflation as year-over-year percentage changes in the price level, or real GDP as quarterly percentage changes in a number that has been converted to an annual rate. So, suppose we look at year-over-year percentage changes in payroll employment:That wouldn't quite make your cat climb the curtains. Employment growth rates were above 2% for a short time in early 2015, and the growth rate has fallen, but we're back to growth rates close to what we saw in 2013-2014.

What's happening with unemployment and vacancies? The unemployment rate is currently at 4.7%, only 0.3 percentage points higher than its most recent cyclical low of 4.4% in May 2007, and the vacancy rate (JOLTS job openings rate) has been no higher since JOLTS came into being more than 15 years ago. Thus, by the standard measure we would use in labor search models (ratio of vacancies to unemployment), this job market is very tight.

If we break down the unemployment rate by duration of unemployment, we get more information: In this chart, I've taken the number of unemployed for a particular duration, and expressed this as percentage of the labor force. If you add the four quantities, you get the total unemployment rate. Here, it's useful again to compare the May 2016 numbers with May 2007. In May 2007, the unemployment rates for less than 5 weeks, 5 to 14 weeks, 15-26 weeks, and 27 weeks and over were 1.6%, 1.4%, 0.7%, and 0.7%, respectively. In May 2016 they were 1.4%, 1.4%, 0.7%, and 1.2%, respectively. So, middle-duration unemployment currently looks the same as in May 2007, but there are fewer very-short-term unemployed, and more long-term unemployed. But long-term unemployment continues to fall, with a significant decline in the last report.

Some people have looked at the low employment/population ratio and falling participation rate and argued that this reflects a persistent inefficiency. So, for example, if you thought that a large number of "involuntarily" unemployed had dropped out of the labor force and were only waiting for the right job openings to materialize, you might have thought that increases in labor force participation earlier this year were consistent with such a phenomenon. But the best description of the data now seems to be that labor force participation leveled off as of mid-2015. Given the behavior of unemployment and vacancies in the previous two charts, and the fact that labor force participation has not been cyclically sensitive historically, the drop in labor force participation appears to be a secular phenomenon, and it is highly unlikely that this process will reverse itself. Thus, it seems wrongheaded to argue that some persistent wage and price stickiness is responsible for the low employment/population ratio and low participation rate. There is something to explain in the last chart, all right - for example, Canada and Great Britain, with similar demographics, have not experienced the same decline in labor force participation - and this may have some connection to fiscal policy, but it's hard to make the case that there is some alternative monetary policy that can make labor force participation go up.

Another key piece of labor market information comes from the CPS measures of flows among the three labor force states - employed (E), unemployed (U), and not in the labor force (N). We'll plot these as percentage rates, relative to the stock of people in the source state. For the E state: the rate of transition from E to U is close to its lowest value since 1990, but the transition rate from E to N is relatively high. This is consistent with the view that the decline in labor force participation is a long-run phenomenon. People are not leaving E, suffering a period of U, and then going to N - they're going directly from E to N. Next, the U state: the total rate at which people are exiting the U state is lower than average and, while before the last recession the exit rate to E was higher than the exit rate to N, these rates are currently about the same. This seems consistent with the fact that the unemployment pool currently tilts more toward the long-term unemployed, who have a higher probability than the rest of the unemployed of going to state N rather than E. Finally, for state N: the rates at which people are leaving state N for both states E and U are relatively low. Thus, labor force participation has declined both because of a high inflow to N (from both E and U) and a low outflow. But the low outflow rate from N to U (in fact, the lowest since 1990) also reflects the tight labor market, in that a person leaving state N is much more likely to end up in state E than in U (though no more likely, apparently, than has historically been the case, on average).
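The transition rates in these charts are computed as a monthly flow divided by the stock in the source state. A hypothetical sketch - the stocks and flows below are assumed round numbers (in millions), not actual CPS figures:

```python
# Transition rates among labor force states, computed as in the text:
# monthly flow out of a source state, divided by the stock in that state.
# All stocks and flows below are assumed round numbers (millions), not CPS data.
stocks = {"E": 151.0, "U": 7.5, "N": 94.0}  # assumptions
flows = {("E", "U"): 1.5, ("E", "N"): 4.5,  # assumptions
         ("U", "E"): 1.9, ("U", "N"): 1.9,
         ("N", "E"): 4.2, ("N", "U"): 1.7}

rates = {(src, dst): 100 * f / stocks[src] for (src, dst), f in flows.items()}
for (src, dst), r in sorted(rates.items()):
    print(f"{src} -> {dst}: {r:.2f}% per month")
```

Note that dividing by the source stock is what makes the rates comparable across states of very different sizes: a flow of 4.5 million out of E is a small hazard rate, while the same flow out of U would be enormous.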

The last thing we should look at is productivity. In this context, a useful measure is the ratio of real GDP to payroll employment, which looks like this: By this measure, average labor productivity took a large jump during 2009, but since early 2010 it has been roughly flat. There has been some discussion as to whether productivity growth measures are biased downward. Chad Syverson, for example, argues that there is no evidence of bias in measures of output per hour worked. So, if we take the productivity growth measures at face value, this is indeed something to be shocked and concerned about.
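The productivity measure described here is just a ratio. A minimal sketch, with assumed round-number magnitudes for real GDP and payroll employment (illustrative values, not actual data):

```python
# The average labor productivity measure used here: real GDP divided by
# payroll employment. Both magnitudes are assumed round numbers, not data.
real_gdp = 16_500e9          # assumption: dollars of real GDP
payroll_employment = 144e6   # assumption: workers on payrolls

productivity = real_gdp / payroll_employment  # output per worker
print(f"output per worker: ${productivity:,.0f}")
```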

**Conclusions**

1. The recent month's slowdown in payroll employment growth should not be taken as a sign of an upcoming recession. The labor market, by conventional measures, is very tight.

2. The best forecast seems to be that, barring some unanticipated aggregate shock, labor force participation will stay where it is for the next year, while the unemployment rate could move lower, to the 4.2%-4.5% range, given that the fraction of long-term unemployed in the unemployment pool is still relatively high.

3. Given an annual growth rate of about 0.5% in the working age population, and supposing a drop of 0.2-0.5 percentage points in the unemployment rate over the next year, with half the reduction in unemployment involving transitions to employment, payroll employment can only grow at about 80,000 per month over the next year, assuming a stable labor force participation rate. Thus, if we add the striking Verizon workers (about 35,000) to the current increase in payroll employment, that's about what we'll be seeing for the next year. Don't be shocked and concerned. It is what it is.

4. Given recent productivity growth, and the prospects for employment growth, output growth is going to be low. I'll say 1.0%-2.0%. And that's if nothing extraordinary happens.

5. Though we can expect poor performance - low output and employment growth - relative to post-WWII time series for the United States, there is nothing currently in sight that represents an inefficiency that monetary policy could correct. That is, we should expect the labor market to remain tight, by conventional measures.
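For readers who want to reproduce the back-of-envelope arithmetic in point 3, here is a sketch. The labor force level is an assumed round number; the growth rate, unemployment decline, and transition share are the ones stated in the text. The result lands in the same ballpark as the roughly 80,000 per month figure:

```python
# Back-of-envelope version of the arithmetic in conclusion 3. The labor
# force level is an assumed round number; other figures come from the text.
labor_force = 159e6        # assumption: roughly the 2016 US labor force
lf_growth = 0.005          # 0.5% annual growth, stable participation assumed
unemp_drop = 0.0035        # midpoint of the 0.2-0.5 percentage point range
share_to_E = 0.5           # half the unemployment decline goes to employment

# Employment gains: labor force growth plus transitions out of unemployment.
from_lf_growth = labor_force * lf_growth
from_unemp_decline = labor_force * unemp_drop * share_to_E
per_month = (from_lf_growth + from_unemp_decline) / 12

print(f"implied payroll growth: {per_month:,.0f} per month")
```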
