Interviews


April 2017, Volume 18, Issue 1

Q&A:  Marco Del Negro on DSGE modelling in policy

Marco Del Negro is a Vice President in the Macroeconomics and Monetary Studies Function of the Research and Statistics Group of the Federal Reserve Bank of New York. His research focuses on the use of general equilibrium models in forecasting and policy analysis. Del Negro’s IDEAS/RePEc entry. Any views expressed in this interview are solely his and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System.

EconomicDynamics: How prevalent has the use of DSGE models become in policy-making?
Marco Del Negro: Well, “policymaking” may not be the right way to put it. At best, DSGEs and other models can only *inform* policymaking! And the answer varies widely from central bank to central bank, with the Norges Bank displaying the DSGE model’s optimal policy results in its quarterly reports, and others not having a DSGE model at all. But my impression is that by now most advanced economies’ central banks have a DSGE model, and some in developing economies do as well. Their use varies across central banks, from forecasting, to scenario analysis, to optimal policy exercises. One thing to bear in mind is that DSGEs are never the only models on the table—they are almost always used in conjunction with other approaches, especially for forecasting. And that’s good: these models are not, and never will be, “reality,” so it’s good practice to be robust and check their answers against other methods. By the way, Marc Giannoni and I discuss using DSGEs for forecasting and policy analysis at the New York Fed in a VoxEU book that just came out.
ED: Forecasting models always need a “fudge factor” to align the model with current data or to accommodate special circumstances. For example, consumer confidence was unexpectedly high after the Brexit vote. How can a DSGE model with rational agents deal with that?
MDN: I guess that one of the points of using DSGE models for forecasting is that you do not need “fudge factors,” or “add factors,” as you do in other approaches. Much of the purpose of forecasting is to test the model: if your DSGE produces poor forecasts, you may as well face it, as that tells you your model has issues that need to be fixed. If you “fudge” the forecasts, you’ll never know for sure. To put it differently, we are trying to be “serious” econometricians, and in the long run, we’ll hopefully reap the benefits from this stance. Now, this does not mean that you should use only your seven macro time series in forecasting and ignore all else happening in the world. One thing you can do is to use financial variables, like spreads, as we do in the New York Fed DSGE model and as Frank Schorfheide and I have done in our Handbook of Economic Forecasting chapter (and applied in an AEJ Macro paper with Marc Giannoni as well). These variables do react rapidly to current events (see our work with Raiden Hasegawa), so they are useful sources of information for the DSGE econometrician. You can also try to incorporate information from expectations surveys. One problem with DSGE models is surely their misspecification (everything considered, they are simple models), but another problem is that the econometrician’s information set is often so limited. We need to deal with both issues. Coming back to “fudging,” policymakers never take the models’ forecasts as revealed truths. They are smart enough to fudge the forecasts themselves, that is, to incorporate judgmental elements if they feel that in some specific circumstances the model is missing something important.
ED: A DSGE model requires a lot more rigor than a purely statistical model; it requires a very specific theory after all. Doesn’t that give the DSGE model an unfair disadvantage in any forecasting horse race?
MDN: I don’t know that it is “unfair.” After all, all approaches have their pros and cons, and the rigidity of DSGEs is definitely a con in the short run. If you think something is happening to exchange rates, for instance, and your model is a closed-economy model, you are pretty stuck. At the New York Fed, the DSGE group has teamed up with the Bayesian VAR folks (Domenico Giannone, Richard Crump, and Stefano Eusepi) to overcome this problem at least for scenario analysis. This kind of methodological joint venture helps to study the risks to the forecast that events such as, say, Brexit may entail. A large BVAR by construction deals with a lot of observables, and so this approach can encompass most, if not all, of the variables that we see are impacted by the event, e.g., exchange rates or bond yields. We then use the BVAR to derive the implications for the variables that are in the DSGE information set. Finally, we employ the DSGE to understand what the event in question may mean for objects of interest that the VAR does not include, say, r*, or the output gap, as well as for possible monetary policy responses. In the long run, it is pretty obvious that if a variable is important for the forecasts, and for policy analysis, we should include it and change the model. But that’s what we are here for. Models are not static objects. As a recent popular Italian song goes, “whatever happens, panta rhei.”
ED: Estimated DSGE and VAR models have replaced a generation of forecasting models that grew to hundreds or even thousands of equations and became unwieldy. Recently, DSGE models have been increasing in size as well. Is this a good development?
MDN: I am not sure that DSGE and VAR models have “replaced” anything. If you think about it, the workhorse model for policy analysis in the Federal Reserve System is still the FRB/US model, a very large model to some extent in the Cowles Foundation tradition. And perhaps the problem with these Cowles Foundation models is not so much in their size, but in their “incredible identification” restrictions, as pointed out by Chris Sims almost forty years ago. And yet, “large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification,” and this is still true forty years later. I personally do not think that DSGEs should try to compete in size with Cowles Foundation-style models, but I am also not sure that growing in size is necessarily a bad thing. If you think about it, many of the aspects that the current batch of DSGEs are missing, like heterogeneity in households and firms, a more fleshed-out financial intermediation system, or an expectations formation mechanism more in line with the evidence from surveys, would most likely make the models larger. If there’s a good reason for building larger models, like incorporating more data, or relevant aspects of the economy, why not?
ED: An important aspect of DSGE models is that they help provide intuition about what is happening in the economy. Yet, as models grow, it becomes increasingly difficult to get that intuition. Is intuition a factor in forecasting or policy evaluations?
MDN: What you call “intuition” is definitely a plus for DSGE models in terms of forecasting. In a reduced-form model (say, in a factor model or in a VAR where you have not identified the shocks), if the forecasts change from one period to the next, you can only take a guess as to the reasons for the change. You know your forecast errors—perhaps the fact that you underpredicted output—but have no idea what caused them. Was it productivity, or demand, or monetary policy? DSGEs are identified models—meaning, they tell you exactly what shocks lie behind any forecast error, and any change in the forecasts, no matter the size of the model. Of course, the stories a DSGE tells are only as good as the model that generates them. But these stories are also a way of testing the model. For instance, if a model suggests that the only fundamentals that drive inflation forecasts are price markup shocks, maybe you do not have such a great model of inflation. (Incidentally, contrary to popular belief, it is not true that in all DSGEs markup shocks are the only drivers of inflation; see the paper with Frank and Marc that I mentioned before, or the blog post by Andrea Tambalotti and Argia Sbordone on the FRBNY DSGE model’s interpretation of the Great Recession and its aftermath.) Now, Paul Romer and many others will critique these stories because the shocks in DSGEs aren’t really “structural.” To some extent that’s true. But this does not imply that these stories are completely uninformative. Let me give you an example. The model described in the paper with Frank and Marc says that “whatever shocks caused the increase in spreads during the Great Recession are also the reason why inflation has been persistently low afterwards.” Does the model tell you exactly what economic forces drove the spreads bananas after the Lehman crisis? Or what caused the crisis to start with? No. So, for a purist, that shock is bogus. Does that make the DSGE story completely uninformative? No.
The story connects one event (the Lehman crisis) to another (low inflation). That seems valuable information to me (and by the way, that’s not an ex post story—the model *forecasted* low inflation in the future as of 2008Q4). And it’s testable: if spreads increase again in the future, the DSGE will again predict low inflation.
ED: Let’s talk about identification for a minute, since the Romer paper makes a big deal about it.
MDN: Let me start by saying that people questioning identification, or whatever else they want to question, in DSGE models is totally fair game. Identification of the shocks in DSGEs is definitely not “model free.” If you don’t buy the model, you don’t buy the identification either. I still think it’s a step forward relative to Cowles Foundation models (or to current large-scale models used in central banks, for that matter), where identification is achieved by placing ad hoc zero restrictions here and there. At least in DSGEs you can question identification by talking about economic assumptions. If Romer and others want to point out that these assumptions all too often go unquestioned, that seems fair—although that applies all over economics, not just to DSGEs. It’s important to understand that the issue we just discussed has absolutely nothing to do with parameter identification, that is, the “back to square one” point of Canova and Sala (2009). On that point, there is quite a bit of confusion. DSGE models are often estimated using Bayesian methods, and for good reasons. Do Bayesian methods “fix” whatever identification problem may be there? No way. Dale Poirier puts it as plainly as possible: “A Bayesian analysis of a nonidentified model is always possible if a proper prior on all the parameters is specified. There is, however, no Bayesian free lunch. The ‘price’ is that there exist quantities about which the data are uninformative, i.e., their marginal prior and posterior distributions are identical.” Even if the model is perfectly identified, priors will always matter in finite samples (which is what we have in macro). For a Bayesian, in fact, a model is not just the equilibrium conditions. Instead, it’s the combination of those *and* the prior distribution one puts on the parameters. Both should be questioned.
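Poirier’s “no Bayesian free lunch” can be seen in a toy example (my construction, not from the interview): if the data are y ~ N(θ1 + θ2, 1), only the sum θ1 + θ2 is identified, and with independent N(0, 1) priors the marginal posterior of θ1 − θ2 stays at its prior, N(0, 2), no matter how many observations arrive. A brute-force grid posterior makes this visible:

```python
import numpy as np

# Toy nonidentified model: y_i ~ N(theta1 + theta2, 1).
# Only the sum is identified; theta1 - theta2 should keep its prior.
rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=200)     # data generated with theta1 + theta2 = 1.5
n, ybar = y.size, y.mean()

grid = np.linspace(-4, 4, 401)
t1, t2 = np.meshgrid(grid, grid, indexing="ij")

log_prior = -0.5 * (t1**2 + t2**2)              # independent N(0, 1) priors
log_lik = -0.5 * n * (ybar - (t1 + t2)) ** 2    # likelihood depends on the sum only
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

d = t1 - t2
mean_d = (post * d).sum()
var_d = (post * d**2).sum() - mean_d**2
# The data moved the posterior of theta1 + theta2 toward ybar, but the
# posterior of theta1 - theta2 still has (approximately) its prior
# moments: mean 0, variance 2.
print(mean_d, var_d)
```

The posterior is perfectly well defined, which is Poirier’s point: nothing in the Bayesian machinery warns you that the data were uninformative about θ1 − θ2; you have to check.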
As for the priors, there is a very useful paper by Ulrich Müller called “Measuring prior sensitivity and prior informativeness in large Bayesian models.” He proposes a method to check the extent to which the objects that you are reporting (say, impulse responses) are sensitive to the priors you are using. Basically his paper moves the debate forward from “Is this DSGE model exactly identified or not?” to a more concrete and useful “How much do priors matter for the results that I am reporting?” (regardless of whether the DSGE is exactly identified).
ED: When is it a good idea to use frequentist methods to estimate a DSGE model?
MDN: I guess that if you are a frequentist the answer is all the time! Seriously, I think there are a couple of reasons for using Bayesian methods. One is that there is information on the parameters outside of the macro data that you are using, e.g., from micro studies. In my view, that is one contribution of the calibration literature, namely to say: let’s use this information! Ideally you would estimate your model on both macro and micro data at the same time. When that’s too hard, you just incorporate the micro data information into your prior. I see that as a plus. Bayesian analysis also has something else in common with calibration: the approach to model validation. That’s not well appreciated or understood, but Bayesian marginal likelihoods describe the fit of the model when the parameters are generated from the prior (before seeing the data), not the posterior. The other reason, and that’s more philosophical, has to do with how you characterize uncertainty. I personally like the Bayesian approach: from our perspective parameters (or other objects of interest) are random variables. We simply don’t know their true value and so we may as well report the full extent of our ignorance. But, you know, reasonable people can disagree on this one. Sometimes the full Bayesian approach is computationally infeasible: it’s just too hard to compute the likelihood. So one may want to employ a limited-information approach—say, use only a subset of moments to estimate the model. Other times it’s good to use a limited-information approach simply because you can’t swallow the assumption that your DSGE is the data-generating process. Even so, there are Bayesian alternatives: the DSGE-VAR approach Frank Schorfheide and I have developed goes after this very idea. In addition, there are Bayesian limited-information methods. Ulrich Müller, whom I mentioned before, has done very nice work on this approach. In the end, you can be Bayesian all the time if you want.
But if you don’t, that’s OK, too.

Let me finish by thanking you for the opportunity to be interviewed. Lots of people have views on this stuff, and it’s been great to be able to give mine!


Canova, F., and Sala, L., 2009. “Back to square one: Identification issues in DSGE models,” Journal of Monetary Economics, Elsevier, vol. 56(4), pages 431-449, May.

Del Negro, M., and Schorfheide, F., 2013. “DSGE Model-Based Forecasting,” Handbook of Economic Forecasting, vol. 2, Chapter 57, Elsevier.

Del Negro, M., Giannoni, M., and Schorfheide, F., 2015. “Inflation in the Great Recession and New Keynesian Models,” American Economic Journal: Macroeconomics, American Economic Association, vol. 7(1), pages 168-196, January.

Del Negro, M., Hasegawa, R., and Schorfheide, F., 2016. “Dynamic prediction pools: An investigation of financial frictions and forecasting performance,” Journal of Econometrics, Elsevier, vol. 192(2), pages 391-405.

Gabbani, F., 2017. Occidentali’s Karma, Italian entry to Eurovision Song Contest.

Müller, U., 2012. “Measuring prior sensitivity and prior informativeness in large Bayesian models,” Journal of Monetary Economics, Elsevier, vol. 59(6), pages 581-597.

Poirier, D., 1998. “Revising Beliefs In Nonidentified Models,” Econometric Theory, Cambridge University Press, vol. 14(04), pages 483-509, August.

Romer, P., 2016. “The Trouble with Macroeconomics,” manuscript, New York University.

Sims, C., 1980. “Macroeconomics and Reality,” Econometrica, Econometric Society, vol. 48(1), pages 1-48, January.

Tambalotti, A., and Sbordone, A., 2014. “Developing a Narrative: The Great Recession and Its Aftermath,” Liberty Street Economics, Federal Reserve Bank of New York.

November 2016, Volume 17, Issue 2

Q&A: Johannes Stroebel on Real Estate Dynamics

Johannes Stroebel is Associate Professor of Finance at the New York University Stern School of Business. His research interests lie in the relationship between real estate and macroeconomics. Stroebel’s RePEc/IDEAS profile.

EconomicDynamics: The U.S. has seen large swings in house prices over the past 15 years. What were the effects of these house price movements on the broader economy?
Johannes Stroebel: This is a very interesting question that we are still trying to fully understand. It is by now clear that there are a number of different channels through which the large boom-bust cycle in house prices affected the economy. Many of these channels are directly or indirectly related to household balance sheets. The idea is that when house prices go up, households can withdraw their increased home equity and use the additional resources to consume. On the flip side, when house prices decline and the value of housing assets drops, households’ ability to consume out of home equity disappears and they might have to default. Through this channel, house price movements have the potential to affect aggregate demand. Much of the early research on this household balance sheet mechanism was pioneered by Atif Mian and Amir Sufi. In Mian and Sufi (2011), they document that during the 2002 to 2006 house price boom, households extracted more home equity, and indeed used much of the money to increase consumption. They also show that the resulting increase in household leverage subsequently led to higher mortgage defaults. Similarly, in Mian, Rao and Sufi (2013), they document a large marginal propensity to consume (MPC) out of housing wealth during the housing bust between 2006 and 2009. They find this MPC to be particularly large in areas with poorer households, suggesting disproportionately large effects on household demand in those regions. This line of empirical research has put households and their balance sheets squarely at the center of the macroeconomic narrative of the housing boom-bust cycle. More recently, there have been research efforts to better understand how models of optimal household behavior can generate the large observed MPCs out of housing wealth (e.g., Berger, Guerrieri, Lorenzoni, and Vavra, 2015; Kaplan, Mitman, and Violante, 2015).

Researchers have also explored other macroeconomic effects of house price increases. For example, Giroud and Mueller (2016) show that the negative local demand response to house price declines led firms to reduce employment, with particularly strong effects at highly-levered firms (see also Mian and Sufi, 2014). Charles, Hurst, and Notowidigdo (2015) show that house price increases in the early 2000s led to a significant decline in the college enrollment of individuals who were, for example, drawn into the construction sector. This highlights how temporary swings in house prices can have large and permanent effects on the economy.

In my own work with Joe Vavra, we analyze how local retail prices and markups respond to house-price-induced local demand shocks. We use item-level price data from thousands of retail stores to construct price indices at the zip code and city level. We show that local retail prices strongly respond to local house price movements, with elasticities of retail prices to house prices of about 15%-20% across both housing booms and busts. We document that this elasticity is driven by changes in markups rather than by changes in local costs. We also find that these elasticities are much larger in areas where there are mainly homeowners compared to areas with mainly renters. Our interpretation is that markups rise with house prices, particularly in high homeownership locations, because greater housing wealth reduces homeowners’ demand elasticity, and firms raise markups in response. When we look at shopping data, we find corroborating evidence for this. In particular, we document that when house prices go up, homeowners become less price sensitive (for example, they buy fewer goods on sale or with a coupon), while renters become more price sensitive.

ED: Would you say the large increases in global house prices in the early 2000s were the result of a housing bubble?
JS: That very much depends on what you mean by a housing bubble. The workhorse model of bubbles in macroeconomics is based on a failure of the transversality condition that requires the present value of a payment occurring infinitely far in the future to be zero. Such a bubble is often called a classic rational bubble (see Blanchard and Watson, 1982, and Tirole, 1982, 1985). In joint work with Stefano Giglio and Matteo Maggiori, we provide the first direct test of this classic rational bubble. Our analysis focuses on housing markets in the U.K. and Singapore since the early 1990s. In both countries, property ownership takes the form of either a leasehold or a freehold. Leaseholds are temporary, pre-paid, and tradable ownership contracts with initial maturities ranging from 99 to 999 years, while freeholds are perpetual ownership contracts. By comparing the price of 999-year leaseholds with that of freeholds, we can get a direct estimate of the present value of owning the property in 999 years. Models of classic rational bubbles would predict a large price difference between these two contracts. When we look in the data, we find no price difference, even when we zoom in on the periods and regions with the largest house price increases. Put differently, at no point do we find evidence for a classic rational bubble in these housing markets.
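A back-of-the-envelope check shows why fundamentals predict no gap at 999 years (my stylized parameter values, not the paper’s estimates): in a Gordon-growth world with discount rate r and rent growth g, the share of a property’s fundamental value coming from rents after year T is ((1+g)/(1+r))^T, which is vanishingly small at T = 999 for any plausible r > g. A classic rational bubble, by contrast, attaches only to the infinite-horizon freehold claim and so would create a large price gap:

```python
# Share of a property's fundamental value due to rents accruing AFTER
# year T, under constant discount rate r and rent growth g (illustrative
# numbers, not estimates from the Giglio-Maggiori-Stroebel paper).
def tail_share(r: float, g: float, T: int) -> float:
    """PV(rents after T) / PV(all rents) = ((1+g)/(1+r))**T."""
    return ((1 + g) / (1 + r)) ** T

for T in (100, 999):
    print(T, tail_share(r=0.05, g=0.02, T=T))
# At T = 999 the fundamental claim beyond the lease is worth essentially
# nothing, so fundamentals predict no freehold premium over a 999-year
# leasehold; a rational bubble on the freehold would predict a large one.
```

So observing equal prices for freeholds and 999-year leaseholds is exactly what a no-bubble, fundamentals-only benchmark implies.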

However, just because house prices did not display the features of a classic rational bubble does not mean that they did not deviate from fundamental values. Other, more behavioral models of bubbles do not require a failure of the transversality condition. Differentiating between these models is important, since the positive and normative implications of models with bubbles depend crucially on the exact type of bubble that is considered. So empirical research should not just investigate whether there was a deviation of prices from fundamental values. It is as important to understand the exact sources of those deviations.

ED: What are examples of such forces that could explain deviations of house prices from fundamental value?
JS: A lot of evidence points towards an important role played by the way that potential homebuyers form their expectations about future house price growth. For example, there is mounting evidence that when households think about where house prices will go in the future, they extrapolate from their own past experiences: households that experienced larger past price growth expect faster price growth going forward. Kuchler and Zafar (2015) provide strong evidence for this type of extrapolative expectations in the housing market. Guren (2016), Glaeser and Nathanson (2015), and Barberis, Greenwood, Jin, and Shleifer (2015) explore the implications of such extrapolation for price dynamics. The take-away is that extrapolative expectations can quite easily cause prices to deviate from fundamental values. Another force that I have recently investigated together with Mike Bailey, Ruiqing Cao, and Theresa Kuchler is the role of social interactions in driving housing market expectations and investments. To conduct our analysis, we combine anonymized social network information from Facebook with housing transaction data and a survey. Our research design exploits the fact that we can observe variation in the geographic spread of different people’s social networks. For example, when you compare two individuals living in Los Angeles, one might have more friends in Boston, and the other might have more friends in Miami. At any point in time, house prices in Boston might either appreciate more or less than house prices in Miami. We find that in years following disproportionate increases in Miami house prices, Los Angeles-based renters with more friends in Miami are more likely to buy a house. They also buy larger houses, and pay more for a given house. Similarly, when homeowners’ friends experience less positive recent house price changes, these homeowners are more likely to become renters, and more likely to sell their property at a lower price.

We show that these relationships cannot be explained by common shocks to individuals and their friends. Instead, they are driven by the effect of social interactions on individuals’ housing market expectations. We arrive at this conclusion after analyzing responses to a housing expectation survey. We find that individuals whose geographically distant friends experienced larger recent house price increases consider local property a more attractive investment, with bigger effects for individuals who regularly discuss such investments with their friends. This provides a link between social interactions and behavior that goes through expectations. It also provides some evidence as to the sources of disagreement about future house price growth: two people looking at the same local housing market might disagree because their heterogeneous social networks have recently had very different price experiences.

This evidence is very consistent with Bob Shiller’s narrative of the housing boom in the 2000s, which he described as a “social epidemic of optimism for real estate.” Indeed, it does appear that after interacting with individuals who have recently experienced house prices going up, you yourself become more optimistic about housing market investments in your own local housing market. So one promising direction for future research is to better understand the role of social dynamics in forming expectations and ultimately prices, even beyond housing markets. Burnside, Eichenbaum, and Rebelo (2016) have made important progress in how to model such interactions, but there are many interesting unresolved empirical and theoretical questions.

ED: Price formation depends also on how future cash flows are discounted. You recently provided evidence about long-term discount rates.
JS: So the problem we analyzed for that project was to understand how households discount payments that will only materialize very far in the future, say 100 years or more. This horizon is very relevant for a number of important intergenerational public policy issues, such as the question of how much to invest in climate change abatement. But there were essentially no data points on actual discount rates used by households for payments that far into the future. The reason is that the finite-maturity assets necessary to estimate those discount rates usually do not extend more than 30 or at most 50 years into the future. Here is where my research with Stefano Giglio and Matteo Maggiori came in. We used the same leasehold/freehold contract structure for real estate that I described above to back out households’ discount rates over these very long horizons. Our approach was to compare transaction prices for two otherwise identical properties, one trading as a 100-year leasehold and the other as a freehold. Our insight was that any price difference would capture the present value of owning the freehold in 100 years, and would therefore be informative about discount rates that households used over 100-year horizons. When we looked at the data, we found that in both the U.K. and Singapore, freeholds were trading at a premium of about 10% relative to 100-year leaseholds on otherwise identical properties. This implied very low annual discount rates over this horizon, at levels of about 2.6%.

In follow-up work with Andreas Weber we are exploring what these low long-run discount rates for housing can tell us about the appropriate discount rates to use for climate change abatement. Two of our insights from that project are as follows: First, when we combine the low long-run discount rate with estimates of the average return to housing, we find evidence for a strongly declining term structure of discount rates for housing. This can help find appropriate discount rates at a wide variety of horizons. Second, since housing is a risky asset (i.e., it will pay off in good states of the world), but climate change abatement is a hedge (i.e., it will pay off during climate disasters, which are bad states of the world), the low long-run discount rates for housing are actually an upper bound on the appropriate discount rate one should apply for investments in climate change abatement. This suggests that such investments will have much higher present values than what is commonly assumed.


Michael Bailey, Ruiqing Cao, Theresa Kuchler, and Johannes Stroebel, 2016. “Social Networks and Housing Markets,” CESifo Working Paper Series 5905, CESifo Group Munich.

Nicholas Barberis, Robin Greenwood, Lawrence Jin, and Andrei Shleifer, 2015. “X-CAPM: An extrapolative capital asset pricing model,” Journal of Financial Economics, Elsevier, vol. 115(1), pages 1-24.

David Berger, Veronica Guerrieri, Guido Lorenzoni, and Joseph Vavra, 2015. “House Prices and Consumer Spending,” NBER Working Papers 21667, National Bureau of Economic Research, Inc.

Olivier Blanchard and Mark W. Watson, 1982. “Bubbles, Rational Expectations and Financial Markets.” In: Paul Wachtel (ed.), Crises in the Economic and Financial Structure, pp. 295-316. Lexington, MA: D.C. Heath and Company.

Craig Burnside, Martin Eichenbaum, and Sergio Rebelo, 2016. “Understanding Booms and Busts in Housing Markets,” Journal of Political Economy, University of Chicago Press, vol. 124(4), pages 1088-1147.

Kerwin Kofi Charles, Erik Hurst, and Matthew J. Notowidigdo, 2015. “Housing Booms and Busts, Labor Market Opportunities, and College Attendance,” NBER Working Papers 21587, National Bureau of Economic Research, Inc.

Stefano Giglio, Matteo Maggiori, and Johannes Stroebel, 2014. “Very Long-Run Discount Rates,” NBER Working Papers 20133, National Bureau of Economic Research, Inc.

Stefano Giglio, Matteo Maggiori, and Johannes Stroebel, 2016. “No-Bubble Condition: Model-Free Tests in Housing Markets,” Econometrica, Econometric Society, vol. 84, pages 1047-1091, May.

Stefano Giglio, Matteo Maggiori, Johannes Stroebel, and Andreas Weber, 2015. “Climate Change and Long-Run Discount Rates: Evidence from Real Estate,” CESifo Working Paper Series 5608, CESifo Group Munich.

Xavier Giroud and Holger M. Mueller, 2016. “Redistribution of Local Labor Market Shocks through Firms’ Internal Networks,” NBER Working Papers 22396, National Bureau of Economic Research, Inc.

Edward L. Glaeser and Charles G. Nathanson, 2015. “An Extrapolative Model of House Price Dynamics,” NBER Working Papers 21037, National Bureau of Economic Research, Inc.

Adam Guren, 2016. “House Price Momentum and Strategic Complementarity,” Mimeo, Boston University.

Greg Kaplan, Kurt Mitman, and Gianluca Violante, 2015. “Consumption and House Prices in the Great Recession: Model Meets Evidence,” 2015 Meeting Papers 275, Society for Economic Dynamics.

Theresa Kuchler and Basit Zafar, 2015. “Personal experiences and expectations about aggregate outcomes,” Staff Reports 748, Federal Reserve Bank of New York.

Atif Mian and Amir Sufi, 2011. “House Prices, Home Equity-Based Borrowing, and the US Household Leverage Crisis,” American Economic Review, American Economic Association, vol. 101(5), pages 2132-2156, August.

Atif Mian and Amir Sufi, 2014. “What Explains the 2007–2009 Drop in Employment?,” Econometrica, Econometric Society, vol. 82, pages 2197-2223, November.

Atif Mian, Kamalesh Rao, and Amir Sufi, 2013. “Household Balance Sheets, Consumption, and the Economic Slump,” The Quarterly Journal of Economics, Oxford University Press, vol. 128(4), pages 1687-1726.

Robert Shiller, 2007. “Understanding recent trends in house prices and homeownership,” Proceedings – Economic Policy Symposium – Jackson Hole, Federal Reserve Bank of Kansas City, pages 89-123.

Johannes Stroebel and Joseph Vavra, 2014. “House Prices, Local Demand, and Retail Prices,” NBER Working Papers 20710, National Bureau of Economic Research, Inc.

Jean Tirole, 1982. “On the Possibility of Speculation under Rational Expectations,” Econometrica, Econometric Society, vol. 50(5), pages 1163-1181, September.

Jean Tirole, 1985. “Asset Bubbles and Overlapping Generations,” Econometrica, Econometric Society, vol. 53(6), pages 1499-1528, November.

April 2016, Volume 17, Issue 1

Q&A: Giancarlo Corsetti on Debt dynamics

Giancarlo Corsetti is Chair in Macroeconomics and Fellow of Clare College at the University of Cambridge. He is interested in open macroeconomics, in particular crises and policies in an international context. Corsetti’s RePEc/IDEAS profile.

EconomicDynamics: To some, it looks like Europe locked itself into a path with unsustainable debt that may even be entirely self-inflicted. What went wrong in the Euro area (EA)?

Giancarlo Corsetti: Consider a comparison between the US and the EA. The aggregate level of public debt in the US is not that different from that of the EA. Yet the US has been able to reduce unemployment after the shock of the global crisis, running deficits and expanding the balance sheet of the Fed, without suffering any tension in the debt market. Initially, the crisis hit regions and states asymmetrically, causing large variations in unemployment and thus in local fiscal conditions. Yet, thanks to a sufficiently developed institutional framework, there was no geographical polarization of risk and borrowing costs. In response to stabilization policies, aggregate recovery and regional convergence in unemployment rates went hand in hand.

Conversely, even though the crisis shock in the EA was initially less geographically concentrated than in the US, an uncoordinated and unconvincing policy response in an institutional vacuum ended up magnifying regional weaknesses in fundamentals, eventually generating a sovereign-risk and ultimately a country-risk crisis.

As of today, aggregate economic activity has not recovered in the EA. The aggregate problem and the internal polarization are two sides of the same coin. In an economy with diverging fiscal, financial, and macroeconomic conditions at the regional level, (a) the transmission of monetary policy is profoundly asymmetric: borrowing conditions for public and private agents differ markedly across borders and respond differently to policy decisions; (b) fiscal policy has a strong contractionary (procyclical) bias; and, most importantly, (c) a profound divergence in views and interests among national policymakers on how to adjust to shocks has reduced reciprocal trust to a historical minimum. Conflicts create continuing policy uncertainty and delay interventions. If anything is done, it is done too little, too late.

In each EA country, the crisis has unique national features, rooted in a specific combination of financial, fiscal, and macroeconomic fragility. But after the emergence of the Greek problem, for the reasons highlighted above, the crisis became largely systemic, with sovereign spreads at times driven entirely by common factors. It is well understood that multiple equilibria are possible in economies that lack policy credibility and have high levels of debt (a point stressed by many recent papers; see e.g. Lorenzoni and Werning 2015 or my work with Luca Dedola, among others). The difficulty for policy formulation is that, in practice, weak fundamentals and self-fulfilling expectations are nearly impossible to separate in a crisis. What happened in the EA after 2010 is best described by a combination of the two, reflecting rising fiscal liabilities and the inability to implement credible policy responses (or credible reform) at both the domestic and union-wide levels.

ED: Is there a general lesson for the literature on sovereign debt crisis?

GC: We typically think of the cost of default as hitting an economy ex post, when a credit event actually occurs (in the case of the EA, this could take the form of a breakup of the union with forced conversion of debt into local currency). The recent EA experience reminds us that the mere possibility of default (or a currency breakup) in some state of the world at some point in the future can already cause large costs in the present, in the form of persistent and deep macroeconomic distress. The problem tends to be mostly attributed to a “diabolic loop” linking sovereigns and banks in a crisis (see e.g. Brunnermeier et al. 2016): as the price of public debt falls in a fiscal crisis, banks’ balance sheets suffer, the supply of credit contracts, financial stability is shaken, and the fiscal outlook deteriorates further, closing the loop. In joint work with Keith Kuester, André Meier, and Gernot Mueller, however, we realized early on that the problem is more pervasive. Even for large non-financial corporations, possibly multinationals that do not depend specifically on any national banking system, financial conditions are strongly correlated with those of the state in which they are headquartered. As a general pattern, private borrowing costs in the EA rose and borrowing conditions deteriorated sharply with sovereign risk premia.

With rising premia, the costs of prospective default may stem from either falling aggregate demand or tightening financial constraints, or a combination of the two. More empirical and theoretical work on this topic is badly needed, for instance, concerning the roots of the country-risk premia (uncertainty about tax regimes, macroeconomic conditions, debt overhang, and general political risks).

ED: You have tried to address some of these issues.

GC: I have mainly focused on the transmission via aggregate demand, based on a version of the New Keynesian model set up by Curdia and Woodford (2010). In Corsetti et al. (2013, 2014), we show that, with policy rates at the zero lower bound, not only may adverse cyclical shocks be substantially amplified by the implied deterioration of the fiscal outlook, but also, under the same conditions, fiscal policy becomes an unreliable tool. The large multiplier of government spending at the zero lower bound that has been widely discussed in the recent literature may not materialize. In the model, first, the size (in fact, even the sign) of the multiplier appears quite sensitive to the extent of nominal distortions and to market expectations concerning the persistence of the cyclical shocks. Second, sovereign risk affects indeterminacy, raising the risk that expectations become unanchored.

To wit: suppose that, in a monetary union with policy rates at the zero lower bound, markets develop an arbitrarily pessimistic view of macroeconomic developments in a country, implying a string of higher deficits and debt in the near future. All else equal, this translates into a deterioration of the country’s fiscal outlook, which in turn raises country risk and borrowing costs for the private sector. Aggregate demand falls. With some downward nominal rigidities, the ensuing fall in economic activity validates ex post the initial, arbitrary expectations. A similar mechanism may depress output via lower investment and growth (a channel active even in the absence of nominal rigidities). Note that, from the vantage point of market participants, the crisis and downturn appear to be entirely justified by weak fundamentals.
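This loop can be caricatured as a bond-pricing fixed point. The sketch below is purely illustrative (the functional form for the default probability, the fiscal-capacity threshold, and all parameter values are invented for the example; this is not the Corsetti et al. model): the price investors pay depends on the default probability, which itself rises with the rollover burden implied by that very price, so both optimistic and pessimistic beliefs can validate themselves.

```python
# Illustrative sketch of a self-fulfilling debt-pricing loop (hypothetical
# functional forms and parameters). A government rolls over debt b at price q.
# The default probability p rises with the rollover burden b/q once it exceeds
# fiscal capacity T. Risk-neutral pricing then requires q = (1 - p(q))/(1 + r),
# a fixed-point problem that can have both an "optimistic" and a "pessimistic"
# solution for the same fundamentals.

def price_map(q, b=1.0, T=1.1, alpha=5.0, r=0.02):
    """One round of repricing, given beliefs embodied in the current price q."""
    if q <= 1e-9:
        p = 1.0  # market has shut down: default is certain
    else:
        p = min(max(alpha * (b / q - T), 0.0), 1.0)
    return (1.0 - p) / (1.0 + r)

def converge(q0, n=200):
    """Iterate the repricing map from an initial belief about the price."""
    q = q0
    for _ in range(n):
        q = price_map(q)
    return q

optimistic = converge(0.95)   # benign beliefs: high price, zero default premium
pessimistic = converge(0.75)  # pessimistic beliefs: the price spirals to zero
print(optimistic, pessimistic)
```

With identical fundamentals (b, T, r), the iteration converges either to the riskless price 1/(1+r) or to a market shutdown, depending only on the initial belief, which is the self-fulfilling feature described above.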

Rethinking the evidence from the EA crisis in light of this model, it is quite clear that stemming the above vicious circle required more than procyclical deficit cuts (the first reaction to the crisis), especially when crash budget corrections contribute very little to restoring policy credibility, at both the domestic and union-wide levels.

To illustrate the potential costs of the above mechanisms, contrast quarterly GDP growth in the UK and in the EA crisis countries after the crisis. Take Italy. Before the summer of 2011, when the sovereign risk crisis extended to Italy, GDP in the two countries (setting 2008Q1=100) moved in sync. After 2011, when the sovereign risk crisis hit Italy in full force, the UK remained on a path of low but steady growth; Italy lost more than 10 percentage points relative to the UK. Between 2011 and 2015, Italian debt rose from below 120 to above 132 percent of GDP. On the premise that the crisis was in part self-fulfilling and reflected the inability of EA policymakers to resolve their differences and conflicts, much of its economic and social cost could have been avoided. And of course it could have been much worse without the Outright Monetary Transactions (OMT) programme.

ED: Can policy do anything about it?

GC: In a belief-driven crisis, the first line of defence can be provided by central banks. Surprisingly, until very recently very little work had been devoted to the subject. In recent joint work with Luca Dedola, we analyze the conditions under which the central bank can rule out self-fulfilling sovereign default (mind: not fundamental default) via a credible threat to intervene in the government debt market (Corsetti and Dedola 2016). The starting point of our analysis is that central banks can issue liabilities in the form of (possibly interest-bearing) monetary assets that are exposed only to the risk of inflation, not to the risk of default. Hence, when a central bank purchases debt, it effectively swaps default-risky for default-free nominal liabilities, lowering the cost of borrowing. It is by virtue of this mechanism that, when markets coordinate their expectations on the anticipation of (non-fundamental) default, central bank interventions on an appropriate scale can prevent the cost of issuing debt from rising substantially, and thus prevent default from becoming an attractive policy option for fiscal policymakers. Theory here is important to clarify that a “monetary backstop” to government debt need not rely on a (threat of) debt debasement via inflation. Quite the opposite: the credibility of a monetary backstop via interventions in the debt market may be at stake if these foreshadow high future inflation. It turns out that a strong aversion to inflation, shared by policymakers and society, is a key precondition for its success.
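The backstop mechanism can be caricatured with a toy rollover-pricing iteration (the functional forms and numbers below are invented for illustration; this is not Corsetti and Dedola's model): a credible commitment to buy debt whenever its price falls below a floor eliminates the self-fulfilling low-price outcome, and at the resulting price the floor never binds, so the central bank never actually has to intervene or inflate.

```python
# Toy illustration (hypothetical parameters) of a monetary backstop as a price
# floor in the sovereign debt market. Without the floor, pessimistic beliefs
# can drive the price to zero; with a credible floor, repricing from the same
# pessimistic start climbs back to the no-default price.

def price_map(q, b=1.0, T=1.1, alpha=5.0, r=0.02):
    # default probability rises with the rollover burden b/q beyond capacity T
    p = 1.0 if q <= 1e-9 else min(max(alpha * (b / q - T), 0.0), 1.0)
    return (1.0 - p) / (1.0 + r)

def converge(q0, floor=0.0, n=200):
    q = q0
    for _ in range(n):
        q = max(price_map(q), floor)  # central bank stands ready to buy at `floor`
    return q

no_backstop = converge(0.75)               # pessimism self-validates: price -> 0
with_backstop = converge(0.75, floor=0.9)  # the floor breaks the spiral
print(no_backstop, with_backstop)
```

Note that the floor is slack at the final price: in the good equilibrium the central bank holds no debt, mirroring the point that a credible backstop need not be used, let alone lead to inflation.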

Arguably, most central banks in advanced countries have, if only implicitly, provided a monetary backstop to their governments throughout the crisis years. Before 2012, the European Monetary Union lacked the institutional framework required for the ECB to do so. This framework could come into existence only after member states finally agreed on a reform of the fiscal rules, on the creation of the European Stability Mechanism (addressing conditionality), and on a blueprint for banking union. In September 2012, the ECB was eventually in a position to launch its OMT programme (still, amid political objections).

The importance of these developments in 2012 for the integrity of the euro area cannot be overemphasized. Yet they came late and addressed only part of the ongoing problems. Financial and macroeconomic conditions in the crisis countries continue to be weak. There has been little or no reversal of internal fragmentation and polarization.

As Europeans are rethinking and debating the institutional future of the euro, there are several questions that require more economic analysis. Ultimately, the responsibility of designing strong fiscal institutions in the EA cannot but remain with national governments. At the same time, it is important to recognize that, logically, debt sustainability also depends on the institutions and regimes of official lending. Europe has moved from IMF-style interventions to an approach lengthening the maturity of the loans and charging concessional rates. We need a theoretical framework to assess the effects and implications of these different approaches—a task that I am currently pursuing in joint work with Aitor Erce and Tim Uy.


Brunnermeier M., L. Garicano, P. R. Lane, M. Pagano, R. Reis, T. Santos, D. Thesmar, S. Van Nieuwerburgh, and D. Vayanos, (2016). “The Sovereign-Bank Diabolic Loop and ESBies,”American Economic Review P&P, May, forthcoming.

Corsetti G. and L. Dedola (2016). “The Mystery of the Printing Press: Self-fulfilling Debt Crises and Monetary Sovereignty,” CEPR Discussion Paper 11089.

Corsetti G., A. Erce, and T. Uy (2016). “Debt Sustainability and the Terms of Official Lending,” University of Cambridge, in progress.

Corsetti G., K. Kuester, A. Meier, and G. Mueller (2013). “Sovereign Risk, Fiscal Policy, and Macroeconomic Stability,” Economic Journal, February, pages F99-F132.

Corsetti G., K. Kuester, A. Meier, and G. Mueller (2014). “Sovereign risk and belief-driven fluctuations in the euro area,” Journal of Monetary Economics, vol. 61, pages 53-73.

Curdia V. and M. Woodford (2010). “Credit spreads and monetary policy,” Journal of Money, Credit and Banking, vol. 42(s1), pages 3-35.

Lorenzoni G. and I. Werning (2015). “Slow Moving Debt Crises,” mimeo, MIT.

April 2015, Volume 16, Issue 1

Q&A: Enrique Mendoza on Sovereign Debt

Enrique Mendoza is the Presidential Professor of Economics at the University of Pennsylvania. His work concentrates on financial crises and fiscal policy, in particular in an international context. Mendoza’s RePEc/IDEAS profile.

EconomicDynamics: The literature on optimal contracts implies that defaults should not happen. Why is it that they still happen, even repeatedly, for sovereign debt?
Enrique Mendoza: The problem is that several models in the optimal contracts literature are not designed to capture the stylized facts of sovereign default. This is not to say those particular models are useless, just that they are not useful for understanding sovereign debt. There are some models with optimal recursive contracts in which default does happen in equilibrium, notably the famous Eaton-Gersovitz class of models that has dominated the quantitative literature on external sovereign default since the mid-2000s. These models support equilibria with default because lenders are deep-pocketed, risk-neutral agents: they take a risk-neutral bet on the debt, charging hefty premia while it is being repaid (particularly as default probabilities rise), and when they do not get repaid they basically just think “too bad” and do not mind it much. There are also versions of these models with dynamic bargaining that can produce repeated defaults.

But these Eaton-Gersovitz models are also far from perfect. They have a notorious problem in being unable to match simultaneously the high debt ratios, low default frequencies, and high risk spreads that we see in the data. Variations of the models with different tweaks go some way toward fixing one or two of these problems, but we still lack a convincing setup that can handle all three. Moreover, the vast majority of these models treat income as an exogenous, stochastic endowment, and their ability to explain basic debt facts (e.g. defaults in bad times, non-trivial debt ratios) hinges critically on imposing default costs with an ad hoc convex structure (first proposed in Arellano’s 2008 AER article): as a percent of the income realization in default, the cost is zero when income is below a threshold, and a linear, increasing function of income above it.

In our 2012 QJE article, Vivian Yue and I attempted to model endogenous production and to produce a convex, increasing cost of default as the result of the misallocation of labor and intermediate goods triggered by the disruption of international trade caused by a default (firms’ access to imports of a subset of intermediate goods is impaired with the loss of access to international credit markets). But we acknowledge in our paper that there is still much left to do in this area.
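The ad hoc cost structure just described can be written compactly. A sketch in generic notation (the threshold $\hat{y}$ and slope $\lambda$ are illustrative labels, not Arellano's exact calibration):

```latex
% Default cost as described in the text: zero below a threshold income,
% then linear and increasing in the income realization (generic notation).
\[
  \phi(y) \;=\;
  \begin{cases}
    0, & y < \hat{y}, \\[4pt]
    \lambda\,(y - \hat{y}), & y \ge \hat{y},
  \end{cases}
  \qquad \lambda > 0,
\]
% so income in default is y^d = y - \phi(y): defaulting is costlier in good
% times, which helps the model generate default in bad times.
```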
ED: The problem with sovereign debt is that it is sovereign. Would it be to the advantage of governments to tie their hands by subjecting themselves to some legal authority, say the IMF, or could implicit contracts and markets be sufficient?
EM: At the core of this question is our thinking about what it means to assume that the government is able or unable to commit. The premise of default models is that the government cannot commit to repay. But if it can tie its hands to the IMF, we are implicitly assuming that there are in fact some forms of commitment the government can make. This seems inconsistent to me. Why is it that a government cannot commit to repay but can commit to some outside institution or legal authority? I think we are better off pursuing research programs that face head-on the fact that governments are unable to commit, and furthering our understanding of sovereign debt in those environments. After all, every time we read narratives of default episodes, such as in the work of Reinhart and Rogoff or the recent article by Hall and Sargent on fiscal discriminations, defaults generally involve the government abandoning precisely some commitment it had “firmly” made earlier (e.g. the convertibility law in Argentina in 2002, the promise to repay in Spanish dollars in the aftermath of the Revolutionary War in the United States, or the promise by Eurozone members to stick to the commitments on debt and deficits set forth in the Maastricht Treaty).
ED: Emerging markets are more likely to be the subject of debt crises. What makes them special?
EM: This is an interesting question. I also used to think that. But if you look around the world today, you would conclude the opposite: the focus of attention on sovereign debt crises is now almost entirely on industrial countries, and it will continue to be for a while. It is true that emerging markets have a much more turbulent sovereign debt history, and that their access to international debt markets is more limited. But my hunch is that this is mainly because (a) their fiscal revenues are significantly more volatile (commodity exports are an important source of fiscal revenues; even in places like Mexico, where they have become a small fraction of total exports, they still loom large in fiscal revenues); (b) their capacity to raise traditional revenues in the form of taxes is much weaker; and (c) their reputation as serial defaulters precedes them, and their domestic institutions often become highly dysfunctional in difficult times.
ED: Do you consider the sovereign debt problems of Europe and the United States different from those typically modeled in the literature?
EM: Yes, I do. Until fairly recently the literature was mainly focused on external sovereign default, whereas the debt problems in Europe and the United States highlight the risk of domestic sovereign default. Episodes of outright domestic default, by which I mean de jure defaults, not de facto defaults as when inflation erodes the real value of debt, are not unheard of, although they are less frequent (by a ratio of 1 to 3) than external defaults, again based on the historical work of Reinhart and Rogoff and of Hall and Sargent. If explaining the facts of external default has proven challenging, explaining the facts of domestic public debt and default is a major challenge, and a subject that has not been explored much. Domestic-currency public debt markets are huge (2011 estimates value the market of domestic-currency government bonds worldwide at about half the size of the world’s GDP!), domestic public debt ratios are high, and they have risen sharply in many advanced economies post-2008. But how can we explain that a government may choose to default on creditors who are part of its own constituency?

Pablo D’Erasmo and I have been working on models that approach this issue from a distributional-incentives viewpoint. Think of a government or social planner that maximizes the weighted sum of the welfare of bond holders and non-bond holders. Issuing debt is great for reducing consumption dispersion across agents, but when repayment time arrives, defaulting is great for precisely the same reason. Interestingly, default costs enter the picture again, because if the story is purely one of redistribution, debt markets would not exist (default is always optimal). If default is costly, then debt can be sustained as long as the ownership of government bonds is not too concentrated. The question is which costs are the relevant ones and where they originate. I think the key here is in looking at what public debt is actually used for.
Pablo and I think that the roles of public debt as a vehicle that relaxes the liquidity constraints of non-bond holders and as a vehicle for building precautionary savings are very important factors. We know from the work of Aiyagari and McGrattan, in models with heterogeneous agents and no default, that domestic public debt has nontrivial social value for these reasons. We believe that these could be the nontrivial costs of domestic default that governments trade off against the incentive to use default to redistribute. Moreover, this redistribution becomes tempting only when other, more subtle ways to redistribute via taxation and transfer payments are exhausted or are insufficient for the desired amount of redistribution. Alternatively, we found that equilibria with domestic public debt exposed to default risk can also be supported if the government’s social welfare function weighs bond holders more than a utilitarian government would, and that non-bond holders may actually prefer a government that acts with a bias favoring bond holders, because such a government can sustain more debt at lower risk premia, which relaxes the liquidity constraint of non-bond holders. Because of this, we can also show that there can be majority-voting equilibria in which, even if non-bond holders are the majority, the elected government has a bias in favor of bond holders.
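The distributional trade-off described above can be summarized with a generic planner objective (the notation is illustrative, not the exact formulation in D'Erasmo and Mendoza):

```latex
% Planner weighs bond holders (B) and non-bond holders (N); omega is the
% welfare weight on bond holders (under utilitarianism, their population
% share; a "creditor bias" means a larger omega).
\[
  W \;=\; \omega \,\mathbb{E}\sum_{t=0}^{\infty} \beta^{t}\, u\!\left(c^{B}_{t}\right)
  \;+\; (1-\omega)\,\mathbb{E}\sum_{t=0}^{\infty} \beta^{t}\, u\!\left(c^{N}_{t}\right).
\]
% Issuing debt reduces consumption dispersion across the two groups ex ante;
% at repayment time, default redistributes from B to N for the same reason,
% so absent default costs the planner always defaults and debt is not
% sustained in equilibrium.
```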
ED: Does austerity work to reduce public debt?
EM: This is a good time to ask me this question. Together with Jing Zhang and Pablo D’Erasmo, I just finished writing a chapter for the new volume of the Handbook of Macroeconomics that addresses precisely this issue. The chapter, entitled “What is a Sustainable Public Debt?,” covers the question from the perspective of three approaches: one purely empirical and non-structural, one structural and based on a variant of a two-country dynamic neoclassical model, and one focusing on the domestic default issue. The structural approach is calibrated to Europe and the United States, takes the observed increases in public debt ratios since 2008, and asks this question: Can tax increases generate equilibria in which the present discounted value of the primary fiscal balance rises as much as the debt has increased? A key aspect of this exercise is that we calibrated the model to match actual estimates of tax-base elasticities, by introducing endogenous capacity utilization and a realistic treatment of tax allowances for the depreciation of capital. We use the model to plot what we call “dynamic Laffer curves,” which show how the present value of primary balances changes as labor or capital taxes change. The precise answer this exercise gives to your question is: increases in capital income taxes in the US or Europe cannot make the existing debt sustainable, because the Laffer curves peak below the required increase in the present value of the primary balances. The same is true for labor taxes in Europe; the US, since it has a much lower tax wedge in the labor-leisure margin than Europe, still has plenty of room to make the observed debt increase consistent with fiscal solvency by taxing labor. Overall, this exercise shows that tax austerity, particularly via capital income taxation, has pretty much been exhausted as a tool to fix public debt problems.

The fact that we live in a financially integrated world matters a lot for these results.
The reason capital income taxes are near the peak of their revenue-generating capacity is that raising them puts the countries that do so at a competitive disadvantage, triggering outflows of capital and responses in equilibrium prices and allocations that weaken revenue-generating capacity. The opposite effects provide windfall revenue to the countries that do not raise their taxes, or raise them by less. The incentives for strategic interaction leading to international competition in capital taxes are therefore strong, as are the incentives to limit capital mobility in the hope that partial financial autarky can avoid these international external effects of country-specific tax changes.
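The solvency question posed above (can taxes raise the present value of primary balances by as much as debt rose?) can be stated in generic notation (this is a standard budget-constraint sketch, not the chapter's exact formulation):

```latex
% Fiscal solvency: the outstanding debt ratio must equal the expected present
% discounted value of primary balances pb_t, discounted at the equilibrium
% real interest rates r_s.
\[
  b_{0} \;=\; \mathbb{E}_{0}\sum_{t=0}^{\infty}
  \left( \prod_{s=0}^{t} \frac{1}{1+r_{s}} \right) pb_{t}.
\]
% The post-2008 exercise asks whether a tax change can raise the right-hand
% side by the observed increase in b_0. A "dynamic Laffer curve" traces the
% right-hand side as a function of a tax rate; the finding is that for
% capital taxes (and European labor taxes) its peak falls short of the
% required increase.
```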


Aiyagari, S. Rao, and Ellen R. McGrattan, 1998. “The optimum quantity of debt,” Journal of Monetary Economics, Elsevier, vol. 42(3), pages 447-469, October.

Arellano, Cristina, 2008. “Default Risk and Income Fluctuations in Emerging Economies,” American Economic Review, American Economic Association, vol. 98(3), pages 690-712, June.

D’Erasmo, Pablo, and Enrique G. Mendoza, 2013. “Distributional Incentives in an Equilibrium Model of Domestic Sovereign Default,” NBER Working Paper 19477.

Mendoza, Enrique G., Linda L. Tesar, and Jing Zhang, 2014. “Saving Europe?: The Unpleasant Arithmetic of Fiscal Austerity in Integrated Economies,” Working Paper WP-2014-13, Federal Reserve Bank of Chicago.

D’Erasmo, Pablo, Enrique Mendoza, and Jing Zhang, 2015. “What is a Sustainable Public Debt?,” forthcoming in Handbook of Macroeconomics, Volume 2, Elsevier.

Eaton, Jonathan, and Mark Gersovitz, 1981. “Debt with Potential Repudiation: Theoretical and Empirical Analysis,” Review of Economic Studies, Wiley Blackwell, vol. 48(2), pages 289-309, April.

Hall, George J., and Thomas J. Sargent, 2014. “Fiscal discriminations in three wars,” Journal of Monetary Economics, Elsevier, vol. 61(C), pages 148-166.

Mendoza, Enrique G., and Vivian Z. Yue, 2012. “A General Equilibrium Model of Sovereign Default and Business Cycles,” The Quarterly Journal of Economics, Oxford University Press, vol. 127(2), pages 889-946.

Reinhart, Carmen M., and Kenneth S. Rogoff, 2009. This Time Is Different: Eight Centuries of Financial Folly, Princeton University Press.

November 2013, Volume 14, Issue 2

James Bullard on policy and the academic world

James Bullard is President and CEO of the Federal Reserve Bank of St. Louis. His research focuses on learning in macroeconomics. Bullard’s RePEc/IDEAS entry.

EconomicDynamics: You have talked about how you want to connect the academic world with the policy world. The research world is already working on some of these questions. Do you have any comments on that?
James Bullard: I have been dissatisfied with the notion, which has evolved over the last 25 or 30 years, that it is okay to have one group of economists work on really rigorous models and do the hard work of publishing in journals, and a separate group do the policymaking and worry about policymaking issues. These two groups often did not talk to each other, and I think that is a mistake. It is something you would not allow in other fields. If you are going to land a man on Mars, you are going to want the very best engineering. You would not say that the people who are going to do the engineering are not going to talk to the people who are strategizing about how to do the mission.

An important part of my agenda is to force discussion between what we know from the research world and the pressing policy problems that we face, and to try to get the two to interact more. I understand the benefits of specialization, which is a critical aspect of the world, but I still think it is important that these two groups talk to each other.
ED: Is there a place in policy for the economic models of the “ivory tower”?
JB: I am not one who thinks that the issues discussed in the academic journals are just navel gazing. Those are our core ideas about how the economy works and how to think about the economy. There are no better ideas. That is why they are published in the leading journals. So I do not think you should ignore those. Those ideas should be an integral part of the thinking of any policymaker. I do not think that you should allow policymaking to be based on a sort of second-tier analysis. I think we are too likely to do that in macroeconomics compared to other fields.
ED: Why do you think that is?
JB: I think people have some preconceptions about what the best policy is before they ever get down to any analysis of what it might be. I understand people have different opinions, but I see the intellectual marketplace as the battleground where you hash that out.

I do not think the answers are at all obvious. A cursory reading of the literature shows that there are many, many smart people involved. They have thought hard about the problems they work on, and they have spent a lot of time even to eke out a little bit of progress on a particular problem. The notion that all those thousands of pages could be summed up in a tweet or something like that is kind of ridiculous. These are difficult issues, and that is why we have a lot of people working on them under a fair amount of pressure to produce results.

Sometimes I hear people talking about macroeconomics as if it were simple. It is kind of like non-medical researchers saying, “Oh, if I were involved, I would be able to cure cancer.” Well fine, you go do that and tell me all about it. But the intellectual challenge is every bit as great in macroeconomics as it is in other fields with unsolved problems. The economy is a gigantic system with billions of decisions made every day. How are all these decisions being made? How are all these people reacting to the market forces around them and to changes in their environment? How is policy interacting with all those decisions? That is a hugely difficult problem, and the notion that you could summarize it with a simple wave of the hand is silly.
ED: Do you remember the controversy, the blogosphere discussion, that macroeconomics has been wrong for two decades and all that criticism? Do you have any comments on that?
JB: I think the crisis emboldened people who had been in the wilderness for quite a while. They used the opportunity to come out and say, “All the stuff we were saying that was not getting published anywhere is all of a sudden right.”

My characterization of the last 30 years of macroeconomic research is that the Lucas-Prescott-Sargent agenda completely smoked all rivals. They, their co-authors, friends, and students carried the day by insisting on a greatly increased level of rigor, and by a tremendous amount of rolling up their sleeves and getting into the hard work of actually writing down ever more difficult problems, solving them, learning from the solutions, and moving on to the next one. Their victory remade the field and disenfranchised a bunch of people. When the financial crisis came along, some of those people came back into the fray, and that is perfectly okay. But there is still no substitute for heavy technical analysis to get to the bottom of these issues. There are no simple solutions. You really have to roll up your sleeves and get to work.
ED: What about the criticism?
JB: I think one thing about macroeconomics is that because everyone lives in the economy and talks to other people who live in the economy, they think they have really good ideas about how this thing works and what we need to do. I do not begrudge people their opinions, but when you start thinking about it, it is a really complicated problem. I love that about macroeconomics, because it provides an outstanding intellectual challenge and great opportunities for improvement and success. I do not mind working on something that is hard.

But everyone does seem to have an opinion. In medicine you do see some of that: people think they know better than the doctors, and they self-medicate because they believe their theory is the right one and the doctors do not know what they are doing. Steve Jobs reportedly thought like this when he was sick. But I think you see less of this type of attitude in the medical arena than you do in economics. That is distressing for us macroeconomists, but maybe we can improve on it going forward.
ED: What do you think about the criticism of economists not being able to forecast or to see the financial crisis? Do you have any thoughts on that?
JB: One of the main things about becoming a policymaker is the juxtaposition between the role of forecasting and the role of modeling to try to understand how better policy can be made.

In the policy world, there is a very strong notion that if we only knew the state of the economy today, it would be a simple matter to decide what the policy should be. The notion is that we do not know the state of the system today, and it is all very uncertain and very hazy whether the economy is improving or getting worse or what is happening. Because of that, the notion goes, we are not sure what the policy setting should be today. So the idea is that the state of the system is very hard to discern, but the policy problem itself is often disarmingly simple. What is making the policy problem hard is discerning the state of the system. That kind of thinking is one important focus in the policy world.

In the research world, it is just the opposite. The typical presumption is that one knows the state of the system at a point in time. There is nothing hazy or difficult about inferring the state of the system in most models. However, the policy problem itself is often viewed as really difficult. It might be the solution to a fairly sophisticated optimization problem that carefully weighs the effects of the policy choice on the incentives of households and firms in a general equilibrium context. That attitude is just the opposite of the way the policy world approaches problems. I have been impressed by this juxtaposition since I have been in this job.

Now, forecasting itself I think is overemphasized in the policy world, because there probably is an irreducible amount of ambient noise in macroeconomic systems, which means that one cannot really forecast all that well even in the best of circumstances. We could imagine two different economies, the first of which has a very good policy and the second of which has a very poor policy.
In both of these economies it may be equally difficult to forecast. Nevertheless, the first economy, by virtue of its much better policy, would enjoy much better outcomes for its citizens than the economy with the worse policy. Ability to forecast does not really have much to do with the process of adopting and maintaining a good policy.

The idea that the success of macroeconomics should be based on forecasting is a holdover from an earlier era in macroeconomics, which Lucas crushed. He said the goal of our theorizing about the economy is to understand better what the effects of our policy interventions are, not necessarily to improve our ability to forecast the economy on a quarter-to-quarter or year-to-year basis.

What we do want to be able to forecast is the effect of the policy intervention, but in most interesting cases that would be a counterfactual. We cannot just average over past behavior in the economy, which has been based on a previous policy, and then make a coherent prediction about what the new policy is going to bring in terms of consumption, investment, and other variables that we care about. It is a different game altogether than the sort of day-to-day forecasting game that goes on in policy circles and financial markets.

Of course it is important to try to have as good a forecast as you can have for the economy. It is just that I would not judge success on, say, the mean square error of the forecast. That may be an irreducible number given the ambient noise in the system.

One very good reason why we may not be able to reduce the amount of forecast variance is that if we did have a good forecast, that good forecast would itself change the behavior of households, businesses, and investors in the economy. Because of that, we may never see as much improvement as you might hope for on the forecasting side. The bottom line is that better forecasting would be welcome but it is not the ultimate objective.

We [central banks] do not really forecast anyway. What we do is we track the economy. Most actual forecasting day to day is really just saying: What is the value of GDP last period or last quarter? What is it this quarter? And what is it going to be next quarter? Beyond that we predict that it will go back to some mean level which is tied down by longer-run expectations. There is not really much in the way of meaningful forecasting about where things are going to go. Not that I would cease to track the economy–I think you should track the economy–but it is not really forecasting in the conventional sense.

The bottom line is that improved policy could deliver better outcomes and possibly dramatically better outcomes even in a world in which the forecastable component of real activity is small.
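Bullard's point about an irreducible forecast error can be illustrated with a toy simulation. This is my own sketch with made-up parameter values, not anything from the interview: even a forecaster who knows the true law of motion of an AR(1) economy cannot push the one-step-ahead mean squared error below the variance of the shock.

```python
import random

random.seed(1)

# Toy economy: y_t = rho * y_{t-1} + eps_t, with eps_t ~ N(0, sigma^2).
# The best possible one-step forecast is rho * y_{t-1}; its mean squared
# error is exactly Var(eps) -- the "ambient noise" floor.
rho, sigma, T = 0.8, 1.0, 100_000

y, errs = 0.0, []
for _ in range(T):
    forecast = rho * y                      # optimal forecast given the TRUE model
    y = rho * y + random.gauss(0.0, sigma)  # realized outcome
    errs.append((y - forecast) ** 2)

mse = sum(errs) / len(errs)
print(round(mse, 2))  # close to sigma**2 = 1.0, the irreducible forecast error
```

Under this (assumed) process, no forecaster can do better than the shock variance, yet a different policy rule could still change the *level* of outcomes, which is Bullard's distinction between forecast accuracy and policy quality.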

ED: Can the current crisis be blamed on economic modeling?
JB: No. I think that this is being said by people who did not spend a lot of time reading the literature. If you were involved in the literature, as I was during the 1990s and 2000s, what I saw was lots of papers about financial frictions, about how financial markets work and how financial markets interact with the economy. It is not an easy matter to study, but I think we did learn a lot from that literature. It is true that that literature was probably not the favorite during this era, but there was certainly plenty going on. Plenty of people did important work during this period, which I think helped us and informed us during the financial crisis on how to think about these matters and where the most important effects might come from. I think there was and continues to be a good body of work on this. If it is not as satisfactory as one might like it to be, that is because these are tough problems and you can only make so much progress at one time.

Now, we could think about where the tradeoffs might have been. I do think that there was, in the 1990s in particular, a focus on economic growth as maybe the key phenomenon that we wanted to understand in macroeconomics. There was a lot of theorizing about what drives economic growth via the endogenous growth literature. You could argue that something like that stole resources away from people who might have otherwise been studying financial crises or the interaction of financial systems with the real economy, but I would not give up on those researchers who worked on economic growth.
I think that was also a great area to work on, and they were right in some sense that in the long run what you really care about is what is driving long-run economic growth in large developed economies and also in developing economies, where tens of millions of people can be pulled out of poverty if the right policies can be put in place.

So to come back later, after the financial crisis, and say, in effect, “Well, those guys should not have been working on long-run growth; they should have been working on models of financial crisis,” does not make that much sense to me, and I do not think it is a valid or even a coherent criticism of the profession as a whole. In most areas where researchers are working, they have definitely thought it through and they have very good ideas about what they are working on and why it may be important in some big macro sense. They are working on that particular area because they think they can make their best marginal contribution on that particular question.

That brings me to another related point about research on the interaction between financial markets and the real economy. One might feel it is a very important problem and something that really needs to be worked on, but you also might feel as a researcher, “I am not sure how I can make a contribution here.” Maybe some of this occurred during the two decades prior to the financial crisis.

On the whole, at least from my vantage point (monetary theory and related literature), I saw many people working on the intersection between financial markets and the real economy. I thought they did make lots of interesting progress during this period. I do think that the financial crisis itself took people by surprise with its magnitude and ferocity. But I do not think it makes sense to then turn around and say that people were working on the wrong things in the macroeconomic research world.
ED: There is a tension between structural models that are built to understand policy and statistical models that focus on forecasting. Do you see irrevocable differences between these two classes of models?
JB: I do not see irrevocable differences, because there is no alternative to structural models. We are trying to get policy advice out of the models; at the end of the day, we are going to have to have a structural model. We have learned a lot about how to handle data and how to use statistical techniques for many purposes in the field, and I think those are great advances. These days you see a lot of estimation of DSGE models, so that is a combination of theorizing with notions of fit to the data. I think those are interesting exercises.

I do not really see this as being two branches of the literature. There is just one branch of the literature. There may be some different techniques that are used in different circumstances. Used properly, you can learn a lot from purely empirical studies, because you can simply characterize the data in various ways and then think about how that characterization of the data would match up with different types of models. I see that process as being one that is helpful. But it has to be viewed in the context that ultimately we want to have a full model that will give you clear and sharp policy advice about how to handle the key decisions that have to be made.
ED: What are policy makers now looking for from the academic modelers?
JB: I have argued that the research effort in the U.S. and around the world in economics needs to be upgraded and needs to be taken more seriously in the aftermath of the crisis. I think we are beyond the point where you can ask one person or a couple of smart people to collaborate on a paper and write something down in 30 pages and make a lot of progress that way. At some point the profession is going to have to get a lot more serious about what needs to be done. You need to have bigger, more elaborate models that have many important features in them, and you need to see how those features interact and understand how policy would affect the entire picture.

A lot of what we do in the published literature and in policy analysis is sketch ingenious but small arguments that might be relevant for the big elephant that we cannot really talk about, because we do not have a model of the big elephant. So we only talk about aspects of the situation, one aspect at a time. Certainly, being very familiar with research and having done it myself, I think that approach makes a great deal of sense. As researchers, we want to focus our attention on problems that can be handled and that one can say something about. That drives a lot of the research. But in the big picture, that is not going to be enough in the medium run or the long run for the nation to get a really clear understanding of how the economy works and how the various policies are affecting the macroeconomic outcomes.

We should think more seriously about building larger, better, more encompassing types of models that put a lot of features together so that we can understand the relative magnitudes of various effects that we might think are going on all at the same time. We should also do this within the DSGE context, in which preferences are well specified and the equilibrium is well defined.
Therein lies the conflict: to get to big models that are still going to be consistent with micro foundations is a difficult task. In other sciences you would ask for a billion dollars to get something done and to move the needle on a problem like this. We have not done that in economics. We are way too content with our small sketches that we put in our individual research papers. I do not want to denigrate that approach too much because I grew up with that and I love that in some sense, but at some point we should get more serious about this. One reason why this has not happened is that there were attempts in the past (circa 1970) to try to put together big models, and they failed miserably because they did not have the right conceptual foundations about how you would even go about doing this. Because they failed, I think that has made many feel like, “Well, we are not going to try that again.” But just because it failed in the past does not mean it is always going to fail. We could do much better than we do in putting larger models together that would be more informative about the effects of various policy actions without compromising on our insistence that our models be consistent with microeconomic behavior and the objects that we study are equilibrium outcomes under the assumptions that we want to make about how the world works.
ED: Can you perhaps talk about some cutting edge research? You have made some points on policy based on cutting edge research.
JB: One of the things that struck me in the research agenda of the last decade or more is the work by Jess Benhabib, Stephanie Schmitt-Grohé and Martin Uribe on what you might think of as a liquidity trap steady state equilibrium, which is routinely ignored in most macroeconomic models. But they argue it would be a ubiquitous feature of monetary economies in which policymakers are committed to using Taylor-type rules and in which there is a zero bound on nominal interest rates and a Fisher relation. Those three features are basically in every model. I thought that their analysis could be interpreted as being very general, and in addition you have a really large economy, the Japanese economy, which seems to have been stuck in this steady state for quite a while.

That is an example of a piece of research that influenced my thinking about how we should attack policy issues in the aftermath of the crisis. I remain disappointed to this day that we have not seen a larger share of the analysis in monetary policy with this steady state as an integral part of the picture. It seems to me that this steady state is very, very real as far as the industrialized nations are concerned. Much of the thinking in the monetary policy world is that “the U.S. should not become Japan.” Yet in actual policy papers it is a rarity to see the steady state included.

That brings up another question about policy generally. Benhabib et al. are all about global analysis. A lot of the models that we have are essentially localized models that study fluctuations in the neighborhood of a particular steady state. There is a fairly rigorous attempt to characterize the dynamics around that particular steady state as the economy is hit by shocks and the policymaker reacts in a particular way.
There are also discussions of whether the model so constructed provides an appropriate characterization of the data or not, and so on.

However, whether the local dynamics observed in the data are exactly the way a particular model describes them is probably not such a critical question compared to the possibility that the system may leave the neighborhood altogether. The economy could diverge to some other part of the outcome space which we are not accustomed to exploring because we have not been thinking about it. Departures of this type may be associated with considerably worse outcomes from a welfare perspective.

I have come to feel fairly strongly that a lot of policy advice could be designed, and should be designed, to prevent that type of outcome. If the economy is going to stay in a small neighborhood of a given steady state forever, do we really care exactly what the dynamics are within that small neighborhood? The possibility of a major departure from the neighborhood of the steady state equilibrium that one is used to observing gives a different perspective on the nature of “good policy.” We need to know much more about the question: Are we at risk of leaving the neighborhood of the steady state equilibrium that we are familiar with and going to a much worse outcome, and if we are, what can be done to prevent that sort of global dynamic from taking hold in the economy?

I know there has been a lot of good work on robustness issues. Tom Sargent and Lars Hansen have a book on it, and many others have also worked on these issues. I think, more than anything, we need perspectives on policy other than just what is exactly the right response to a particular small shock in a particular small neighborhood of the outcome space.
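The two-steady-state logic in Benhabib, Schmitt-Grohé and Uribe can be sketched numerically. The snippet below is my own simplified, discrete illustration with made-up parameter values, not their actual model: it combines a linearized Fisher relation (nominal rate equals real rate plus inflation) with a Taylor-type rule truncated at the zero bound, and searches for inflation rates where the two coincide, recovering both the targeted steady state and the liquidity-trap steady state.

```python
# Illustrative parameters (not calibrated): real rate, inflation target,
# and a Taylor coefficient greater than one (an "active" rule).
r, pi_star, phi = 0.02, 0.02, 1.5

def taylor(pi):
    """Nominal rate set by a Taylor-type rule, truncated at the zero bound."""
    return max(0.0, r + pi_star + phi * (pi - pi_star))

def excess(pi):
    """Policy rate minus the Fisher-relation rate r + pi; zero at a steady state."""
    return taylor(pi) - (r + pi)

def bisect(f, lo, hi, tol=1e-10):
    """Refine a sign-change bracket [lo, hi] down to a root of f."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Scan a grid of inflation rates for sign changes, then refine each bracket.
grid = [-0.1005 + 0.001 * k for k in range(202)]
roots = []
for a, b in zip(grid, grid[1:]):
    if excess(a) * excess(b) < 0:
        roots.append(bisect(excess, a, b))

print([round(p, 4) for p in roots])  # [-0.02, 0.02]
```

The first root is the unintended steady state (inflation pinned at minus the real rate, nominal rate at zero) and the second is the targeted one. Local analysis around the target never sees the first root, which is exactly Bullard's point about global versus neighborhood dynamics.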
ED: Do you have an example?
JB: I have also been influenced by some recent theoretical studies by Federico Ravenna and Carl Walsh, in part because the New Keynesian literature has had such an important influence on monetary policymakers. A lot of the policy advice has been absorbed from that literature into the policymaking process. I would not say that policymakers follow it exactly, but they certainly are well informed on what the advice would be coming out of that literature.

I thought the Ravenna-Walsh study did a good job of trying to get at the question of unemployment and inflation within this framework that so many people like to refer to, including myself on many occasions. They put a rigorous and state-of-the-art version of unemployment search theory into the New Keynesian framework with an eye toward describing optimal policy in terms of both unemployment and inflation. The answer that they got was possibly surprising. The core policy advice that comes out of the model is still price stability: you really want to maintain inflation close to target, even when you have households in the model that go through spells of unemployment and even though the policymaker is trying to think about how to get the best welfare that you can for the entire population that lives inside the model. The instinct that many might have, that including search-theoretic unemployment in the model explicitly would have to mean that the policymaker would want to “put equal weight” on trying to keep prices stable and trying to mitigate the unemployment friction, turns out to be wrong. Optimal monetary policy is still all about price stability.

I think that is important. We are in an era when unemployment has been much higher than what we have been used to in the U.S. It has been coming down, but it is still quite high compared to historical experience in the last few decades. For that reason many are saying that possibly we should put more weight on unemployment when we are thinking about monetary policy.
But this is an example of a very carefully done and rigorous piece of theoretical research which can inform the debate, and the message it leaves is that putting too much weight on unemployment might actually be counterproductive from the point of view of those who live inside the economy, because they would have to suffer more price variability than they would prefer, unemployment spells notwithstanding. I thought it was an interesting perspective on the unemployment/inflation question, which is kind of a timeless issue in the macro literature.


Jess Benhabib, Stephanie Schmitt-Grohé and Martin Uribe, 2001. “The Perils of Taylor Rules,” Journal of Economic Theory, vol. 96(1-2), pages 40-69, January.

James Bullard, 2013. “The Importance of Connecting the Research World with the Policy World,” Federal Reserve Bank of St. Louis The Regional Economist, October.

James Bullard, 2013. “Some Unpleasant Implications for Unemployment Targeters,” presented at the 22nd Annual Hyman P. Minsky Conference in New York, N.Y., April 17.

James Bullard, 2010. “Seven Faces of ‘The Peril,’” Federal Reserve Bank of St. Louis Review, vol. 92(5), pages 339-52, September/October.

James Bullard, 2010. “Panel Discussion: Structural Economic Modeling: Is It Useful in the Policy Process?” presented at the International Research Forum on Monetary Policy in Washington D.C., March 26.

Lars Peter Hansen and Thomas Sargent, 2007. Robustness. Princeton University Press.

Federico Ravenna and Carl Walsh, 2011. “Welfare-Based Optimal Monetary Policy with Unemployment and Sticky Prices: A Linear-Quadratic Framework,” American Economic Journal: Macroeconomics, vol. 3(2), pages 130-62, April.

Volume 14, Issue 1, November 2012

Q&A: Robert Lucas on Modern Macroeconomics

Robert Lucas is Professor of Economics at the University of Chicago. Recipient of the 1995 Nobel Prize, his research focuses on macroeconomics. Lucas’s RePEc/IDEAS entry.
EconomicDynamics: Has the Lucas Critique become less relevant?
Robert Lucas: My paper, “Econometric Policy Evaluation: A Critique,” was written in the early 70s. Its main content was a criticism of specific econometric models—models that I had grown up with and had used in my own work. These models implied an operational way of extrapolating into the future to see what the “long run” would look like. I was working with Leonard Rapping on Phillips curves at that time, using these same modeling techniques. Then Ned Phelps and Milton Friedman used theoretical ideas to work out what long-run Phillips curves had to look like. The two approaches both seemed solid to us, but they gave different answers. I figured out how to apply John Muth’s idea of rational expectations, which Muth also spelled out in explicit models, to resolve this puzzle.

Of course every economist, then as now, knows that expectations matter, but in those days it wasn’t clear how to embody this knowledge in operational models. Now everyone knows Muth’s work and Gene Fama’s complementary work on efficient markets, and the models I criticized then have long since been replaced by others that build on their work. I am pleased that my work contributed to this.

But the term “Lucas critique” has survived, long after that original context has disappeared. It has a life of its own and means different things to different people. Sometimes it is used like a cross you are supposed to use to hold off vampires: just waving it at an opponent defeats him. Too much of this, no matter what side you are on, becomes just name calling.
ED: How would you define a DSGE model?
RL: Virtually all macroeconomic models today are dynamic, stochastic and general equilibrium (where here “general equilibrium” means you have the same number of equations and unknowns). But this definition isn’t very useful, since it applies to the original models of Tinbergen, Klein and Goldberger, and all the others, just as well as to more recent work.

If we narrow the definition of DSGE by using “general equilibrium” to refer to competitive or Nash equilibria where the strategy sets of each agent are made explicit in an internally consistent way, then we have Kydland-Prescott and other RBC descendants and not much else. My 1972 JET paper and other “toy models” would qualify too, but now DSGE seems to me to include only seriously calibrated models. But Freeman and Kydland’s 2000 AER paper would qualify, and maybe recent work by Lorenzoni and Guerrieri.

New Keynesian models—any model with price stickiness built in—would fail, but I’m not sure whether this should be viewed as a defect or a virtue. If we accept any version of the Quantity Theory of Money, then it seems clear that it does not hold at high frequencies (which is what I think price stickiness means). If we don’t accept the Quantity Theory of Money at low frequencies, then I guess we should just close up shop. There are some hard unresolved problems to be faced.
ED: You once wrote that all business cycles are alike. Is the last one different?
RL: The line “business cycles are all alike” is taken from my paper “Understanding Business Cycles,” where I used it to sum up the empirical findings of Wesley Mitchell (the founder of the NBER). Mitchell collected time series on a wide variety of economic measurements, on anything that moved—freight car loadings, pig iron production, you name it—and used them to construct a kind of typical average trough-to-trough cycle. Mitchell cared little about economic theory, but his hope was that he could find patterns in the data that would be useful in practice. What struck me in this research was the fact that co-movements in different actual cycles all seemed to conform pretty well to Mitchell’s typical cycle.

I drew from this the idea that all cycles are probably driven by the same kind of shocks. Since I was convinced by Friedman and Schwartz that the 1929-33 downturn was induced by monetary factors (declines in both money and velocity), I concluded that a good starting point for theory would be the working hypothesis that all depressions are mainly monetary in origin.

Ed Prescott was skeptical about this strategy from the beginning, and I remember he gave me an old framed photo of Mitchell (with a cracked glass!) saying that I should have it because I was an admirer of Mitchell and he was not. Ed wanted to start with Kuznets’ highly structured series rather than the hodgepodge of series Mitchell used. He also thought we needed to have some kind of benchmark theoretical model to give us a start, and he liked the natural match-ups between theoretical objects and Kuznets’ accounts. His great 1982 paper with Kydland ended by summarizing co-movements just as Mitchell had done, but unlike Mitchell they produced an internally consistent theoretical account of causes and effects at the same time.

As I have written elsewhere, I now believe that the evidence on post-war recessions (up to but not including the one we are now in) overwhelmingly supports the dominant importance of real shocks.
But I remain convinced of the importance of financial shocks in the 1930s and the years after 2008. Of course, this means I have to renounce the view that business cycles are all alike!
ED: If the economy is currently in an unusual state, do micro-foundations still have a role to play?
RL: “Micro-foundations”? We know we can write down internally consistent equilibrium models where people have risk aversion parameters of 200 or where a 20% decrease in the monetary base results in a 20% decline in all prices and has no other effects. The “foundations” of these models don’t guarantee empirical success or policy usefulness.

What is important—and this is straight out of Kydland and Prescott—is that if a model is formulated so that its parameters are economically interpretable, they will have implications for many different data sets. An aggregate theory of consumption and income movements over time should be consistent with cross-section and panel evidence (Friedman and Modigliani). An estimate of risk aversion should fit the wide variety of situations involving uncertainty that we can observe (Mehra and Prescott). Estimates of labor supply should be consistent with aggregate employment movements over time as well as with cross-section, panel, and life-cycle evidence (Rogerson). This kind of cross-validation (or invalidation!) is only possible with models that have clear underlying economics: micro-foundations, if you like.

This is bread-and-butter stuff in the hard sciences. You try to estimate a given parameter in as many ways as you can, consistent with the same theory. If you can reduce a three-orders-of-magnitude discrepancy to one order of magnitude, you are making progress. Real science is hard work and you take what you can get.

“Unusual state”? Is that what we call it when our favorite models don’t deliver what we had hoped? I would call that our usual state.


Burns, A. F., and W. C. Mitchell, 1946. “Measuring Business Cycles,” National Bureau of Economic Research, Inc.
Fama, E. F., 1970. “Efficient Capital Markets: A Review of Theory and Empirical Work,” Journal of Finance, vol. 25(2), pages 383-417.
Freeman, S., and F. E. Kydland, 2000. “Monetary Aggregates and Output,” American Economic Review, vol. 90(5), pages 1125-1135.
Friedman, M., 1957. “A Theory of the Consumption Function,” National Bureau of Economic Research.
Friedman, M., and A. J. Schwartz, 1963. “A Monetary History of the United States, 1867-1960,” National Bureau of Economic Research.
Guerrieri, V., and G. Lorenzoni, 2011. “Credit Crises, Precautionary Savings, and the Liquidity Trap,” Working Paper 17583, National Bureau of Economic Research.
Kydland, F. E., and E. C. Prescott, 1982. “Time to Build and Aggregate Fluctuations,” Econometrica, vol. 50(6), pages 1345-70.
Lucas, R. E. Jr., 1972. “Expectations and the neutrality of money,” Journal of Economic Theory, vol. 4(2), pages 103-124.
Lucas, R. E. Jr., 1976. “Econometric policy evaluation: A critique,” Carnegie-Rochester Conference Series on Public Policy, vol. 1(1), pages 19-46.
Lucas, R. E. Jr., 1977. “Understanding business cycles,” Carnegie-Rochester Conference Series on Public Policy, vol. 5(1), pages 7-29.
Lucas, R. E. Jr., and L. A. Rapping, 1969. “Price Expectations and the Phillips Curve,” American Economic Review, vol. 59(3), pages 342-50.
Mehra, R., and E. C. Prescott, 1985. “The equity premium: A puzzle,” Journal of Monetary Economics, vol. 15(2), pages 145-161.
Modigliani, F., and R. Brumberg, 1954. “Utility Analysis and the Consumption Function: An Interpretation of Cross Section Data,” in K.K. Kurihara, ed., Post Keynesian Economics.
Muth, J. F., 1961. “Rational Expectations and the Theory of Price Movements”, Econometrica, vol. 29, pages 315-335.
Phelps, E. S., 1969. “The New Microeconomics in Inflation and Employment Theory,” American Economic Review, vol. 59(2), pages 147-60.
Volume 13, Issue 2, April 2012

Q&A: Frank Schorfheide on DSGE Model Estimation

Frank Schorfheide is Professor of Economics at the University of Pennsylvania. He is interested in the estimation of DSGE models, Bayesian methods, and vector autoregressions. Schorfheide’s RePEc/IDEAS entry.
EconomicDynamics: DSGE models used to be exclusively calibrated. Your work was a major contributor to bringing estimation to this literature. Where do you see the major advantages of estimating a DSGE model?
Frank Schorfheide: When I started my research as a PhD student in the mid 1990s, there seemed to be strong misconceptions among calibrators about what econometrics can deliver and among econometricians about what it means to calibrate. Both camps seemed to engage in some sort of trench warfare, launching grenades at what were poor incarnations of econometrics and calibration analysis. The stereotype among calibrators was that econometrics requires “true” models, and the stereotype among econometricians was that calibrators pick parameters in an arbitrary way, disregarding empirical evidence. While this made for good pub conversations, it didn’t exactly facilitate progress in empirical macroeconomics.

My personal interest, when I started to work on econometric methods for the analysis of DSGE models, was to develop a formal statistical framework (Schorfheide 2000) that captures some of the reservations of calibrators: the framework should be able to account for misspecification of DSGE models, and it should recognize that objective functions for the determination of parameters should be derived from loss functions that are connected to the decision problems that the model is supposed to solve.

Once one recognizes that econometrics does not need to rely on the “Axiom of Correct DSGE Model Specification,” it offers a lot of tools that are useful to summarize parameter uncertainty, uncertainty associated with model implications, forecasts, and policy predictions, and it provides coherent measures of fit for the comparison and weighting of competing models. My favorite approach to dealing with DSGE model misspecification is to use the models to construct priors for VARs or other flexible time series models. Starting in 2004, I have explored this idea in several co-authored papers with Marco Del Negro.
We called the resulting hybrid model the DSGE-VAR.

In the past decade the time series fit of (representative agent) DSGE models has improved considerably (the celebrated Smets-Wouters model is an example), so that the initial concerns about inappropriate probabilistic structures of the model have become less relevant. In turn, the use of formal econometric tools is much more attractive now than it was 20 years ago. I have discussed some of the progress and the challenges in the area of DSGE model estimation in Schorfheide (2010).
ED: Can a case still be made for calibration?
FS: A few years ago my colleague Victor Rios-Rull and I engaged in the following computational/educational experiment. At the time Victor was teaching his quantitative macro class and I was teaching my time series econometrics class. We both asked our students to use a stochastic growth model to measure the importance of technology shocks for business cycle fluctuations of hours and output. Victor’s students were supposed to calibrate the model and my students were supposed to estimate it with Bayesian methods.

Together with some of our students we later turned the results into a paper. While the two of us favor different empirical strategies, the paper emphasizes that the most important aspect of the empirical analysis is how the key parameters of the model can be identified based on the available data. Once there is some agreement on plausible sources of identification, these sources can be incorporated into either an estimation objective function or a calibration objective function.

In fact, Bayesian estimation and calibration are much closer than many people think. The steps taken when prior distributions for DSGE model parameters are elicited are often quite similar to the steps involved in a calibration. Moreover, both calibration and estimation tend to condition on the data and are not concerned about repeated sampling. The main difference is that Bayesians tend to utilize the information in the likelihood function, whereas in a calibration analysis the information in the autocovariances of macroeconomic time series is often deliberately ignored when it comes to the determination of parameters.

Coming back to the question, calibration is particularly attractive in models that have a complicated structure, e.g. heterogeneous-agent economies, and are costly (in terms of computational time) to solve repeatedly for different parameter values.
However, it is important to clearly communicate how the data are used to determine the model parameters and to what extent the model is consistent or at odds with salient features of the data.
ED: DSGE models and calibration were a response to the Lucas Critique. Isn’t the estimation of inherently abstract models a step backwards in this respect?
FS: Not at all. Let me modify your statement as follows: DSGE models were a response to the Lucas Critique and, at an early stage of their development, calibration was a way of parameterizing DSGE models in view of their stylized structure.

The Lucas Critique was concerned with the lack of policy-invariance of estimated decision rules, e.g. consumption equations, investment equations, or labor supply equations. In response, macroeconomists specified their models in terms of agents’ “preferences and technologies” and derived the decision rules as solutions to intertemporal optimization problems, imposing a dynamic equilibrium concept. Counterfactual policy analyses could then be conducted by re-solving for the equilibrium under alternative policy regimes and comparing the outcomes.

Arguably, the more stylized the DSGE model, the less convincing the claim that the preference and technology parameters are indeed policy-invariant, which undermines the credibility of the counterfactual policy analysis. In Chang, Kim and Schorfheide (forthcoming) we provide some simulation evidence that the aggregate labor supply elasticity and the aggregate level of total factor productivity in a representative-agent model are sensitive to changes in the tax rate if the representative-agent model is an approximation of a heterogeneous-agent economy. In turn, policy predictions with the representative-agent model tend to be inaccurate.
ED: Forecasting has traditionally been limited to purely statistical models. You have started evaluating the forecasting performance of estimated DSGE models. Can theory-constrained models still compete with models built purely for forecasting?
FS: Marco Del Negro and I recently wrote a chapter for a forthcoming second volume of the Elsevier Handbook of Economic Forecasting, and we used the following analogy: while a successful decathlete may not be the fastest runner or the best hammer thrower, he certainly is a well-rounded athlete. In this analogy the DSGE model is the decathlete that competes in various disciplines such as forecasting, policy analysis, and storytelling. It has to compete, on the one hand, with purely statistical forecasting models that are optimized to predict a particular series, e.g. inflation, and, on the other hand, with less quantitative and more specialized applied theory models that highlight, say, particular frictions in financial intermediation or in the housing market.

Our general reading of the literature and the finding in our own work is that (i) DSGE models, in particular models that have been tailored to fit the data well, such as the Smets and Wouters (2007) model, are competitive with statistical models in terms of forecast accuracy, although when push comes to shove elaborate statistical models can certainly beat DSGE models; (ii) the use of real-time information, e.g. treating nowcasts of professional forecasters as current-quarter observations, can drastically improve short-run forecasting performance; (iii) anchoring long-run inflation dynamics in the DSGE model with observations on 10-year inflation expectations improves inflation forecasts; and (iv) relaxing the DSGE model restrictions a little bit, by using the DSGE model to generate a prior for the coefficients of a VAR, also helps to boost forecast performance.
ED: Recent economic history gives good reasons to believe non-linear phenomena may be at play at business cycle frequencies. How should we study this?
FS: Nonlinearities tend to be compelling ex post but are often elusive ex ante. In the time series literature there exists an alphabet soup of reduced-form nonlinear models. Many of these models have been developed to explain certain historical time series patterns ex post, but most of them do not perform better than linear models in a predictive sense (though there are some success stories).

When I looked at real-time forecasts from linearized DSGE models and vector autoregressions during the 2007-09 recession, I was surprised how well these models did, in the following sense: of course they did not predict the large drop in output in the second half of 2008, but neither did, say, professional forecasters. However, in early 2009, the models were back on track. So, for a nonlinear model to beat these models in a predictive sense, it would have had to predict the 2008:Q4 downturn, say, in July.

Of course, the ex-post story of a linear model for the recent recession is that it was caused by large shocks with a magnitude of multiple standard deviations. This may not be particularly compelling, since the narrative for the financial crisis involves problems in the mortgage market that led to a severe disruption of financial intermediation and economic activity. A model that captures this mechanism has to be inherently nonlinear, and the development of models with financial frictions is an important area of current research.

Our standard stochastic growth model as well as the typical New Keynesian DSGE model are actually fairly linear (at least for parameterizations that can replicate post-war U.S. business cycle fluctuations). However, researchers have been adding mechanisms to these models that can generate nonlinear dynamics, including stochastic volatility, learning mechanisms, borrowing constraints, non-convex adjustment costs, and a zero lower bound on nominal interest rates, to name a few.
This is certainly an important direction for future research.

Aruoba, Bocola and Schorfheide (2011) started to work on the development of a class of nonlinear time series models that can be used to evaluate DSGE models with nonlinearities, in the same way that we have used VARs to evaluate linearized DSGE models. With a simple univariate version of this nonlinear time series model one can pick up some interesting empirical features, e.g. asymmetries across recessions and expansions in GDP growth and zero-lower-bound dynamics of interest rates. We are currently working on multivariate extensions.
ED: We tend to focus on deviations from a trend or steady state. But in many ways trends matter more. Is there any work on estimating trends in DSGE models, and if not, should there be?
FS: Most estimated DSGE models nowadays have the trend incorporated into the model. For instance, in a stochastic growth model, a trend in the endogenous variables can be generated by assuming that the technology process has a deterministic trend or a stochastic trend (e.g., a random walk with drift). The advantage of this method is that one does not have to detrend the macroeconomic time series prior to fitting the DSGE model. The disadvantage is that the DSGE model imposes very strong co-trending restrictions that are to some extent violated in the data.

For instance, the basic stochastic growth model implies that output, consumption, investment, and real wages have a common trend, whereas hours worked is stationary. However, in the data the “great ratios,” e.g. consumption-output or investment-output, are not exactly stationary. Moreover, hours worked are often very persistent and exhibit unit-root-type dynamics. As a result, some of the estimated shock processes tend to be overly persistent because they have to absorb the trend misspecification. This might distort the subsequent analysis with the model.

In general, this is an important topic, but there is no single solution that is completely satisfactory. Neither the old method of detrending each series individually and then modeling deviations from trends with a model that has clear implications about long-run equilibrium relationships, nor the newer method of forcing misspecified trends on the data, is entirely satisfactory. More careful research on this topic would be very useful.

In my own (co-authored) work (Del Negro and Schorfheide forthcoming, and Aruoba and Schorfheide 2011), one of the more successful attempts at dealing with trends, not in real series but in nominal ones, was to include a time-varying target inflation rate in the model and to “anchor” the inflation target with observations on long-run inflation expectations.
This really helps the fit and the forecast performance and has the appealing implication that low-frequency movements in inflation and interest rates are generated by changes in monetary policy.
ED: Is theory still ahead of measurement?
FS: The phrase “theory ahead of measurement” is connected to Prescott’s (1986) conjecture that (some of) the discrepancy between macroeconomic theories and data could very well disappear “if the economic variables were measured more in conformity with theory.” The phrase becomes problematic if it is used as an excuse not to subject macroeconomic models to a careful empirical evaluation.

In general we are facing the problem that in order to keep our models tractable we have to abstract from certain real phenomena. Take, for instance, seasonality. We could either build models that can generate fluctuations at seasonal frequencies or we could remove these fluctuations from the data. Most people would probably agree that matching a model that is not designed to generate seasonal fluctuations to seasonally unadjusted data is not a good idea. The profession has converged to an equilibrium in which seasonality is removed from the data and not incorporated into DSGE models.

However, in other dimensions there is more disagreement among economists: what are the right measures of price inflation, wage inflation, hours worked, and interest rate spreads that should be used in conjunction with aggregate DSGE models? In general, a careful measurement of key economic concepts is very important, but that does not mean that theory is ahead of measurement.

I do agree with the following statement in Prescott’s (1986) conclusion: “Even with better measurement, there will likely be significant deviations from theory which can direct subsequent theoretical research. This feedback between theory and measurement is the way mature, quantitative sciences advance.”


Aruoba, B., L. Bocola, and F. Schorfheide, 2011. “A New Class of Nonlinear Time Series Models for the Evaluation of DSGE Models,” Working Paper.
Aruoba, B., and F. Schorfheide, 2011. “Sticky Prices versus Monetary Frictions: An Estimation of Policy Trade-offs,” American Economic Journal: Macroeconomics, vol. 3(1), pages 60-90.
Chang, Y., S.-B. Kim, and F. Schorfheide, 2012. “Labor Market Heterogeneity, Aggregation, and the Policy-(In)variance of DSGE Model Parameters,” Journal of the European Economic Association, forthcoming.
Del Negro, M., and F. Schorfheide, 2004. “Priors from General Equilibrium Models for VARs,” International Economic Review, vol. 45(2), pages 643-673.
Del Negro, M., and F. Schorfheide, 2012. “DSGE Model-Based Forecasting,” in preparation for a chapter of the Handbook of Economic Forecasting, Vol. 2, Elsevier.
Lucas, R. Jr, 1976. “Econometric policy evaluation: A critique,” Carnegie-Rochester Conference Series on Public Policy, vol. 1(1), pages 19-46.
Prescott, E. C., 1986. “Theory ahead of business cycle measurement,” Quarterly Review, Federal Reserve Bank of Minneapolis, issue Fall, pages 9-22.
Rios-Rull, J. V., C. Fuentes-Albero, R. Santaeulalia-Llopis, M. Kryshko, and F. Schorfheide, 2009. “Methods versus Substance: Measuring the Effects of Technology Shocks,” NBER Working Paper 15375.
Schorfheide, F., 2000. “Loss Function-based Evaluation of DSGE Models,” Journal of Applied Econometrics, vol. 15(6), pages 645-670.
Schorfheide, F., 2010. “Estimation and Evaluation of DSGE Models: Progress and Challenges“, NBER Working Paper 16781.
Volume 13, Issue 1, November 2011

Gita Gopinath on Sovereign Default

Gita Gopinath is Professor of Economics at Harvard University. She has worked on debt issues, emerging markets and international economics. Gopinath’s RePEc/IDEAS entry.
EconomicDynamics: The current crisis with Greece and possibly other European countries highlights that it has become more difficult to manage high public debt when currency devaluation is not an option. Aside from drastic austerity measures, what are possible policies?
Gita Gopinath: The interaction between high public debt and the inability to devalue has come up frequently in discussions of the Euro crisis. However, there is an important distinction to be made. There are two channels through which a currency devaluation can help a government repay its debt: first, by reducing the real value of the debt owed, and second, by stimulating the economy through adjustments of the terms of trade and therefore raising primary fiscal surpluses for the government. The first channel is relevant to the extent that the debt is denominated in local currency, in which case the real value of debt owed externally is lower. However, in the case of an individual country in the Euro Area like Greece, whose debt is in Euros, even if it were to exit the Euro, as long as it did not default on its debt contracts by re-denominating its liabilities in its local currency, a currency devaluation would do little to reduce the value of debt owed.

As for the second channel through which a currency devaluation can help, namely the expansionary effect it can have on economic output, there is a clear substitute through the use of fiscal instruments. In a recent paper, Farhi, Gopinath and Itskhoki (2011) show that “fiscal devaluations” deliver the exact same real allocations as currency devaluations. Currency devaluations, to the extent they have expenditure-switching effects, do so by deteriorating the terms of trade of the country, that is, raising the relative price of imported to exported goods. In the absence of a currency adjustment, a combination of an increase in value-added taxes (with border adjustment) and a uniform cut in payroll taxes can deliver the same outcomes. An increase in VAT will raise the price of imported goods as foreign firms face a higher tax, and it will lower the price of domestic exports (relative to domestic sales prices) since exports are exempt from VAT.
The net effect is a deterioration in the terms of trade equivalent to that following a currency devaluation. To ensure that firms that adjust prices do so similarly across currency and fiscal devaluations, the increase in VAT needs to be accompanied by a cut in payroll taxes. We show that the equivalence of currency and fiscal devaluations is valid in a wide range of environments, with varying degrees of price and wage stickiness and with alternative asset market structures.

The increase in VAT can be viewed as an austerity measure, but it is important to note that when combined with a payroll tax cut its impact on the economy is exactly the same as that following an exchange rate devaluation. In other words, the lack of exchange rate flexibility does not limit the ability of countries in the Euro area to achieve allocations attainable under a nominal exchange rate devaluation.
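The terms-of-trade comparison Gopinath describes can be reduced to one line of arithmetic. The sketch below is a deliberately stylized illustration (one period, full pass-through, fixed producer prices, made-up 10% rates), not a result from Farhi, Gopinath and Itskhoki (2011):

```python
# Toy arithmetic: a border-adjusted VAT increase of the same size as a
# currency devaluation raises the relative price of imports to exports
# identically. All numbers are illustrative assumptions.

delta = 0.10                  # hypothetical 10% currency devaluation
vat = 0.10                    # hypothetical 10% VAT increase
p_home, p_foreign = 1.0, 1.0  # pre-shock producer prices, same numeraire

# Currency devaluation: imports cost delta more in home currency,
# while the home producer (export) price is unchanged.
rel_price_devaluation = p_foreign * (1 + delta) / p_home

# Fiscal devaluation: imports bear the VAT; exports are VAT-exempt,
# and the payroll-tax cut keeps the home producer price unchanged.
rel_price_fiscal = p_foreign * (1 + vat) / p_home

print(rel_price_devaluation, rel_price_fiscal)
```

In both cases the relative price of imports to exports rises by the same 10%, which is the equivalence of the two terms-of-trade deteriorations in its simplest form.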
ED: In the face of the latest debt developments, central banks have been much more proactive than they have been historically. Do you view this as a good development?
GG: The European Central Bank has certainly been proactive in containing the debt crisis in Europe through direct purchases of troubled sovereign debt. At the same time it has been cautious in its role of lender of last resort. Its current stance is that it is not willing to monetize the debt of countries at the risk of future inflation. Given that the crisis has spread to Italy and to a lesser extent France, it will be interesting to see how the ECB responds.

The bigger question is what a central bank should do. Should it stick to its mandate of targeting inflation or should it resort to inflating away the debt so as to prevent a default by the country? I do not believe we have the answers here. As recently pointed out by Kocherlakota (2011), if a central bank decides to commit to an inflation target then this effectively makes the debt of the country real. The country can then be subject to self-fulfilling credit crises along the lines of Calvo (1988) and Cole and Kehoe (2000). If lenders expect that governments will default, they raise the cost of borrowing for governments, who then find it difficult to roll over their debts, and this in turn triggers default. The negative consequences for the economy of a default are potentially large.

In this context, suppose a central bank is willing to inflate away the debt. Does this rule out the multiple equilibria described previously? Calvo (1988) evaluates several scenarios, some of which involve multiple equilibria even when default is implicit through inflation. So it is not obvious whether having the ability to inflate solves the multiplicity problem. Then there are the relative costs of inflation versus outright default. A plus for inflation is that it can be done incrementally, unlike default, which is much more discrete. It is arguable that the costs of a small increase in inflation are low relative to the costs of default.
Of course the level of inflation required can be very high and this can unhinge inflation expectations that then can have large negative consequences for the economy. So, as I said earlier, it is still to be determined in future research what the virtues are of having a central bank that uses its monetary tools to contain a debt crisis.
ED: While there may always be an economic solution to a threat of default, political constraints are very restrictive, as Greece shows. Are we neglecting the political economy aspects?
GG: It is not straightforward even from a pure economic point of view. Should countries default or undertake austerity measures to remain in good credit standing? The answer depends on how costly defaults are, and empirical evidence on this provides little guidance. In our models we postulate several costs associated with defaults, including loss of access to credit markets (Eaton and Gersovitz (1981), Aguiar and Gopinath (2006)), collateral damage to other reputational contracts, spillovers to banking crises, trade sanctions, etc. See Wright (forthcoming) for a non-technical survey of the sovereign debt and default literature. The empirical evidence on this, as surveyed in Panizza et al. (2009), is, however, not very informative given the endogeneity of defaults. The ambiguity associated with the optimal response to the debt crisis was evident at the start of the debt crisis in Europe, when there was little agreement even among economists about whether Greece should default or not.

Then, as you rightly point out, the political economy aspects add another dimension of complexity. The political constraints have permeated all aspects of the debt crisis. Firstly, the objective function being maximized here is clearly not purely based on economics but on political factors that motivated the formation of the Euro. Secondly, the policy responses have been constrained by politics. For instance, at the start of the crisis it was widely perceived that the reason the Germans and French were interested in bailing out Greece was mainly to protect their banks, which had significant exposure to Greek debt. An alternative, less costly route was to directly bail out German and French banks, but that was viewed as politically impossible to implement. The Euro zone is now back to where it started, with banks needing bailouts, except the crisis has worsened, with the debts of Portugal, Spain and Italy also facing default pressure.
In addition, the political fallout of implementing austerity measures is evident all over Europe.

While there exists a large literature on the political economy of sovereign debt, the focus has largely been on explaining why a country can end up with too much debt. There is more limited work that combines political economy with the possibility of sovereign default. Exceptions are Amador (2008), who examines the role of political economy in sustaining debt in equilibrium, and Aguiar, Amador and Gopinath (2009), who show that the relative impatience of the government, combined with its inability to commit to repayment of debt and its tax policy, can lead to distortionary investment cycles even in the long run. There is almost no work on the redistributional impact on heterogeneous agents of the decision to default versus undertaking costly austerity measures, something the current crisis has brought to the forefront. This is certainly fertile ground for future research.


Aguiar, Mark, Manuel Amador and Gita Gopinath, 2009. “Investment Cycles and Sovereign Debt Overhang,” Review of Economic Studies, Wiley Blackwell, vol. 76(1), pages 1-31.
Aguiar, Mark and Gita Gopinath, 2006. “Defaultable debt, interest rates and the current account,” Journal of International Economics, Elsevier, vol. 69(1), pages 64-83.
Amador, Manuel, 2008. “Sovereign Debt and the Tragedy of the Commons“, Manuscript.
Calvo, Guillermo, 1988. “Servicing the Public Debt: The Role of Expectations,” American Economic Review, American Economic Association, vol. 78(4), pages 647-61.
Cole, Harold and Timothy Kehoe, 2000. “Self-Fulfilling Debt Crises,” Review of Economic Studies, Wiley Blackwell, vol. 67(1), pages 91-116.
Eaton, Jonathan and Mark Gersovitz, 1981. “Debt with Potential Repudiation: Theoretical and Empirical Analysis,” Review of Economic Studies, Wiley Blackwell, vol. 48(2), pages 289-309.
Farhi, Emmanuel, Gita Gopinath and Oleg Itskhoki, 2011. “Fiscal Devaluations“, manuscript.
Kocherlakota, Narayana, 2011. “Central Bank Independence and Sovereign Default“, Speech, September 26, 2011.
Panizza, Ugo, Federico Sturzenegger and Jeromin Zettelmeyer, 2009. “The Economics and Law of Sovereign Debt and Default,” Journal of Economic Literature, American Economic Association, vol. 47(3), pages 651-98.
Wright, Mark, forthcoming. “The Theory of Sovereign Debt and Default“, Encyclopaedia of Financial Globalization.

Volume 12, Issue 2, April 2011

Jeremy Greenwood on DGE beyond Macroeconomics

Jeremy Greenwood is Professor of Economics at the University of Pennsylvania. His first interests have been in economic fluctuations, and he has since strayed into other areas of economics and beyond: premarital sex, female emancipation, the information technology revolution, and more. Greenwood’s RePEc/IDEAS entry.
EconomicDynamics: Dynamic General Equilibrium (DGE) theory has found its first applications in the study of economic fluctuations, in particular with Real Business Cycle theory. You have applied DGE to many other areas. Why is it so useful outside of macroeconomics (in a narrow sense)?
Jeremy Greenwood: I remember Finn Kydland presenting “Time to Build and Aggregate Fluctuations” at the University of Rochester sometime in the early 1980s. I was a graduate student at the time, taking a reading course in dynamic programming from James Friedman, the game theorist (emeritus at UNC). We were reading Bellman’s book “Dynamic Programming,” Denardo’s (1967) paper on “Contraction Mappings in the Theory Underlying Dynamic Programming,” and Karlin’s (1955) paper “The Structure of Dynamic Programming Models.” I was the only macro student in this small class. During Finn’s seminar, I could feel the goose bumps on my arms: Who knew that you could solve dynamic programming models on the computer?

The place that Kydland and Prescott’s (1982) landmark paper would have in economics was unclear at first. Look at McCallum’s (1989) discussion of Kydland and Prescott’s work. He talks about the meaning of technology shocks, the (ir)relevance of solving central planners’ problems, the absence of money in the framework, and the model’s inability to mimic the cyclical behavior of prices and wages. The importance for business cycle theory of Kydland and Prescott’s landmark paper cannot be overstated. In retrospect, however, its prime achievement was to introduce modern techniques from operations research and numerical analysis into economics more generally.

General equilibrium modeling was waning at the time because it had reached a point of diminishing returns. Higher-level mathematics was no longer yielding interesting new results. Kydland and Prescott signaled that you could put a dynamic stochastic general equilibrium model onto a computer and simulate it to get new and interesting findings. Researchers were no longer limited by pencil-and-paper techniques. A new era had dawned. Of course, other disciplines had reached this point earlier.
Aerospace engineers had realized for a long time that they cannot hope to discover the properties of a helicopter flying through turbulence by using pencil-and-paper techniques. The same is true for astrophysicists trying to model the instant after the big bang. Both use computers. In some quarters in economics there is still a resistance to this. Simulations aren’t general enough, people say. Yet, these same individuals will take drugs or fly in planes designed on computers. Nobody “proved” these things are safe. Anyway, the idea of simulating dynamic stochastic models has applications in many fields in economics. You can use it in public finance to study the impact of taxation. In fact, Shoven and Whalley (1972) did this in a static setting before Kydland and Prescott. Labor economists can use it to explore topics such as the occupational mobility of workers, as in Kambourov and Manovskii (2009). People in industrial organization employ simulation methodology to model firm dynamics. A classic example here is Hopenhayn and Rogerson (1993). One can use it in models of international trade to explain how shifts in tariffs and other trade costs affect job turnover and the distribution of wages, as in Cosar, Guner, and Tybout (2010). Questions in finance, such as consumer bankruptcy, have been addressed, as in Chatterjee, Corbae, Nakajima and Ríos-Rull (2007).
ED: Your applications do not limit themselves to Economics. How can Economics, and in particular DGE, teach us something about topics typically covered by other social sciences?
JG: Economics has a lot to say about modeling human behavior, both at the individual and aggregate levels. Take the sexual revolution. In yesteryear out-of-wedlock births were rare. Since contraception was primitive, one can surmise that premarital sex was not widespread. In the U.S. only 6% of 19-year-old females would have experienced premarital sex in 1900, versus approximately 75% today. Why? Historically, the consequences of a young woman engaging in premarital sex were very dire. Most families were at or near a subsistence level of consumption. Unmarried young women often just abandoned foundlings. This is documented in two great books by Fuchs (1984, 1992). Today, contraception is very effective. The odds of not becoming pregnant from a single encounter might have risen from something like 0.961 in 1900 to 0.996 today. This is a big difference. If a girl has sex many times, then in order to calculate the odds of not becoming pregnant after n encounters, you need to raise these numbers to the power of n. Therefore, the benefit/cost calculation of engaging in premarital sex has changed dramatically. Economists would accordingly expect a rise in premarital sex to occur.

Culture is often taken as a force outside economics. Economics can cast some light on how culture evolves, though. In the past, draconian measures were taken to regulate premarital sex. In New Haven in 1700 about 70% of criminal cases were for “fornication.” There were huge incentives for parents to try to mold their offspring’s preferences to prevent premarital sex. They worked hard to stigmatize the act. The same thing was true for churches or states, which had to provide charity for unwed mothers. This was a great financial burden for them. This socialization process was very costly for parents, churches and states. The increase in the efficacy of contraception reduced the need for all of this.
Therefore, over time the stigmatization associated with sex has declined as parents, churches and states engaged in less socialization. One message from this is that some component of culture is clearly endogenous.
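The compounding argument Greenwood makes, raising the per-encounter odds to the power n, can be checked directly with the two probabilities quoted in the interview (0.961 and 0.996); everything else in the sketch is just arithmetic:

```python
# Compounding the per-encounter odds of avoiding pregnancy quoted in
# the text: 0.961 (circa 1900) versus 0.996 (today), raised to the
# power n for n independent encounters.

def p_no_pregnancy(per_encounter, n):
    """Probability of avoiding pregnancy after n independent encounters."""
    return per_encounter ** n

for n in (1, 10, 50):
    past = p_no_pregnancy(0.961, n)
    today = p_no_pregnancy(0.996, n)
    print(f"n = {n:3d}: 1900 odds {past:.3f}, modern odds {today:.3f}")
```

After 50 encounters the 1900 odds of avoiding pregnancy fall below 20%, while the modern odds remain above 80%, which is the dramatic shift in the benefit/cost calculation the text describes.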
ED: Speaking of adolescent sexuality, parents spend considerable effort instilling some rationality in their offspring in this regard. Economic models, and forward-looking expectation models in particular, assume rationality and rational expectations. Do you feel this is appropriate?
JG: Yes, I do. Rational behavior is a central tenet in modern economics. There are no good alternative modeling assumptions. Take sex, drugs and alcohol. You could assume that people engage in risky behavior because they don’t understand the hazards. Specifically, they don’t appreciate the risks of becoming pregnant or contracting HIV/AIDS, or the impact that alcohol and drugs can have on their health. Public policies aimed at educating people should then have a big impact on people’s behavior. An alternative view is that people do understand the consequences of their actions. They enjoy having sex, drinking alcohol, or doing drugs. The risks of becoming pregnant or contracting HIV/AIDS are actually low, if you just have a single encounter. Similarly, so are the odds of becoming addicted or overdosing on alcohol/drugs. From the perspective of a rational person, the cost of a single night engaging in such behavior might seem low relative to the benefit. So, they do it. The probability of trouble rises quickly with the number of nights involved in such activity. But, people decide such things sequentially, one act at a time. The public policy prescriptions when you take the latter approach are unclear. People are doing something they enjoy, they understand the risks, and a certain number of them will have a bad experience. If there were no negative externalities, one view might be to just leave these people alone. If you think insurance markets are incomplete then perhaps you would provide them with medical or other services. But, this encourages risky behavior. If you think there are externalities, perhaps you should tax the activity to dissuade people from engaging in it. Or, you could jail people for taking drugs. The latter policy doesn’t seem to curtail such behavior much, perhaps because the likelihood of getting caught for a single snort or toke is so small. Enforcement is very expensive too. There is no easy solution.
This is the position society is in.
Philipp Kircher, Michèle Tertilt and I adopt the rational approach to modeling the HIV/AIDS epidemic in Africa. Work by Delavande and Kohler (2009) on the HIV/AIDS epidemic in Malawi suggests that people’s subjective expectations about whether or not they have HIV/AIDS, based on their sexual behavior, are rational. In our analysis people update this belief in a Bayesian fashion, depending on what type of sexual behavior they have been engaging in. The analysis suggests that policies such as circumcising males may backfire. Circumcision reduces the risk of contracting HIV/AIDS for a male, or so some researchers believe. But, as a result, males may engage in more risky behavior. This is bad for females. While at this time it might be unclear just how rational people are in thinking about decisions involving risky behavior, the work done so far illustrates that a model based on rational behavior is able to account for a number of stylized phenomena concerning the Malawian HIV/AIDS epidemic.
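The belief updating described above can be sketched numerically. The sketch below is a minimal illustration, not the Greenwood-Kircher-Tertilt model itself, and every number in it (partner prevalence, per-act transmission probability) is a hypothetical assumption chosen only to show the mechanism: a single encounter barely moves the belief of being infected, while the risk compounds over many encounters.

```python
def updated_belief(prior, encounters, prevalence=0.10, transmission=0.001):
    """Probability of being HIV-positive after `encounters` risky acts.

    Illustrative assumptions: each act is with a partner who is infected
    with probability `prevalence`; conditional on that, transmission
    occurs with probability `transmission`.
    """
    risk_per_act = prevalence * transmission
    belief = prior
    for _ in range(encounters):
        # If not yet infected (probability 1 - belief), the agent becomes
        # infected this period with probability risk_per_act.
        belief = belief + (1.0 - belief) * risk_per_act
    return belief

# The per-act risk is tiny, so one encounter barely moves the belief...
one_night = updated_belief(0.0, 1)
# ...but the risk compounds over many encounters.
many_nights = updated_belief(0.0, 1000)
print(one_night, many_nights)
```

This is exactly the "one act at a time" logic in the answer: the marginal cost of a single night looks low, yet the cumulative probability of trouble rises quickly with the number of nights.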
ED: Your research has dealt with issues usually handled by other social sciences. How have non-economists reacted to economists stepping on their toes?
JG: Well, so far most of the hostility has come from people within the economics profession. The work mentioned above spans areas such as economic history, labor economics and macroeconomics. Each area has its own way of doing things. People often aren’t so open-minded when you enter their territory. Applied economists frequently are not acquainted with modern theory and sometimes aren’t enamored with it either. There is a lot of hostility toward using simulation-based methods. “Engines of Liberation” was rejected at the American Economic Review, the Journal of Political Economy and the Quarterly Journal of Economics before it was accepted at the Review of Economic Studies. Even there it wasn’t smooth sailing, but the Editor, Fabrizio Zilibotti, was willing to intervene and manage the process. This took both courage and effort on his part. You rarely see this because it is costly for an editor to do. Here is a link to a hostile referee report from a well-known economic historian that we received from the AER. The referee’s second point illustrates that he truly had no conception of how difficult doing this sort of work is. Other times people just don’t understand notions that are ubiquitous in macroeconomics, such as sequential optimization or perfect foresight equilibrium, as can be seen from this report on “Social Change: The Sexual Revolution.”
Other people in the area relate similar experiences. In a very small way, I now appreciate the difficult time that Lucas, Prescott, Sargent and Wallace had in the late 1960s and 1970s. Given this, I have some concern about young researchers in this area. In the last two years Jim Heckman and Nezih Guner have held separate conferences in family economics, at The University of Chicago and Autònoma de Barcelona, which have brought together researchers in labor economics and macroeconomics. This is great. It promotes understanding across fields.
Exciting new work was presented at these conferences, such as Stefania Albanesi’s research on maternal health and fertility. Satyajit Chatterjee, Lee Ohanian and I have held small conferences at the Philadelphia Fed that encourage this type of work. Hopefully, things will get better. It has been very exciting working on such topics (rather than doing the same old same old) and I would not have given up the experience for anything.


Albanesi, Stefania, 2010. “Maternal Health and Fertility: An International Perspective,” manuscript, Columbia University.
Bellman, Richard E., 1957. Dynamic Programming. Princeton University Press, Princeton, NJ.
Chatterjee, Satyajit, Dean Corbae, Makoto Nakajima and José-Víctor Ríos-Rull, 2007. “A Quantitative Theory of Unsecured Consumer Credit with Risk of Default,” Econometrica, 75(6), 1525-1589.
Cosar, A. Kerem, Nezih Guner and James Tybout, 2010. “Firm Dynamics, Job Turnover, and Wage Distributions in an Open Economy,” NBER Working Paper 16326.
Delavande, Adeline, and Hans-Peter Kohler, 2009. “Subjective expectations in the context of HIV/AIDS in Malawi,” Demographic Research, 20(31), 817-875.
Denardo, Eric V., 1967. “Contraction Mappings in the Theory Underlying Dynamic Programming,” SIAM Review, 9(2), 165-177.
Fernández-Villaverde, Jesús, Jeremy Greenwood and Nezih Guner, 2010. “From Shame to Game in One Hundred Years: An Economic Model of the Rise in Premarital Sex and its De-Stigmatization,” NBER Working Paper 15677. VoxEU discussion. Youtube video.
Fuchs, Rachel, 1984. Abandoned Children: Foundlings and Child Welfare in Nineteenth-Century France. Albany, State University of New York Press.
Fuchs, Rachel, 1992. Poor and Pregnant in Paris. New Brunswick, Rutgers University Press.
Greenwood, Jeremy, and Nezih Guner, 2010. “Social Change: The Sexual Revolution,” International Economic Review, 51(4), 893-923.
Greenwood, Jeremy, Philipp Kircher and Michèle Tertilt, 2010. “An Equilibrium Model of the African HIV/AIDS Epidemic,” manuscript, London School of Economics.
Greenwood, Jeremy, Ananth Seshadri and Mehmet Yorukoglu, 2005. “Engines of Liberation,” Review of Economic Studies, 72(1), 109-133.
Hopenhayn, Hugo, and Richard Rogerson, 1993. “Job Turnover and Policy Evaluation: A General Equilibrium Analysis,” Journal of Political Economy, 101(5), 915-938.
Kambourov, Gueorgui, and Iourii Manovskii, 2009. “Occupational Specificity of Human Capital,” International Economic Review, 50(1), 63-115.
Karlin, Samuel, 1955. “The Structure of Dynamic Programming Models,” Naval Research Logistics Quarterly, 2, 285-294.
Kydland, Finn E., and Edward C. Prescott, 1982. “Time to Build and Aggregate Fluctuations,” Econometrica, 50(6), 1345-1370.
McCallum, Bennett, 1989. “Real Business Cycle Models,” in Robert Barro (ed.), Modern Business Cycle Theory, 16-50, Harvard University Press, Cambridge MA.
Shoven, John B., and John Whalley, 1972. “A general equilibrium calculation of the effects of differential taxation of income from capital in the U.S.,” Journal of Public Economics, 1(3-4), 281-321.
Volume 12, Issue 1, November 2010

Steven Davis on Labor Market Dynamics

Steven J. Davis is the William H. Abbott Professor of International Business and Economics at the University of Chicago Booth School of Business. His interests cover employment outcomes, worker mobility, job loss, business dynamics, economic fluctuations, and national economic performance. He is also the editor of the American Economic Journal: Macroeconomics. Davis’s RePEc/IDEAS entry.
EconomicDynamics: You have studied labor market flows in the US extensively for a long time. In particular, you have identified regularities in job loss and job finding rates in recessions and recoveries. Is the recent recession looking similar to previous ones?
Steven Davis: In terms of labor market flows, the recent recession began much like previous US downturns: sharp spikes in job destruction, layoffs and the flow of job losers into unemployment, plus a slowdown in job creation, a drop in quits and a decline in the job-finding rate for unemployed workers. An important distinguishing feature of the recent downturn — apart from its severity — is the long, deep slide in the job-finding rate and its failure to rebound.
As measured by the unemployment escape rate in the Current Population Survey, the job-finding rate has ranged from 19-23% per month since the middle of 2009. This is about half the rate from 2004 to 2007, and much lower than job-finding rates in previous US recessions. Quit rates also dropped dramatically relative to pre-recession levels. By these metrics, the US labor market has become much less fluid since the onset of the recent recession.
Employer-side measures of labor market flows tell a consistent story. They also suggest that recent developments have accentuated a secular decline in the fluidity of U.S. labor markets. Gross job creation rates in the BLS Business Employment Dynamics data are about 25% smaller in recent quarters than in the early 1990s. A similar downward drift since the early 1990s shows up in the rate at which businesses hire new employees. See my working paper with Jason Faberman and John Haltiwanger on “Labor Market Flows in the Cross Section and over Time.” Some of my other papers show that the rate of job reallocation across employers has been drifting down since the early 1980s, and since the late 1960s in the manufacturing sector.
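The CPS-based job-finding rate quoted above is usually recovered from unemployment stocks rather than observed directly. A minimal sketch of the standard stock-flow accounting (in the spirit of Shimer's approach); the input numbers are hypothetical, chosen only to land near the roughly 20% monthly rate mentioned in the answer:

```python
def job_finding_rate(u_t, u_next, u_short_next):
    """Monthly job-finding probability implied by unemployment stocks.

    u_t          : unemployed this month
    u_next       : unemployed next month
    u_short_next : unemployed next month with duration under one month
                   (new entrants to unemployment, not failed searchers)

    Anyone unemployed next month who is not newly unemployed failed to
    find a job this month, so the escape probability is one minus the
    share of this month's unemployed who carry over.
    """
    return 1.0 - (u_next - u_short_next) / u_t

# Hypothetical stocks (millions): 14.0 unemployed now, 14.2 next month,
# of whom 3.0 are newly unemployed.
print(job_finding_rate(14.0, 14.2, 3.0))  # roughly 0.2 per month
```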
ED: The unemployment rate in the US has doubled during the last recession, while the change has been much smaller in Europe. How can such a difference be explained?
SD: This is a great question, but not one that I have studied carefully. An obvious point is that the United States entered the recession with a lower unemployment rate than most European countries. Still, even in terms of the additive change in the unemployment rate since 2007 or 2008, the United States fares poorly in comparison to many European countries.
The contrast to Germany is particularly striking. According to OECD statistics, the German unemployment rate fell from 8.4% in 2007 to 6.8% in the six months through September 2010. The US unemployment rate rose by 5 percentage points over the same period. Several factors are in play here, but it is worth stressing that Germany and the United States experienced similar output growth over this period, with a somewhat deeper contraction in Germany followed by a somewhat stronger recovery in 2010. So the main source of the contrasting unemployment experiences does not rest on differences in output growth.
Public subsidies for short-time working arrangements muted German job losses in response to the recession. To my mind, these subsidy programs raise several important questions: First, how large a role did they play in stemming the rise of unemployment in Germany (and several other countries)? Second, under what circumstances are subsidies for short-time arrangements preferable to a US-style unemployment insurance program? Third, to what extent do short-time arrangements retard the productivity-enhancing reallocation of jobs, workers and capital? Fourth, will short-time subsidies be unwound in a timely manner, or will they evolve into a semi-permanent subsidy to employment and production in weak and declining sectors?
The answer to the last question may turn on institutional and political factors that differ across countries.
While the role of short-time arrangements in US-European unemployment comparisons warrants careful study, there remains the big question of why job creation and hiring have failed to rebound more strongly in the United States. I have not seen a convincing analysis of this question. I suspect that many factors are working in the same direction: a diminished capacity and appetite for risk bearing as continuing legacies of the financial crisis; high levels of uncertainty about future taxes, business regulations, and the implementation of recent health care legislation; an expensive fiscal stimulus program that was not well designed to promote employment; and a monetary authority that has exhausted its capacity to push down short-term interest rates.
ED: Over the last decades, the volatility of GDP and the probability of job loss both decreased. They both increased significantly in this recession. Is there a link?
SD: Yes, I think there is a strong and rather straightforward link. To see why, consider a simple model in which individual employers are hit by common and idiosyncratic shocks to employment growth rates. The common shocks include technology shifts, tax code changes, monetary policy innovations, commodity supply disruptions, and so on. For simplicity, suppose the shocks are mutually uncorrelated. Now make two key assumptions: First, employers differ in their loadings on the common shocks, i.e., in their response coefficients. Second, the average cross-product of the employer-specific loadings is positive for each common shock. The first assumption has abundant empirical support. The second captures the well-known fact that aggregate fluctuations exhibit much positive co-movement across sectors, regions and firms.
Now calculate the time-series variance of the aggregate employment growth rate and the cross-sectional mean of the employer-specific growth rate variances. These and related measures appear in empirical studies of aggregate and firm-level volatility. Given my two key assumptions, it is easy to show that the aggregate and firm-level volatility measures move in the same direction in response to a change in any one of the shock variances, provided that the employment shares and loadings are reasonably stable over time. Thus, for example, fewer commodity supply disruptions (“good luck”) or smaller monetary policy innovations (“better policy”) lead to declines in both aggregate and firm-level volatility.
My paper with several collaborators in the 2006 NBER Macroeconomics Annual sets forth this argument. We also show that the firm-level volatility of employment growth rates trended down from the early 1980s to 2001, in line with the declines in aggregate volatility stressed by the Great Moderation literature. The trend declines in firm volatility hold in every major industry sector of the US economy.
My short paper on “The Decline of Job Loss and Why It Matters” considers several measures of job loss and shows that all of them trended downward during the Great Moderation period.
Getting back to the recent recession, high job-loss rates are one manifestation of high aggregate and firm-level volatility. When big negative shocks hit the economy, there is an increase in the volatility and cross-sectional dispersion of firm-level outcomes, and in job destruction and layoff rates. Big positive shocks can also raise volatility and dispersion, but they seem to occur less frequently or abruptly than big negative shocks. More important, it is much easier for workers to move between jobs with no intervening unemployment spell, or a relatively short one, in a booming economy.
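The variance accounting behind this argument can be written down in a few lines. A minimal sketch under the two assumptions in the answer — heterogeneous, on-average-positive loadings on a single common shock, plus uncorrelated idiosyncratic shocks — with all parameter values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
beta = rng.uniform(0.5, 1.5, N)   # heterogeneous loadings, positive on average
sig_eps = 1.0                     # idiosyncratic shock standard deviation

def volatilities(sig_z):
    """Aggregate variance and mean firm-level variance for common-shock s.d. sig_z.

    Firm growth: g_i = beta_i * z + eps_i, with z and the eps_i mutually
    uncorrelated; the aggregate growth rate is the (equal-weighted) mean of g_i.
    """
    firm_var = beta**2 * sig_z**2 + sig_eps**2            # each firm's growth variance
    agg_var = beta.mean()**2 * sig_z**2 + sig_eps**2 / N  # variance of mean growth
    return agg_var, firm_var.mean()

agg_hi, firm_hi = volatilities(sig_z=1.0)
agg_lo, firm_lo = volatilities(sig_z=0.5)  # "good luck"/"better policy": smaller common shocks
# A smaller common-shock variance lowers BOTH volatility measures:
assert agg_lo < agg_hi and firm_lo < firm_hi
```

The design choice is to compute the two moments analytically rather than by simulation: with uncorrelated shocks, both measures are explicit functions of the shock variances, which makes the comovement result transparent.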
ED: Labor market flow research has focused on the study of worker transitions. Only recently have flows on the establishment side been analyzed. Can we look at vacancies in a symmetric way to unemployment?
SD: Actually, research on employer-side flows goes back a long way. Sumner Slichter’s 1921 book on The Turnover of Factory Labor and Wladimir Woytinsky’s 1942 report on Three Aspects of Labor Market Dynamics are major early contributions. They remain well worth reading today. Katharine Abraham and others did important early work on job vacancies. And there is my work with Haltiwanger and Scott Schuh on job flows and much related work by others. There is also a great deal of interesting research on labor market flows in other countries, some of which exploits data on job vacancies.
Still, it is true that employer-side measures of vacancies and worker flows have received comparatively little attention. That has begun to change, partly because of new data on job vacancies and establishment-side measures of hires, layoffs and quits in the BLS Job Openings and Labor Turnover Survey (JOLTS). The JOLTS has been in production mode since December 2000, and it has been an eventful decade.
Faberman, Haltiwanger and I have begun to explore the relationship between job vacancies and new hires using JOLTS micro data. Our paper on “The Establishment-Level Behavior of Vacancies and Hiring” uses a simple model of daily hiring dynamics to identify the job-filling rate for vacant positions. The job-filling rate is the employer counterpart to the much-studied job-finding rate for unemployed workers. Textbook search and matching theories and standard matching functions have strong implications for the behavior of the job-filling rate. These implications have not received much attention in previous research.
ED: This year’s Nobel Prize in Economics was attributed to pioneers of search theory, in particular in how it applies to the labor market. How has this theory helped us in understanding labor market flows?
SD: Theories of search and matching provide an explicit, coherent framework for analyzing labor market flows and their relationship to frictional unemployment. There are many excellent contributions in this area, and I will mention only a few.
In their 1989 article on “The Beveridge Curve,” Olivier Blanchard and Peter Diamond use concepts from search and matching theory to organize their influential study of job flows, worker flows, unemployment and vacancies. Dale Mortensen and Chris Pissarides develop an equilibrium search theory of frictional unemployment in their famous 1994 paper on “Job Creation and Job Destruction in the Theory of Unemployment.” They use the theory to analyze the effects of aggregate and idiosyncratic shocks on job flows, vacancies and unemployment. In his 2005 AER paper, Rob Shimer shows that a closely related theory cannot readily account for the observed magnitudes of fluctuations in unemployment, vacancies and job-finding rates when wages are determined by a generalized Nash bargain. Shimer, Bob Hall and others have followed up on this observation in an active line of research on how wage-setting behavior affects hiring incentives and unemployment dynamics. In contrast to the focus on job creation and hiring incentives in the recent work by Hall and Shimer, Lars Ljungqvist and Tom Sargent put the spotlight on the incentives and skill evolution process of unemployed workers. See, for example, their 1998 JPE paper on “The European Unemployment Dilemma.” Michael Pries and Richard Rogerson use search theory to analyze the role of firing costs and wage compression in persistent US-European differences in unemployment durations, hiring incentives and labor market flows. See their 2005 JPE paper.
ED: And can our recent understanding of the labor market dynamics data fashion theory?
SD: Yes, definitely. Let me give you some examples drawn from my recent research. In our 2010 AEJ-Macro article, my coauthors and I show that secular declines in business volatility (measured various ways) account for much of the large decline in US unemployment flows and unemployment rates in the period from the early 1980s to the mid-2000s. A natural way to explain this finding with search models is to allow for a decline over time in the variance of idiosyncratic match-specific shocks. The evidence in our AEJ-Macro paper should encourage researchers to model the idiosyncratic shock variance as a time-varying object, especially in quantitative studies of labor market flows and frictional unemployment.
My second example draws on recent work with Faberman and Haltiwanger. As I mentioned above, we use JOLTS data to recover the job-filling rate for vacant positions. We proceed to show that the job-filling rate rises very strongly with the gross hires rate in the cross section. As we discuss, the textbook equilibrium search theory of unemployment cannot replicate this finding. Surprisingly, this statement holds even when one extends the theory to incorporate variable recruiting intensity by employers with vacancies.
One way to reconcile the theory with the behavior of job-filling rates is to allow for variable recruiting intensity AND to drop the standard free-entry condition for job creation. The standard entry condition ensures that vacancies have zero asset value in equilibrium. Most search theories adopt this assumption because it simplifies the analysis of equilibrium. Our evidence indicates that the simplification comes at a fairly high cost. Another way to reconcile theory and evidence is to drop the random matching assumption in favor of directed search with wage posting.
Directed search models can readily produce a positive relationship between job-filling rates and gross hires rates, as shown by Leo Kaas and Philipp Kircher in a recent working paper titled “Efficient Firm Dynamics in a Frictional Labor Market.”
My third example also draws on work with Faberman and Haltiwanger. We consider a generalized matching function with three arguments: unemployed workers, vacant job positions, and recruiting intensity per vacancy. When interpreted through the lens of this function, our evidence on job-filling rates implies that the standard matching function underpredicts hires in a weak labor market and overpredicts hires in a strong labor market. That is exactly the pattern we see in the ten-year period covered by JOLTS data. We proceed to use our micro-based evidence to put structure on the generalized matching function and to develop an index for recruiting intensity per vacancy. We think these objects will be useful in helping to guide future theoretical developments.
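The role of recruiting intensity in the generalized matching function can be sketched with a Cobb-Douglas form. This is a hypothetical parameterization in the spirit of the setup described above, not the Davis-Faberman-Haltiwanger estimates: the efficiency and elasticity numbers below are invented for illustration.

```python
def hires(u, v, q=1.0, m=0.3, alpha=0.5):
    """Generalized Cobb-Douglas matching: recruiting intensity q per vacancy
    scales effective vacancies.  m and alpha are illustrative values."""
    return m * u**alpha * (q * v)**(1 - alpha)

def job_filling_rate(u, v, q=1.0):
    """Hires per vacancy: the employer counterpart of the job-finding rate."""
    return hires(u, v, q) / v

# With q fixed at 1 (the textbook case), the fill rate depends only on
# market tightness v/u, so all employers in the same market fill vacancies
# at the same rate.  Letting q rise with desired hiring makes fast-hiring
# establishments fill vacancies faster, as in the JOLTS cross section.
print(job_filling_rate(10.0, 2.0, q=1.0))
print(job_filling_rate(10.0, 2.0, q=2.0))  # more recruiting effort, higher fill rate
```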


Abraham, Katharine, 1987. “Help-Wanted Advertising, Job Vacancies, and Unemployment,” Brookings Papers on Economic Activity, The Brookings Institution, vol. 18(1987-1), pages 207-248.
Blanchard, Olivier and Peter Diamond, 1989. “The Beveridge Curve,” Brookings Papers on Economic Activity, The Brookings Institution, vol. 20(1989-1), pages 1-76.
Davis, Steven, 2008. “The Decline of Job Loss and Why It Matters,” American Economic Review, American Economic Association, vol. 98(2), pages 263-67.
Davis, Steven, Jason Faberman and John Haltiwanger, 2010. “Labor Market Flows in the Cross Section and over Time.” Manuscript, University of Chicago.
Davis, Steven, Jason Faberman and John Haltiwanger, 2009. “The Establishment-Level Behavior of Vacancies and Hiring,” NBER Working Paper 16265, National Bureau of Economic Research.
Davis, Steven, Jason Faberman, John Haltiwanger, Ron Jarmin and Javier Miranda, 2010. “Business Volatility, Job Destruction, and Unemployment,” American Economic Journal: Macroeconomics, American Economic Association, vol. 2(2), pages 259-87.
Davis, Steven, John Haltiwanger and Ron Jarmin, 2007. “Volatility and Dispersion in Business Growth Rates: Publicly Traded versus Privately Held Firms,” in: NBER Macroeconomics Annual 2006, Volume 21, pages 107-180, National Bureau of Economic Research, Inc.
Davis, Steven, John Haltiwanger and Scott Schuh, 1998. Job Creation and Destruction, MIT Press, Cambridge MA.
Hall, Robert, 2005. “Employment Fluctuations with Equilibrium Wage Stickiness,” American Economic Review, American Economic Association, vol. 95(1), pages 50-65.
Kaas, Leo and Philipp Kircher, 2010. “Efficient Firm Dynamics in a Frictional Labor Market.” Manuscript, University of Konstanz.
Ljungqvist, Lars and Thomas Sargent, 1998. “The European Unemployment Dilemma,” Journal of Political Economy, University of Chicago Press, vol. 106(3), pages 514-550.
Mortensen, Dale and Christopher Pissarides, 1994. “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, Blackwell Publishing, vol. 61(3), pages 397-415.
Pries, Michael and Richard Rogerson, 2005. “Hiring Policies, Labor Market Institutions, and Labor Market Flows,” Journal of Political Economy, University of Chicago Press, vol. 113(4), pages 811-839.
Slichter, Sumner, 1921. The Turnover of Factory Labor. New York: Schulz.
Shimer, Robert, 2005. “The Cyclical Behavior of Equilibrium Unemployment and Vacancies,” American Economic Review, American Economic Association, vol. 95(1), pages 25-49.
Woytinsky, Wladimir, 1942. Three Aspects of Labor Market Dynamics. Committee on Social Security, Social Science Research Council, Washington DC.

Volume 11, Issue 2, April 2010

Fabio Canova on the Estimation of Business Cycle Models

Fabio Canova is Professor of Economics at Universitat Pompeu Fabra and Research Professor at ICREA. His research encompasses applied and quantitative macroeconomics, as well as econometrics. Canova’s RePEc/IDEAS entry.
EconomicDynamics: What is wrong with using filtered data to estimate a business cycle model?
Fabio Canova: The filters that are typically used in the literature are statistical in nature and do not take into account the structure of the model one wants to estimate. In particular, when one employs statistical filters one implicitly assumes that cyclical and non-cyclical fluctuations generated by the model are located at different frequencies of the spectrum — this is what has prompted researchers to identify cycles with 8-32 quarter periodicities with business cycles. But a separation of this type almost never exists in dynamic stochastic general equilibrium (DSGE) models. For example, a business cycle model driven by persistent but stationary shocks will have most of its variability located in the low frequencies of the spectrum, not at business cycle frequencies, and all filters currently in use in the literature (HP, growth filter, etc.) wipe out low frequency variability — they throw the baby out with the bathwater. Similarly, if the shocks driving the non-cyclical components have variabilities which are large relative to the shocks driving the cyclical component, the non-cyclical component may display important variability at cyclical frequencies — the trend is the cycle, as in Aguiar and Gopinath (2007).
Thus, at least two types of misspecification are present when models are estimated on filtered data: important low frequency variability is disregarded, and the variability at cyclical frequencies is typically over-estimated. This misspecification may have serious consequences for estimation and inference. In particular, true income and substitution effects will be mismeasured, introducing distortions in the estimates of many important parameters, such as the risk aversion coefficient, the elasticity of labor supply, and the persistence of shocks.
The size of these distortions clearly depends on the underlying features of the model, in particular on how persistent the cyclical component of the model is and how strong the relative signal of the shocks driving the cyclical and the non-cyclical components is. One can easily build examples where policy analysis conducted with parameters estimated on filtered data can be arbitrarily bad.
This point is far from new. Tom Sargent, over 30 years ago, suggested that rational expectations models should not be estimated with seasonally adjusted data, because the seasonal frequencies (which are blanked out by seasonal adjustment filters) may contain a lot of non-seasonal information and, conversely, because the seasonal component of a model may have important cyclical implications (think about Christmas gifts: their production is likely to be spread over the whole year rather than lumped just before Christmas). For the same reason we should not estimate models using data filtered with arbitrary statistical devices that do not take into account the underlying cross-frequency restrictions the model imposes.
As an alternative, rather than building business cycle models (stationary models which are log-linearized around the steady state) and estimating them on filtered data, several researchers have constructed models which, in principle, can account for both the cyclical and non-cyclical portions of the data. For example, it is now popular in the literature to allow TFP or investment-specific shocks to have a unit root while all other shocks are assumed to have a stationary autoregressive structure; solve the model around the balanced growth path implied by the non-stationary shocks; filter the data using the balanced growth path implied by the model; and then estimate the structural parameters of the transformed model using the transformed data.
While this procedure imposes some coherence on the approach — a model-consistent decomposition into cyclical and non-cyclical components is used — and avoids arbitrariness in the selection of the filter, it is not the solution to the question of how to estimate business cycle models using data which, possibly, contain much more than cyclical fluctuations. The reason is that the balanced growth path assumption is broadly inconsistent with the growth experience of both developed and developing countries. In other words, if we take data on consumption, investment and output and filter it using a balanced growth assumption, we will find that some of the filtered series still display upward or downward trends and/or important low frequency variations. Since in the transformed model these patterns are, by construction, absent, the reverse of the problem mentioned above occurs: the low frequency variability produced by the model is overestimated, and the variability at cyclical frequencies is underestimated. Once again income and substitution effects will be mismeasured, and policy analyses conducted with estimated parameters may be arbitrarily poor.
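Canova's point that a persistent but purely cyclical model has most of its variability at low frequencies, which statistical filters then remove, is easy to verify numerically. A minimal sketch: simulate an AR(1) with a root near one and apply an HP filter, implemented here directly from its penalized least-squares definition (the closed form tau = (I + lambda*D'D)^{-1} y, where D is the second-difference matrix).

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend via the closed form (I + lam*D'D)^{-1} y."""
    T = len(y)
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]   # second-difference rows
    return np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)

rng = np.random.default_rng(1)
T, rho = 400, 0.97
y = np.zeros(T)
for t in range(1, T):                      # persistent but stationary AR(1):
    y[t] = rho * y[t - 1] + rng.normal()   # most of its variance sits at low frequencies
cycle = y - hp_trend(y)

# The filter attributes most of the AR(1)'s variability to the "trend",
# even though the generating model is purely cyclical.
print(cycle.var() / y.var())   # well below 1
```

The variance ratio printed at the end is the share of the model's variability the filter leaves in the "cycle"; the rest — the low-frequency bulk — is discarded, which is exactly the baby-with-the-bathwater problem described above.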
ED: Does this mean we should not draw stylized facts from filtered data?
FC: Stylized facts are summary statistics, that is, a simple way to collapse multidimensional information into some economically useful measure that is easy to report. I see no problem with this dimensionality reduction. The problems I see with collecting stylized facts using filtered data are of two types.
First, because filters act on the data differently, stylized facts may be a function of the filter used. I showed this many years ago (Canova, 1998), and I want to reemphasize that it is not simply the quantitative aspects that may be affected but also the qualitative ones. For example, the relative ordering of the variabilities of different variables may change if an HP or a growth filter is used. Since the notion of statistical filtering is not necessarily connected with the notion of model-based cyclical fluctuations, there is no special reason to prefer one set of stylized facts over another, and what one reports is entirely history dependent (we use the HP filter because others have used it before us and we do not want to fight with referees about this). To put the concept another way: none of the statistical filters one typically uses enjoys any optimality properties given the types of models we use in macroeconomics.
Second, it is not hard to build examples where two time series with substantially different time paths (say, one produced by a model and one we find in the data) look alike once they are filtered. Thus, by filtering data, we throw away the possibility of recognizing in which dimensions our models are imperfect descriptions of the data. Harding and Pagan (2002, 2006) have forcefully argued that interesting stylized facts can be computed without any need for filtering. These include the location and clustering of turning points, the length and amplitude of business cycle phases, concordance measures, etc.
Their suggestion to look at statistics that can be constructed from raw data has been ignored by the profession at large (an exception here is some work by Sergio Rebelo), because existing business cycle models have a hard time replicating the asymmetries over business cycle phases that their approach uncovers.
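The Harding-Pagan program of dating turning points directly in raw data can be sketched with a simple local-extremum rule. This is a stripped-down stand-in for the actual Bry-Boschan-style censoring rules they build on, which additionally impose minimum phase and cycle lengths; the series below is made up for illustration.

```python
def turning_points(y, window=2):
    """Peaks and troughs of a raw series: y[t] is a peak (trough) if it is
    the unique maximum (minimum) over `window` observations on each side."""
    peaks, troughs = [], []
    for t in range(window, len(y) - window):
        seg = y[t - window:t + window + 1]
        if y[t] == max(seg) and seg.count(y[t]) == 1:
            peaks.append(t)
        if y[t] == min(seg) and seg.count(y[t]) == 1:
            troughs.append(t)
    return peaks, troughs

# A toy series with one expansion-recession-expansion pattern:
y = [1, 2, 3, 2, 1, 0, 1, 2, 4, 3]
print(turning_points(y))
```

From dated peaks and troughs one can then read off the phase-based statistics Canova mentions: phase durations (trough to peak), amplitudes (change in y over a phase), and concordance across series, all without filtering.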
ED: How should we think about estimating a business cycle model?
FC: Estimation of structural models is a very difficult enterprise because of conceptual, econometric and practical difficulties. Current business cycle models, even in the large-scale versions now used in many policy institutions, are not yet suited for full structural estimation. From a classical perspective, to estimate the parameters of the model we need it to be the data generating process, up to a set of serially uncorrelated measurement errors. Do we really believe that our models are the real world? I do not think so and, I guess, many would think as I do. Even if we take the milder point of view that models are approximations to the real world, the fact that we cannot precisely define the properties of the approximation error makes classical estimation approaches unsuited for the purpose. Bayesian methods are now popular, but my impression is that they are popular because they deliver reasonable conclusions, not because the profession truly appreciates the advantages of explicitly using prior information. In other words, what is often called Bayesian estimation is nothing more than interval calibration — we get reasonable conclusions from structural estimation because the prior dominates.
I think one of the most important insights that the calibration literature has brought to the macroeconomic profession is the careful use of (external) prior information. From the estimation point of view, this is important because one can think of calibration as a special case of Bayesian estimation where the likelihood has little information about the parameters and the prior has a more or less dogmatic format. In practice, because the model is often a poor approximation to the DGP of the data, the likelihood of a DSGE typically has large flat areas (and in these areas any meaningful prior will dominate), multiple sharp peaks, cliffs and a rugged appearance (and a carefully centered prior may knock out many of these pathologies).
Thus, while calibrators spend pages discussing their parameter selection, macroeconomists using Bayesian estimation often spend no more than a few lines discussing their priors and how they have chosen them. It is in this framework of analysis that the questions of how to make models and data consistent, whether filtered data should be used, and whether models should be written to account for all fluctuations or only the cyclical ones should be addressed. My take on this is that, apart from a few notable exceptions, the theory is largely silent on what drives non-cyclical fluctuations, whether there are interesting mechanisms transforming temporary shocks into medium-term fluctuations, and whether non-cyclical fluctuations are distinct from cyclical fluctuations and in what way. Therefore, one should be realistic and start from a business cycle model, since at least in principle we have some idea of what features a good model should display. Then, rather than filtering the data or tacking on to the model an arbitrary unit root process, one should specify a flexible format for the non-cyclical component, jointly estimate the structural parameters and the non-structural parameters of the non-cyclical component using the raw data, and let the data decide what is cyclical and what is not through the lenses of the model. Absent any additional information, estimation should be conducted using uninformative priors and should be preceded by a careful analysis of the identification properties of the model. If external information of some sort is available, it should be carefully documented and specified, and the trade-off between sample and non-sample information clearly spelled out.
The fact that a cyclical model, once driven by persistent shocks, implies simulated time series with power at all frequencies of the spectrum can be formalized into a prior on the coefficients of the decision rules of the model, which could in turn be transformed into restrictions on the structural parameters using a change-of-variables approach. Del Negro and Schorfheide (2008) have shown how to do this formally, but any informal approach which takes this into consideration will go a long way in the right direction.
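The point above, that calibration amounts to Bayesian estimation with a near-dogmatic prior and that a flat likelihood lets the prior dominate, can be illustrated numerically. The following is a minimal one-parameter sketch; the grid, the near-flat likelihood, and the Normal(0.6, 0.02) prior are all invented for illustration, not taken from any actual DSGE:

```python
import numpy as np

# When the likelihood is nearly flat over a region, the posterior is driven
# almost entirely by the prior, so "Bayesian estimation" effectively
# reproduces the calibrated value.
theta = np.linspace(0.1, 0.9, 801)          # grid for a structural parameter

# A likelihood that is almost flat (weakly informative about theta)
log_like = -0.05 * (theta - 0.3) ** 2

# A tightly centered prior, as in an "interval calibration": Normal(0.6, 0.02)
log_prior = -0.5 * ((theta - 0.6) / 0.02) ** 2

log_post = log_like + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("Likelihood peak :", theta[np.argmax(log_like)])   # ~0.3
print("Posterior mode  :", theta[np.argmax(post)])       # ~0.6, the prior mean
```

The posterior mode sits essentially at the prior mean: the data barely move the answer away from the "calibrated" value.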
ED: There is a spirited debate about the use of VARs to discriminate between business cycle model classes and the shocks driving business cycles. What is your take on this debate?
FC: Since Chris Sims introduced structural VARs (SVARs) almost 25 years ago, I have seen a paper every 4-5 years showing that SVARs have a hard time recovering underlying structural models because of identification, aggregation, omitted variables, or non-fundamentalness problems. Still, SVARs are as popular as ever in the macroeconomic profession. The reason is, I think, twofold. First, the points made have often been over-emphasized, and generality has been given to pathologies that careful researchers can easily spot. Second, SVAR researchers are much more careful with what they are doing: many important criticisms have been taken into consideration and, in general, users of SVARs are much more aware of the limitations of their analysis. Overall, I do think SVARs can be useful tools to discriminate between business cycle models, and recent work by Dedola and Neri (2007) and Pappa (2009) shows how they can be used to discriminate, for example, between RBC and New Keynesian transmission. But for SVARs to play this role, they need to be linked to the classes of dynamic stochastic general equilibrium models currently used in the profession much better than has been done in the past. This means that identification and testing restrictions should be explicitly derived from the models we use to organize our thoughts about a problem and should be robust, in the sense that they hold regardless of the values taken by the structural parameters and of the specification of the details of the model. What I mean is the following. Suppose you start from a class of sticky price New Keynesian models and suppose you focus on the response of output to a certain shock. I call such a response robust if, for example, the sign of the impact response or the shape of the dynamic response is independent of the value of the Calvo parameter used and of whatever other feature I stick in the New Keynesian model in addition to sticky prices (e.g.
habit in consumption, investment adjustment costs, capacity utilization, etc.). Robust restrictions generally take the form of qualitative rather than quantitative constraints (the latter depend on the parameterization of the model and on the details of the specification), and often they involve only contemporaneous responses to shocks, since business cycle theories are often silent about the time path of the dynamic adjustments. Once robust restrictions are found, some of them can be used for identification purposes and some to evaluate the general quality of the class of models. Restrictions which are robust to parameter variations but specific to certain members of the general class (e.g. dependent on which particular feature is introduced in the model) can be used to evaluate the importance of a certain feature. Note that if such an approach is used, the representation problems discussed in the recent literature (e.g. Ravenna (2007) or Chari, Kehoe and McGrattan (2008)) have no bite, since sign restrictions on the impact response are robust to the misspecification of the decision rules of a structural model discussed in these papers. Furthermore, such an approach gives a measure of economic rather than statistical discrepancy, and therefore can suggest to theorists in which dimensions the class of models needs to be respecified to provide a better approximation to the data. Finally, the procedure allows users to test business cycle models without directly estimating structural parameters, and I believe this is definitely a plus, given the widespread identification problems in the literature and the non-standard form the likelihood function takes in the presence of misspecification. I have used such an approach, for example, to evaluate the general fit of the now-standard class of models introduced by Christiano, Eichenbaum and Evans (2005) and Smets and Wouters (2007), to measure the relative importance of sticky prices vs.
sticky wages – something which is practically impossible to do with structural estimation because of identification problems – and to evaluate whether the addition of rule-of-thumb consumers helps to reconcile the predictions of the class of models with the data.
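The qualitative sign restrictions described above can be imposed in an SVAR by rejection sampling over orthogonal rotations of the reduced-form shocks, in the spirit of the sign-restriction literature. A minimal sketch with an assumed 2x2 covariance matrix and an assumed restriction (a "demand" shock raises both output and prices on impact); the numbers and labels are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-form covariance of (output, price) innovations
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
P = np.linalg.cholesky(Sigma)   # one admissible impact matrix

accepted = []
for _ in range(5000):
    # Draw a random orthogonal matrix Q via the QR of a Gaussian matrix
    q, r = np.linalg.qr(rng.standard_normal((2, 2)))
    q = q @ np.diag(np.sign(np.diag(r)))    # sign-normalize the draw
    impact = P @ q                          # candidate impact responses
    # Keep the rotation if column 0 (the "demand shock") moves output and
    # prices up on impact: a qualitative, parameter-free restriction
    if impact[0, 0] > 0 and impact[1, 0] > 0:
        accepted.append(impact)

print(f"kept {len(accepted)} of 5000 candidate rotations")
```

Every accepted impact matrix reproduces the same reduced-form covariance (impact @ impact.T == Sigma), so the restriction discriminates among observationally equivalent rotations purely on the sign of the responses, never on point values of structural parameters.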


Mark Aguiar, and Gita Gopinath. 2007. Emerging Market Business Cycles: The Cycle Is the Trend, Journal of Political Economy, vol. 115, pages 69-102.
Fabio Canova. 1998. Detrending and business cycle facts: A user’s guide, Journal of Monetary Economics, vol. 41(3), pages 533-40, May.
Fabio Canova. 2009. Bridging cyclical DSGE models and the raw data, Mimeo, Universitat Pompeu Fabra, August.
Fabio Canova, and Matthias Paustian. 2007. Measurement with Some Theory: a New Approach to Evaluate Business Cycle Models, Economics Working Paper 1203, Department of Economics and Business, Universitat Pompeu Fabra, revised Feb 2010.
V. V. Chari, Patrick Kehoe, and Ellen McGrattan. 2008. Are structural VARs with long-run restrictions useful in developing business cycle theory?, Journal of Monetary Economics, vol. 55(8), pages 1337-52, November.
Lawrence Christiano, Martin Eichenbaum, and Charles Evans. 2005. Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy, Journal of Political Economy, vol. 113(1), pages 1-45, February.
Luca Dedola, and Stefano Neri. 2007. What does a technology shock do? A VAR analysis with model-based sign restrictions, Journal of Monetary Economics, vol. 54(2), pages 512-549, March.
Marco Del Negro, and Frank Schorfheide. 2008. Forming priors for DSGE models (and how it affects the assessment of nominal rigidities), Journal of Monetary Economics, vol. 55(7), pages 1191-1208, October.
Don Harding, and Adrian Pagan. 2002. Dissecting the cycle: a methodological investigation, Journal of Monetary Economics, vol. 49(2), pages 365-381, March.
Don Harding, and Adrian Pagan. 2006. Synchronization of cycles, Journal of Econometrics, vol. 132(1), pages 59-79, May.
Evi Pappa. 2009. The Effects Of Fiscal Shocks On Employment And The Real Wage, International Economic Review, vol. 50(1), pages 217-244.
Federico Ravenna. 2007. Vector autoregressions and reduced form representations of DSGE models, Journal of Monetary Economics, vol. 54(7), pages 2048-2064, October.
Rusdu Saracoglu, and Thomas Sargent. 1978. Seasonality and portfolio balance under rational expectations, Journal of Monetary Economics, vol. 4, pages 435-458.
Christopher Sims. 1980. Macroeconomics and Reality, Econometrica, vol. 48, pages 1-48.
Christopher Sims. 1986. Are forecasting models usable for policy analysis?, Quarterly Review, Federal Reserve Bank of Minneapolis, issue Win, pages 2-16.
Frank Smets, and Rafael Wouters. 2007. Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach, American Economic Review, vol. 97(3), pages 586-606, June.

Volume 11, Issue 1, November 2009

Pete Klenow on Price Rigidity

Pete Klenow is Professor of Economics at Stanford University. His research encompasses the measurement of price rigidity and the causes of growth. Klenow’s RePEc/IDEAS entry.
EconomicDynamics: There has been in recent years a rapid expansion of research on price changes at the microeconomic level. What are the main lessons to be learned from this evidence?
Pete Klenow: As with other micro data, heterogeneity jumps out at you in the micro price data. Some prices change constantly (e.g., airfares). Other prices are stuck for a year or more (e.g., movie tickets). And when prices change, they usually do so by big amounts — an order of magnitude more than needed just to keep up with general inflation. So sectors differ in their pricing, and within sectors price changes are idiosyncratic. These are the two most consistent findings from micro price studies in the U.S., Euro Area, and beyond. It's not clear which way this heterogeneity cuts for macro price flexibility. On the one hand, the frequent changers could be "waiting" for the stickier prices to fully incorporate a nominal macro shock. On the other hand, some of the stickiest categories don't really have a business cycle (e.g., medical care). The more flexible categories, such as new cars, tend to be more cyclical.
ED: Macroeconomic research seems to indicate that price rigidities are stronger than the microeconomic evidence suggests. How can this be reconciled?
PK: The key is to have micro price changes that do not fully incorporate macro shocks. This can happen if wages or intermediate prices are sticky, or if there is coordination failure among competing sellers who do not synchronize their price changes. But the evidence for such "complementarities" is mixed. Another route would be some form of sticky information, as advanced by Mankiw and Reis, Sims, or Woodford. Firms may only periodically revise their pricing plans to incorporate macro information, as in Mankiw and Reis. Or they may be too busy paying attention to first order micro and sector shocks to pay much attention to second order macro shocks — that's Sims' rational inattention story.
ED: Several OECD countries have recently experienced episodes of deflation. Does this give us new insights into the rigidity of prices?
PK: In the U.S. CPI, at least, the big individual price declines have been concentrated in food and energy. Outside of these categories the frequency of price increases has actually risen a few percentage points since early 2008 (from about 10% a month to 12% a month). News accounts of steep discounts notwithstanding, sales have not become more common in these data — though it's possible people are buying larger quantities at sale vs. regular prices. These facts would seem to be a challenge for the conventional view of how pricing responds to adverse demand shocks.
ED: Speaking of sales, Eichenbaum, Jaimovich and Rebelo have challenged the measurement of price rigidity by focusing on reference prices excluding sales prices. What is your take on this?
PK: The Eichenbaum, Jaimovich and Rebelo definition of a reference price (the most common price in a quarter) excludes more than just sale prices. It also excludes short-lived regular prices. This difference is quantitatively important: whereas regular (i.e., non-sale) prices in their grocery store chain move every few months, reference prices change about once a year. But excluding sales does a lot too — lowering the frequency of price changes from several weeks to several months. As recent papers by Nakamura and Steinsson and by Kehoe and Midrigan have emphasized, temporary price discounts are prevalent in food and apparel in the U.S. Sales are much less common in the Euro Area. In their "Billion Prices Project", Cavallo and Rigobon look at online consumer prices from many countries and find sales are of intermediate importance in a number of Latin American countries. I think the jury is still out on whether there is any macro content to sale prices and other short-lived prices. At one extreme, one can imagine the frequency and magnitude of sales responding to unexpected inventory in cyclically important sectors such as autos and apparel. It's possible most quantities are sold at discounts from list prices in these categories. At the other extreme, sale prices may reflect idiosyncratic price discrimination that is completely unresponsive to the macro environment. The fact that many sale price episodes end with a return to the previous regular price would seem to limit their macro content — at least for macro shocks that are persistent in levels. This point is stressed by Kehoe and Midrigan, for example. But in the U.S. CPI over 40% of sale price episodes give way to new regular prices. Think clearance prices in apparel and electronics that pave the way for new products with new prices. And most regular price changes are not reversed. For the typical good two "novel" prices appear each year.
This is what Ben Malin and I report in a chapter for the new volume of the Handbook of Monetary Economics. The bottom line is we really don't know yet how important short-lived prices are for macro price flexibility.
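The reference price PK discusses (the most common price observed in a quarter) is straightforward to compute from a posted-price series. A toy sketch with invented weekly prices for a single item, showing how temporary sales move the posted price without moving the reference price:

```python
from collections import Counter

# Toy weekly price series for one grocery item over two 13-week quarters.
# Temporary sales interrupt the regular price, but the modal ("reference")
# price is stable within each quarter: the Eichenbaum-Jaimovich-Rebelo idea.
q1 = [1.99]*5 + [1.49]*2 + [1.99]*6          # sale in weeks 6-7
q2 = [2.19]*4 + [1.79]*1 + [2.19]*8          # new regular price, one-week sale

def reference_price(prices):
    """Most common price observed in the quarter."""
    return Counter(prices).most_common(1)[0][0]

print("Q1 reference price:", reference_price(q1))   # 1.99
print("Q2 reference price:", reference_price(q2))   # 2.19
print("Weekly price changes in Q1:",
      sum(a != b for a, b in zip(q1, q1[1:])))      # 2 (into and out of sale)
```

Posted prices change twice in Q1 while the reference price never moves within the quarter; counting only reference-price changes therefore yields far lower measured flexibility than counting all posted-price changes.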


Cavallo, Alberto, 2009. “Scraped Data and Sticky Prices: Frequency, Hazards, and Synchronization,” Unpublished paper, Harvard University.
Eichenbaum, Martin, Nir Jaimovich and Sergio Rebelo, 2008. “Reference Prices and Nominal Rigidities,” NBER Working Paper 13829.
Kehoe, Patrick, and Virgiliu Midrigan, 2008. “Temporary Price Changes and the Real Effects of Monetary Policy,” Staff Report 413, Federal Reserve Bank of Minneapolis.
Klenow, Pete, and Ben Malin, 2010. “Microeconomic Evidence on Price-Setting“, forthcoming, Handbook of Monetary Economics, Elsevier.
Mankiw, Gregory, and Ricardo Reis, 2006. "Pervasive Stickiness," American Economic Review, vol. 96(2), pages 164-169, May.
Nakamura, Emi, and Jón Steinsson, 2008. "Five Facts about Prices: A Reevaluation of Menu Cost Models," The Quarterly Journal of Economics, MIT Press, vol. 123(4), pages 1415-1464.
Sims, Christopher A., 2003. “Implications of rational inattention,” Journal of Monetary Economics, Elsevier, vol. 50(3), pages 665-690, April.
Woodford, Michael, 2008. “Information-Constrained State-Dependent Pricing,” NBER Working Paper 14620.

Volume 10, Issue 2, April 2009

Robert Barro on Rare Events

Robert Barro is the Paul M. Warburg Professor of Economics at Harvard University and a senior fellow of the Hoover Institution of Stanford University. He is currently interested in the interplay between religion and political economy and the impact of rare disasters on asset markets. Barro’s RePEc/IDEAS entry.
EconomicDynamics: In your recent work, you have emphasized the impact of rare, but large, events on the economy, in particular interest rates and premia. Does the current crisis correspond to the large event you had in mind, and what does this imply for interest rates in the medium term?
Robert Barro: In our study, we defined a macroeconomic crisis as a decline (over one or more years) of per capita real GDP or consumption by 10% or more. We do not yet know whether the U.S. situation will fall into this range--the probability at present, given the stock-market performance, is around 25%. However, the current crisis is global, and several countries will likely end up in the depression range. Iceland and Russia, I think, are already there. If the probability of disaster goes up--as it did last year after the spring--the real interest rate on safe assets, such as indexed U.S. Treasury bonds, should go down. The expected real return on risky assets, such as stocks, should go up (corresponding to a fall in stock prices). Thus, the equity premium goes up. Effects on nominal interest rates depend also on expectations of inflation. With low expected inflation over the short term, short-term nominal rates, such as on U.S. Treasury Bills, should become very low. Interest rates on medium-term bonds should be higher than the short-term rates.
ED: The Great Moderation of the last two decades was characterized by dampened economic fluctuations and lower equity premia. Following your theory, can we interpret the latter as a consequence of the market putting lower probabilities on large events during this lull in fluctuations?
RB: A decrease in disaster probability would have the effect of lowering the equity premium. And the rise in price-earnings ratios up to 2007 is consistent with this reasoning. However, it’s hard to determine equity premia on a high-frequency basis, such as annually, because stock returns are so volatile. Thus, even 20 years is not a long time for computing the expected rate of return on equity (based on the observed average return).
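The mechanism RB describes, where a positive disaster probability depresses the safe rate and generates a sizable equity premium, can be seen in a two-state Lucas-tree calculation. The numbers below (2.5% normal consumption growth, a 35% disaster drop, risk aversion of 4) are illustrative assumptions in the spirit of the rare-disasters literature, not Barro's actual calibration:

```python
# Minimal two-state asset-pricing sketch: consumption grows 2.5% in normal
# times and drops 35% in a disaster that occurs with probability p.
beta, gamma = 0.97, 4.0            # discount factor, risk aversion (assumed)

def premium(p_disaster):
    probs = [1 - p_disaster, p_disaster]
    growth = [1.025, 0.65]
    sdf = [beta * g ** (-gamma) for g in growth]   # stochastic discount factor
    E = lambda x: sum(pr * xi for pr, xi in zip(probs, x))
    r_free = 1 / E(sdf)                            # gross risk-free return
    # Equity is a claim to next-period consumption: price = E[m*C'], so
    # its expected gross return is E[C'] / E[m*C'] (normalizing C = 1)
    r_equity = E(growth) / E([m * g for m, g in zip(sdf, growth)])
    return r_equity - r_free

print(f"premium, no disasters: {premium(0.0):.4f}")   # ~0: consumption is safe
print(f"premium, p = 1.7%:     {premium(0.017):.4f}") # a few percentage points
```

With no disaster risk the consumption claim is riskless and the premium vanishes; adding a small disaster probability generates a premium of roughly three percentage points, illustrating how rare, large events can do the work that smooth consumption volatility cannot.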
ED: Beyond the themes just discussed, where do you see your research agenda on disaster probabilities bringing you?
RB: As a general matter, I think the rare-disasters perspective is important for many aspects of finance and macroeconomics. As an example, Gabaix (2009) applies this framework to explaining ten puzzles in asset pricing. The approach is also promising for understanding puzzles related to exchange rates and interest rates. An example involves the carry trade--borrowing in a low-interest-rate currency such as the Yen and investing in a high-interest-rate currency. This strategy can appear to be profitable for a long time but then suffer greatly in a disaster situation. Thus, the streak of high returns is not really a puzzle in a setting that allows for the chance of rare disasters. This setting is analogous to the persistently high returns achieved for years by AIG (and perhaps also Harvard University's endowment). At present, I am working with various co-authors on extensions of the previous research. One project estimates the form of the size distribution of disaster events (using a Pareto or power-law distribution, which has been used to explain patterns in city sizes, stock-market price movements, and many other phenomena). This method for pinning down the fat tail of the disaster distribution seems to lead to a better explanation of the equity premium. I am also working on the interplay between stock-market crashes and depressions to see how properties of the stock-market crash (What is the size of the crash? Is it accompanied by a financial crisis? Is it local or regional or global? Is it war related?) affect predictions for macroeconomic outcomes. I am also studying the nature of recoveries from macroeconomic disasters (notably wars and financial crises) and assessing the durations of depressions. Another extension, being pursued by my student Jose Ursua, will assess the Great Influenza Epidemic of 1918-20 as a possible source of the macroeconomic disaster that showed up in many countries, including the U.S. and Canada, with a trough around 1921.
(By the way, President Wilson suffered from the flu in 1919 and this illness may have led to the harsh Versailles Treaty that likely caused WWII, and Max Weber–a key figure in the social science of religion–died of the flu in 1920.)
ED: Gabaix (2009) shows that a large range of puzzles can be explained by rare disasters. There have been (partially) successful attempts to address these puzzles with non-standard preferences such as hyperbolic discounting and loss aversion. You worked with Epstein-Zin preferences. Could the issue rather be framed in terms of preference formation? Would, for example, loss aversion reduce the need for disasters to explain data?
RB: In many contexts, shifting disaster probabilities have effects on asset prices that resemble those from shifting preferences–specifically, from changing degrees of risk aversion. Of course, I am myself inclined toward models in which preferences are stable and other factors move around. But the key matter is how to distinguish these approaches empirically. One difference is that changes in disaster probabilities have implications for the actual future course of disasters, whereas shifting preferences do not have these implications. The key challenge is to make these different predictions operational in a context where the frequency and size distributions of actual disasters are hard to pin down with short samples of data.
ED: In early work, you initiated a rich literature on the Ricardian Equivalence. After much additional theoretical and empirical work, do you think the Ricardian Equivalence holds? And if not, how and why?
RB: I think Ricardian Equivalence provides the right baseline in thinking about fiscal deficits. Then reasonable analyses have to specify precisely in which dimensions the results depart from this equivalence--and, thereby, have implications for interest rates, investment, and so on. Much of the research in this area, such as tax-smoothing ideas, looks more like public finance than macroeconomics per se. Tax smoothing has important empirical predictions about how fiscal deficits should and do behave--for example, the prediction that deficits are large in wars and depressions. Political-economy elements, such as the strategic debt model (originally crafted to explain the large Reagan budget deficits), have also been important.


Barro, Robert J., 1974. “Are Government Bonds Net Wealth?,” Journal of Political Economy, University of Chicago Press, vol. 82(6), pages 1095-1117, Nov.-Dec.
Barro, Robert J., 1979. “On the Determination of the Public Debt,” Journal of Political Economy, University of Chicago Press, vol. 87(5), pages 940-71, October.
Barro, Robert J., 2006. “Rare Disasters and Asset Markets in the Twentieth Century,” Quarterly Journal of Economics, vol. 121 (3), pages 823-866.
Barro, Robert J., 2007. “Rare Disasters, Asset Prices, and Welfare Costs,” NBER Working Paper 13690.
Barro, Robert J., and José F. Ursúa, 2008. “Macroeconomic Crises since 1870,” NBER Working Paper 13940.
Barro, Robert J., and José F. Ursúa, 2009. “Stock-Market Crashes and Depressions,” NBER Working Paper 14760.
Gabaix, Xavier, 2009. Variable Rare Disasters: An Exactly Solved Framework for Ten Puzzles in Macro-Finance, manuscript, New York University.

Volume 10, Issue 1, November 2008

Christopher Pissarides on the Matching Function

Christopher Pissarides holds the Norman Sosnow Chair in Economics at the London School of Economics. He is interested in the macroeconomics of the labor market, structural change and economic growth. Pissarides’s RePEc/IDEAS entry.
EconomicDynamics: The Mortensen-Pissarides matching function has been a standard feature of the analysis of labor markets with unemployment, in particular over the business cycle. With Shimer (2005), its business cycle features have come under fire. Do you see the matching function survive the Shimer (2005) criticism?
Christopher Pissarides: The matching function has been influential in the modeling of labor markets because it is a simple device for capturing the impact of frictions on individuals and market equilibrium. It is the kind of function that economists like to work with: a stable function of a small number of variables, increasing and concave in its arguments and homogeneous of degree one. It gives the rate at which jobs form (or are "created") over time as a function of the search effort of workers and firms, which in the simplest and most common form is represented by unemployment and job vacancies. In a market without frictions this rate is infinite. So for any given search effort from each side of the market, the speed at which jobs are created is a measure of the frictions in the market. If two countries have identical search inputs, the one with a lower speed of job creation – or alternatively, with a matching function further away from the origin in vacancy-unemployment space – has more frictions. Once the matching function is introduced to capture frictions in an otherwise conventional model of the labor market, the modeling needs to change fundamentally. The frictions give monopoly power to both firms and workers, and job creation becomes like an investment decision by the firm. Models with a matching function have found many applications, such as the study of unemployment durations, disincentives from social security, employment protection legislation and many others. Empirically, it has performed well in its simplest forms for many countries. The claim made in Pissarides (1985) was that the rents created by the matching function imply that outside non-cyclical variables, such as the imputed value of leisure, directly influence the wage rate. When there are shocks to the productivity variables that also influence the wage rate, the outside influences stop the wage rate from fully reacting to the shock.
This “wage stickiness” implies more volatility in job creation and employment than in a frictionless equilibrium model and less volatility in wages.Shimer’s critique was about the quantitative implications of this model. He argued that although in principle the claim was correct, at conventional parameter values the volatility of employment increased by very little, far less than in the data that it was supposed to match, and the volatility of wages decreased by even less. However, as shown by Hagedorn and Manovskii (2008) in particular, other parameter values that restrict the firm’s profit flow to a small fraction of the revenue from the job, deliver more employment volatility. And as shown by Hall (2005) and Hall and Milgrom (2008), other wage mechanisms from the conventional Nash wage equation, can also deliver more employment volatility.In these (and in many others) extensions the matching function has been preserved intact. Frictions are still represented by the simple matching functions with constant returns and a very small number of arguments. So one cannot really argue that Shimer’s critique has diminished the usefulness of the matching function. His critique was about other aspects of the model and was targeted at some popular versions of the model that have been used in the recent literature. It also shifted the focus to wage behavior, which intensified research in this area within the framework of the matching model.
ED: Search models have made considerable advances in recent years, in particular by trying to make explicit relationships that were typically represented in reduced form. Will the matching function survive this as well as the production function?
CP: Search models are aggregative models with all matching frictions represented by an aggregate matching function. I take it that this is what you mean by "reduced form", that instead of representing frictions more specifically according to their origin and their type, we lump them all together into an aggregate function. My reaction to this is that our understanding of labor markets would improve if we could represent frictions in more specific forms, but I don't see it happening yet. We can represent some very special types of frictions by more explicit processes. For example, the most famous one, the urn-ball game, assumes that the frictions are due to conflicts when two workers apply for the same job and only one gets it. The transition of workers from unemployment to employment derived from this friction is well behaved and satisfies specific restrictions. Significantly, it can be represented as a special case of the more general "black-box" matching function used in the standard model. But unfortunately the special case that the urn-ball game supports has found no support at all in the empirical literature. Other types of frictions have also been explored in the recent literature. For example, in stock-flow matching, originally due to Melvyn Coles and Eric Smith (1998), the friction is that in the local market there is no match that can be made, as opposed to the implicit assumption of the aggregate function that there are matches that can be made if only the partners knew where to find each other. In stock-flow matching an unmatched agent has to wait for new entry to try and match. The transitions derived from this specification are again well-behaved and not different from the transitions that can be derived from special forms of the aggregate matching function (as shown by Shimer, 2008).
This specification has found more support in the data than the urn-ball one has, but again I wouldn't say that finding support for stock-flow matching makes the reduced-form matching function obsolete. My point more generally is that although some special cases can be worked out from more primitive assumptions about frictions, none of them has contradicted the aggregate matching function. Frictions in labor markets are due to so many different reasons that I do not yet see either a complete or near-complete characterization of frictions in a micro model. The convenience of the aggregate matching function, its support in the data and our experience with more micro models so far convince me that there is a bright future for the aggregate matching function for a long time to come. We shouldn't miss the fact that the success of the aggregate matching function stimulated a lot more research on the role of frictions in labor markets and on its foundations. If one were to succeed in coming up with a good micro-founded alternative, it would be real progress.


Coles, Melvyn G., and Smith, Eric, 1998. “Marketplaces and Matching,” International Economic Review, vol. 39(1), pages 239-54.
Hagedorn, Marcus, and Iourii Manovskii, 2008. “The Cyclical Behavior of Equilibrium Unemployment and Vacancies Revisited,” American Economic Review, vol. 98(4), pages 1692-1706.
Hall, Robert E., 2005. “Employment Fluctuations with Equilibrium Wage Stickiness,” American Economic Review, vol. 95(1), pages 50-65.
Hall, Robert E., and Paul R. Milgrom, 2008. “The Limited Influence of Unemployment on the Wage Bargain,” American Economic Review, vol. 98(4), pages 1653-74.
Pissarides, Christopher A, 1985. "Short-run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages," American Economic Review, vol. 75(4), pages 676-90.
Shimer, Robert, 2005. “The Cyclical Behavior of Equilibrium Unemployment and Vacancies,” American Economic Review, vol. 95(1), pages 25-49.
Shimer, Robert, 2008. "Stock-Flow Matching," working paper.

Volume 9, Issue 2, April 2008

James Heckman and Flávio Cunha on Skill Formation and Returns to Schooling

James Heckman is the Henry Schultz Distinguished Service Professor of Economics at The University of Chicago. His recent research deals with such issues as evaluation of social programs, econometric models of discrete choice and longitudinal data, the economics of the labor market, and alternative models of the distribution of income. Flávio Cunha is Assistant Professor of Economics at the University of Pennsylvania. He is interested in inequality and skill formation. Heckman’s RePEc/IDEAS entry, Cunha’s RePEc/IDEAS entry.
EconomicDynamics: In recent work, you highlight that it is essential to understand skill formation in childhood as a multiperiod problem. Why is it so?
James Heckman, Flavio Cunha: There are different reasons. First, different types of skills are produced at different periods of childhood. Economists, social scientists, and policy makers alike tend to focus on cognitive skill. Although it is necessary for success in life, it is not enough for many aspects of performance in social life. Furthermore, the evidence shows that cognitive skills crystallize early in life, so it is difficult to change them at later stages. Until recently, few economists considered noncognitive skills, although Marxist economists considered them very early on. Of course, psychologists have studied them, in addition to some work by sociologists. These skills play an important role in determining many socioeconomic measures, as shown in Heckman, Stixrud and Urzua (2006). They affect crime participation, teenage pregnancy, education, and many other important outcomes. More importantly, there is ample evidence from neuroscience showing that the prefrontal cortex, which is believed to be the neural focal point for noncognitive skills, matures later. This is consistent with the view that the way we motivate ourselves and the way we control our impulses are malleable into later ages.

Second, early advantages reinforce each other. Skills that are developed at one stage of life serve as an input to produce skills at other stages of life. For example, if we are given a paragraph that describes an algebra problem, we need to know how to read so that we can construct the correct equations from the description in the text. Once we have written down the equations, we can use our algebra skills to solve them and get the correct answer. Children who cannot read will have a hard time developing their algebra skills, because they will fail at the first stage of this task, namely decoding the information from the text into equations. This example bears on our work on job training programs: public job training programs try to improve the skills of dropouts.
Many do not know how to read or write well. These programs reflect the current view of public policy that it is possible to make up for 17 years of neglect. Our work in this area has shown that the success rate is really low. Creating the foundation of early skills is important.

Based on this evidence, we developed models of skill formation that allow investments at different stages to complement each other. The multiple-period formulation is a crucial feature of these models because (1) some skills may be formed more easily at certain periods, and (2) skills produced at earlier stages may serve as an input in the production of later skills.
ED: In related work, you argue that a substantial fraction of ex post returns to schooling are predictable by the agents, but not necessarily by the econometrician. Market structure is important here to identify deep parameters. How is it then possible that you obtain similar results across different market structures?
JH, FC: In Cunha, Heckman, and Navarro (2005) we developed a framework that economists can use to separate heterogeneity from uncertainty in life cycle earnings by looking at the economic choices agents make. Given preferences and market structure, we show how one can use an educational choice model to generate restrictions that allow one to recover the distributions of predictable heterogeneity and uncertainty separately. In that paper, we specified a complete-market environment and found that almost 50% of the variance of unobservable components in returns to schooling is known and acted on by individuals when making schooling choices. This framework was extended in Cunha and Heckman (2007), where we showed that a large fraction of the increase in inequality in recent years is due to the increase in the variance of unforecastable components. Cunha, Heckman, and Navarro (2004) and Navarro (2005) extended the model to an incomplete-market environment. These papers also consider asset accumulation data, which, given a market structure, imposes identifying assumptions on preferences.

There is an open question: if we have data on choices (education decisions, consumption, etc.), outcomes of choices (earnings in a given education group), and asset accumulation, how far can we go in nonparametrically identifying information sets, preferences, and market structures? This is a question we first stated in the original Cunha, Heckman, and Navarro (2005) paper, and the literature has not yet settled on a definite answer.

The question you raise is a good one. But remember: only one market structure generates the data. If, for example, a full-insurance model characterized the data, an econometric model that allowed for constraints on transfers across states would show that the constraints are not binding, and would reproduce the complete-markets model. Some version of this story seems to be at work in our estimates.
ED: Given your results, where should the policy focus be? In particular, how does your work distinguish itself in this regard from other work in macroeconomics?
JH, FC: Current work in economics emphasizes the importance of market incompleteness with regard to shocks agents experience once they are already adults: for example, there are many theoretical and applied papers discussing the allocation and welfare losses caused by uninsurable income shocks that agents face during adulthood. In Bewley-type economies, Huggett (1993) and Aiyagari (1994) show that the inability of agents to transfer resources across states of nature and over time distorts allocations and generates welfare losses that tend to be larger the more persistent these shocks are.

In our work (see Cunha and Heckman, 2007), we have tried to point out to the profession the lifetime importance of the shock represented by the accident of birth. It is clearly a very persistent shock, since children who are born into a disadvantaged family will spend many years in a disadvantaged environment, with terrible consequences for their skill acquisition and opportunities in life, as we show in Cunha, Heckman, and Schennach (2008). If markets were complete, children would be able to buy insurance against those shocks and would use the resources to improve the environment of the family they grow up in. Clearly, this is not feasible: at the very least, such markets would require children to start making allocation decisions as soon as they are born, something they clearly are not ready to do. It becomes imperative for our society to devise mechanisms or policies that implement an allocation as close as possible to the first best even if markets are incomplete. This intervention can be justified on efficiency or fairness grounds, as laid out in Heckman and Masterov (2007). Our work shows that a possible answer to the policy question lies in early childhood intervention programs.
Recent work by Heckman, Moon, Pinto, and Yavitz (2008) shows that these programs are successful in a number of dimensions: they promote education, reduce participation in crime for boys, and reduce teenage pregnancy for girls, all of which represent large costs to society. The basic idea underlying these programs is to provide children with an environment resembling the one they would have had if they had not been born into disadvantaged families.

It is important to emphasize that our work does not mean that early intervention programs are sufficient. The intertemporal complementarity of investments that we estimate in our joint work with Susanne Schennach implies that early investments must be followed up with late investments: high-quality early programs, such as the Perry Preschool, do not replace the importance of having good schools and prepared teachers. At the same time, the complementarity indicates that children from very disadvantaged households will not be able to extract the full benefits from school unless they receive early investments that make them school ready.


S. Rao Aiyagari, 1994. “Uninsured Idiosyncratic Risk and Aggregate Saving,” The Quarterly Journal of Economics, vol. 109(3), pages 659-84, August.
Flavio Cunha & James Heckman, 2007. “The Technology of Skill Formation,” American Economic Review, vol. 97(2), pages 31-47, May.
Flavio Cunha & James Heckman, 2008. “Formulating, Identifying and Estimating the Technology of Cognitive and Noncognitive Skill Formation,” Journal of Human Resources, forthcoming.
Flavio Cunha, James Heckman & Salvador Navarro, 2005. “Separating Uncertainty from Heterogeneity in Life Cycle Earnings,” Oxford Economic Papers, vol. 57(2), pages 191-261, April.
Flavio Cunha, James Heckman & Susanne Schennach, 2008. “Estimating the Elasticity of Intertemporal Substitution in the Formation of Cognitive and Non-Cognitive Skills,” unpublished mimeo, University of Chicago.
James Heckman & Dimitriy Masterov, 2007. “The Productivity Argument for Investing in Young Children,” Review of Agricultural Economics, vol. 29(3), pages 446-493.
James Heckman, Jora Stixrud & Sergio Urzua, 2006. “The Effects of Cognitive and Noncognitive Abilities on Labor Market Outcomes and Social Behavior,” Journal of Labor Economics, vol. 24(3), pages 411-482, July.
James Heckman, Seong Hyeok Moon, Rodrigo Pinto & Adam Yavitz, 2008. “A Reanalysis of the Perry Preschool Program,” unpublished mimeo, University of Chicago, first draft 2006, revised 2008.
Mark Huggett, 1993. “The Risk-Free Rate in Heterogeneous-Agent Incomplete-Insurance Economies,” Journal of Economic Dynamics and Control, vol. 17(5-6), pages 953-969.
Salvador Navarro, 2005. “Understanding Schooling: Using Observed Choices to Infer Agent’s Information in a Dynamic Model of Schooling Choice when Consumption Allocation is Subject to Borrowing Constraints,” PhD dissertation, University of Chicago.

Volume 9, Issue 1, November 2007

Q&A: Timothy Kehoe and Edward Prescott on Great Depressions

Timothy Kehoe is the Distinguished McKnight University Professor at the University of Minnesota and Adviser of the Federal Reserve Bank of Minneapolis. Edward Prescott is the W. P. Carey Chaired Professor of Economics at Arizona State University and Senior Monetary Adviser at the Federal Reserve Bank of Minneapolis. Both are interested in the theory and the application of general equilibrium models. Kehoe’s RePEc/IDEAS entry, Prescott’s RePEc/IDEAS entry.
EconomicDynamics: Depressions have recently received a lot of interest, witness the special issue of the Review of Economic Dynamics in 2002 and now the book you have edited. Why this sudden interest?
Timothy Kehoe, Edward Prescott: In 1999 Hal Cole and Lee Ohanian published a paper in the Quarterly Review of the Federal Reserve Bank of Minneapolis that broke a long-standing taboo. They analyzed the U.S. Great Depression using the neoclassical growth model. What they found was fascinating. Productivity recovered by 1935, but labor supply remained depressed by 25 percent and did not begin to recover until 1939. An important question is why. Another important question, of course, is why productivity fell so sharply starting in 1929.

The Cole and Ohanian study motivated us to organize a conference at the Minneapolis Fed in 2000 at which people presented analyses of great depressions in other countries using the neoclassical growth model. Six of the studies were from the interwar period and the other three from the postwar period. We encouraged the authors to work more on their papers and to submit revised versions to RED. With the help of the graduate students in our workshop, we edited a special volume of RED with these studies. Just this past year, we have published a book with revised versions of Hal and Lee’s original article and the articles in the RED volume, as well as six other studies.

The success of this enterprise is leading to a shift from studying small business cycle fluctuations to studying big movements in output relative to trend. With this shift, we have learned a lot. The studies in the RED volume and in the new book have identified important puzzles. The central message is that there is an overwhelming need for a theory of how policy arrangements affect TFP.
ED: The neoclassical growth model has been used extensively to study business cycles. Lucas described business cycles as being all alike, and thus the quest was for a single model of the business cycle. Are all depressions alike? If not, can we still use the same model for all?
TK, EP: The real business cycle model developed by Finn Kydland and Ed has been successful in capturing the regularities in business cycle fluctuations, not just in the United States, but in other countries as well. Business cycles are small deviations from balanced growth, driven largely by small persistent changes in TFP. In real business cycle theory, fluctuations in TFP are modeled as a Markov process.

Great depressions are large deviations from balanced growth. If we look at a graph of U.S. real GDP per working-age person over the past century or more, we see small fluctuations around a path with growth of two percent per year. The Great Depression of the 1930s and the subsequent buildup during World War II jump out of the graph as something different.

We have found that great depressions are like business cycle downturns in that they are driven mostly by drops in TFP, but these drops are very large and often prolonged. Great depressions are not alike, and they are not like business cycles, but we have found that the general equilibrium growth model is very useful for identifying regularities and puzzles. In some depressions, TFP drives everything, and we need to identify the factors that cause the large and prolonged drop in TFP. In other depressions, such as the U.S. and German great depressions of the 1930s, labor inputs are depressed more or longer than the model predicts, and we need to identify the factors that disrupted the labor market.
ED: Thus, depressions can be characterized by deeper and more prolonged deviations from trend than usual business cycles. But the reasons for these deviations vary, contrarily to business cycles. Each depression is then a case study. How can the validity of such a case study be established? In particular, to use statistical terms, how can the lack of degrees of freedom and out of sample testing be overcome?
TK, EP: In fact, we think that it is the other way around: our study of depressions has been so fruitful that we think the same approach is useful for studying the business cycle. We have found that we can use the methodology that we have developed to study the factors that gave rise to the relatively small movements that we call business cycles. Treating populations, tax rates, productivity paths, and other factors as exogenous, we are determining which of these factors give rise to particular small depressions and booms.

We are finding deviations from the theory and puzzles. One particularly interesting deviation from theory was the behavior of the U.S. economy in the 1990s. Another is the behavior of the Spanish economy from 1975 to 1985. Ed is studying the first with Ellen McGrattan, and Tim is studying the second with Juan Carlos Conesa. Even though these episodes are not great depressions, we are using the same methodology as in the great depressions studies.

Macro has progressed beyond accounting for the statistical properties called business cycle fluctuations to predicting the time path of the economy given the paths of the exogenous variables. The great depression methodology points to what is causing the problems in a particular economy, whether it be productivity, labor market distortions, credit market problems, and so on. This is progress.
ED: What is the specific contribution of the volume you edited in the study of aggregate fluctuations?
TK, EP: Great Depressions of the Twentieth Century contains 15 studies that involve the work of 26 researchers and use the same basic theoretical framework to organize the data and interpret the behavior of the different economies during depressions. The list of collaborators on this project is an impressive list of economists from all over the world. Having this set of great depression studies that use the same theoretical framework in a single volume should be valuable for researchers, especially graduate students, in deciding which conjectures to explore. We hope and expect that this volume will stimulate research on important problems in macroeconomics.

Great depressions are not things of the past. They have occurred recently in Latin America, New Zealand, and Switzerland. Unless we understand their causes, we cannot rule out great depressions happening again.


Cole, Harold L., and Lee E. Ohanian, 1999. “The Great Depression in the United States from a Neoclassical Perspective,” Quarterly Review, Federal Reserve Bank of Minneapolis, Winter, pages 2-24.
Conesa, Juan Carlos, Timothy J. Kehoe, and Kim J. Ruhl, 2007. “Modeling Great Depressions: The Depression in Finland in the 1990s,” NBER Working Paper 13591.
Kehoe, Timothy J., and Edward C. Prescott, 2002. “Great Depressions of the Twentieth Century,” Review of Economic Dynamics, vol. 5(1), pages 1-18.
Kehoe, Timothy J., and Edward C. Prescott (editors), 2007. Great Depressions of the Twentieth Century, Federal Reserve Bank of Minneapolis.
Kydland, Finn E., and Edward C. Prescott, 1982. “Time to Build and Aggregate Fluctuations,” Econometrica, vol. 50(6), pages 1345-1370.
McGrattan, Ellen R., and Edward C. Prescott, 2007. “Unmeasured Investment and the Puzzling U.S. Boom in the 1990s,” Staff Report 369, Federal Reserve Bank of Minneapolis.

Volume 8, Issue 2, April 2007

Q&A: Per Krusell on Search and Matching

Per Krusell is Professor of Economics at Princeton University and Visiting Professor at the Institute for International Economic Studies in Stockholm. Per has worked on macroeconomic issues including technology and economic growth, optimal fiscal policy and political economy, and consumer inequality, recently focusing in particular on wage inequality, labor-market frictions, and time-inconsistencies in policy for both consumers and government. Krusell’s RePEc/IDEAS entry.
EconomicDynamics: Labor search models have recently come under attack. Hall (2005) and Shimer (2005) show that they lack some important cyclical properties. With Hornstein and Violante, you show that they also do not generate any significant wage dispersion. Why are these results important?
Per Krusell: To answer the question, I think it may be helpful to be a little more precise about what we do in our work. Our question is a rather fundamental one in economics, applied to labor markets: do products (workers) that are “identical” receive more or less the same price (wage) in the market? We approach the question using quantitative theory. We first choose what we think is the most natural, or at least the most commonly used, theory for why the competitive model would not be a good one for labor markets: the Mortensen-Pissarides model. The idea here is simply that search is costly, and some workers are lucky, so they find good, well-paid jobs, whereas other workers are unlucky. Then we assign reasonable values to the model parameters and see what the model predicts about wage dispersion. We find that there is a wage dispersion statistic, the mean-min wage ratio (the average wage divided by the lowest wage, all for equivalent workers), about which the model has very sharp predictions, independently of the shape of the wage distribution the workers are drawing from. The key finding in our work is that the frictional wage dispersion predicted by a large class of search models is very close to zero: the mean-min ratio is close to one, so workers at the bottom of the wage distribution do pretty much as well as the average worker.

We calibrate the model based on some well-known statistics. The most important one is the duration of unemployment. The intuitive interpretation of our main result is the following: given that unemployed workers take jobs very quickly (the mean duration of unemployment is about 16 weeks), they must think that they cannot get much higher wages if they wait and search more. That is, their observed search behavior reveals that there cannot be much upside potential in the wage distribution. We think this is an important result.
If our theory is a good rough approximation of reality, it says that the competitive model is a good approximation to how labor markets work, at least for the purpose of understanding wage dispersion. However, there is a problem: just about any measure of wage dispersion, including those that we painstakingly develop in our research, suggests that the mean-min ratio for similar workers is very high, on the order of 2, whereas the calibrated model suggests something like 1.05. There are several alternative ways to interpret this discrepancy. One is that search frictions really are unimportant, and that the large dispersion we see in the data just reflects our inability to isolate “identical workers”: the wage dispersion is due to workers really being different. Many do take just this away from our work, though we really have to caution against this conclusion, because we think it is unlikely that the measurement problems are so severe. The alternative conclusion, then, would be that there is something wrong with the theory. We go through many different versions of search theory and find that most of them are strikingly unable to deliver wage dispersion. The one version of search theory that seems to have promise is the one where workers search actively for jobs also while they are employed. These models come closer to matching the data, but there are still important discrepancies.

You draw a parallel to the findings by Robert Shimer and others that the same kind of model has trouble generating aggregate fluctuations. It’s a good parallel. Search theory has two main predictions: how prices/wages are dispersed in the cross-section (our focus) and how match creation varies over time (Robert’s focus). Robert finds that unemployment, for both searching parties, workers as well as firms, fluctuates much less in the model than in the data.
This has generated a large number of attempts to better understand the origins of unemployment and vacancy fluctuations. Our work is far from the first to look quantitatively at frictional wage dispersion, but I think we have come up with a very simple and transparent argument that says that there is a severe quantitative limitation that others had not noticed. I think this is roughly what Robert did too, and I hope that our work will help stimulate even more research in this area. I find these research projects very interesting because they ask central questions: do markets work well, and if they don’t, just how do we know and what are the implications? Wage dispersion and unemployment are also related in that they are both mechanisms through which workers face risk: risk of not getting any income at all, and risk of getting income, but very low income. Andreas, Gianluca, and I are also interested in how economic policy might be able to shield workers from risk, but it is important to first understand the nature and origins of this risk.
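The mean-min statistic itself is simple to compute; the hard part of Hornstein, Krusell and Violante’s work is deriving what search theory predicts for it. A minimal sketch of the statistic only, with invented wage numbers chosen purely to illustrate the model-versus-data gap discussed above:

```python
def mean_min_ratio(wages):
    """Mean wage divided by the lowest wage in the sample (Mm statistic)."""
    if not wages or min(wages) <= 0:
        raise ValueError("wages must be a non-empty list of positive numbers")
    return sum(wages) / len(wages) / min(wages)

# Hypothetical wages for observationally equivalent workers.
# Little dispersion: Mm near 1, the magnitude the calibrated model delivers.
model_like = [100, 101, 102, 103, 105]
# Dispersion of the size measured in micro data: Mm on the order of 2.
data_like = [60, 90, 120, 150, 180]

print(round(mean_min_ratio(model_like), 3))  # close to 1
print(round(mean_min_ratio(data_like), 3))   # close to 2
```

The statistic is attractive precisely because it needs no assumption about the shape of the wage-offer distribution: one number summarizes how much worse off the unluckiest worker is relative to the average.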
ED: Can we still trust labor search models to give us important insights about the labor markets?
PK: Economics is a young discipline, and we should be wary of trusting our models, I think. But if I interpret the question more broadly, as asking whether labor search models are useful for understanding labor markets, then I have no hesitation in saying yes. They are very useful, and I think that the current research is producing lots of new insights. One reason, I think, is that this new research is quantitative; until recently, search theory was a “pencil-and-paper” discipline where the model predictions were mainly compared qualitatively with data. Finn Kydland and Ed Prescott showed how much more insight could be gained from quantitative business cycle theory than from abstract discussions about what drives cycles, and I hope that future researchers will view the current research on frictions in the labor market in a similar way. It is true that we have discovered several quantitative limitations of the basic search/matching model. This raises the standards in the search literature, I think: we need the models to be quantitatively convincing in their first-order implications. As a result, search models are better understood now, and new varieties of search models that come closer to matching the data are being developed as we speak. We saw this kind of development as a response to Finn’s and Ed’s work, since most business-cycle theories today can be viewed as developments of the quantitative model Finn and Ed formulated, and I hope that the current work will be equally successful.

In this context, I remember reading an interview with Tom Sargent where he was talking about the “good old days” when he, Bob Lucas and Ed Prescott were developing rational-expectations macroeconomics. He said that Bob and Ed detached themselves from formal econometrics because they felt that “too many good models were being rejected”. I couldn’t agree more with this description, and I am very happy that Bob and Ed did that. We should see the current work on labor-macro in exactly the same way.
It’s important to realize that economic research involves a constant interplay between data and theory, and that we are still at a stage where the question is not about the fine details of how to understand the economy we live in, but rather about whether our theories can even roughly capture the central features of economic data. These are the kinds of questions we are now working with in this field, and I am very excited about being able to participate in this research. I think it is one of the most exciting developments in macroeconomics in a while.
ED: You focus in your analysis on the mean/min measure of wage dispersion. Why focus on this one and not a more complete description of the wage distribution? In particular, labor economists argue that we know very little about the tails of the wage distribution, and in ways that matter for the mean.
PK: The whole point of our paper is that the previous literature, by looking at more standard measures of wage dispersion, has missed a simple and yet central implication of the theory, namely the one for the mean-min ratio. Moreover, although the mean-min ratio is only one measure of wage dispersion, and one might care about others as well, in practice knowledge of the mean-min ratio places strong restrictions on wage dispersion as measured by other moments. But more importantly, perhaps, because the benchmark search theory we look at assumes risk neutrality, no moments other than the mean matter for the searching agents. Risk neutrality is also a key assumption behind our main formula, though we obtain approximate bounds on wage inequality using preferences with risk aversion.


Evans, George W., and Seppo Honkapohja (2005). “An Interview with Thomas J. Sargent,” Macroeconomic Dynamics, vol. 9(4), pages 561-583, October.
Hall, Robert E. (2005). “Employment Fluctuations with Equilibrium Wage Stickiness,” American Economic Review, vol. 95(1), pages 50-65, March.
Hornstein, Andreas, Per Krusell and Giovanni L. Violante (2006a). “Frictional Wage Dispersion in Search Models: A Quantitative Assessment,” Working Paper 06-07, Federal Reserve Bank of Richmond.
Hornstein, Andreas, Per Krusell and Giovanni L. Violante (2006b). “Technical Appendix for Frictional Wage Dispersion in Search Models: A Quantitative Assessment,” Working Paper 06-08, Federal Reserve Bank of Richmond.
Kydland, Finn E., and Edward C. Prescott (1982). “Time to Build and Aggregate Fluctuations,” Econometrica, vol. 50(6), pages 1345-70, November.
Mortensen, Dale T., and Christopher A. Pissarides (1994). “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, vol. 61(3), pages 397-415, July.
Shimer, Robert (2005). “The Cyclical Behavior of Equilibrium Unemployment and Vacancies,” American Economic Review, vol. 95(1), pages 25-49, March.

Volume 8, Issue 1, November 2006

Enrique Mendoza on Financial Frictions, Sudden Stops and Global Imbalances

Enrique Mendoza is Professor of International Economics & Finance at the University of Maryland and Resident Scholar at the International Monetary Fund. He has written extensively on international finance, in particular on emerging economies. Mendoza’s RePEc/IDEAS entry.

EconomicDynamics: In a previous Newsletter (April 2006), Pierre-Olivier Gourinchas argued that the US imbalance in the current account is not as bad as one thinks once the expected valuation effect is taken into account: US assets held by foreigners will have a lower return than foreign assets held by Americans. Is there also such an effect in heavily indebted emerging countries, such as Mexico?
Enrique Mendoza: Yes, there is a similar effect, but there are important details worth noting. Countries like Mexico, the Asian Tigers, China and India, as well as many oil exporters, have built very large positions in foreign exchange reserves, which consist mostly of U.S. Treasury bills. To give you an idea of the magnitudes: of the U.S. net foreign asset position as a share of world GDP of -7 percent in 2005, emerging Asia (the “Tigers” plus China and India) accounts for about 4 percentage points! In this case, the fall in the value of the dollar and the low yields on U.S. T-bills played a nontrivial role. The difference is that Treasury bills are a risk-free asset, whereas in comparisons vis-à-vis industrial countries the differences in returns pertain to equity, FDI, corporate bonds and other risky assets. This observation highlights a puzzling fact: the U.S. portfolio of foreign assets includes a large negative position in government bonds (largely vis-à-vis developing countries) but a positive position in private securities (particularly vis-à-vis other industrial countries).

But perhaps the more important point raised by your question is whether the global imbalances are good or bad. In this regard, the work Vincenzo Quadrini, Victor Rios-Rull and I have been doing has interesting implications. On the one hand, there is nothing wrong with the large negative current account and net foreign assets of the U.S., because we can obtain them as the result of the integration of capital markets across economies populated by heterogeneous agents and with different levels of financial development. We document empirical evidence showing that capital market integration has indeed been a global phenomenon, but financial development has not. In our analysis, the observed external imbalances are perfectly consistent with solvency conditions, and there is no financial crisis of the kind some of the gurus in the financial media have predicted. We can also explain the U.S.
portfolio structure (i.e., a large negative position in public debt and yet a positive position in private risky assets) as another outcome of financial globalization without financial development. On the other hand, we find that agents in the most financially developed country make substantial welfare gains (close to 2 percent in the Lucas measure of utility-compensating variations in consumption) at the expense of similarly substantial welfare costs for the less financially developed country. To make matters worse, the burden of these costs is unevenly distributed, falling more heavily on the agents with lower levels of wealth in the poorest country.
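The “Lucas measure” mentioned here is the standard consumption-compensating variation: the permanent percentage change in consumption that makes an agent indifferent between two allocations. A generic statement (notation mine, not taken from the paper) is:

```latex
% \lambda solves: lifetime utility under allocation A, with consumption
% permanently scaled up by (1+\lambda), equals lifetime utility under B.
\mathbb{E} \sum_{t=0}^{\infty} \beta^{t}\, u\!\left((1+\lambda)\, c_{t}^{A}\right)
  \;=\; \mathbb{E} \sum_{t=0}^{\infty} \beta^{t}\, u\!\left(c_{t}^{B}\right)
```

A gain of “close to 2 percent” thus means $\lambda \approx 0.02$: agents would need roughly 2 percent more consumption in every date and state under the alternative allocation to be as well off.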
ED: Sudden stops are current account reversals that coincide with a rapid and deep drop in real activity. What is your take on why there is such a sharp drop in GDP during sudden stops?
EM: The short answer is a credit collapse, but let me explain. Let’s split the output collapse of a Sudden Stop into two stages: the initial stage, on impact in the same quarter as the current account reversal, and the second stage, the recession in the periods that follow. Growth accounting I have done for Mexico shows that standard measures of capital and labor explain very little of the initial output drop, while changes in capacity utilization and demand for imported intermediate goods played an important role, along with an important contribution from a decline in TFP that we still need to understand better. In the second stage, the collapse in investment of the initial stage starts to affect demand for other inputs and production, so it starts to play a role as well. These changes can ultimately be linked to the loss of credit market access reflected in the current account reversal if we consider environments in which credit frictions result in constraints linking access to credit to the market value of the incomes or assets used as collateral. In models with these features, the constraints can suddenly become binding as a result of typical shocks to “fundamentals” like the world interest rate, the terms of trade or “true” domestic TFP when economic agents are operating at high leverage ratios (e.g., South East Asia in 1997). Agents rush to fire-sale assets to meet these constraints, but in doing so they make asset and goods prices fall, tightening credit conditions further and producing the classic debt-deflation spiral that Irving Fisher envisioned in his classic 1933 article.
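A minimal sketch of the occasionally binding collateral constraint described above (the notation here is illustrative rather than taken from any specific paper):

```latex
% Debt b_{t+1} cannot exceed a fraction \kappa of the market value of
% collateral assets k_{t+1} priced at q_t:
b_{t+1} \;\ge\; -\,\kappa \, q_t \, k_{t+1}
% When the constraint binds, fire sales push q_t down, which tightens
% the constraint further: Fisher's debt-deflation spiral.
```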
ED: But how exactly does debt-deflation cause the two stages of output drop that you mentioned, and how large can we expect these effects to be?
EM: For the first stage, the decline in the value of collateral assets, and in the holdings of those assets, tightens access to credit for working capital, thus reducing factor demands, capacity utilization and output. Here the key issue is not just that some or all costs of production are paid with credit, but that access to that credit is vulnerable to occasionally binding collateral constraints. In addition, if the deflation adversely hits relative prices in some sectors (e.g., the relative price of nontradables, as occurs in Sudden Stops), the value of the marginal product of factors falls in those sectors and leads them to contract. If they are a large sector of the economy, as is the case with the nontradables sector in emerging economies, then aggregate GDP can also fall sharply. For stage two, the decline in the capital stock induced by the initial investment collapse, and the possibility of continued weakness in credit access for working capital, can explain the recession beyond the initial quarter. Recovery can then be fast or slow depending on “luck” (i.e., terms of trade, world interest rates, “true” TFP, etc.) and/or the speed of the endogenous adjustment that returns the economy to leverage ratios at which the collateral constraints and the debt-deflation spiral do not bind. My research on models with these features shows that the debt-deflation mechanism produces large amplification and asymmetry in the responses of macro aggregates to shocks of standard magnitudes, conditional on high-leverage states that trigger the credit constraints. Moreover, current account reversals in these models are an endogenous outcome, rather than an exogenous assumption as in a large part of the Sudden Stops literature. The declines in investment and consumption, and the current account reversals, are very similar to the ones observed in Sudden Stops. The output collapse is large, but still not as large as in the data.
On the other hand, precautionary saving behavior implies that long-run business cycle dynamics are largely invariant to the presence of the credit constraints. Interestingly, this is also a potential explanation for the large accumulation of net foreign assets in emerging economies that I mentioned in response to your first question: it can be viewed as a Neo-mercantilist policy of building a war chest of foreign reserves to self-insure against Sudden Stops. All these findings are documented in my 2006 piece in the AER Papers & Proceedings and in a recent NBER working paper.
ED: In your first answer, you argue that there are substantial costs to living in a country with underdeveloped financial markets. In the second, sudden stops happen at least in part because of extensive use of credit markets, instead of internal funds, in financing economic activity. What, then, is the policy advice?
EM: Actually, the two arguments are quite consistent if you think about them this way. In the model of global imbalances, a country’s degree of financial development is measured by the degree of market completeness, or contract enforcement, that its own institutional and legal arrangements support. If agents cannot steal at all, then the model delivers the predictions of the standard Arrow-Debreu complete markets framework. If agents can steal 100 percent of the excess of their income under any particular state of nature relative to the “worst state of nature,” then the model delivers the predictions of a setup in which only non-state-contingent assets are allowed to exist. However, as Manuel Amador showed in a recent discussion of our Global Imbalances paper, this enforceability constraint can also be expressed as a borrowing constraint that limits debt not to exceed a fraction of the value of the borrower’s income. Now, this is the same as one variant of the credit constraints used in the Sudden Stop models I have studied (particularly one that limits debt denominated in units of tradable goods not to exceed a fraction of the value of total income, which includes income from the nontradables sector valued in units of tradables). In both the model of global imbalances and the Sudden Stop models, the main problem is the existence of credit constraints, affecting borrowers from financially underdeveloped countries, that originate in frictions in credit markets, such as limited enforcement. In both models, domestic borrowers face these frictions whether they borrow at home or abroad (although the Sudden Stop models are representative-agent models, so all the borrowing in equilibrium is from the rest of the world). Given this similarity between the models, you can expect the policy advice to be broadly the same: the optimal policy is to foster financial development by improving the contractual environment of credit markets.
Actually, in the Imbalances paper, the less-developed country can avoid the welfare costs of globalization just by bringing its enforcement level up to par with that of the most financially developed country: it does not need to eliminate the enforcement problem completely. If improving financial institutions and contract enforcement is not possible, or takes too long, then policies like the buildup of foreign reserves as self-insurance, proposals now circulating for partially completing markets by having international organizations support markets for bonds linked to GDP or the terms of trade, or the prevention of asset price crashes using mechanisms akin to price guarantees on the emerging-market asset class, are a distant second best, but still much preferred to remaining vulnerable to the deep recessions associated with Sudden Stops.
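For illustration, the variant of the constraint Mendoza mentions, which limits tradables-denominated debt to a fraction of total income valued in units of tradables, can be sketched as follows (the notation is illustrative):

```latex
% Debt in units of tradables limited by a fraction \kappa of total income,
% with nontradables output y^N valued at the relative price p^N:
b_{t+1} \;\ge\; -\,\kappa \left( y_t^{T} + p_t^{N} \, y_t^{N} \right)
% A fall in p^N during a Sudden Stop shrinks the right-hand side,
% tightening the constraint endogenously.
```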


Fisher, I. 1933. “The Debt-Deflation Theory of Great Depressions”, Econometrica, vol. 1, pp. 337-357.
Gourinchas, P.-O. 2006. “The Research Agenda: Pierre-Olivier Gourinchas on Global Imbalances and Financial Factors“, EconomicDynamics Newsletter, vol. 7 (1).
Mendoza, E. G. 2006. “Lessons from the Debt-Deflation Theory of Sudden Stops”, American Economic Review Papers and Proceedings, vol. 96 (2), pp. 411-416 (extended version: NBER working paper 11966).
Mendoza, E. G. 2006. “Endogenous Sudden Stops in a Business Cycle Model with Collateral Constraints: A Fisherian Deflation of Tobin’s Q“. Mimeo, NBER working paper 12564.
Mendoza, E. G. , V. Quadrini and J.-V. Ríos-Rull 2006. “Financial Integration, Financial Deepness and Global Imbalances“. Mimeo, University of Maryland.

Volume 7, Issue 2, April 2006

David Levine on Experimental Economics

David Levine is the Armen Alchian Professor of Economic Theory at the University of California, Los Angeles. He is interested in the study of intellectual property and endogenous growth in dynamic general equilibrium models, the endogenous formation of preferences, institutions and social norms, learning in games, and the application of game theory to experimental economics. He will be the next president of the Society for Economic Dynamics. Levine’s RePEc/IDEAS entry.

EconomicDynamics: In your 1998 Review of Economic Dynamics article, the most cited so far in this journal, you show that one can easily account for the altruistic and spiteful behavior in dynamic experimental games. How did this article influence subsequent experiments and the relevant literature?
David Levine: There were several elements of that paper: as you indicate, it examines altruism and spite in experimental games. One finding is that altruism and spite cannot be explained as a static phenomenon, but rather arise dynamically as actions by players trigger feelings of altruism and spite in their opponents. However, it is apparent to anyone with a modicum of sense that players in these experiments have altruism and spite, and many papers making this point and proposing various models appeared before I wrote on the subject – Rabin in particular was a leader. There were two innovative elements of the paper. First, it proposed a particular signalling model of altruism and spite. While this approach has not overrun the profession, there is some very beautiful recent work by Gul and Pesendorfer taking an axiomatic approach to interpersonal preferences, of which, I am pleased to say, my rather ad hoc model turns out to be a special case. However, I think the main element of the paper that was significant was the effort to do quantitative theory – that is, to follow the calibration methodology of looking for a small number of behavioral parameters – in this case measuring attitudes towards other players – that are the same across many different data sets. This methodology has been used with great success by Fehr and Schmidt – who have their own model of altruism and spite (“inequality aversion”), and who have done a great many clever experiments giving players opportunities to punish and reward each other.
ED: In which areas do you see this kind of economic behavior having an impact? In other words, for which questions may the responses be significantly influenced by this behavior?
DL: It is easier to answer the opposite question: where do altruism and spite not have important economic consequences? I think there is fairly widespread agreement about this: in competitive environments, interpersonal preferences do not play much of a role, because there is little scope for harming or helping other people. Setting too high a price for your good doesn’t harm your customer, because they can easily find another seller offering the competitive price. Similarly, you don’t help or harm your competitors, who are able to sell pretty much what they want at the competitive price no matter what you do. Setting too low a price helps your customer, of course – but you are just transferring value to them on a one-for-one basis, and while the evidence is that people are willing to be altruistic when the gains to the other party are greater than the cost to themselves, they are less willing to do so when the gains and the costs are equal. There is plenty of experimental evidence that in competitive environments the competitive model – with selfish preferences – does quite well; this is the thrust of the first experimental economics literature by Plott, Smith and many others. This idea is also reflected in an earlier literature – particularly by Gary Becker – pointing out that racial discrimination and other related interpersonal preferences do not thrive in a competitive environment. It should also be emphasized that while some strangers in the lab playing anonymously do behave altruistically and spitefully, the majority behave selfishly. In non-competitive environments, some games are relatively robust to the introduction of some players with deviant preferences; others are not. Centipede grab-a-dollar type games, of the type studied by McKelvey and Palfrey, are very sensitive to players’ willingness to give money away in the final period. Even more economically important are finitely repeated games.
Here a little bit of non-selfish play – willingness to reward an opponent in the last round – goes a long way. It is pretty well established, both in the laboratory and out, that a long finite horizon can induce quite a bit of cooperation. I think this is pretty likely a consequence of preferences on the part of some people who care mildly about other players. In practice I think a lot of institutions rely pretty heavily on a mild degree of altruism by most people, and a lot of spite by a few. The judicial system – the rule of law – depends in my view quite heavily on this. Without the desire for revenge, many fewer crimes would be reported. Bear in mind that there is an important public goods aspect to reporting crimes. If relatively neutral witnesses didn’t have a mild preference for telling the truth and “seeing justice done,” it is hard to see how the system could function particularly well.
ED: In game theory, the frequent multiplicity of Nash equilibria is dealt with by refinements that sometimes are quite ad hoc. In which sense are refinements in preferences such as the ones discussed above not subject to this criticism?
DL: I’m not entirely sure of the analogy – the problem of multiplicity is that the theory is inadequate; it does not have enough predictive power. The problem with preferences is that the theory – at least using selfish preferences – is wrong. But the issue of being ad hoc is certainly relevant in both cases. A great deal of work on preferences that are altruistic and spiteful is ad hoc, and some of it does not stand up well to scrutiny as a result. The big problem I see is that preferences that are an ad hoc solution to one problem at the same time lead to nonsense results, or contradict existing theory in settings where existing theory does quite well. It isn’t enough to show that a particular model of altruism and spite solves some particular problem. It has to be shown also (1) that it remains consistent with existing theory in domains where existing theory already works and (2) that it works across a broad variety of different problems where altruism and spite exist. Moreover, most preference puzzles are quantitative, not qualitative, so I think theories showing that particular modifications to preferences shift things in the right direction to explain one particular anomaly are not all that useful at this stage. My paper used an ad hoc description of preferences, but tried to bring some of this discipline to bear. Fehr and Schmidt’s work has also been aimed at providing broad quantitative solutions. I also think that axiomatic theory of the type being conducted by Gul and Pesendorfer is an important antidote to the ad hoc approach – it is a good thing, in my view, that the type of preferences I examined satisfy the Gul-Pesendorfer axiom system. The reason the axiomatic approach is so important is that it gives us a much broader understanding of what preferences are like, on what domains they are consistent with selfish preferences, and some general reassurance that they do not have strange and undesirable features.
It is not easy, given a particular functional form, to see transparently what implications that function has in all settings – that is, what axioms it might satisfy and violate. The axiomatic approach also enables us to know which variations on preferences are “reasonable” in the sense of satisfying the same set of axioms, while limiting the range of solutions we look for. How difficult would economics be, for example, if for the most part we did not think that preferences were concave?


Becker, Gary S. (1971): The Economics of Discrimination. 2nd edition. 178 pages. University of Chicago Press.
Benoit, Jean-Pierre, and Vijay Krishna (1985): “Finitely Repeated Games“, Econometrica, vol. 53(4), pp. 905-922.
Dal Bó, Pedro (2005): “Cooperation under the Shadow of the Future: experimental evidence from infinitely repeated games“, American Economic Review, December.
Fehr, Ernst, and Klaus M. Schmidt (1999): “A Theory Of Fairness, Competition, and Cooperation“, Quarterly Journal of Economics, vol. 114(3), pp. 817-868.
Gul, Faruk, and Wolfgang Pesendorfer (2005): “The Canonical Type Space for Interdependent Preferences,” mimeo, Princeton University.
Levine, David K. (1998): “Modeling Altruism and Spitefulness in Experiments,” Review of Economic Dynamics, vol. 1, pp. 593-622.
McKelvey, R. and Thomas Palfrey (1992): “An experimental study of the centipede game,” Econometrica, vol. 60, pp. 803-836.
Plott, Charles R., and Vernon L. Smith (1978): “An Experimental Examination of Two Exchange Institutions,” Review of Economic Studies, vol. 45(1), pp. 133-53, February.
Rabin, Matthew (1993): “Incorporating Fairness into Game Theory and Economics,” American Economic Review, vol. 83, pp. 1281-1302.
Radner, Roy (1980): “Collusive behavior in noncooperative epsilon-equilibria of oligopolies with long but finite lives,” Journal of Economic Theory, vol. 22, pp. 136-154.
Smith, Vernon L. (1962): “An experimental study of competitive market behavior”, Journal of Political Economy, vol. 70, pp. 111-137.

Volume 7, Issue 1, November 2005

Q&A: Peter Ireland on Money and the Business Cycle

Peter Ireland is Professor of Economics at Boston College. He has published extensively on monetary economics, in particular on testing monetary theories and the business cycle impact of monetary policies. Ireland’s RePEc/IDEAS entry.

EconomicDynamics: In your 2003 JME, you show that one cannot reject the stickiness of nominal prices at business cycle frequencies. In your 2004 JMCB, you argue that money plays a minimal role in the business cycle. Do these two papers contradict each other?
Peter Ireland: I do not think there is any substantive contradiction, as those two papers address somewhat different sets of issues. In the 2003 JME paper on “Endogenous Money or Sticky Prices,” I try to distinguish between two interpretations of the observed correlations between nominal variables, like the nominal money stock or the short-term nominal interest rate, and real variables, like aggregate output. The first interpretation, present most famously in the work of Friedman and Schwartz, attributes this correlation to a causal channel, involving short-run monetary nonneutrality, running from policy-induced changes in the nominal variables to subsequent changes in real variables. The second interpretation, first associated with James Tobin’s critique of the Friedman-Schwartz hypothesis but also advanced by others–most notably by Scott Freeman in some of his best work–attributes these correlations instead to a channel of “reverse causation,” according to which movements in real output, driven by nonmonetary shocks, give rise to movements in nominal variables as the monetary authority and the private banking system respond systematically to those shocks. My JME paper finds that an element of monetary nonneutrality, perhaps reflecting the presence of nominal price rigidity, does seem important in accounting for the correlations that we find in the data. But the endogenous money, or reverse causation, story contains an important element of truth as well–a full understanding of the postwar US data, in other words, requires that one take seriously the high likelihood that causality runs in both directions. The 2004 JMCB paper on “Money’s Role in the Monetary Business Cycle,” as I said, addresses a slightly different set of issues. It asks, conditional on having a model with monetary nonneutrality, whether the effects of monetary policy are transmitted to real output through movements in the nominal interest rate or through movements in the money stock.
That paper finds that, at least when it comes to explaining the post-1980 US data, movements in the nominal interest rate seem to be what really matter for understanding the dynamic behavior of output and inflation. Importantly, though, that result by no means implies that policy-induced movements in the monetary base or in the broad monetary aggregates have no impact on output or inflation–to the contrary, my estimated model does imply that movements in M matter. The point is more subtle: movements in M do matter, but they matter because those movements in M give rise first to movements in R. Your question is a good one, though, since it gets at a much bigger and more important result that comes out of the recent literature on New Keynesian economics. The dynamic, stochastic New Keynesian models of today are very, very different from older-style Keynesian models in that they admit that output can fluctuate for many reasons, not just in response to changes in monetary policy or other so-called demand-side disturbances, but also in response to shocks like the technology shock from Kydland and Prescott’s real business cycle model. Likewise, these New Keynesian models also admit that, to the extent that output fluctuations do reflect the impact of technology shocks, those aggregate fluctuations represent the economy’s efficient response to changes in productivity, just as they do in the real business cycle model. This insight runs throughout all of the recent theoretical work on New Keynesian economics: by Woodford, by Clarida, Gali and Gertler, by Goodfriend and King, and many others. Meanwhile, empirical work that seeks to estimate New Keynesian models, including my own work along those lines, has consistently suggested that monetary policy shocks have played at most a modest role in driving business cycle fluctuations in the postwar US–other shocks, including technology shocks, consistently show up as being much more important.
Those results echo the findings of others, like Chris Sims and Eric Leeper, who work with less highly constrained vector autoregressions and also find that identified monetary policy shocks play a subsidiary role in accounting for output fluctuations.And so, the recent literature on New Keynesian economics draws this important distinction: historically, over the postwar period, Federal Reserve policy does not seem to have generated hugely important fluctuations in real output–shocks other than those to monetary policy have been much more important. But by no means does that finding imply that larger monetary policy shocks than we’ve actually experienced in the postwar US would not have more important real effects. The models imply that bigger monetary policy shocks would have bigger real effects.
ED: Is the Friedman Rule optimal?
PI: That is another great question–a question that, together with the one that asks why noninterest-bearing fiat money is valued in the first place, represents one of the most important questions in all of monetary theory. A lot of great minds have grappled with both questions without arriving at any definitive answers, so far be it from me to suggest that I have any clear answers myself. But issues concerning the optimality of the Friedman rule have run through a lot of my published work, and those issues still intrigue me today. More recently, what has fascinated me is this alternative view of low inflation and nominal interest rates that contrasts so markedly with the view provided by Milton Friedman’s original essay on the “Optimum Quantity of Money.” Friedman argues that zero nominal interest rates are necessary for achieving efficient resource allocations in a world in which money is needed to facilitate at least certain types of transactions, and that result is echoed in many contemporary monetary models: cash-in-advance models, money-in-the-utility-function models and so on. But then there is this alternative view that the zero lower bound on the nominal interest rate implies that, in a low-inflation environment, the central bank can “run out of room” to ease monetary policy in the event that the economy gets hit by a series of what, again, might loosely speaking be called adverse demand-side shocks. In his 1998 Brookings paper, Paul Krugman provocatively associated this zero-lower-bound problem with the old-style Keynesian liquidity trap, and around the same time that paper was published, Harald Uhlig also had a paper asking what links might exist between those two ideas. Reconciling these two views of zero nominal interest rates–the Friedman rule versus the liquidity trap–remains, I think, one of the most important outstanding problems in monetary economics.
I took my own initial stab at understanding certain aspects of the problem in my recent IER paper on “The Liquidity Trap, the Real Balance Effect, and the Friedman Rule”; recent work by Joydeep Bhattacharya, Joe Haslag, Antoine Martin and Steven Russell grapples with similar issues. But this is clearly an area in which much more work remains to be done. Before closing, I have to mention that these possible connections between the Friedman rule and the Keynesian liquidity trap were, to my knowledge, first alluded to in a paper by Charles Wilson, “An Infinite Horizon Model with Money,” published in a volume edited by Jerry Green and Jose Scheinkman. That amazing paper by Wilson is jam-packed with insights and consequently has provided the inspiration for countless others. Hal Cole and Narayana Kocherlakota’s great paper from the Minneapolis Fed Review on “Zero Nominal Interest Rates: Why They’re Good and How to Get Them” is another great paper that picks up on some of the results scattered throughout Wilson’s; and then my 2003 article from RED on “Implementing the Friedman Rule” just builds on Cole-Kocherlakota. It seems that anyone who reads Wilson’s article comes away with ideas for yet another new paper–without a doubt it is one of the classic contributions to the field of monetary economics.


Bhattacharya, Joydeep, Joseph H. Haslag, and Antoine Martin (2005): Heterogeneity, Redistribution, and the Friedman Rule, International Economic Review vol. 46, pages 437-454, May.
Bhattacharya, Joydeep, Joseph H. Haslag, and Steven Russell (forthcoming): The Role of Money in Two Alternative Models: When is the Friedman Rule Optimal, and Why?, Journal of Monetary Economics, forthcoming.
Clarida, Richard, Jordi Galí, and Mark Gertler (1999): The Science of Monetary Policy: A New Keynesian Perspective, Journal of Economic Literature, vol. 37, pages 1661-1707, December.
Cole, Harold L. and Narayana Kocherlakota (1998): Zero Nominal Interest Rates: Why They’re Good and How to Get Them, Federal Reserve Bank of Minneapolis Quarterly Review, vol. 22, pages 2-10, Spring.
Freeman, Scott (1986): Inside Money, Monetary Contractions, and Welfare, Canadian Journal of Economics, vol. 19, pages 87-98, February.
Freeman, Scott and Gregory W. Huffman (1991): Inside Money, Output, and Causality, International Economic Review, vol. 32, pages 645-667, August.
Freeman, Scott and Finn E. Kydland (2000): Monetary Aggregates and Output, American Economic Review, vol. 90, pages 1125-1135, December.
Friedman, Milton (1969): The Optimum Quantity of Money In The Optimum Quantity of Money and Other Essays. Chicago: Aldine Publishing Company.
Friedman, Milton and Anna Jacobson Schwartz (1963): A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press.
Goodfriend, Marvin and Robert King (1997): The New Neoclassical Synthesis and the Role of Monetary Policy. In: Ben S. Bernanke and Julio J. Rotemberg, Eds. NBER Macroeconomics Annual 1997. Cambridge: MIT Press.
Ireland, Peter N. (2003a): Implementing the Friedman Rule, Review of Economic Dynamics, vol. 6, pages 120-134, January.
Ireland, Peter N. (2003b): Endogenous Money or Sticky Prices?, Journal of Monetary Economics, vol. 50, pages 1623-1648, November.
Ireland, Peter N. (2004): Money’s Role in the Monetary Business Cycle, Journal of Money, Credit, and Banking, vol. 36, pages 969-983, December.
Ireland, Peter N. (2005): The Liquidity Trap, the Real Balance Effect, and the Friedman Rule, International Economic Review, vol. 46, pages 1271-1301, November.
Krugman, Paul R. (1998): It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap, Brookings Papers on Economic Activity, pages 137-187.
Kydland, Finn E. and Edward C. Prescott (1982): Time To Build and Aggregate Fluctuations, Econometrica, vol. 50, pages 1345-1370, November.
Leeper, Eric M., Christopher A. Sims, and Tao Zha (1996): What Does Monetary Policy Do?, Brookings Papers on Economic Activity, pages 1-63.
Tobin, James (1970): Money and Income: Post Hoc Ergo Propter Hoc?, Quarterly Journal of Economics, vol. 84, pages 301-317, May.
Uhlig, Harald (2000): Should We Be Afraid of Friedman’s Rule?, Journal of the Japanese and International Economies, vol. 14, pages 261-303, December.
Wilson, Charles (1979): An Infinite Horizon Model with Money. In: Jerry R. Green and Jose Alexandre Scheinkman, Eds. General Equilibrium, Growth, and Trade: Essays in Honor of Lionel McKenzie. New York: Academic Press.
Woodford, Michael (2003): Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press.

Volume 6, Issue 2, April 2005

Ellen McGrattan on Business Cycle Accounting and Stock Market Valuation

Ellen McGrattan is Monetary Advisor at the Research Department of the Federal Reserve Bank of Minneapolis and Adjunct Professor of Economics at the University of Minnesota. She has worked on a large number of topics, such as business cycles, equity premiums, optimal debt and solution methods. McGrattan’s RePEc/IDEAS entry.

EconomicDynamics: With V.V. Chari and Pat Kehoe, you show that the driving forces in business cycle models are well summarized by efficiency, labor and investment wedges. Using the Great Depression and the 1982 recession in the US, you argue that investment wedges are not relevant. In work with Prescott, you demonstrate that the recent stock market boom can be traced to changes in dividend taxation. Are these two results not contradictory?
Ellen McGrattan: No, they are not contradictory. But to explain that requires some background. In ‘Business Cycle Accounting’, we propose a methodology—one that is, in my opinion, much better than the SVAR methodology—to isolate promising classes of business cycle theories. There are two parts. The first is to show that a large class of models is equivalent to a prototype growth model with time-varying “wedges” resembling time-varying productivity, labor income tax rates, investment tax rates, and government consumption. The second is the accounting part: measure the wedges using data and feed them into the prototype growth model to determine their contributions to aggregate fluctuations. We find that the investment “wedge” (which looks just like a time-varying tax rate on investment) does not contribute significantly to aggregate fluctuations. Therefore, models in which frictions manifest themselves as investment wedges are not promising for studying business cycles. These include models with credit market frictions such as Bernanke and Gertler (1989). In the paper with Ed entitled ‘Taxes, Regulations, and the Value of US and UK Corporations’, we consider the dramatic secular changes in the value of US and UK corporate equities that occurred between the 1960s and 1990s, when there was little change in corporate capital stocks, after-tax corporate earnings, or corporate net debt. In particular, we ask what growth theory predicts for equities given estimates of taxes and productive capital stocks. We made two innovations that are worth noting. The first was a method to estimate the value of corporate intangible capital, which is not included in measures of productive capital but adds to the value of corporations. Our estimate of intangible corporate capital is large, roughly two-thirds as big as tangible corporate capital.
The second innovation was to bring public finance back into finance and relate the large movements in equity values to large movements in the effective tax rate on corporate distributions (e.g., dividends). A key proposition is that a decline in the tax rate on corporate distributions implies a rise in stock values and (if revenues are rebated back) no change in the reproducible cost of capital. This is what we see in the data. Now let me go back to your question about possible inconsistencies between Chari-Kehoe-McGrattan and McGrattan-Prescott. One reason they are not inconsistent is that the key tax rate for MP is the tax rate on corporate distributions. The level of the tax rate on distributions does not enter the dynamic Euler equation; only its growth rate does, if it is time-varying. If the variation in tax rates quarter by quarter is not large, then the implied investment wedge in big downturns is relatively small and not particularly relevant for cyclical behavior. MP focus on secular change over 40 or 50 years.
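The logic of that proposition can be sketched with a stylized valuation equation (a deliberate simplification of the paper's framework; the notation here is mine): suppose the market value of corporate equity is

\[ V = (1 - \tau_d)\, K, \]

where $\tau_d$ is the effective tax rate on corporate distributions and $K$ is the reproducible cost of capital. Then a fall in $\tau_d$ raises $V$ proportionally through the $(1-\tau_d)$ term while leaving $K$ itself unchanged, which is exactly the pattern described above: higher equity values with no change in the reproducible cost of capital.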
ED: Again with V.V. Chari and Pat Kehoe, you have recently worked on sudden stops, showing that financial crises alone cannot trigger drops in output. In fact, such a crisis would increase output. What critical ingredient is our basic intuition missing here?
EM: The basic intuition of the paper is simple. Using the idea in 'Business Cycle Accounting,' we show an equivalence between equilibrium outcomes in a small open economy and a closed-economy growth model. A rise in net exports in the small open economy (a sudden stop) is equivalent to a rise in government spending in the closed economy. We know what happens when government spending goes up in the closed-economy model: output rises. Thus, we show that sudden stops, by themselves, do not lead to decreases in output. They lead to increases. To account for both sudden stops and output drops, one needs some other friction. I think what the literature has missed is that the sudden stop is not the primary shock but rather a symptom of domestic problems, bad policies, or distortions. If one treats it as the primary shock, the economy will look just like one that had a big increase in government spending (since government spending and net exports enter the resource constraint in the same way). Researchers should be thinking about the driving forces behind the sudden stops.
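The equivalence can be seen directly in the resource constraint (a stylized sketch; the notation is mine, not the paper's):

\[ c_t + x_t + g_t + nx_t = y_t. \]

Since net exports $nx_t$ and government spending $g_t$ enter identically, a sudden stop (a rise in $nx_t$) is isomorphic to a rise in $g_t$: both absorb output without adding to domestic consumption $c_t$ or investment $x_t$, which is why, absent any other friction, output rises rather than falls.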
ED: With your business cycle accounting procedure, you determine what proportion of output volatility can be accounted for by the various wedges. But is it fitting to call this business cycle accounting? We all learned that the business cycle is not just characterized by output volatility, but also by relative volatilities and comovements. In other words, isn't reducing all possible shocks and frictions to three or four wedges oversimplifying?
EM: We don't just look at output — we decompose labor and investment as well. And we are (in a revision) putting in details of relative volatilities and comovements because a referee was interested in comparing the results to other papers in the business cycle literature. Because the wedges are correlated, there are subtle issues about exactly how one should attribute total variances to each shock. We acknowledge that — but it is something one can't avoid. We do compare our realization-based accounting procedure to spectral decompositions. The two ways of accounting for the business cycle are both informative in my opinion.
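The attribution subtlety can be illustrated with a small simulation (purely illustrative numbers and loadings, not the paper's procedure): when two wedges are correlated, their "own" variance contributions leave behind a covariance term that can only be allocated by convention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two correlated "wedges" (think efficiency and labor wedges);
# the 0.6 correlation is purely illustrative
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
e, l = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Suppose (log) output loads linearly on both wedges (made-up loadings)
y = 1.0 * e + 0.5 * l

var_y = y.var()
var_e = (1.0 * e).var()            # "own" contribution of wedge e
var_l = (0.5 * l).var()            # "own" contribution of wedge l
cov_term = var_y - var_e - var_l   # cross term: 2 * 1.0 * 0.5 * Cov(e, l)

shares = np.array([var_e, var_l, cov_term]) / var_y
print(shares)  # the "own" shares do not sum to 1; the rest is covariance
```

In this run roughly a third of output variance sits in the covariance term, and any rule for splitting it between the two wedges is a convention — which is the sense in which attribution with correlated wedges is unavoidably subtle.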


Bernanke, Ben & Gertler, Mark, 1989. Agency Costs, Net Worth, and Business Fluctuations, American Economic Review, vol. 79(1), pages 14-31.
Chari, V. V., Patrick J. Kehoe and Ellen R. McGrattan, 2004. Business Cycle Accounting, Staff Report 328, Federal Reserve Bank of Minneapolis.
Chari, V. V., Patrick Kehoe and Ellen R. McGrattan, 2005. Sudden Stops and Output Drops, American Economic Review Papers and Proceedings, forthcoming.
McGrattan, Ellen R., and Edward C. Prescott, 2000. Is the stock market overvalued?, Quarterly Review, Federal Reserve Bank of Minneapolis, pages 20-40.
McGrattan, Ellen R., and Edward C. Prescott, 2005. Taxes, Regulations, and the Value of U.S. and U.K. Corporations, Review of Economic Studies, forthcoming.
Volume 6, Issue 1, November 2004

Q&A: Thomas Holmes on Dynamic Economic Geography

Thomas Holmes is Professor at the Department of Economics of the University of Minnesota. He has recently been working on the spatial distribution of economic activity as well as basic issues in the organization of production. Holmes’ RePEc/IDEAS entry.
EconomicDynamics: In your RED article, you demonstrate that if an industry is situated at an inefficient location through accidents of history, it will eventually migrate to efficient locations. Does this result apply to globalization, which should thus be regarded as inevitable? And why is the modeling of dynamics so important in this regard?
Thomas Holmes: I have to confess that my result in the RED article, "Step-by-Step Migrations," wouldn't usually apply to globalization. It is more about the migration of industry within a country, or even a region within a country. But I am glad you asked the question, because it is a great one for clarifying exactly what my result does cover. And it gives me a chance to plug some related work! A large literature, e.g., Paul Krugman, Brian Arthur, and others, has emphasized that when agglomeration economies are important, industries can get "stuck" in an inefficient location. No individual firm is unilaterally willing to leave for the better location because of the "glue" of agglomeration benefits. In other words, there is a coordination failure. My model differs from the previous literature in that rather than being forced to take a big discrete "jump" to some new location and forgo all agglomeration benefits, a firm can take a small "step" in the direction of the new location. With a small step, the firm retains some of the agglomeration benefits of the old location, but also begins to enjoy some of the advantages of the new location. I show that industries never get stuck at locations that are inefficient from the perspective of local optimality, and I present a condition under which migration rates are efficient. As an application of the theory, I discuss the migration of the automobile industry in the U.S. from Michigan to the South. It is clear that this industry has moved in a step-by-step fashion. So we see that this specific model doesn't apply to globalization. If the textile industry is in North Carolina, and the efficient location is Africa, by taking a small step in this direction, firms would have to set up shop in the Atlantic Ocean! Nevertheless, the broad idea of the project that migrations can take place in a step-by-step fashion does apply to the issue of globalization.
In a 1999 Journal of Urban Economics article, I observe that an industry like textiles has many different kinds of products. Agglomeration benefits are not so important for the production of coarse cloth, but they are important for advanced textiles. Low-end products tend to migrate first, but these then set up a base of agglomeration benefits that may draw in medium-level products, which in turn may draw in the next level of products, and so on. The larger point of this set of papers is that the attractive force of production efficiency can be powerful even when agglomeration forces are important. There will usually be somebody who will be drawn in by the production advantages of a better location, and the first migration makes the second one easier. The modeling of dynamics in this analysis is crucial, since step-by-step migrations are inherently a dynamic phenomenon.
ED: Modern Macroeconomics is based on strong microfoundations, yet one of its essential components, the production function, is still a black box. Your recent work with Matt Mitchell is looking at the interaction of skilled work, unskilled work and capital. What should a macroeconomist used to a Cobb-Douglas representative production function retain from this work?
TH: One goal of this project is precisely to get into the black box of the production function. Important recent work, such as Krusell, Ohanian, Ríos-Rull, and Violante (2000), utilizes capital-skill complementarity properties of the production function to explain phenomena such as changes in the skill premium. But there is little in the way of micro-foundations of the production function that delivers capital-skill complementarity. My recent work with Matt Mitchell develops such a micro model of production. The central idea of our model is that unskilled labor relates to capital in the same way that skilled labor relates to unskilled. Unskilled labor has general ability in the performance of mechanical tasks. An unskilled worker can easily switch from tightening a bolt to picking up a paint brush to emptying the trash. It may be possible to substitute a machine to undertake any one of these tasks, but this would generally require an upfront investment, a fixed cost, to design a machine specific to this task. In an analogous way, skilled labor has general ability in the performance of mental tasks. A worker with a degree in engineering can be put in charge of a production line and has the general knowledge to make appropriate decisions when unexpected problems arise. Alternatively, a production process may be redesigned and routinized to reduce the amount of uncertainty, making it possible for an unskilled worker to run the production line instead. In this analysis, the scale of production is crucial for determining how to allocate tasks. For small-scale production, e.g., for a prototype model, unskilled labor will do the mechanical tasks and skilled labor will manage the production line. For large-scale production, capital will do the mechanical tasks and unskilled labor will manage the production line. We use our model to (1) provide micro foundations for capital-skill complementarity, (2) provide a theory for how factor composition (e.g. 
capital intensity and skilled worker share) varies with plant size, and (3) provide a theory of how the expansion of markets through increased trade affects the skill premium. Our theory is consistent with certain facts about factor allocation and factor price changes in the 19th and 20th centuries. Since you brought up Cobb-Douglas, a good question is whether or not our theory can provide micro foundations for Cobb-Douglas, analogous to Houthakker. The answer is no. Not only is our aggregate production function not Cobb-Douglas, it isn't even homothetic. In fact, one of the key points of our paper is that a proportionate increase of all factors of production (which is what happens when two similar countries begin to trade) can change relative factor prices. I know it's an uphill battle to wean macroeconomists off the Cobb-Douglas production function. Macroeconomists love it not just for its tractability but also for the constancy of the capital share. But if macroeconomists want to understand phenomena like changes in the skill premium, I believe they have to get out of a Cobb-Douglas world. At this point, my model with Matt is too primitive to be suitable for a quantitative macro analysis. But I believe that there is potential for a next generation of models in this class to be useful for quantitative analysis.


Holmes, Thomas J. (1999): How Industries Migrate When Agglomeration Economies Are Important, Journal of Urban Economics, vol. 45, pages 240-263.
Holmes, Thomas J. (2004): Step-by-step Migrations, Review of Economic Dynamics, vol. 7, pages 52-68.
Holmes, Thomas J. and Matthew F. Mitchell (2003): A Theory of Factor Allocation and Plant Size, Federal Reserve Bank of Minneapolis Staff Report 325.
H. S. Houthakker (1955): The Pareto Distribution and the Cobb-Douglas Production Function in Activity Analysis, The Review of Economic Studies, vol. 23, pages 27-31.
Krusell, Per, Lee Ohanian, José-Víctor Ríos-Rull, and Giovanni Violante (2000): Capital-Skill Complementarity and Inequality: A Macroeconomic Analysis, Econometrica, vol. 68, pages 1029-53.
Volume 5, Issue 2, April 2004

Q&A: Vincenzo Quadrini on Firm Dynamics

Vincenzo Quadrini is Associate Professor of Economics at the Marshall School of Business, University of Southern California. His field of research is Entrepreneurship, Financial Economics and Macroeconomics. Quadrini’s RePEc/IDEAS entry.
EconomicDynamics: In recent work with Claudio Michelacci, you started exploring the relationship between firm size and wages. What new insight have you obtained?
Vincenzo Quadrini: A well-known stylized fact in labor economics is that large firms pay higher wages than small firms. There are several studies in the theoretical literature that try to explain this fact. However, none of the existing studies investigates the role of financial constraints. In the joint work with Claudio Michelacci, we ask whether financial factors can contribute to explaining the dependence of wages on the size of the employer. Our interest in understanding the importance of financial factors for the firm size-wage relation is motivated by a set of regularities about the link between the financial characteristics of firms and their size. The view that emerges from the financial literature is that smaller firms face tighter constraints. It is natural then to ask whether the dependence of wages on the size of the firm could derive from the tighter financial constraints faced by small firms. We study a model in which firms sign optimal long-term contracts with workers. Due to limited enforceability, external investors are willing to finance the firm only against collateralized capital. If the investment financed with external investors is limited (that is, the firm is financially constrained), the optimal wage contract offered by the firm to its workers is characterized by an increasing wage profile. By paying lower wages today, the firm is able to generate higher cash flows in the current period, which relaxes the tightness of the financial constraints. Because firms with tighter constraints operate at a sub-optimal scale, which they gradually expand until they become unconstrained, small firms pay on average lower wages than large firms. At the same time, because constrained firms are the ones that grow in size, the model also captures the empirical regularity that fast-growing firms pay lower wages.
Through a calibration exercise, we show that the model can generate a firm size-wage relation comparable in magnitude to the estimates obtained in the empirical literature. The increasing wage profile raises the question of whether the firm may have an incentive to renegotiate the wage contract in later periods. The key modeling feature that prevents the firm from renegotiating the contract is the loss of sunk investment if the worker quits. This investment could derive from recruiting costs or training expenses that enhance the job-specific human capital of the worker. The firm's loss of valuable investment endows the worker with a punishment tool that is not available to external investors. This allows the firm to implicitly borrow from workers beyond what it can borrow from external investors.
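The backloading logic can be illustrated with a stylized two-period example (my simplification, not the paper's actual contract). Suppose the financing constraint binds in period 1 with multiplier $\mu > 0$, so the firm values period-1 cash at $1+\mu$, while the worker only requires lifetime wages of at least $\bar{W}$:

\[ \max_{w_1, w_2}\; (1+\mu)(y_1 - w_1) + \beta (y_2 - w_2) \quad \text{s.t.} \quad w_1 + \beta w_2 \ge \bar{W}. \]

Lowering $w_1$ by $\varepsilon$ and raising $w_2$ by $\varepsilon/\beta$ keeps the worker whole but raises the firm's objective by $\mu \varepsilon > 0$, so the optimal contract backloads wages — an increasing wage profile — until some enforcement limit (such as the worker's option to quit) binds.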
ED: You show that the liquidation of a firm in a long-term contract may occur even when the contract is renegotiated, and sometimes only when the contract is renegotiated. What does this imply for the design of corporate law, and in particular bankruptcy law?
VQ: A well-established result in models with agency problems is that more stringent punishments can support superior allocations. However, punishments may be time-inconsistent in the sense of being ex-post inefficient. For instance, in a financial relation between an investor and an entrepreneur characterized by information asymmetries, the threat of liquidation may be ex-ante optimal because it induces the desired action from the entrepreneur. However, after the entrepreneur's action has been taken and the firm's outcome observed, it may be inefficient to liquidate the firm. This implies that the liquidation threat is not credible and will be renegotiated. This raises the question of whether an optimal financial contract would ever lead to the liquidation of a viable firm. In "Investment and Liquidation in Renegotiation-Proof Contracts with Moral Hazard" I show that the firm can still be liquidated in an optimal contract even if we impose the renegotiation refinement. In general, allowing for the renegotiation of contracts reduces allocative efficiency because it makes it more difficult to create incentives. What does this imply for the design of corporate and bankruptcy law? In principle, there are some important policy recommendations. The time-consistency problem outlined above would be avoided if the contractual parties could commit not to renegotiate the contract at future dates. If the policy maker could legally prevent the renegotiation of private contracts, the problem would also be avoided. This requires that when a contract prescribes the liquidation of the firm, the firm should be legally forced to exit and liquidate its assets. While this may seem paradoxical, it is what the theory suggests. Of course, the time-consistency problem can also arise for the policy maker. When a firm would otherwise be inefficiently liquidated and the parties would have been willing to renegotiate, the policy maker would also prefer that the parties renegotiate.
However, changing the current rules may undermine the credibility of all existing contracts. In other words, reputation considerations may limit the regulator's incentive to make exceptions. These policy considerations seem at odds with standard bankruptcy laws. One of the main goals of bankruptcy law is to facilitate the renegotiation of contracts in order to prevent default or liquidation. However, although renegotiation may be ex-post desirable in the event of financial distress, it is never optimal ex-ante. This conclusion also holds in the international context, that is, for borrowing and lending countries. The creation of an effective enforcement system, however, is extremely difficult in the international context.
ED: In your work with Charles Himmelberg, you show that entrepreneurs in young firms should be compensated with options, but that options should be less and less used as firms mature and grow. CEO compensation is the subject of intense criticism these days in the US, partly due to large option packages. Does this mean that current CEO compensations are inefficient?
VQ: Of course, if we believe that the recent corporate scandals were caused by the compensation structure of managers, then there must be something wrong with this structure. It is true that recent managerial compensation has been dominated by generous stock options and/or stock grants. However, this does not mean that options are not useful for creating the right managerial incentives. The recent corporate scandals only show that the options offered to managers were not well designed. And probably this was caused by a misunderstanding of the fundamental agency problems between managers and investors. In the type of models I have been working with, the agency problems derive from the non-observability of the manager's use of the firm's resources. In these models, the incentive for the manager is to under-report the firm's performance because this allows him or her to divert some of the firm's resources. Consequently, to prevent the manager from under-reporting, his or her compensation must rise when the performance of the firm is good. One way to achieve this outcome is with the use of stock options. But if the rewards for good outcomes are excessive, then the manager starts to have the opposite incentive, that is, to inflate the firm's performance. In this sense, the problem with recent managerial compensation does not derive from the use of options per se, but from their "excessive" use.


Himmelberg, Charles, and Vincenzo Quadrini (2002): Optimal Financial Contracts and the Dynamics of Insider Ownership. Manuscript, New York University.
Michelacci, Claudio, and Vincenzo Quadrini (2004): Financial Markets and Wages, Manuscript, CEMFI.
Quadrini, Vincenzo (2004): Investment and Liquidation in Renegotiation-Proof Contracts with Moral Hazard. Journal of Monetary Economics, vol. 51, pages 713-751.

Volume 5, Issue 2, April 2004

Patrick Kehoe on Whether Price Rigidities Matter for Business Cycle Analysis

Patrick Kehoe is Monetary Adviser at the Research Department of the Federal Reserve Bank of Minneapolis and the Frenzel Professor of International Economics at the University of Minnesota. His interests span the study of international business cycles as well as monetary and fiscal policy. Kehoe’s RePEc/IDEAS entry.

Let me take this question in several parts. First, are there any price rigidities? Well, in the data, the prices of many individual goods stay fixed for weeks or even months even though there are high frequency fluctuations in demand and supply. So it looks like there are definitely some sort of rigidities.

Second, do these rigidities play an important role in determining the magnitude and persistence of deviations from trend of the major aggregates? I think a fair answer has to be that the jury is still out. There are several uphill battles, both empirical and theoretical, that need to be won before anyone can claim to have demonstrated that price rigidities are important. Let me turn to these challenges.

Challenges for sticky price enthusiasts

A. Empirical: Observed price stickiness is short

On the empirical side, Bils and Klenow (2003) and Klenow and Kryvtsov (2003) have dug up some interesting BLS data on individual goods prices showing that a key feature of the data is

  • The average time between price changes is relatively short, about 4 months
B. Theoretical: Existing sticky price and sticky wage models don’t generate enough persistence

On the theoretical side there is one key challenge:

  • Despite any claims to the contrary, existing models of sticky prices cannot generate anywhere near the level of persistence in output seen in the data.

The increasingly popular New Keynesian paradigm takes a Dixit-Stiglitz-Spence monopolistic competition framework and embeds some type of sticky prices. Chari, Kehoe, and McGrattan (2000, 2002) take that paradigm and add to it Taylor-type overlapping price contracts. We show that, with parameters chosen so that the average time between price changes matches that in the Bils and Klenow data, the model delivers much less persistence in output than is observed in the data. This lack of persistence holds up even when we incorporate several features designed to increase persistence, so-called real rigidities, such as convex demand, intermediate goods, specific factors, and adjustment costs of various kinds.

The new work by Golosov and Lucas (2003) raises an even greater challenge for sticky price models. Their work is motivated by some of the empirical work of Klenow and Kryvtsov, who find the following at the level of an individual retailer, like a store.

  • The average size of price increases in the data is about 9%.

Likewise, the average size of price decreases is a little over 8%. To interpret this feature of the data, recall that the average time between price changes is about 4 months and that over this time the average inflation rate is less than 1%. The price changes seem enormous relative to what one would expect if prices were changed mainly because of money shocks. Hence, it seems fair to say that the bulk of the large price changes in the data are driven by idiosyncratic factors at the individual level that have little to do with monetary policy.

This type of reasoning motivates Golosov and Lucas to construct a model with fixed costs of changing prices, large idiosyncratic shocks at the level of the individual retailer, and aggregate monetary shocks. They then compare the persistence of output in their state-dependent pricing model to that of a Calvo-type model with an exogenously specified timing of price changes, the kind so popular in the literature. They find that they get about 1/5th of the amount of persistence that a Calvo model does with the same average time between price changes.

The intuition for this result is that in their state-dependent model, when a money shock hits, the retailers that endogenously choose to change their prices are the ones whose prices are the most out of whack. Thus, the retailers that don't change their prices are the ones for whom not changing is not a big deal in terms of output loss anyway. Hence, the model generates very little persistence. The obvious conjecture is that even if we try to load up the model with all kinds of bells and whistles that go under the rubric of real rigidities, the 1/5th rule will approximately hold up. That is, once the sticky price literature squarely confronts the micro data on price changes, replacing Calvo pricing with a fixed-cost model, it will find that the persistence of output is cut by a factor of 5. I would also conjecture that if researchers forthrightly confront this problem, they will give up on the current strand of sticky price models.
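The selection effect can be illustrated with a deliberately minimal simulation (my own toy numbers and a uniform gap distribution, not the Golosov-Lucas model): after a money shock, the state-dependent adjusters are exactly the firms with the largest price gaps, so the aggregate price level absorbs far more of the shock than under Calvo at the same adjustment frequency, leaving much less real persistence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Stationary cross-section of price gaps from idiosyncratic shocks;
# a uniform distribution on the Ss band is a rough illustrative stand-in
s = 0.10                      # half-width of the inaction band (assumed)
gap = rng.uniform(-s, s, n)   # gap = p_i - p*_i before the money shock

delta = 0.01                  # money shock: all desired prices rise by 1%
gap_after = gap - delta       # the shock widens negative gaps

# State-dependent (menu-cost) adjustment: only firms pushed outside the band move
adjust_sdp = np.abs(gap_after) > s
freq = adjust_sdp.mean()      # realized adjustment frequency

# Price-level response: adjusters close their gap entirely
dp_sdp = -(gap_after[adjust_sdp]).sum() / n

# Calvo benchmark: the same fraction adjusts, but selected at random
adjust_calvo = rng.random(n) < freq
dp_calvo = -(gap_after[adjust_calvo]).sum() / n

print(dp_sdp / delta, dp_calvo / delta)
```

In this run the state-dependent price level absorbs roughly half of the 1% money shock on impact, versus only about 5% under the Calvo benchmark at the same frequency — an order-of-magnitude gap in the spirit of the 1/5th result, though the exact ratio here depends entirely on the assumed band and shock size.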

C. Doubtful Claims of Success

There is some recent work that claims that existing sticky price and sticky wage models can generate as much persistence as there is in the data. I take issue with these claims. My issues can all be summarized as follows: the literature that claims success lowers the bar from 10 feet to 2 feet, jumps over it, and then declares victory. Here is how the bar is lowered.

The retreat to VARs

The reason macroeconomists are interested in nominal rigidities in the first place is that many think they play a key role over the business cycle in determining how the economy reacts to nominal shocks. (Here by nominal shocks I mean something a little broader than what people typically mean. I mean both the epsilons on the end of estimated policy rules and policy mistakes, defined as the difference between the observed systematic component of policy and what would be the optimal systematic component of policy given a model.) Many of the sticky price enthusiasts have retreated from even claiming that their models can account for the overall business cycle patterns, and instead focus on the rather tiny component of the business cycle identified by a VAR as attributable to the epsilon shocks on an estimated policy rule. (A notable exception to this pattern is the work by Bordo, Erceg and Evans 2000.) Thus, these macroeconomists replace the very interesting question of how much of the business cycle monetary shocks can account for in the presence of price rigidities with the much less interesting question of whether our model can reproduce the impulse response to a blip in the epsilon on the end of an estimated monetary policy rule. To drive this point home, note that if you plot the last hundred years of detrended GDP for the United States, one event outshines all others: the Great Depression. All postwar business cycles look like little blips compared to the Great Depression. Almost all economists who have studied the Great Depression argue that broadly defined monetary shocks played a critical role in generating the depth and the length of the Depression. Indeed, one of the main forces spurring the Keynesian revolution was the perceived inadequacy of the simplest classical models without frictions to generate such a depression. The sticky price enthusiasts seem to shy away from building serious quantitative models that can generate the Great Depression. 
From the point of view of business cycle theorists, retreating to VARs and ignoring the Great Depression amounts to saying that the game you want me to play is too hard, so I am going to take my marbles and go home.

Making prices and wages stickier than they are in the data

Part of this work simply makes prices much stickier than they are in the data (3 quarters instead of 1 quarter) and shows that these models generate about 3 times as much persistence as the existing models with one quarter. Somehow this work misses the point. Another part of this work claims that sticky wage models can generate much more persistence than sticky price models. That is not really true. Chari, Kehoe and McGrattan (2003) showed that, for the same degree of exogenous stickiness, sticky wage models and sticky price models generate similar degrees of persistence. What the literature on sticky wages actually does is simply increase the degree of exogenous stickiness for wages to 3 times that for prices and argue that sticky wages generate more persistence. (See, for example, Bordo, Erceg and Evans (2000) and Christiano, Eichenbaum and Evans (2003).) The logic for having long exogenous stickiness for wages, with labor contracts determined in a spot market, is that many people receive only yearly changes in their nominal wages.

Punting on measuring stickiness in long-term relationships

The deeper problem with this literature is the idea that actual wage contracts in the U.S. economy are well approximated by a sequence of spot market transactions. For example, in our profession most assistant professors have fairly stable wages for about 7 years, and then their wages jump up by varying degrees at tenure time. All assistants understand that if they work harder during those 7 years, the probability distribution over their post-tenure wages improves. To an outside observer who naively models the situation as a sequence of spot-market trades, it will look like wages are sticky. As was pointed out over 20 years ago, in a long-term relationship the fact that changes to payment streams occur in lumps in no way restricts the ability of the contract to achieve real outcomes. Before this literature can make any progress, the issue of how best to use data to shed light on whether wages in long-term relationships are indeed sticky needs to be addressed.

D. Summary

In sum, I think the following. The serious work on incorporating interesting price rigidities into serious dynamic stochastic equilibrium models capable of confronting the data is still in its infancy. Price rigidities may turn out to be important, but the current models we have for addressing them do not seem very promising quantitatively. Currently, I see a large number of economists writing papers that take the existing sticky price models as they stand and try to use them to address a number of issues, especially policy issues. I think that this is not a productive use of time. A better use of time for the sticky price enthusiasts is to go back to the drawing board and dream up another version of the model that has a chance at generating the patterns observed in the Great Depression. Doing so may be difficult, but the payoff is worth it.


Bils, Mark and Peter J. Klenow (2002): Some Evidence on the Importance of Sticky Prices, NBER Working Paper 9069.
Bordo, Michael D., Christopher J. Erceg and Charles L. Evans (2000): Money, Sticky Wages and the Great Depression, American Economic Review, vol. 90, pages 1447-1463.
Chari, V.V., Patrick J. Kehoe and Ellen McGrattan (2000): Sticky Price Models of the Business Cycle: Can the Contract Multiplier Solve the Persistence Problem? Econometrica, vol. 68, pages 1151-1180.
Chari, V.V., Patrick J. Kehoe and Ellen McGrattan (2002): Can Sticky Price Models Generate Volatile and Persistent Exchange Rates? Review of Economic Studies, vol. 69, pages 533-563.
Christiano, Lawrence, Martin Eichenbaum and Charles Evans (2003): Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy, Journal of Political Economy, forthcoming.
Golosov, Mikhail and Robert E. Lucas Jr. (2003): Menu Costs and Phillips Curves, NBER Working Paper 10187.
Klenow, Peter J. and Oleksiy Kryvtsov (2003): State Dependent Pricing or Time-Dependent Pricing: Does it Matter for Recent U.S. Inflation? Manuscript, Federal Reserve Bank of Minneapolis.
Volume 5, Issue 1, November 2003

Q&A: Jordi Galí on Price Rigidities

Jordi Galí is Director of the Centre de Recerca en Economia Internacional (CREI), and Professor at Universitat Pompeu Fabra. His recent research has focused on the analysis of the interaction of monetary policy with various shocks and its role in determining the real and nominal features of the business cycle. Galí’s RePEc/IDEAS entry.
EconomicDynamics: Do price rigidities matter for business cycles?
Jordi Galí: I would say that nominal rigidities, understood as less-than-fully-flexible prices and wages, are an important element of any realistic theory of the business cycle. It is hard to make sense of the key role played by central banks in modern economies unless one recognizes the presence of some nominal frictions. Such frictions make it possible for changes in the short-term nominal interest rate to have an effect on intertemporal relative prices, and hence on consumption, investment and output decisions. There exist frictions other than nominal rigidities that can in principle account for the non-neutrality of money, but none of them seems to match the existing evidence as well as the presence of nominal rigidities of some sort.

This is very different from saying that nominal rigidities play an important role, always and everywhere, in determining the observed properties of the business cycle. Instead, I would say that monetary factors and, most importantly, monetary policy can potentially have that role. Somewhat ironically, however, many of our models imply that the optimal monetary policy is one that seeks to replicate the equilibrium allocations generated by a frictionless RBC model. If the central bank follows that policy, the economy may end up behaving like one that did not have any nominal rigidities. A recent paper of mine with López-Salido and Vallés argues that, far from being a theoretical curiosity, something close to this may have actually happened in the US during the Volcker-Greenspan era. Of course, the equivalence is only observational (not structural), and conditional on the monetary policy in place. Things may be very different in the future if the monetary regime changes, precisely because of the presence of nominal rigidities (possibly exacerbated by a period of low and steady inflation). By contrast, someone who takes RBC or flexible price models seriously should not be too concerned about who may end up replacing Greenspan.
ED: You imply that there is currently something of an observational equivalence between models with flexible and rigid prices. This implies that we should be able to find some independent evidence of substantial nominal price rigidity. How would you convince those who doubt it?
JG: I would point to three different sources of evidence. First, direct evidence on price and wage setting based on micro-level data. That research, surveyed by John Taylor in his macro handbook article, indicates the existence of substantial rigidity in individual prices and wages (reflected in long spells without adjustment) as well as a lack of synchronization of adjustments. It also points to a great deal of heterogeneity across goods and sectors (groceries vs. magazines), as Bils and Klenow have highlighted in their recent work. Unfortunately, the latter feature is not easy to incorporate in our models.

A second source of evidence is more indirect, and requires a bit more structure. But the basic argument can be summarized as follows. Inflation (and its changes) is the result of firms resetting their prices at each point in time. If firms find it too costly to adjust prices continuously, they will tend to do so to a larger extent when their markups are more out of line relative to their desired markups. But if that is the case, one should detect some negative relationship between economy-wide measures of markups and inflation. In recent work by Mark Gertler and myself, as well as in that of Argia Sbordone, we uncover such a relationship in the data, and show that the cross-correlations between those variables are largely consistent with price-setting being forward-looking (a property that should hold in any model with price rigidities and profit-maximizing firms).

A third piece of evidence on the extent of nominal rigidities (and its practical relevance) is the one provided by Mussa and others, pointing to large differences in the behavior of real exchange rates across countries and historical periods characterized by different nominal exchange rate regimes. It is just very hard to look at a time series plot of the changes in nominal and real exchange rates between Germany and Italy throughout the postwar period without concluding that aggregate prices must display a lot of inertia.
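The logic of the second, indirect test can be illustrated with a toy computation. Both series below are fabricated purely for illustration; nothing here comes from the Galí-Gertler or Sbordone data:

```python
# Toy check of the argument: if firms reset prices mainly when their
# markups are out of line, economy-wide markups and inflation should
# be negatively correlated. Both series are made up for illustration.

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

markup    = [1.20, 1.15, 1.10, 1.18, 1.25, 1.12]  # economy-wide average markup
inflation = [0.01, 0.03, 0.04, 0.02, 0.00, 0.03]  # inflation in the same periods
print(corr(markup, inflation) < 0)  # True: negative comovement in this toy sample
```

The empirical work Galí describes goes further, examining the full set of cross-correlations at leads and lags to check the forward-looking property, but the contemporaneous sign is the core of the argument.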
ED: If there is evidence of price rigidities, this raises the question of why they exist. Menu costs have less support than before because they are too small to generate the large rigidities you describe. How, then, could we rationalize price rigidities?
JG: It is not obvious that menu costs, when understood in a broad sense (i.e., including the costs of re-computing optimal prices, conveying the information to the sales force, advertising, etc.), are as small as your question seems to imply. Furthermore, those costs have to be compared to the profits that are foregone by not adjusting prices continuously. To the extent that marginal costs are stable at the firm level (largely because of sluggish wage adjustment and infrequent technical change) and close competitors do not adjust prices frequently, I do not see where those large foregone profits might come from. Some economists have argued that the observation of frequent price changes associated with weekend or one-day sales (or similar) is bad news for sticky price models. While that evidence may conflict with a narrow interpretation of menu costs, it may not be that relevant for the sort of nominal rigidities that matter at business cycle frequencies: quite often those frequent price adjustments are easier to interpret as price discrimination devices, rather than as price changes that mirror on a one-to-one basis any changes in marginal costs, as standard models with fully flexible prices would predict.


Bils, Mark, and Pete Klenow (2002): Some Evidence on the Importance of Sticky Prices, NBER working paper 9069.
Galí, Jordi, David López-Salido and Javier Vallés (2003): Technology Shocks and Monetary Policy: Assessing the Fed’s Performance, Journal of Monetary Economics, vol. 50, pages 723-743.
Galí, Jordi, and Mark Gertler (1999): Inflation Dynamics: A Structural Econometric Analysis, Journal of Monetary Economics, vol. 44, pages 195-222.
Mussa, Michael (1986): Nominal Exchange Rate Regimes and the Behavior of Real Exchange Rates: Evidence and Implications, Carnegie-Rochester Series on Public Policy, vol. 25, pages 117-214.
Sbordone, Argia (2002): Prices and Unit Labor Costs: A New Test of Price Stickiness, Journal of Monetary Economics, vol. 49, pages 265-292.
Taylor, John (1999): Staggered Price and Wage Setting in Macroeconomics, in: John Taylor and Michael Woodford (eds.), Handbook of Macroeconomics, North-Holland, pages 1009-1050.
Volume 5, Issue 1, November 2003

Thomas Cooley

Thomas Cooley is past president of the Society for Economic Dynamics, past editor of the Review of Economic Dynamics, and Dean of the Stern School of Business at New York University. The questions were prepared by Jeremy Greenwood (University of Rochester).
JG: You are the founding Editor of the Review of Economic Dynamics. What motivated you to start the Review, such an enormous undertaking?
TC: The idea of creating the Review of Economic Dynamics actually germinated several years before the first issue appeared in January of 1998. The historical context is kind of important. In the 1970s, a group of systems control engineers and economists began to hold meetings to discuss research problems in which they had a common interest and for which they used similar methods. Their view was that control theory could be used to design better economic policies. The conceit was to think that, if they could send rockets around the moon and back, they could do the same with economies. This led to the formation of the Society for Economic Dynamics and Control and eventually to a journal of the same name that was wholly owned by the publisher, Elsevier. Over time the value of this collaboration between disciplines diminished, as did the interest in the annual meetings of that Society. Eventually Tom Sargent was induced to take over as president of the Society, and he organized a series of annual meetings that were more focused on modern macroeconomics; he also revived interest in the journal. Roger Craine did a superb job of editing the JEDC and attracting good papers. But it was still the case that the journal was intended to serve two audiences: economists and control theory types. Because of that it lacked coherence, and it was the view of Tom, and of Ed Prescott, who succeeded Tom as President, that the journal would never be recognized as a top tier publication unless it had editorial coherence. We had discussions with Elsevier about the direction of the journal and about the Society having more editorial control.
I was the designated intermediary in these discussions and it fell to me to articulate what our vision for the journal would be. It eventually became clear that having a journal that answered the needs of the growing ranks of outstanding economists who were engaged with the Society, and that had a chance to be first rate, meant having our own publication and having editorial control. When we decided on the structure of the journal and the governance procedures, our goal was to ensure editorial vitality. We did not want to fall into the trap of having the editors serve for decades. Our model was the Review of Economic Studies, which was originally created to provide an outlet for rising young scholars who felt blocked out of the premier journals. Starting a journal from scratch is a lot of work, and negotiating with publishers and putting it all together took a lot of time. And editing a journal takes a lot of time. But it was also rewarding to create something of lasting value to the profession, and I feel pride every time a new issue arrives in my mailbox. I think it is a pretty good journal!
JG: You were a graduate student at Penn in the late 60’s and early 70’s. In your career you have witnessed the rise of the rational expectations hypothesis, equilibrium modeling, and quantitative theory. What do you see in the future for macroeconomics?
TC: Your second question forced me to stop and think a bit about how the landscape has changed. One thing that is very clear is that the big questions macroeconomists address ("why are some countries rich and others not, why does our economic well-being fluctuate over time?") are the same. Nevertheless, economics is far more unified now than it was when I started, because of the developments of the last twenty or thirty years. There is more agreement on what constitutes valid scientific method and formal reasoning, and because of that the scope of the questions that an economist like you or I might tackle has expanded greatly. Public finance, industrial organization, labor economics, urban economics are all fair game for a well trained economist equipped with the methods of dynamic general equilibrium theory.

That said, when you look at the broader picture of what is taking place in macroeconomic research these days, it is that we are finally making progress at doing the things that economists started to talk about in the 1960s: providing microeconomic foundations for our understanding of macroeconomic issues. This is happening because we are better able to deal with heterogeneous agents and firms in our dynamic models. Understanding how those things aggregate to observed phenomena is very revealing and will only become more so over time. It may well be, too, that the current fascination with behavioral economics and the puzzles that it uncovers will be addressed by thinking about heterogeneity more broadly.
Volume 4, Issue 2, April 2003

Q&A: Narayana Kocherlakota

Narayana Kocherlakota is Professor of Economics at Stanford University. He works on optimal taxation, social insurance, and the micro-foundations of money. Kocherlakota’s RePEc/IDEAS entry.
EconomicDynamics Newsletter: Your recent research on optimal unemployment insurance and optimal capital taxation shows that small differences in the information structure can have dramatic impacts on the optimal design of these institutions. Does this not make it an impossible task for a policy maker to design a sensible policy?
Narayana Kocherlakota: My new paper on optimal unemployment insurance, and my RESTUD (2001) paper with Harold Cole, both show that the nature of optimal social insurance changes dramatically if people can save secretly. But I don’t view this change as being “small”. Think about the government’s costs of monitoring savings. When savings are observable, these costs are zero. When savings are unobservable, these costs are infinite. In this sense, the change is big.

I think these kinds of results point to two important directions for future research. The first is theoretical: to provide sharp characterizations of optimal social insurance when the government can monitor savings and income, but only by paying auditing costs. The second is empirical: to obtain some kind of measure of how big these costs are.
ED: Would you argue the same would hold with your work on capital taxation? You show that when individual skills are unobservable and evolve stochastically, capital tax rates should be positive. The standard result was that capital tax rates should be zero.
NK: The now standard results on capital tax rates were derived by Chamley and Judd using the Ramsey approach to optimal taxation. This approach assumes that lump-sum taxes are unavailable, and that the government is forced to use distortionary linear taxes. Chamley and Judd show that even though lump-sum taxes are unavailable, it is generally optimal for capital tax rates to be zero in the long run.

In my paper with Mikhail Golosov and Aleh Tsyvinski, “Optimal Indirect and Capital Taxation,” we abandon the Ramsey approach to optimal taxation. Instead, we consider a large class of model economies that are dynamic extensions of James Mirrlees’ original optimal taxation setup. The main ingredients of these models are that skills are privately observable and that they evolve stochastically. We show that if individual capital holdings can be monitored, then it is optimal to tax those holdings.

Note that unemployment is one example of the kind of privately observable skill shocks that we have in mind. People are unemployed either because they can’t find a job (formally, their “skills” are low in the given period) or because they can find a job (their skills are high) and choose not to work. If savings are observable, then optimal unemployment insurance requires taxation of individual savings.

Of course, it is impossible to tax capital holdings (or equivalently, savings) if individuals can save secretly. In that case, the nature of optimal social insurance against skill shocks changes dramatically. As I suggested in my first answer, I think it would be very fruitful to study intermediate cases in which savings can be monitored at a finite cost.
ED: In an influential JET paper, you argue that “Money is Memory.” Does this mean we should add a new role for money in the Economics Principles textbooks?
NK: Actually, I would argue something far stronger: we should eliminate all of the standard explanations (medium of exchange, unit of account, and store of value) from the textbooks. Why do I say this? These “explanations” do not capture why money is necessary to achieve good outcomes in a society. Rather, they are merely descriptions of what money does.

Why is money necessary to achieve good outcomes in a society? Well, imagine a world without money, but with a perfect record of all past transactions. (One way to imagine this is a giant spreadsheet which lists everyone’s name, and every event that ever happened to them.) In this world, we can accomplish anything that we could have with money (without adding any additional penalties or punishments that we might typically associate with credit). We do so using elaborate chains of gifts.

For example, suppose I go to the bookstore and ask for a book. The bookseller checks my past transactions. If I’ve given sufficiently more gifts than I’ve received in the past, then he gives me a textbook. Why is he willing to do so? Because in the future, when he goes to the grocery store, the grocer is willing to give the bookseller more bananas than if the bookseller had not given me the book. Why is the grocer willing to do so? Because he is rewarded by being able to receive more gifts in the future, etc., etc.

This is the key insight in “Money is Memory”: a monetary equilibrium is merely an elaborate chain of gifts. After all, when I give the bookseller a fifty dollar bill in exchange for a book, he receives nothing of intrinsic value. All he receives is a token that indicates to others that he gave up something worth $50 … that he has made a gift and so has kept up his part in the gift-giving chain that is a monetary economy. Without money, we would need to have the giant spreadsheet, or the gifts would never take place (because the only reason to make the gifts is to let others know that you have!). This is the role of the intrinsically useless object termed money: to credibly record some aspects of past transactions and to make that record accessible to others. It is for this reason that I write that money is always and everywhere a mnemonic phenomenon.
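The "giant spreadsheet" idea can be sketched as a toy ledger. The names, the one-unit gifts, and the eligibility rule below are all illustrative assumptions of this sketch, not the formal mechanism of the paper:

```python
# Toy "money is memory" ledger: instead of money, society keeps a public
# record of each agent's net gifts (gifts given minus gifts received).
# Agent names, one-unit gifts, and the eligibility rule are illustrative.

def make_ledger(agents):
    """Every agent starts with a net-gift balance of zero."""
    return {a: 0 for a in agents}

def request_good(ledger, giver, receiver):
    """The giver hands over a good only if the receiver's public record
    shows they have given at least as much as they have received."""
    if ledger[receiver] < 0:
        return False  # receiver has not kept up their part of the gift chain
    ledger[giver] += 1     # the giver's record improves: a gift was made
    ledger[receiver] -= 1  # the receiver's record worsens: a gift was taken
    return True

ledger = make_ledger(["reader", "bookseller", "grocer"])
print(request_good(ledger, "bookseller", "reader"))   # True: book handed over
print(request_good(ledger, "grocer", "bookseller"))   # True: bookseller rewarded
print(request_good(ledger, "grocer", "reader"))       # False: reader now in deficit
```

In Kocherlakota's terms, holding a dollar plays the same role as a positive balance in this public record: it credibly signals to strangers that the holder has kept up their part of the gift-giving chain.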


Chamley, Christophe (1986). “Optimal Taxation of Capital Income in General Equilibrium with Infinite Lives“, Econometrica, 54, 607-622.
Golosov, Mikhail, Narayana Kocherlakota, and Aleh Tsyvinski (2001). “Optimal Indirect and Capital Taxation“, Federal Reserve Bank of Minneapolis Staff Report 293.
Judd, Kenneth (1985). “Redistributive Taxation in a Simple Perfect Foresight Model”, Journal of Public Economics, 28, 59-83.
Kocherlakota, Narayana (1998). “Money is Memory”, Journal of Economic Theory, 81, 232-251.
Kocherlakota, Narayana (2003). “Simplifying Optimal Unemployment Insurance: The Impact of Hidden Savings“, working paper, March.
Kocherlakota, Narayana, and Harold Cole (2001). “Efficient Allocations with Hidden Income and Hidden Storage“, Review of Economic Studies, 68, 523-542.
Mirrlees, James (1971). “An Exploration in the Theory of Optimum Income Taxation“, Review of Economic Studies, 38, 175-208.
Mirrlees, James (1976). “Optimal Tax Theory: A Synthesis”, Journal of Public Economics, 6, 327-358.
Volume 4, Issue 1, November 2002

Q&A: Boyan Jovanovic on Technology Adoption

Boyan Jovanovic is Professor of Economics at New York University and Visiting Professor of Economics at the University of Chicago. His work revolves, among other topics, around industrial organization, especially technology adoption. Jovanovic’s RePEc/IDEAS entry.
EconomicDynamics: In a recent Review of Economic Dynamics article with Peter Rousseau, you made the bold prediction that consumption should grow at a yearly rate of 7.6% in the 21st century. This is based on a model of learning by doing where growth is essentially fueled by computer technology. Your estimate is based on the assumption that experience can be measured by cumulative sales in hardware and software. How sensitive is your estimate to alternative measures, in particular the introduction of depreciation or obsolescence?
Boyan Jovanovic: The model has obsolescence of capital in it. New capital devalues the old, and that is why the term g(p) enters the user cost formula in equation (7). But depreciation is indeed zero: this implies that the stock of capital is the same as the cumulative number of machines produced, and it simplifies the algebra. But I do not think that it has much to do with the particular estimate that you report.

The high estimate of 7.6% derives from the fact that a high fraction of equipment is getting cheaper very fast. Much revolves around how big a fraction of the stock of equipment is involved, and whether the price index is accurately measured. We highlight this number partly because the parameter values that imply it also give the model a good fit to the 1970-2001 experience of the U.S. But I simply invite the reader to read the paper on this. Instead, let me now say a couple of things that are not in the paper about why the share of equipment and the price index are both hard to predict.

First, the share of equipment is in efficiency units. Even if we knew the growth rate of efficiency units of IT capital, we cannot infer its share in equipment if we do not know the initial share of IT equipment. We need an initial condition. We may overestimate the importance of IT capital if we assume too large an initial condition.

Second, the price decline of computers may be exaggerated, at least in cases when quality cannot be directly measured. Bart Hobijn argues that markups probably decline over the life of a product or the life of a product line, and that new products are introduced with a high markup. If we cannot anchor the quality of a new product accurately relative to the quality of old products, this may then appear as declining prices per unit of quality when, in fact, there may in the long run be no price decline at all. In other words, BLS data may overstate quality change for this reason.

Overall, I do believe that IT will form the basis for more and more products and processes and, since those who know tell us that Moore’s Law will continue at its historical pace for at least 20 more years, it seems clear that the world’s output per head will grow a lot faster in the 21st century than it did in the 20th.
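The role of the zero-depreciation assumption can be seen in a small perpetual-inventory sketch. The sales series and the 10% geometric depreciation rate below are my own illustrative numbers, not values from the paper:

```python
# With zero depreciation the capital stock equals cumulative machines
# produced; with geometric depreciation at rate delta it falls short.
# The sales series and delta = 0.1 are illustrative assumptions only.

def capital_stock(sales, delta=0.0):
    """Perpetual-inventory capital stock: k' = (1 - delta) * k + sales."""
    k, path = 0.0, []
    for s in sales:
        k = (1 - delta) * k + s
        path.append(round(k, 2))
    return path

sales = [10, 10, 10, 10]
print(capital_stock(sales, delta=0.0))  # [10.0, 20.0, 30.0, 40.0]: cumulative sales
print(capital_stock(sales, delta=0.1))  # [10.0, 19.0, 27.1, 34.39]: stock falls short
```

With delta = 0 the stock is simply cumulative production, which is what makes cumulative sales a valid experience measure in the learning-by-doing setup; any positive delta drives a growing wedge between the two.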
ED: In work with Jan Eeckhout, you show that knowledge spillovers in production at the firm level do not necessarily lead to technology convergence. Rather, the fact that followers may want to free ride on leaders creates endogenous and permanent inequality across firms. You apply this concept to city growth as well. Would this also apply to countries and how does your work relate to North-South models of technology diffusion?
BJ: Yes, I do believe it. But the cross-country TFP numbers seem to say otherwise. Eeckhout’s and my model implies that TFP is higher for followers than for leaders: followers accumulate less measured capital than the leaders, but they derive more spillovers from the leaders and therefore appear to be using their capital more efficiently. Evidence on U.S. firms and plants supports this, in that large firms and plants have lower TFP than small ones. But cross-country evidence does not: Hall and Jones report that TFP is positively related to the development of countries. Since large firms are found mainly in rich countries, this evidence seems to say the opposite. If we are to believe the cross-country evidence, it says that if we look at the world market for steel, say, the large producers have higher TFP, whereas if we were to look only at the U.S. producers, large producers would have lower TFP. I have to believe the U.S. evidence because it comes from a variety of sources and is based on better data than the cross-country evidence. But offhand I do not see what the source of the discrepancy is. At any rate, a referee was adamant that the model does not apply to the issue of development, and we more or less concede this in footnote 4 of the version that will come out in the AER in December.
ED: With Peter Rousseau, you also work on merger cycles and show that they are essentially linked to major technology innovations. One consequence is that merger activity is also correlated with stock prices. As more and more people think that stock prices have been overvalued recently, would you say too many mergers have occurred? Does history exhibit such merger overshooting with proportionally more ex-post inefficient mergers toward the end of waves?
BJ: If the buyer is overvalued and the target is not, and if the buyer is using a share swap to buy the target, then the overvaluation argument goes through. If the stock market as a whole is overvalued, and the target is a private company that presumably is not overvalued, then the argument again goes through. But most of the capital that has been acquired in this way has been in public companies. In other words, targets themselves are quoted in the stock market, at least once we weigh the targets by their value. Moreover, many targets have been recent IPOs, and on the NASDAQ which is held to have been the place where firms were overvalued the most. So, if the targets are the ones that were overvalued, then I think that the overvaluation story says not that there have been too many mergers, but too few.


Hall, Robert E., and Charles I. Jones, 1999. “Why Do Some Countries Produce So Much More Output Per Worker Than Others?,” Quarterly Journal of Economics, 114, 83-116.
Hobijn, Bart, 2001. “Is equipment price deflation a statistical artifact?,” Federal Reserve Bank of New York Staff Report 139.
Jovanovic, Boyan, and Jan Eeckhout, 2002. “Knowledge Spillovers and Inequality”, American Economic Review, forthcoming.
Jovanovic, Boyan, and Peter Rousseau, 2002. “Mergers as Reallocations,” NBER working paper 9279.
Jovanovic, Boyan, and Peter Rousseau, 2002. “Moore’s Law and Learning by Doing,” Review of Economic Dynamics, 5, 346-375.
Volume 3, Issue 2, April 2002

Q&A: Urban Jermann on Asset Pricing

Urban Jermann is Associate Professor of Finance at the Wharton School, University of Pennsylvania. His general fields of interest are international macroeconomics and asset pricing. In this interview, he talks about various aspects of his recent research. Urban Jermann’s RePEc/IDEAS entry.
EconomicDynamics: In recent work with Fernando Alvarez, you find that the permanent component of the pricing kernel has to be very large to be consistent with the low returns of long-term bonds relative to equity. You also find the permanent component of consumption to be lower than that of those bond returns. Do you see a parallel with the quest to find an endogenous propagation mechanism in business cycle models?
Urban Jermann: I am reasonably confident about our estimate of the size of the permanent component of the marginal utility of wealth. However, our estimate of the size of the permanent component of consumption, based on standard statistical techniques, shows large standard errors. Thus any interpretation of the comparison of these two findings is somewhat tentative.

To answer your question: yes, I see a parallel to the quest of finding endogenous propagation mechanisms in business cycle models. Our result suggests the need to depart from the standard time-separable utility specification, because that specification would not imply any difference in the size of the permanent components of the marginal utility of wealth and of consumption. Using non-time-separable preference specifications clearly has the potential to propagate shocks through time. For instance, some recent studies by Fuhrer or McCallum and Nelson have shown that habit-formation utility leads to improved dynamic behavior of their macroeconomic models.

So far, we haven’t explored systematically the quantitative implications of specific utility functions for the size of the permanent components of the implied marginal utility compared to consumption. We have, however, a general result in our paper for utility functions of the type proposed by Epstein, Zin and Weil that allow for nonseparability across time and states of nature. Specifically, we show that even if consumption has no permanent component, the marginal utility of wealth always has a permanent component.
ED: In other work with Fernando Alvarez, you use asset prices to determine the cost of business cycles. As in other studies using aggregate data or representative-agent constructs, the cost is small. Yet, there is substantial evidence that the cost is distributed very unevenly across households. Thus, how does your work represent a step forward in the determination of the cost of business cycles?
UJ: The “cost of business cycles” is an answer to the question of what is an upper bound to the welfare gains associated with macroeconomic stabilization policies such as monetary and fiscal policies. If these gains are small, then it seems hard to justify incurring significant costs to avoid such fluctuations. If the costs of economic fluctuations are unevenly spread, this would require efforts along other dimensions, for instance, considering institutions such as bankruptcy laws that directly impact the ability of cross-sectional risk sharing.

My reading of the literature on the cost of business cycles is that earlier studies, by using various utility functions, have reported a wide variety of different estimates. Many came up with small numbers, but some, in particular those that required their utility function to be able to replicate the equity premium, came up with considerably larger numbers. Our estimate of the cost of business cycles is directly based on asset prices, without the tricky intermediate step of specifying and calibrating a utility function. Our finding that the cost of business cycles is smaller than half a percent of lifetime consumption was an update to some of my priors, because the framework that we use is also consistent with the historical equity premium of more than six percent.

Our work also has something to say about the distinction between the costs of consumption fluctuations at business cycle frequencies and at all frequencies. While we find the cost of business cycles to be smaller than one half of a percent of lifetime consumption, we also find that eliminating consumption uncertainty at all frequencies would be worth a lot more, easily topping the equity premium. That is, the representative asset-pricing agent seems to require very large compensation to bear the low-frequency risk components that are in consumption.
ED: In his recent book, Peter Bossaerts argues that current asset pricing models are routinely rejected by the data, yet they continue to form the basis of theory. What is your stand on this? Should we still use CAPM and APT?
UJ: I believe it takes a model to beat a model. Until we have models that do not get rejected anymore, our answers to concrete questions will have to be based on whatever the best available tools are.

While a lot of our attention, not surprisingly, is focused on puzzles and rejections, it is good to remind ourselves of some of the successes of economic models. For instance, the basic idea of no-arbitrage has led to the development of powerful models for pricing derivatives. Currently, derivatives form some of the most active segments of the global financial markets. Transaction volumes are in the multiple billions of dollars per day. These markets would not be able to function the way they do without the asset pricing models based on no-arbitrage principles. In the two papers we have discussed here, my co-author and I have tried to apply some of these ideas to macroeconomic issues.
ED: You have also worked on the recent stock market boom, with Vincenzo Quadrini. Your point here is that prospects of productivity gains generated immediate productivity gains, as financing became easier for firms. How can this be reconciled with theories proposing that the technological progress was a result of improvements that started several decades ago?
UJ: The ideas this work is based on can also be seen as resulting from the improvements that started several decades ago.

In our work, we show that the mere prospect of a New Economy, where productivity would be growing at a higher rate, can have immediate consequences not only for stock market valuations but also for measured aggregate labor productivity. This happens because financial constraints are looser for firms with a promising future, and thus these firms will hire more workers. The increased labor demand drives up wages and forces a reallocation of workers as all firms strive to increase the marginal product of labor. Measured aggregate labor productivity increases even without technological improvement at the firm level.

In our story, optimism about future growth rates of firm-level productivity is the driving force. It would seem to me that it was necessary to see some of the successes in information and telecommunication technologies in order for a widespread belief in a New Economy to be possible.


Alvarez, Fernando, and Urban Jermann 2000, “Using Asset Prices to Measure the Cost of Business Cycles.” NBER working paper 7978.
Alvarez, Fernando, and Urban Jermann 2002, “Using Asset Prices to Measure the Persistence of the Marginal Utility of Wealth.” Mimeo, University of Pennsylvania.
Bossaerts, Peter 2002, “The Paradox of Asset Pricing.” Princeton University Press.
Epstein, Larry, and Stan Zin 1989, “Substitution, Risk Aversion and the Temporal Behavior of Consumption and Asset Returns: a Theoretical Framework.” Econometrica. Vol. 57, pages 937-69.
Fuhrer, Jeffrey 2000, “Optimal Monetary Policy in a Model with Habit Formation.” Federal Reserve Bank of Boston Working Paper 00-5.
McCallum, Bennett, and Edward Nelson 1999, “Nominal Income Targeting in an Open-Economy Optimizing Model.” Journal of Monetary Economics. Vol. 43, pages 553-578.
Quadrini, Vincenzo, and Urban Jermann 2002, “Stock Market Boom and the Productivity Gains of the 1990s.” Mimeo, New York University.
Weil, Philippe 1990, “Nonexpected Utility in Macroeconomics.” The Quarterly Journal of Economics. Vol. 105, pages 29-42.
Volume 3, Issue 1, November 2001

Q&A: Mehmet Yorukoglu on Economic Revolutions

Mehmet Yorukoglu is Assistant Professor of Economics and of Social Sciences at the University of Chicago. His fields of interest are technological innovation, fluctuations, and investment.
EconomicDynamics: In “Engines of Liberation”, you argue with Jeremy Greenwood and Ananth Seshadri that the increase in female labor market participation is due to the introduction of significant technology improvements in the household sector. Would you therefore say that the various movements fighting to liberate women were not a factor in their access to the labor market?
Mehmet Yorukoglu: There is an underappreciated relationship between social norms and economic (usually technological) constraints. Individuals and societies may have tastes over social norms, and they can certainly develop these tastes. So social norms belong in the utility function, but they are always subject to economic and technological constraints. Social ideals and movements are like seeds which need an appropriate environment for their cultivation, and this environment is usually determined by technology. To me, arguing that social norms alone determine the social equilibrium is a bit like arguing that automobiles were invented because people suddenly developed a taste for faster travel. Take democracy, for instance. That ideal has been around at least since the ancient Greeks. The city-state of Athens was a direct democracy between 500 BC and 320 BC. Although only around 15% of the population was eligible to vote, important public affairs were decided by citizens’ votes. But democracy then was so time consuming and slow that voting on a single issue usually took a day, with thousands of people filling large amphitheaters. This proved to be a disadvantage for Athens whenever fast decisions were necessary on vital issues, and some historians argue that this inability to act quickly prepared its end. It took millennia for a successful application of democracy to prevail. Social ideals usually remain utopias until economics and technology favor their successful application. The same argument can be made for the abolition of slavery and other social changes. In “Engines of Liberation”, we argue that before the technological improvements in the household sector, the economic environment was in an equilibrium where women specialized in household production and men specialized in market work.
This specialization had social consequences far beyond itself (the allocation of power and decision making in the home and in the market). This equilibrium continued until technological improvements in the household sector freed up female labor, allowing women to participate in market production. Actually, there is some evidence that during that period public opinion about female work did not change significantly. For instance, after reviewing public opinion poll evidence, Oppenheimer (1970) concludes that “it seems unlikely that we can attribute much of the enormous postwar increases in married women’s labor force participation to a change in attitudes about the propriety of their working.” Therefore, I think that although social movements definitely catalyzed the increase in female labor force participation, the engine that removed the bottleneck still resides on the technology side.
ED: In “1974”, you argue with Jeremy Greenwood that the introduction of a new technology leads to a sudden increase in stock prices, followed by higher growth rates. Are you still convinced by your model and story, given the recent developments in stock prices, in particular in the IT sector?
MY: In “1974”, the IT revolution is modeled in the following way. Until the date of the technological breakthrough, the economy is assumed to be on a balanced growth path, with individuals solving their problems under the assumption that the economy will remain on this path forever. But suddenly, and unexpectedly, the technological breakthrough occurs, changing individuals’ expectations. They suddenly become aware of the breakthrough, and they have perfect foresight about the future of the economy from then on. Therefore, at the date of the breakthrough, the stock value of firms jumps up and converges to a higher balanced growth path. This setup of expectations is unrealistic in at least two ways. First, in reality, both technological improvements and people’s understanding and expectations about these improvements change only gradually. Second, people do not have perfect foresight about the future of a new technology. Also, the diffusion of a new product can create cycles in economic activity and in the value of the firms producing the new product. In a study with Jeremy Greenwood titled “From Model T to Great Depression: Automobilization and Suburbanization of US”, we model the diffusion of automobiles and the suburbanization wave in the US during the first part of the 20th century. After Henry Ford’s genius application of assembly lines to automobile production (the Model T), the US experienced an era of fast diffusion of automobiles across households. By 1929, more than 30 million cars had already been produced, and more than half of households in the US had at least one car, a figure which would not increase much until the end of WWII. We show that the diffusion of new goods can create cycles in output and in the values of the firms producing them. The faster the diffusion of the new good, the larger the cycles can be. Therefore, one can suspect that the fast diffusion of IT-related products could also lead to similar cycles in output and stock values.
The automobile diffusion data also show that the producers of a new product can make big expectational mistakes, since forecasting future technological progress and the demand for a new product is a hard task. The data show that large automobile producers like Ford and GM made very large investments just before the Depression. Now, they couldn’t forecast the Depression; no big deal, nobody did. But it seems that by the end of 1929 the demand for automobiles had entered a temporary saturation point, separate from the Depression itself. Because of the fast diffusion of cars, before the early adopters wanted to replace their cars, most of the households willing to buy one at a reasonable price had already bought one. Similar expectational mistakes on the firms’ side are also possible for IT-related new products. Additionally, I think one important point that was not realized early on is that IT improvements increase consumer surplus more than they increase firms’ profits. With cheaper information available to consumers, competition among firms becomes fiercer, driving down profits. The increasing variety of products in the market, and the increased customization of products to individual consumers, improve consumers’ utility without benefiting firms’ bottom lines much.
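The diffusion-and-saturation mechanism described in this answer can be illustrated with a toy S-curve; this is my own numerical sketch with made-up parameter values, not the actual Greenwood-Yorukoglu model:

```python
import math

def adoption(t, speed=0.8, midpoint=10.0):
    """Toy logistic S-curve: fraction of households owning the new good at
    time t. The parameter values are purely illustrative, not estimates."""
    return 1.0 / (1.0 + math.exp(-speed * (t - midpoint)))

# New-unit sales per period equal the change in the stock of adopters. With
# a fast S-curve, sales boom and then collapse once most willing buyers
# already own one, as with automobiles by the end of the 1920s.
sales = [adoption(t + 1) - adoption(t) for t in range(20)]
peak = max(range(20), key=lambda t: sales[t])
print(peak)  # sales peak near the midpoint of diffusion, then fall off
```

The faster the diffusion (a larger `speed`), the sharper the boom and the harder the subsequent fall in new-unit sales, which is the cycle mechanism the interview describes.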
ED: One recurring result in your research is that faster growth leads to a more unequal distribution of income or assets. Yet, the literature is not so clear cut. What distinguishes your models from the rest in this respect?
MY: In “1974”, faster technological progress increases income inequality because skilled individuals facilitate the adoption of new technologies. When new technologies really bring breakthroughs, meaning that they are very different from the existing ones, there is much more to learn about them, and the demand for skilled individuals, and the premium they enjoy, increases. This argument is, in general, true for all technological breakthroughs; there is nothing specific to the nature of IT in it. However, I think the new economy, driven by IT, has strong inequality-creating mechanisms far beyond what the technological breakthrough model presented in “1974” provides. The new economy is becoming more and more information based. One can categorize the goods in the market according to their information intensity, and with information becoming cheaper, the information intensity of goods increases. One thing that is key about information is that once it is produced, it can be reused at an insignificantly small marginal cost. This makes the markets for information-intensive goods very concentrated, with only a few producers. This is a very strong inequality-creating mechanism: as goods get more information intensive, the very productive top few producers capture the whole market. Asymptotically, as a good becomes a pure information good, a small difference in the human capital (productivity) of producers creates an infinite difference in their output. Broader implications of such an economy are studied in “The New Economy: Some Macroeconomic Implications of An Information Age”, which is joint work with Thomas F. Cooley.
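The produce-once, reuse-for-free property behind this concentration can be made concrete with a back-of-the-envelope sketch (my own illustration with made-up numbers, not a calculation from the paper): an information good carries a one-time fixed cost and a near-zero per-copy cost, so average cost falls without bound as the market grows, a natural-monopoly cost structure.

```python
def average_cost(fixed_cost, marginal_cost, quantity):
    """Per-unit cost of a good produced once at fixed_cost and then
    reproduced at marginal_cost per copy (near zero for information)."""
    return fixed_cost / quantity + marginal_cost

# A pure information good: the first copy costs 100, every further copy ~0,
# so the largest producer always has the lowest average cost and can
# undercut any smaller rival.
print([average_cost(100.0, 0.0, q) for q in (1, 10, 100, 1000)])
# → [100.0, 10.0, 1.0, 0.1]
```

With average cost strictly decreasing in scale, price competition pushes the market toward the top few producers, which is the winner-take-all mechanism the answer describes.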


Cooley, T. F., and Yorukoglu, M. 2001. “The New Economy: Some Macroeconomic Implications of An Information Age“, New York University, mimeo.
Greenwood, J., and Yorukoglu, M. 1997. “1974”, Carnegie-Rochester Conference Series on Public Policy, 46, 49-95.
Greenwood, J. and Yorukoglu, M. 2001. “From Model T to Great Depression: Automobilization and Suburbanization of US”, University of Rochester, mimeo.
Greenwood, J., Seshadri, A., and Yorukoglu, M. 2001. “Engines of Liberation“, University of Rochester, mimeo.
Oppenheimer, V. K. 1970. The Female Labor Force in the United States: Demographic and Economic Factors Governing its Growth and Changing Composition, Institute of International Studies, Berkeley.
Yorukoglu, M. 1998. “The Information Technology Productivity Paradox“, Review of Economic Dynamics, 2, 551-592.
Volume 2, Issue 2, April 2001

Harald Uhlig on Dynamic Contracts

Harald Uhlig is Professor of Economics at the Institute for Economic Policy I at Humboldt University (Berlin, Germany). His interests lie broadly in macroeconomics, with a focus on banking, business cycles and numerical methods, among many others. Uhlig’s RePEc/IDEAS entry.
EconomicDynamics: In your work on financial institutions, you show that competing banks can drive themselves into ruin if unchecked. How can this be considering that they are rational?
Harald Uhlig: Competition between financial intermediaries is indeed something that interests me quite a bit, in particular regarding its consequences for the functioning of the aggregate economy. And indeed, this type of competition can lead to a “financial collapse”, where no lending takes place in the end. I have one paper with Hans Gersbach, the framework of which I also used in an earlier paper of mine on “Transition and Financial Collapse”, and a somewhat related paper with Dirk Krueger. In the financial collapse story, there are entrepreneurs seeking funding from banks, but who differ in their qualities. Banks compete in the contracts they offer. Two forces are at work. On the one hand, banks need to cross-subsidize the losses they make on bad entrepreneurs with the profits they make on good ones. On the other hand, competition means that other banks will try to lure away the good entrepreneurs with better contracts, so that not too much profit can be made on them. Under some circumstances, there is no equilibrium in which the banks break even, and the credit market collapses. Likewise, markets in insurance contracts between competing insurers can collapse in my paper with Dirk Krueger. This is rather similar to the “market for lemons” phenomenon, which Akerlof described much earlier, although my paper with Dirk Krueger does not even rely on asymmetric information. These collapses can be the byproduct of rational agents trading with each other. In the end, banks make neither profits nor losses. What I do not model is entry into that industry, but one could: if there were a sunk cost of entering the market, and banks could foresee that once in they would make neither profits nor losses, it would obviously be irrational to enter in the first place.
ED: What is your take on the current remodeling of the Basle accord, in particular regarding the proposed self-assessment by banks of their capital requirements? Is there a moral hazard problem looming?
HU: I should not comment too much on the Basle accord: there are much greater experts than I am. The remodeling has certainly become necessary: all exchange rate crises of recent history are typically also banking crises. Many think that much harm could have been avoided if the financial institutions in these countries had been subject to more stringent standards in the first place; that sounds right to me, although there may be important tradeoffs here. As for the self-assessment by banks: that may be perfectly OK. Actually, all our theories of optimal mechanism design rely on the “revelation principle”, which essentially says that one might as well restrict attention to mechanisms in which participants truthfully reveal their situation. Truthful revelation does not happen out of the goodness of the heart, of course: instead, it needs to be incentive-compatible, in the interest of the agent who does the revealing; otherwise there is indeed a moral hazard issue. So that should be the crucial question: are the revelation rules in the new Basle accord incentive compatible? Would a bank in trouble have enough incentive to say that it is in trouble? Reputational considerations surely matter a lot here, and it may be hard but also very interesting to sort this all out.
ED: A smooth financial environment is generally credited with spurring growth. But can the setup of financial institutions also influence the business cycle, beyond occasional credit crunches?
HU: In my paper on “Transition and Financial Collapse”, it is actually possible that cycles arise precisely because of the presence of financial intermediaries – more precisely, because of the asymmetric information which the financial intermediaries are there to solve. The story goes roughly as follows. There are young entrepreneurs, who run small projects, and middle-aged entrepreneurs, who can take the small successful projects they ran as young entrepreneurs and turn them into large successful projects. Imagine now that the current young generation of entrepreneurs is flush with cash, but the previous young generation was not. While the current young then find it easy to obtain additional funding to finance their projects, and therefore create lots of successful ones, the current middle generation has only very few projects it can continue. As a result, the economy will not do too well currently, wage earnings will be low, and the next generation of young entrepreneurs will not have the chance to build up sufficient cash to run many projects. But the currently young, cash-flush entrepreneurs with their many projects will create a booming economy next period, endowing the young entrepreneurial generation two periods from now with sufficient wage earnings to successfully start many projects. One can imagine versions with more generations here, and an intriguing web of interactions. I find it plausible that this mechanism actually does play an important role in business cycle fluctuations. One could probably tell this story without financial intermediation, but with asymmetric information, and thus financial intermediation, small effects of this type can be vastly amplified. And that, I think, is the major key from this literature for understanding business cycles.
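The generational bookkeeping in this story reduces to a toy two-period recursion; the following is my own caricature of the mechanism, not Uhlig's actual model. The wages that endow the young cohort of period t are generated by the boom of period t-1, which is driven by the large projects of that period's middle-aged, i.e. the small projects started in period t-2:

```python
def cohort_projects(x0, x1, k=1.0, periods=8):
    """x[t] = small projects started in period t (illustrative units).

    Period t's young fund their projects out of wages from the boom of
    period t-1, which is driven by the continued (large) projects started
    in period t-2, so x[t] = k * x[t-2]. With k = 1, rich and poor cohorts
    alternate forever -- an endogenous two-period cycle.
    """
    x = [x0, x1]
    for t in range(2, periods):
        x.append(k * x[t - 2])
    return x

# Start with a cash-flush cohort followed by a cash-poor one:
print(cohort_projects(1.0, 0.2))
# → [1.0, 0.2, 1.0, 0.2, 1.0, 0.2, 1.0, 0.2]
```

With k below one the cycle dampens and with k above one it amplifies; the interview's point is that asymmetric information and intermediation can make the effective k large.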
ED: Why is it important to model contract relationships within a dynamic general equilibrium framework?
HU: Putting contract relationships into a dynamic general equilibrium framework imposes an enormous discipline, and that is why it has been so hard to do. For example, aggregate information cannot be “hidden”: in a general equilibrium framework, there are typically many ways for agents to observe it in some aggregate variable. Next, a static view of a credit relationship can allow you to assume that agents or intermediaries will be in utter misery in some states of the world; you can model payoffs pretty much any way you like. In a dynamic general equilibrium framework, agents may try to intertemporally smooth consumption and circumvent some of the forces imposed upon them in a two-period static partial equilibrium model. The model needs to have some stability properties to work, which means that returns etc. cannot be too crazy. A dynamic general equilibrium framework makes it possible to meaningfully talk about monetary policy and how it interacts with these issues: understanding that interaction seems to me to be crucial for understanding how and where monetary policy can have an effect. Finally, a dynamic general equilibrium framework allows one to investigate the quantitative aspects of the whole issue, to see whether the numbers come out about right. That is not done often in this literature; here it is still a wide-open research field.
ED: Why are contracts with limited commitment so little studied with dynamic general equilibrium models?
HU: The literature is still evolving, so I think we will see more of that kind of research in the future. It is certainly one of the interesting frontiers. Now, in some ways, the models in the literature are already set in a dynamic general equilibrium framework, but the real test would be to also allow for aggregate uncertainty. This is generally hard to do, because contracts allow conditioning on all the information available at a given point in time. So part of the game in the literature is to keep that information down to a minimum, e.g. one or two agent-specific state variables. Once aggregate information is allowed, the contracts could become a lot more complicated, and the models much harder to keep track of. Then one either needs heavy numerical tools or clever tricks to keep things manageable. I think Andrew Atkeson and Pat Kehoe, for example, have successfully used such a strategy to explain the volatility of exchange rates: their trick has been to rig things so that there is no trade in equilibrium. Also, Fernando Alvarez and Urban Jermann and, more recently, Hanno Lustig are using limited commitment contracts to explain asset pricing facts, so they use some form of aggregate uncertainty as well. But all this is still at the level of tailoring things to a specific issue and keeping what happens at the aggregate level very much under control. It would be nice if we had an elegant way of putting these things into standard stochastic dynamic general equilibrium models routinely, and were able to meaningfully address important issues that way. So some clever people have to come up with a way to make that happen. If not, the frontier will probably move someplace else.


Alvarez, F. and Jermann, U. 1999. “Quantitative Asset Pricing Implications of Endogenous Solvency Constraints“, NBER working paper 6952.
Alvarez, F., Atkeson, A. and Kehoe, P. 2000. “Money, interest rates, and exchange rates with endogenously segmented markets“, Minneapolis Fed Staff Report 278.
Gersbach, H. and Uhlig H. 1999. “On the Coexistence Problems of Financial Institutions.” Mimeo.
Gersbach, H. and Uhlig H. 1999. “Financial Institutions and Business Cycles.” Mimeo.
Gersbach, H. and Uhlig H. 1998. “Debt Contracts, Collapse and Regulation as Competition Phenomena.” Tilburg Center for Economic Research working paper.
Krueger, D. and Uhlig, H. 2000. “Competitive Risk-Sharing Contracts with One-Sided Commitment.” Mimeo, Stanford University.
Lustig, H. 2000. “Understanding Endogenous Borrowing Constraints and Asset Prices.” Mimeo, Stanford University.
Uhlig, H. 1995. “Transition and Financial Collapse.” Tilburg Center for Economic Research working paper.
Volume 2, Issue 1, November 2000

Stephen Parente on the barriers to development

Stephen Parente is Assistant Professor at the Department of Economics, University of Illinois, Urbana-Champaign. He specializes in development economics and industrial economics, in particular technology adoption. Parente’s RePEc/IDEAS entry.
EconomicDynamics: In your work with Ed Prescott, you show how barriers to the implementation of new technologies and production processes may explain the vast disparities of income levels across the world, disparities that standard growth theory cannot explain. Do you think your theory could also explain differences within OECD countries, for example why the United States has taken over the leadership role from the United Kingdom, why France is not as rich as the United States, or why Ireland has recently gone through a growth spurt?
Stephen Parente: Without doubt! There is a lot of evidence of firms in Europe being far more constrained in their use of technology than firms in the United States. Ford Europe is not able to use just-in-time production processes, but Ford U.S.A. is. Martin Bailey, working with the McKinsey Global Institute, documents the greater regulations faced by European firms regarding the choice of technology and work practices in a number of industries such as airlines, telecommunications, retailing, and banking. So I think there is a lot of evidence supporting our theory for the current income differences within the OECD. You brought up Ireland. Our theory predicts that a country that currently applies only a small amount of the world’s stock of available knowledge to the production of goods and services can realize large increases in output if it reduces the constraints imposed on firms’ technology and work practice choices. In 1985, Ireland’s per capita income was roughly 45 percent of the U.S. level, so there was a considerable amount of knowledge out there that Ireland failed to exploit. There was also a considerable number of constraints on firms. Starting in 1986, these constraints were lowered as industries were deregulated, state enterprises privatized, and trade barriers lowered. Following the reduction of these barriers to technology adoption, Ireland underwent a growth spurt, just as our theory predicts.
ED: There is a large body of literature using models of leaders and imitators in innovations to explain North-South differences in output. Is your theory contradicting this?
SP: At a very general level, I see no contradiction between our theory and these North-South models. We purposely abstract from innovation (the North); in our work, the stock of ideas evolves exogenously over time. We are interested in understanding why some countries are so poor relative to the United States today, when there is a lot of proven, available technology they could adopt. We are not trying to account for why the United States is so much richer today than two hundred years ago. For the question we are interested in, abstracting from innovation is reasonable. If we had some other question in mind, say one involving the pattern of trade between rich and poor countries, we might use a North-South model. At a very specific level, our theory does contradict those North-South models that predict that the growth rate of the South is lower under free trade. Our theory is one of relative income levels, not relative growth rates. In our theory, international trade has a positive effect on an economy’s relative income level, since an economy that is open has fewer barriers to technology adoption, ceteris paribus.
ED: And does your theory contradict the arguments for protecting infant industries in developing countries?
SP: Most definitely! Ed and I examined a number of industry studies, some contemporaneous and others covering the industrial revolution, in an attempt to understand why regulations were in place that prevented firms from using better technologies and work practices. These studies led us to the hypothesis that many of these constraints were erected by the state to protect the interests of specialized factor suppliers vested in current production processes. We put forth a model whereby groups with monopoly rights over the supply of factor inputs prevent the adoption of superior technology, and we showed that the effect of these rights on an economy’s standard of living is large. The policy implications of our theory are pretty strong. Governments should not give groups the incentive to organize and acquire monopoly rights. Temporary protection of an industry, whether infant or mature, is far too likely to lead to permanent protection, as it gives factor suppliers the incentive to lobby the government to grant them monopoly rights.
ED: Your argument relies much on the existence of industry lobbies. Why would they arise more often and more powerfully in developing economies?
SP: That’s a good question. Prescott and I showed how the existence of industry insider groups with monopoly rights over factor inputs to current production processes leads to the inefficient use of inferior technology, but we offered no theory of why societies differ in the prevalence of these groups. Understanding this is the next important question in this research agenda. It is a complicated issue, and I hope that our book will stimulate work in this area. There probably is not a single reason for these differences. Clearly, political institutions matter. A number of researchers, including Prescott and I, argue that market-preserving federalism, that is, a political system with a hierarchy of governments with sufficient authority at the lower levels, is more conducive to economic development. Initial conditions might matter as well. I have a paper that emphasizes the concentration of land holdings two hundred years ago. The reason this is important for the formation of industry insider groups is that landowners want to restrict the flow of workers out of agriculture so as to maintain a high rental price of land. If landowners have political power, which is more likely when land holdings are concentrated, they will have the state erect barriers to industry start-ups, the consequence of which is that few industries form. With fewer industries, workers in each industry have a greater incentive to organize and obtain monopoly rights, because the demand for a particular industry’s good is decreasing in the number of industrial goods. I have another paper, still in progress, that examines the development experiences of Russia and China since market reforms. There is a large amount of evidence suggesting that monopoly rights are far more prevalent today in Russia than in China. My own view is that these monopoly rights carried over from central planning, which gave workers in industry rights to jobs.
The challenge here is to understand why the state in Russia chose to preserve these rights whereas the state in China did not.
ED: You have applied your work to the formal and industrial sectors of developing economies. But what about the informal and/or agricultural sectors? In particular, the fact that the difference between industrial and agricultural productivity is much larger than in developed economies seems to contradict your theory.
SP: I don’t think there is a contradiction here. The technology adoption model with Prescott has only one sector, so it really has nothing to say about these sectoral differences. However, given the success of this theory in accounting for international income differences and development miracles, one would obviously like to know whether the model, appropriately modified, can account for the structural differences observed across countries, and over time within a given country. In a paper with Doug Gollin and Richard Rogerson, I took up this question. Since the Parente and Prescott technology adoption model aggregates up to the neoclassical growth model augmented with intangible capital, with cross-country differences in Total Factor Productivity due to differences in the size of the barriers to technology adoption, we analyzed an agricultural extension of the neoclassical growth model. We found that this model fails to account for key sectoral differences observed across countries, including the relative productivity difference you mentioned. This failure led us to consider the role of home production, something that Richard Rogerson, Randy Wright and I had explored within the one-sector growth model in an earlier paper. We found that the introduction of home production goes a long way toward accounting for these sectoral differences. It doesn’t go all the way, so there is surely more work to be done here. But I am fairly confident that with a few additional modifications, the growth model can account for the structural differences observed across countries and across time within a given country.


Bailey, M. 1993, “Competition, Regulation and Efficiency in Service Industries.” Brookings Papers on Economic Activity, Microeconomics 2, 71-130.
Gollin, D., Parente, S. and Rogerson, R. 2000, “Farm Work, Home Work, and International Productivity Differences,” mimeo, UIUC.
Parente, S. in progress, “Monopoly Rights as a Barrier to Economic Reforms in Russia and China: the Advantage of Economic Backwardness,” UIUC.
Parente, S. 2000, “Landowners, Vested Interests and the Endogenous Formation of Industry Insider Groups,” mimeo, UIUC.
Parente, S. and Prescott, E. 1994, “Barriers to Technology Adoption and Development,” Journal of Political Economy 102 (April), 298-321.
Parente, S. and Prescott, E. 1999, “Monopoly Rights: A Barrier to Riches,” American Economic Review 89 (5), 1216-1233.
Parente, S. and Prescott, E. 2000, “Barriers to Riches,” MIT Press.
Parente, S., Rogerson, R. and Wright, R. 1999, “Household Production and Development,” The Federal Reserve Bank of Cleveland Economic Review 35 (QIII), 21-36.
Parente, S., Rogerson, R. and Wright, R. 2000, “Homework in Economic Development: Home Production and the Wealth of Nations,” Journal of Political Economy 108 (4), 680-687.

Volume 1, Issue 2, April 2000

Q&A: Lee Ohanian on the Great Depression

Lee Ohanian is Associate Professor at the Department of Economics, University of California, Los Angeles. He specializes in macroeconomic theory, the study of business cycles and growth. He has published in the best journals on monetary policy, war finance, VARs, and other topics. Ohanian’s RePEc/IDEAS entry.
EconomicDynamics: With Hal Cole, you show in your Minneapolis Fed Quarterly Review article that the major peculiarity of the Great Depression was not so much the sharp decline in 1929-33 but rather the extremely slow recovery until 1939. You argue that the key fact to explain is stagnant hours. Why?
Lee Ohanian: Productivity grew rapidly after 1933. Theory predicts that the economy should have recovered to trend by 1936, with above-trend labor input supporting higher consumption and investment. But hours worked remained 20-25 % below trend until World War II. So why was labor input so low given rapid productivity growth? It wasn’t because other shocks were negative – banking panics and deflation ended in 1933, and real interest rates were low. Hours per adult should have been a lot higher after 1933.
ED: In current work with Hal Cole, you argue that New Deal policies encouraging cartelization linked to high wages are responsible for the slow recovery. Why? How could the government be so wrong?
LO: There must have been a major negative shock to offset the recovery of economic fundamentals and keep the economy depressed. The government adopted some extreme labor and industrial policies (the National Industrial Recovery Act) in 1933 that really distorted markets. These policies suspended the antitrust laws and permitted collusion, provided that the rents were shared with labor. This was accomplished through immediate wage increases and collective bargaining. Our new paper, “New Deal Policies and the Persistence of the Great Depression”, quantitatively analyzes these policies. We build a model of the policies, and embed that model within a dynamic GE business cycle model. In contrast to the fast recovery predicted by standard theory, our model predicts economic activity remains far below trend after 1933. We concluded that these policies were a key factor behind the persistence of the Depression – they can account for about 60% of the deviation between the predicted trend levels and the actual data. Ironically, President Roosevelt thought that “excessive” competition was responsible for the Depression, and that these policies would bring recovery. He was wrong. My colleague Armen Alchian was a student at Stanford at the time, and told me that his professors thought the policies were crazy – they couldn’t understand how promoting monopoly could raise employment. It is unfortunate that Roosevelt didn’t listen to these mainstream economists – if he had, the recovery would have been much stronger. These policies were finally weakened during World War II – and employment rose substantially. There also seemed to be a sentiment to redistribute income during the 1930s. But this policy was a really inefficient method of redistribution. It created a lot of inequality by shutting down employment.
ED: Why do you think it is necessary to use a dynamic general equilibrium model to study the Great Depression? We have all been taught that the economy was not in equilibrium during that period.
LO: Theory has changed a lot since the Depression. I believe that economists took the disequilibrium route because the general equilibrium language of Arrow, Debreu, and McKenzie wasn’t well known at the time. We now know that disequilibrium models should be used very reluctantly, because there are an infinite number of ways an economy can be out of equilibrium. The model Hal and I developed for 1933-1939 is a dynamic general equilibrium model – but with a cartel policy arrangement that generates very low labor input, consumption, and investment. GE theory is important for understanding the Depression. There are a lot of stories about the Depression, but without an explicit GE model you don’t know if the stories hold water. One of the benefits of GE theory is that it forces you to look beyond the direct effects of shocks, and assess the indirect effects. Hal and I are writing a paper for the NBER Macro Annual that uses GE models to study the two most popular shocks for 1929-33: the money stock decline and bank failures. Using GE models, we found that many of the indirect effects of these shocks offset the direct effects, or were at variance with the data. For example, several economists think money shocks depressed the economy through imperfectly flexible wages. Nominal wages were high in manufacturing because President Hoover told the Fortune 500 C.E.O.’s not to cut wages. But wages did fall in other sectors, so a multi-sector GE model is needed to evaluate this story. We found that high manufacturing wages reduced aggregate output only about 3% between 1929-33. This is because the direct effect of the wage shock is pretty small, and because the indirect, general equilibrium effects offset some of the direct effect. The paper also develops a GE model with a banking sector. The model predicts that bank failures should lead firms to substantially increase retained earnings as a substitute for bank finance. However, firms cut retained earnings like crazy during the Depression – in 1930, dividend payments fell by 4% while profits fell by 63%. The model also predicts that regions with more bank failures should have had deeper depressions. But we found little correlation between state-level economic activity and state-level bank failures. Hal and I thought monetary shocks were the key factor for 1929-33 when we wrote our Minneapolis Fed QR paper. Our view has changed – either we need alternative theories to revive the money and banking hypothesis, or some other shock was responsible for 1929-33.
ED: Were Keynes, and Friedman and Schwartz all wrong?
LO: Keynes didn’t have the benefit of modern theory to help understand the Depression. He was wrong about “animal spirits” driving down investment, employment, and output. Ed Prescott argues in his review of our Minneapolis Fed Quarterly Review paper that the investment decline of the 1930s is not a mystery – it is exactly what theory predicts, given the policy shock that kept labor input so low. Friedman and Schwartz suggest that government policies contributed to the post-1933 depression, which is consistent with our view. But we suspect their emphasis on monetary shocks as a cause for 1929-33 may be misplaced. Believe it or not, productivity (TFP) fell about 15% relative to trend between 1929-33. This drop isn’t technological regress, and it doesn’t seem to be input measurement error. Hoover intervened in the private economy significantly during this period. Perhaps his interference went beyond wage policies, and affected work practices and expectations about future returns to investment. We don’t know the source of this TFP drop, or all the consequences of Hoover’s actions, but these factors might be important for 1929-33.
ED: Would you make a parallel between the slow recovery of the Great Depression and the “jobless recovery” in the early 1990s?
LO: There are some key differences between these two recoveries. Employment growth was stagnant in the 1930s because of government policies that raised wages and reduced competition. These types of policies weren’t in place during the 1990s. My guess is that a mismatch between new technologies and the existing stock of labor may have contributed to the more recent “jobless recovery”. It is interesting that employment growth has been rapid the last few years as the pool of workers able to use these technologies has increased.
ED: Would you say, like Ed Prescott, that the conclusions of your work with Hal Cole could be applied to contemporaneous France, Spain, and Japan?
LO: Economies normally recover rapidly from downturns. It is pathological for a country to enjoy normal productivity growth but remain depressed for many years. Japan is one of these pathologies. Many economists think that Japan’s problems could be solved if their banks could make more loans and if fiscal and monetary policies stimulated aggregate demand. Some economists even argue that higher inflation expectations would bring recovery. We believe that Japan has more fundamental problems than finding the right mix of fiscal and monetary policy. Japan has tried all sorts of Keynesian stimuli, and it hasn’t worked. When an economy stagnates year after year, you have to ask: “What is preventing people from working and producing more?” Banking problems affected their economy, but we don’t think it is the whole story. Fifteen years ago, 80% of Ireland’s banks shut down for six months because of a strike. Their economy did not falter – people found substitutes for closed banks. If banking is the key to Japan’s Depression, why haven’t the Japanese found substitutes after all these years? The persistence of their depression and the failure of Keynesian policies suggest some other shock is responsible for Japan’s stagnation. My work with Hal suggests we should look for policies that keep employment low. Ed, Hal, and I are planning to start research along these lines soon.


Cole, Harold L., and Lee E. Ohanian, 1999, “The Great Depression in the United States from a Neoclassical Perspective”, Federal Reserve Bank of Minneapolis Quarterly Review v. 23, p. 2-24 (Winter).
Cole Harold L., and Lee E. Ohanian, 2000, “New Deal Policies and the Persistence of the Great Depression”, mimeo.
Cole Harold L., and Lee E. Ohanian, 2000, “How Much did Money and Banking Shocks Contribute to the Depression?“, in progress.
Cole Harold L., Lee E. Ohanian and Edward C. Prescott, 2000, “Cartelization, Policy and the Great Depression”, in progress.
Friedman, Milton, and Anna Schwartz, 1963, A Monetary History of the United States, 1867-1960, Princeton University Press.
Prescott Edward C., 1999, “Some observations on the Great Depression“, Federal Reserve Bank of Minneapolis Quarterly Review v. 23, p. 25-29 (Winter).
Volume 1, Issue 1, November 1999

Q&A: David Backus on international business cycles

David K. Backus is the Heinz Riehl Professor of Finance and Economics at the Stern School of Business, New York University. He has published extensively on international business cycles as well as on foreign exchange theory. In particular, he teamed with Patrick Kehoe and Finn Kydland to launch the current research agenda around international real business cycle (IRBC) models. Backus’ RePEc/IDEAS entry.
EconomicDynamics: Can IRBC modeling shed its “R”, that is say something about monetary phenomena, especially exchange rates?
David Backus: I don’t think there’s much question that RBC modeling shed its “R” long ago, and the same applies to IRBC modeling. There’s been an absolute explosion of work on monetary policy, which I find really exciting. It’s amazing that we finally seem to be getting to the point where practical policy can be based on serious dynamic models, rather than reduced-form IS/LM or AS/AD constructs. Lots of people have been involved, but names that cross my mind are my colleagues Jordi Gali and Mark Gertler, their coauthor Rich Clarida, and the team of Julio Rotemberg and Mike Woodford. So we really need a better term than RBC. Maybe you should take a poll.
ED: In 1980, Feldstein and Horioka argued that if the correlation of savings and investment rates was close to one, markets must be incomplete. Cardia (1991) and Baxter & Crucini (1993) showed that this correlation could be high even with complete markets. Do you think the issue is now closed?
DB: Not! (as Finn Kydland would say). I think it’s very much an open issue, but let me explain why. What Baxter and Crucini, Cardia, and others established was a property of dynamic models with complete markets: that in an “artificial” time series for a specific country, saving and investment rates could be highly correlated. (The word “could” is important. Our initial paper showed, for example, that it depended on the process for technology shocks.) Feldstein and Horioka established a very different property of data: that over long periods (in their case 14 years), averages of saving and investment rates for OECD countries were pretty much the same. In other words, there was very little in the way of net international capital flows. This was surprising then, and remains surprising now. My guess is that you’d see greater flows over the last twenty years, particularly in emerging economies, but that the flows are still a lot less than you’d expect from theory. Tim Kehoe had a nice example a few years ago. He estimated how much the capital stock should increase in Mexico to equate the marginal product of capital to that of the US. The answer, as I recall, was about 50%, an enormous number relative to what was viewed as very large capital flows in the early 1990s.
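Kehoe’s back-of-the-envelope exercise can be sketched in a few lines, assuming Cobb-Douglas production; the functional form, the capital share, and all numbers below are illustrative assumptions, not figures from the interview:

```python
# Sketch: by what factor must a country's capital stock grow so that its
# marginal product of capital (MPK) falls to a benchmark country's level?
# Assumes Cobb-Douglas output Y = A * K**alpha * L**(1 - alpha), so that
# MPK = alpha * A * k**(alpha - 1) with k capital per worker.
# All parameter values here are hypothetical.

def mpk(A, k, alpha=0.3):
    """Marginal product of capital given TFP A and capital per worker k."""
    return alpha * A * k ** (alpha - 1)

def required_capital_growth(A, k, mpk_target, alpha=0.3):
    """Factor by which k must rise so that MPK equals mpk_target.
    Solves alpha * A * k_new**(alpha - 1) = mpk_target for k_new."""
    k_new = (mpk_target / (alpha * A)) ** (1.0 / (alpha - 1.0))
    return k_new / k

# Hypothetical numbers: home MPK starts 25% above the benchmark's.
alpha = 0.3
A, k = 1.0, 1.0
target = 0.8 * mpk(A, k, alpha)  # benchmark MPK is 20% lower
growth = required_capital_growth(A, k, target, alpha)
print(f"capital must rise by a factor of {growth:.2f}")
```

With a capital share of 0.3, MPK is quite sensitive to k, which is why even modest MPK gaps translate into large implied capital flows of the kind Kehoe describes.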
ED: What do you see as the next challenge of IRBC modeling?
DB: I think you want to separate challenges from approaches. Although one’s approach may suggest interesting questions, the best questions are often interesting from lots of perspectives, whether RBC or something else. As a profession we’re probably better off diversifying, with different people attacking different problems with different methods, since we don’t know a priori which directions will turn out to be the most fruitful. On challenges, I’d list the many facts suggesting frictions to international capital flows (Feldstein and Horioka, Tesar and Werner on portfolio diversification) and the large question of how relative prices behave – the magnitude and persistence of real exchange rate movements and differences in behavior across goods (traded and nontraded, for example). These are classic issues, and I think they’ll be with us for a while yet. On approaches, I personally am fascinated by work on models with endogenous borrowing constraints. This started, I guess, with Eaton and Gersovitz, was applied to the 1980s debt crisis by Bulow and Rogoff, and has since been developed further along several directions by (among many others) Alvarez-Jermann and Kehoe-Perri. There’s also been a lot of work on models with imperfectly competitive firms in goods markets, but I know less about it.
ED: Do you think therefore that the quantity and price anomalies you coined in the “Frontiers of Business Cycle Research” volume are not important? Or solved? [The quantity anomaly states that models cannot replicate the fact that cross-country correlations of output are higher than those of consumption; the price anomaly states that models cannot achieve the high volatility of the terms of trade observed in the data.]
DB: Honestly, I don’t think they’re solved, although we’ve certainly taken some large bites out of them. I’m extremely enthusiastic, though, about the state of the profession: the quality of work in international macroeconomics has never been higher. Given the pace of change in the world economy and the amount of human capital devoted to understanding it, I’m confident that the next ten years will be just as exciting as the last ten.
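The quantity anomaly discussed above is a comparison of cross-country moments. A minimal sketch of that comparison, on synthetic data whose series and parameters are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # hypothetical sample length, e.g. quarters of detrended data

# Synthetic detrended output (y) and consumption (c) for two countries,
# built so output co-moves more strongly across countries than consumption.
# This is the pattern seen in the DATA; complete-markets models, with their
# strong international risk sharing, tend to predict the reverse ranking.
common = rng.normal(size=T)                      # world component of output
y_home = common + 0.5 * rng.normal(size=T)       # country-specific noise
y_foreign = common + 0.5 * rng.normal(size=T)
c_home = 0.7 * y_home + 0.3 * rng.normal(size=T)     # consumption tracks
c_foreign = 0.7 * y_foreign + 0.3 * rng.normal(size=T)  # domestic output

corr_y = np.corrcoef(y_home, y_foreign)[0, 1]
corr_c = np.corrcoef(c_home, c_foreign)[0, 1]
print(f"corr(y, y*) = {corr_y:.2f}, corr(c, c*) = {corr_c:.2f}")
# The anomaly: models deliver corr(c, c*) > corr(y, y*), while the data
# look like these synthetic series, with corr(y, y*) > corr(c, c*).
```

Resolving the anomaly amounts to finding frictions, such as the incomplete-markets and borrowing-constraint mechanisms Backus mentions, that break the model’s counterfactually strong consumption risk sharing.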


F. Alvarez and U. Jermann, “Quantitative Asset Pricing Implications of Endogenous Solvency Constraints”, NBER Working Paper No. 6953, 1999.
F. Alvarez and U. Jermann, “Efficiency, Equilibrium and Asset Prices with Risk of Default“, University of Chicago Working Paper, 1999.
D. K. Backus, P. J. Kehoe, and F. E. Kydland, “International Real Business Cycles“, Journal of Political Economy, vol. 100, pp. 745-775, 1992.
D. K. Backus, P. J. Kehoe, and F. E. Kydland, “Dynamics of the Trade Balance and the Terms of Trade: The J-Curve?”, American Economic Review, vol. 84, pp. 84-103, 1994.
D. K. Backus, P. J. Kehoe, and F. E. Kydland, “International Business Cycles: Theory and Evidence“, in: T. F. Cooley, ed., Frontiers of Business Cycle Research, Princeton University Press, pp. 331-356, 1995.
M. Baxter and M. J. Crucini, “Explaining Saving-Investment Correlations“, American Economic Review, vol. 83, pp. 416-436, 1993.
J. Bulow and K. Rogoff, “A Constant Recontracting Model of Sovereign Debt“, Journal of Political Economy, pp. 155-178, 1989.
E. Cardia, “The Dynamics of a Small Open Economy in Response to Monetary, Fiscal, and Productivity Shocks”, Journal of Monetary Economics, vol. 28, no. 3, pp. 411-434, 1991.
R. Clarida, J. Gali and M. Gertler, “The Science of Monetary Policy: A New Keynesian Perspective“, Journal of Economic Literature, forthcoming.
J. Eaton and M. Gersovitz, “LDC Participation in International Financial Markets: Debts and Reserves”, Journal of Development Economics, pp. 3-21, 1980.
M. Feldstein and C. Horioka, “Domestic Saving and International Capital Flows”, The Economic Journal, pp. 314-329, 1980.
P. J. Kehoe and F. Perri, “International Business Cycles with Endogenous Incomplete Markets“, University of Pennsylvania Working Paper, 1998.
T. J. Kehoe, “What Happened in Mexico in 1994-95?” in P. J. Kehoe and T. J. Kehoe, eds., Modeling North American Economic Integration, Boston: Kluwer Academic Publishers, pp. 131-148, 1995.
J. J. Rotemberg and M. Woodford, “Oligopolistic Pricing and the Effects of Aggregate Demand on Economic Activity“, Journal of Political Economy, vol. 100, pp. 1153-1207, 1992.
L. Tesar and I. M. Werner, “International Equity Transactions and U.S. Portfolio Choice“, in J. A. Frankel, ed, The Internationalization of Equity Markets, University of Chicago Press, pp. 185-216, 1994.
L. Tesar and I. M. Werner, “Home Bias and High Turnover”, Journal of International Money and Finance, pp. 467-492, 1995.