Economic Dynamics Newsletter

Volume 18, Issue 1 (April 2017)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

Moritz Kuhn on Understanding Income and Wealth Inequality

Moritz Kuhn is Associate Professor at the University of Bonn. His research interests cover the macroeconomics of inequality, labor markets, and human capital. Kuhn’s RePEc/IDEAS profile.

Income and wealth inequality are at historical highs. The 1.5 million copies sold of Thomas Piketty’s book “Capital in the Twenty-First Century” demonstrate that inequality is the defining issue of our time. Today, economists, policymakers, and the general public are actively engaged in a discussion about the causes and consequences of high and rising inequality and its implications for reforming the tax and social security system, as well as labor and financial market institutions. Although the debate is often couched in economic terms, it touches on fundamental issues like social cohesion in democratic societies.

But our understanding of the driving forces of inequality is still hazy. Why do some people earn so much more than others? Why do some people possess fortunes while others have barely any wealth? Does income inequality lead to wealth inequality, or vice versa? More generally, what determines the joint distribution of income and wealth? Answering these pivotal questions about inequality in contemporary societies is paramount to understanding its consequences, and it constitutes the focus of my research agenda. I approach these questions by exploring existing datasets and by compiling new ones. Newly compiled data allow me to adopt new perspectives on inequality; in existing data, I take a more granular view of differences along the income and wealth distribution. I use the resulting evidence to inform my model building. While traditional work studied wealth inequality as the result of an exogenous stochastic income process, the guiding idea of my work is that the income and wealth distributions are determined jointly, so that policy changes reshape both. Based on this idea, I develop new models to study policy reforms in unequal societies.

My research agenda builds on a long tradition in modern economic research. Nearly 100 years ago, at the 31st annual meeting of the American Economic Association, the then-president Irving Fisher said that “the causes and cures for the actual distribution of capital and income among real persons” is a subject that needs “our best efforts as scientific students of society.” This is the goal I have set myself, and it is a great pleasure to present my research agenda in this newsletter. Let me add at the outset that my research relies on extensive and invaluable collaboration with coauthors in Bonn and elsewhere.

In my discussion, I will focus on three specific topics. The first topic addresses the connections between income and wealth inequality. I will start with a brief discussion of my recent paper (Kuhn, Schularick and Steins, 2017a), in which we compile and analyze a new micro-level dataset spanning seven decades of U.S. economic history. Using this data, we document strongly diverging trends between income and wealth inequality. We demonstrate that house price dynamics and the portfolio heterogeneity of households explain these diverging trends. Second, I will discuss my recent work on the sources of earnings inequality. In Bayer and Kuhn (2017a), we explore a unique matched employer-employee dataset from Germany to revisit a key question from human capital theory about the importance of employers, education, experience, and job characteristics in determining wage differences. We find that a job’s hierarchy level, which encodes the responsibility and independent decision making required in the job, is the most important driver of wage differences. Third, I will discuss my work on developing models to explore how policy changes affect earnings dynamics and the distribution of earnings. I will focus on a life-cycle labor market model developed in Jung and Kuhn (2016) that is jointly consistent with facts on worker mobility and earnings dynamics, focusing in particular on large and persistent earnings losses after worker displacement. At the end of the discussion of each of the three main topics, I will briefly touch upon companion work that explores the link between rising inequality and household debt (Kuhn, Schularick and Steins, 2017b), heterogeneity in earnings dynamics (Bayer and Kuhn, 2017b), and the effects of changes in the unemployment insurance system on labor market dynamics (Hartung, Jung and Kuhn, 2017). I will also take the opportunity to briefly talk about related work with José-Víctor Ríos-Rull (Kuhn and Ríos-Rull, 2016) providing a comprehensive reference on facts of U.S. earnings, income, and wealth inequality, and with Tom Krebs and Mark Wright (Krebs, Kuhn and Wright, forthcoming in the RED special issue on human capital and inequality) exploring the interaction of human capital accumulation, financial markets, and inequality.

1. Connections between income and wealth inequality

In Kuhn, Schularick and Steins (2017a), we provide newly compiled micro data on the income and wealth distribution of U.S. households over the entire post-World War II period. Although inequality is widely perceived as the defining issue of our time, the existing micro data for studying inequality trends spanning several decades remain very limited.

The newly compiled data is based on historical waves of the Survey of Consumer Finances (SCF) going back to 1949. We cleaned and harmonized the historical data to build a new dataset that we refer to as the harmonized historical Survey of Consumer Finances (HHSCF). We expect that this new micro data will also offer other researchers the opportunity to address important questions about changes in the financial situation of U.S. households since World War II.

In Kuhn, Schularick and Steins (2017a), we use this data to complement existing evidence on long-run trends in inequality discussed by Saez and Zucman (2016) and Piketty and Saez (2003). Most of the debate about rising inequality has focused, mainly due to data limitations, on income and wealth concentration among the richest households. The HHSCF data allows us to complete the existing picture of rising inequality by providing a granular view of inequality trends among the large group of the bottom 90% of households, for whom existing tax data can only draw rough contours. The paper demonstrates a strong hollowing out of the middle class: the much-debated income and wealth concentration at the top was accompanied by losses concentrated among the middle 50% of the income and wealth distribution. In other words, the middle classes lost out.

We then contrast the evolution of income inequality with the evolution of wealth inequality over time. Conceptually, such a comparison is intricate because changes in inequality measures like the Gini coefficient are hard to compare when wealth inequality exceeds income inequality initially. We therefore construct what we call the “inequality gradient.” The inequality gradient measures growth differences along the distribution relative to inequality-neutral growth, i.e., a situation in which all groups grow at the same rate (a stylized implementation is sketched below). When we compare changes in income and wealth inequality over time, we find an asynchronous and asymmetric increase: income inequality increased earlier than wealth inequality, and more strongly. We find the strongest increase in income concentration between 1970 and 1990; over most of this period, wealth concentration decreased. We find almost the mirror image during the financial crisis and its aftermath, when wealth concentration increased strongly while income concentration increased only slightly.

Exploring the joint evolution of income and wealth inequality has the potential for important new theoretical insights. The canonical consumption-savings model keeps a tight grip on their joint evolution. It is therefore an open question whether, and to what extent, the trends we document pose a challenge to recent attempts to model trends in wealth inequality (Kaymak and Poschke, 2016; Hubmer, Krusell and Smith, 2016). At the very least, in Kuhn, Schularick and Steins (2017a) we provide an explanation for the documented asymmetric increase in income and wealth inequality that is not present in the canonical macroeconomic models of wealth inequality. We document substantial differences in household portfolios along the wealth distribution. The middle class holds most of its assets in housing (non-diversified portfolios) with substantial mortgage debt against this housing (leveraged portfolios). We also demonstrate that the diverging trends between income and wealth inequality can be traced back to particular historical episodes when house price booms hit these highly non-diversified and leveraged household portfolios and led to large and concentrated wealth gains in the middle class. These gains in turn mitigated the rise of wealth inequality relative to the rise in income inequality. Put differently, rising house prices slowed down the increase in wealth inequality. Our results highlight the importance of asset price changes and differences in portfolio composition for understanding trends in wealth inequality.
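To make the construction concrete, here is a minimal sketch of how such an inequality gradient could be computed from two cross-sections. The function name, the three-group split (bottom 50%, middle 40%, top 10%), and the toy data are illustrative assumptions of mine, not the implementation in Kuhn, Schularick and Steins (2017a).

```python
import numpy as np

def inequality_gradient(x0, x1, cuts=(0.5, 0.9)):
    """Growth of group means between two cross-sections, measured relative
    to inequality-neutral growth (all groups growing at the aggregate rate).
    x0, x1: arrays of household income (or wealth) in the base and end year.
    cuts:   quantile boundaries; here bottom 50%, middle 40%, top 10%."""
    def group_means(x):
        x = np.sort(x)
        idx = (np.array([0.0, *cuts, 1.0]) * len(x)).astype(int)
        return np.array([x[a:b].mean() for a, b in zip(idx[:-1], idx[1:])])

    group_growth = group_means(x1) / group_means(x0) - 1.0
    neutral_growth = x1.mean() / x0.mean() - 1.0  # identical growth for all groups
    return group_growth - neutral_growth          # deviation from neutral growth

# Toy example: the top 10% grows faster, so the gradient slopes upward.
rng = np.random.default_rng(0)
base = rng.lognormal(mean=10.0, sigma=0.8, size=100_000)
end = base * np.where(base > np.quantile(base, 0.9), 1.12, 1.02)
print(inequality_gradient(base, end))  # negative for bottom groups, positive at the top
```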

Companion and related work

In a companion paper (Kuhn, Schularick and Steins, 2017b), we provide new evidence on the distribution of household debt and its changes over time. Household debt is rarely studied by macroeconomists but has received increasing attention since the financial turmoil of the Great Recession. We use the HHSCF data to explore the changes in the distribution of debt underlying the six-fold increase in household debt relative to income in the U.S. since World War II. The causes and consequences of this phenomenon are much debated across the social sciences. We show that debt-to-income ratios have risen at approximately the same rate across all income groups and that the aggregate increase in household debt is predominantly linked to the accumulation of housing debt. Middle-class and upper-middle-class households mainly accounted for the massive rise in aggregate debt, not poor households financing additional consumption in the absence of income growth, as is often assumed.

In related work with José-Víctor Ríos-Rull (Kuhn and Ríos-Rull, 2016), we provide a comprehensive description of income and wealth inequality based on U.S. SCF data that we hope will serve as a reference for other researchers. We provide most results from the paper for download at https://sites.google.com/site/kuhnecon/home/us-inequality. In the paper, we also address a recurring topic in the discussion of the sources of wealth inequality, namely the intergenerational transmission of wealth through bequests. It is a widely held belief that a lot of wealth is transmitted across generations through inheritance. Yet, looking at the micro data from the SCF, we find that in 2013, 80% of wealth in the U.S. economy was not inherited but acquired over a person’s lifetime. We show that this holds even for the wealthiest households; if anything, the share of inherited wealth decreases toward the top of the wealth distribution. A simple sanity check of this finding is to look at the richest Americans on the Forbes 400 list. In 2015, 8 of the top 10 wealthiest Americans did not inherit their wealth but built it within their lifetimes. Most of them are entrepreneurs who created wealth through inventions or new ideas that they turned into fortunes by selling shares in financial markets.

2. Sources of earnings inequality

Understanding the sources of earnings inequality is the goal of an ongoing research project with Christian Bayer (Bayer and Kuhn, 2017a). We use data from the German Structure of Earnings Survey (SES), an administrative linked employer-employee survey, which provides exceptionally detailed information on job characteristics, employers, employees, and their earnings and hours. In this data, observables can explain more than 80 percent of cross-sectional wage variation, an amount of explained variation that is unheard of in existing data on individual earnings. The reason for this explanatory power is not that overall earnings variation is small; it is the unique information about job characteristics that delivers this result. The data allows us to shed light on a key question in human capital theory because we can quantify how important employers, education, experience, and jobs and their characteristics are in determining wages.

We decompose cross-sectional wage inequality into an individual, a plant, and a job component (a stylized version of such a decomposition is sketched below). Among the three, the job component explains 40% of the age difference in average wages and almost all of the rise in wage inequality by age. The hierarchy level of workers is the most important information within the job component. Hierarchy encodes the responsibility and independent decision making connected with a job. It therefore captures a functional concept rather than a qualification concept: hierarchy is correlated with formal education but is inherently job specific. In fact, we show that a substantial fraction of workers is employed at all hierarchy levels for virtually any level of formal education (with the possible exception of extreme combinations) and that workers progress along the “hierarchy ladder” as they get older. Both results clearly indicate that formal education and hierarchy measure two distinct concepts. The plant component, i.e., differences between low-paying and high-paying plants, by contrast accounts for only 20% of the age variation in wage inequality. We interpret these results as showing that the ability to take on responsibility and to work independently are skills that are highly valued in the labor market and are required to climb the “hierarchy ladder,” with large returns in wages.
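The following stylized sketch, on simulated data, shows the kind of three-way decomposition described above. The variable names, the grouping of regressors into components, and all magnitudes are my own illustrative assumptions; the actual decomposition in Bayer and Kuhn (2017a) is estimated on the SES microdata.

```python
import numpy as np
import pandas as pd
import patsy

# Simulated linked employer-employee cross-section (sizes/coefficients invented).
rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "educ": rng.integers(0, 4, n),     # individual component: education group
    "exper": rng.integers(0, 40, n),   # individual component: experience
    "plant": rng.integers(0, 100, n),  # plant identifier
    "hier": rng.integers(0, 5, n),     # job component: hierarchy level
})
plant_fe = rng.normal(0.0, 0.10, 100)
df["logw"] = (0.08 * df["educ"] + 0.01 * df["exper"]   # individual part
              + plant_fe[df["plant"].to_numpy()]       # plant part
              + 0.12 * df["hier"]                      # job (hierarchy) part
              + rng.normal(0.0, 0.15, n))              # residual

# One joint regression of log wages on all three groups of observables.
y, X = patsy.dmatrices("logw ~ C(educ) + exper + C(plant) + C(hier)",
                       df, return_type="dataframe")
beta, *_ = np.linalg.lstsq(X.to_numpy(), y.to_numpy().ravel(), rcond=None)
beta = pd.Series(beta, index=X.columns)

# Attribute fitted wages to components via the regressors that generate them.
# With (near-)independent regressors, the shares roughly sum to the R-squared.
groups = {"individual": ("C(educ)", "exper"),
          "plant": ("C(plant)",),
          "job": ("C(hier)",)}
for name, prefixes in groups.items():
    cols = [c for c in X.columns if c.startswith(prefixes)]
    comp = X[cols].to_numpy() @ beta[cols].to_numpy()
    print(f"{name:10s} variance share: {comp.var() / df['logw'].var():.2f}")
```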

The information on hierarchy (job responsibility) that we bring to bear is critical for the decomposition of earnings inequality. We show that when job characteristics are ignored, plant differences appear to be more important, both in explaining average wage differences by age and in explaining the increase in wage inequality by age. In other words, high-paying plants are high-paying because of their job composition rather than some other intrinsic characteristic of the plant. Hence, the average human capital in the plant determines its average wage level. In addition, even fundamentally high-paying plants have a larger fraction of jobs at higher hierarchy levels, i.e., there is a positive correlation between plant effects in pay and the job composition of a plant.

Companion and related work

In ongoing companion work with Christian Bayer, we are compiling a long-run dataset on the evolution of the German wage and employment structure. The data contains information on employment and wages across hierarchy groups, different industries and employment types, and by gender. It is compiled from archived historical tabulations of the German Statistical Office. Comparing these detailed historical tabulations to microdata from the 2001 Structure of Earnings Survey (SES), we find that the tabulated characteristics explain two-thirds of the cross-sectional earnings variation. Our data digitization effort is still ongoing; once complete, the data will cover the entire period from 1957 until today. The data will be pivotal for exploring the transformation of the German labor market over the past six decades. We will use it to explore whether changes in the employment structure (“quantities”) or in the wage structure (“prices”) are more important in accounting for the observed increase in earnings inequality over time. A related question is explored by Song, Price, Guvenen, Bloom and von Wachter (2015), who ask whether changes in the wage structure within or between firms contributed to the rise in U.S. earnings inequality over the past decades. Yet their dataset does not allow one to observe changes in the composition of jobs, which explains most of the wage differences between plants in the German data.

In other related work with Christian Bayer (Bayer and Kuhn, 2017b), we exploit high-quality administrative data from the social security records of the German old-age pension scheme to explore how unequally labor market risks are distributed. The data has the unique feature that it is administrative and covers the entire employment histories of workers from age 14 to 65. Using this data, we document a high concentration of unemployment and sickness episodes within worker cohorts, low-pay no-pay cycles for the typical unemployed worker, and stable employment with very low unemployment risk for the typical employed worker. While unemployment risk is prominently studied as a source of earnings risk, we document that earnings risk on the job is also highly concentrated among few workers. These results call into question the assumption of a homogeneous risk process and suggest that besides widely documented and widely studied earnings inequality, there is also large “inequality in earnings risk.” As part of this project, we are exploring the consequences of risk heterogeneity for the design of public insurance and transfer systems.

3. Theoretical models of earnings dynamics and the distribution of earnings

Most macroeconomic models of inequality follow the path-breaking work of Aiyagari (1994), Huggett (1993), and Imrohoroglu (1989) on heterogeneous-agent incomplete-markets models. These models treat income dynamics and income inequality as exogenous; a single stochastic earnings process is the driver of all heterogeneity (a schematic example follows below). Macroeconomists rely on this workhorse model to study the consequences of pension or tax reforms, financial market liberalization, or technological progress. The model assumes that labor market and earnings dynamics remain unaffected by changes in the macroeconomic environment, so that income inequality constitutes a policy-invariant fundamental of the model. For policy analysis, this poses a severe limitation because changes in labor market institutions, retirement policy, tax policy, or social security programs may themselves affect individual labor market behavior. The third topic of my research agenda is the development of models of income dynamics that are shaped by individual behavior. Since most income comes from the labor market, my research focuses on earnings dynamics in the labor market.
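As a schematic example (with illustrative parameter values, not a specific calibration), the exogenous earnings process in this model class is typically an AR(1) in logs, identical for every household:

```python
import numpy as np

# Canonical exogenous earnings process of Aiyagari/Huggett-style models:
#   log y_{t+1} = rho * log y_t + eps_{t+1},   eps ~ N(0, sigma^2).
# Every household draws from the same process, so policy experiments in the
# model cannot change rho or sigma: earnings inequality is a fixed input.
rng = np.random.default_rng(0)
rho, sigma = 0.95, 0.20          # persistence and shock volatility (illustrative)
N, T = 10_000, 40                # households and periods
log_y = np.zeros((N, T))
for t in range(1, T):
    log_y[:, t] = rho * log_y[:, t - 1] + sigma * rng.standard_normal(N)

# Cross-sectional dispersion is pinned down by (rho, sigma) alone; it converges
# to sigma^2 / (1 - rho^2) regardless of any policy change in the model.
print(log_y[:, -1].var(), sigma**2 / (1 - rho**2))
```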

In Jung and Kuhn (2016), we develop a life-cycle general equilibrium labor market model. The model is jointly consistent with facts on worker mobility and earnings dynamics documented in the literature. The model can be seen as a human capital model with general and specific human capital accumulation in which “human capital production” is the result of a frictional process that is explicitly modelled via labor market behavior. This implies that the human capital accumulation technology itself is endogenous to the labor market environment. With this model, we provide a new tool to study the effects of macroeconomic changes on earnings dynamics and close a gap in the existing literature. Existing labor market models provide very little guidance for exploring earnings dynamics. They generate earnings dynamics that are highly transitory, so that, for example, a job loss is a rather inconsequential event. By contrast, a large empirical literature following Jacobson, LaLonde and Sullivan (1993) has shown that workers who lose a stable job experience large and persistent earnings losses. Using our structural life-cycle model, we offer an explanation for the inability of existing models to account for the empirically observed earnings dynamics. Our explanation builds on the observation that an upward and a downward force prevent earnings shocks from looming large in existing models. The upward force is search: workers who fall off the job ladder can search on and off the job to climb back up. Search frictions prevent an immediate catch-up, but, given the large job-to-job transition rates observed in the data, search is a powerful mean-reverting mechanism. The downward force is separations at the top of the job ladder: if separation rates are high even at the top of the job ladder, the implied short job durations will quickly make a worker who is at the top of the job ladder today look similar to a worker who just lost his job. These two forces, governed by labor market mobility, induce mean reversion in earnings dynamics and make earnings shocks transitory and short-lived in existing labor market models (a toy illustration follows below). Our paper is the first to uncover this tight link between labor market mobility and earnings dynamics. Put differently, existing labor market models provide little guidance for studying earnings dynamics because they stay close to the representative-agent paradigm by imposing uniform exogenous separation rates across all jobs. Any differences from search wash out quickly in such models, and all workers remain close to the average worker.
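The following toy simulation illustrates the argument; it is a minimal sketch in the spirit of the mechanism described above, not the model of Jung and Kuhn (2016). On a job ladder with a uniform separation rate and standard search frictions, a cohort of displaced workers and a cohort starting at the top rung converge toward the same average earnings within a few years. All rates and the rung grid are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
rungs = np.linspace(1.0, 2.0, 11)        # wage on each rung of the job ladder
delta, lam_u, lam_e = 0.02, 0.45, 0.10   # monthly separation, job-finding, offer rates
N, T = 50_000, 120                       # workers and months

def simulate(start):
    """Track average earnings of N workers over T months (rung -1 = unemployed)."""
    pos = np.full(N, start)
    avg_earn = np.empty(T)
    for t in range(T):
        offer = rng.integers(0, len(rungs), N)          # offers uniform over rungs
        hired = (pos == -1) & (rng.random(N) < lam_u)
        pos[hired] = offer[hired]                       # unemployed accept any offer
        poached = (pos >= 0) & (rng.random(N) < lam_e) & (offer > pos)
        pos[poached] = offer[poached]                   # job-to-job moves up the ladder
        separated = (pos >= 0) & (rng.random(N) < delta)
        pos[separated] = -1                             # uniform risk on every rung
        avg_earn[t] = np.where(pos >= 0, rungs[np.maximum(pos, 0)], 0.0).mean()
    return avg_earn

displaced = simulate(start=-1)              # cohort that just lost its job
top = simulate(start=len(rungs) - 1)        # cohort sitting on the top rung
for t in (0, 11, 35, 119):                  # displacement "losses" vanish quickly
    print(f"month {t + 1:3d}: displaced {displaced[t]:.2f} vs top cohort {top[t]:.2f}")
```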

A further innovation of the paper is that we use information on worker mobility dynamics rather than wage dynamics to estimate the parameters of the skill accumulation process. Our model describes rich endogenous mobility dynamics over the life-cycle and in the cross-section conditional on age. Exploiting this variation, we develop a new approach based on ideas similar to Topel (1991) to estimate the skill accumulation process based on mobility differences of workers by age and job tenure.

We use the model to provide an explicit example of how changes in the labor market environment affect earnings dynamics. We study the Dislocated Worker Program (DWP) and its effectiveness in mitigating the earnings losses of displaced workers. We explore retraining and placement support, the two central pillars of the DWP, and find that both policies are ineffective in reducing earnings losses. We explain this finding based on the insights from our structural analysis: active labor market policy might help to remove frictions and foster mean reversion by making displaced workers look more quickly like the average worker, but the gap between the pre-displacement job at the top of the job ladder and the average job in the labor market remains.

Empirically, we provide new evidence on heterogeneity in job stability for the United States. Going back at least to Hall (1982), there is evidence that despite high average worker mobility rates, there is also a large share of very stable jobs. We document that mean and median tenure increase almost linearly with age, so that at age 60 the average U.S. worker has been with her or his employer for 14 years. Our life-cycle model captures this heterogeneity in job stability. Abstracting from such heterogeneity limits which labor market outcomes can be explored because it severely distorts the decision to invest in human capital and the returns from search on and off the job. We argue that a life-cycle structure is the natural setup to deal with the inherent non-stationarity of job stability, namely tenure rising with age. While life-cycle models are by now a standard tool in the macroeconomic literature for studying wealth inequality, our model highlights the importance of life-cycle variation for the study of earnings dynamics and inequality.

Related work

In related work (Hartung, Jung and Kuhn, 2017), we investigate the effects of policy reforms on labor market dynamics. We study the unprecedented overhaul of the German unemployment benefit system as part of the so-called “Hartz reforms” in the mid-2000s. Most scholars attribute the German labor market miracle after the Hartz reforms to the cut in UI benefits, based on a mechanism by which the cut incentivized the (long-term) unemployed to search harder for jobs. We provide new evidence that challenges this narrative. Based on micro data from the Sample of Integrated Labour Market Biographies (SIAB), we document that the bulk of the decline in unemployment rates is due to a change in inflow rates into unemployment. The Hartz reforms mainly operated by scaring employed workers away from separating into unemployment, not by prompting unemployed workers to search harder. Our analysis therefore focuses on the effects of labor market institutions on job stability. Job stability is, as I discussed above, a critical determinant of earnings dynamics. We show that a search model with endogenous separations and heterogeneity in job stability can quantitatively explain the German experience. The highlighted channel implies a large (macro-)elasticity of unemployment rates with respect to benefit changes. Our findings thereby add a new aspect to the current debate on the role of UI benefits in unemployment rates by highlighting the effects on job stability and unemployment inflows, a mechanism that we argue is particularly relevant in the European context.

In related work with Tom Krebs and Mark Wright (Krebs, Kuhn and Wright, 2017), we explore a consumption-saving model with human capital accumulation but without frictions in the human capital accumulation technology. The friction we focus on in this paper is limited enforcement of financial contracts: households have access to a complete set of credit and insurance contracts, but their ability to use the available financial instruments is limited by the possibility of default. We demonstrate that the model, calibrated to the U.S., yields substantial under-insurance of consumption against human capital risk. In Krebs, Kuhn and Wright (2015), we show that the degree of under-insurance in the model is quantitatively consistent with under-insurance in the U.S. life-insurance market. Key to generating this result are age-dependent returns to human capital: high returns at the beginning of working life lead young households to invest heavily in human capital, trading off this investment against a lack of insurance against shocks. We find that the welfare losses due to the lack of insurance are substantial. We also explore how changes in the macroeconomic environment affect life-cycle earnings dynamics via human capital investment, and the resulting consequences for inequality.

4. Future work

My ongoing work is already providing important answers to the key questions of my research agenda, but a lot of work still lies ahead. Some of the next steps are already clear. The obvious next step is to embed a version of the labor market model described above in a consumption-saving framework. Such a model will provide the framework to study the joint determination of the income and wealth distribution; this work is at an early stage. A second step is to explore how well existing models of wealth inequality match the joint distribution of income and wealth. Preliminary results suggest that existing models face difficulties. In ongoing work, we are exploring whether incorporating closer links between the current labor market situation and financial decisions helps to bring model and data closer together.

References

Aiyagari, S. R. (1994). Uninsured idiosyncratic risk and aggregate saving. The Quarterly Journal of Economics vol. 109(3), pages 659-684.

Bayer, C., and Kuhn, M. (2017a). Which ladder to climb? Evidence on wages of workers, jobs, and plants. Mimeo, University of Bonn.

Bayer, C., and Kuhn, M. (2017b). Unequal lives: Heterogeneity in unemployment, health, and wage risk. Mimeo, University of Bonn.

Hall, R. E. (1982). The importance of lifetime jobs in the US economy. The American Economic Review vol. 72(4), pages 716-724.

Hartung, B., Jung, P., and Kuhn, M. (2017). What hides behind the German labor market miracle? A macroeconomic analysis. Working paper, University of Bonn.

Hubmer, J., Krusell, P., and Smith Jr., A. A. (2016). The historical evolution of the wealth distribution: A quantitative-theoretic investigation. Working paper 23011, NBER.

Huggett, M. (1993). The risk-free rate in heterogeneous-agent incomplete-insurance economies. Journal of Economic Dynamics and Control vol. 17(5), pages 953-969.

Imrohoroglu, A. (1989). Cost of business cycles with indivisibilities and liquidity constraints. Journal of Political Economy vol. 97(6), pages 1364-1383.

Jacobson, L. S., LaLonde, R. J., and Sullivan, D. G. (1993). Earnings losses of displaced workers. The American Economic Review vol. 83(4), pages 685-709.

Jung, P., and Kuhn, M. (2016). Earnings losses and labor mobility over the life-cycle. CEPR discussion paper 11572.

Kaymak, B., and Poschke, M. (2016). The evolution of wealth inequality over half a century: the role of taxes, transfers and technology. Journal of Monetary Economics vol. 77, pages 1-25.

Krebs, T., Kuhn, M., and Wright, M. L. (2015). Human capital risk, contract enforcement, and the macroeconomy. The American Economic Review vol. 105(11), pages 3223-3272.

Krebs, T., Kuhn, M., and Wright, M. L. (2017). Insurance in human capital models with limited enforcement. Review of Economic Dynamics vol. 25.

Kuhn, M., and Ríos-Rull, J.-V. (2016). 2013 update on the U.S. earnings, income, and wealth distributional facts: A view from macroeconomic modelers. Federal Reserve Bank of Minneapolis Quarterly Review vol. 37(1).

Kuhn, M., Schularick, M., and Steins, U. I. (2017a). Wealth and income inequality in America, 1949-2013. Mimeo, University of Bonn.

Kuhn, M., Schularick, M., and Steins, U. I. (2017b). The American debt boom, 1948-2013. Mimeo, University of Bonn.

Piketty, T. (2014). Capital in the Twenty-First Century. Belknap Press.

Piketty, T., and Saez, E. (2003). Income inequality in the United States, 1913-1998. The Quarterly Journal of Economics vol. 118(1), pages 1-41.

Saez, E., and Zucman, G. (2016). Wealth inequality in the United States since 1913: Evidence from capitalized income tax data. The Quarterly Journal of Economics vol. 131(2), pages 519-578.

Song, J., Price, D. J., Guvenen, F., Bloom, N., and von Wachter, T. (2015). Firming up inequality. NBER working paper 21199.

Topel, R. (1991). Specific capital, mobility, and wages: Wages rise with job seniority. The Journal of Political Economy vol. 99(1), pages 145-176.


EconomicDynamics interviews Marco Del Negro about DSGE modelling in policy

Marco Del Negro is a Vice President in the Macroeconomics and Monetary Studies Function of the Research and Statistics Group of the Federal Reserve Bank of New York. His research focuses on the use of general equilibrium models in forecasting and policy analysis. Del Negro’s IDEAS/RePEc entry. Any views expressed in this interview are his alone and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System.

EconomicDynamics: How prevalent has the use of DSGE models become in policy-making?
Marco Del Negro: Well, “policymaking” may not be the right way to put it. At best, DSGEs and other models can only *inform* policymaking! And the answer varies widely from central bank to central bank, with the Norges Bank displaying the DSGE model’s optimal policy results in its quarterly reports, and others not having a DSGE model at all. But my impression is that by now most advanced economies’ central banks have a DSGE model, and some in developing economies do as well. Their use varies across central banks, from forecasting, to scenario analysis, to optimal policy exercises. One thing to bear in mind is that DSGEs are never the only models on the table—they are almost always used in conjunction with other approaches, especially for forecasting. And that’s good: these models are not, nor will ever be, “reality,” so it’s a good practice to be robust, and check their answers against other methods. By the way, Marc Giannoni and I discuss using DSGEs for forecasting and policy analysis at the New York Fed in a VoxEU book that just came out.
ED: Forecasting models always need a “fudge factor” to align the model with current data or to accommodate special circumstances. For example, consumer confidence was unexpectedly high after the Brexit vote. How can a DSGE model with rational agents deal with that?
MDN: I guess that one of the points of using DSGE models for forecasting is that you do not need “fudge factors,” or “add factors,” as you do in other approaches. Much of the purpose of forecasting is to test the model: if your DSGE produces poor forecasts, you may as well face it, as it says your model has issues that need to be fixed. If you “fudge” the forecasts, you’ll never know for sure. To put it differently, we are trying to be “serious” econometricians, and in the long run, we’ll hopefully reap the benefits of this stance. Now, this does not mean that you should use only your seven macro time series in forecasting and ignore all else happening in the world. One thing you can do is to use financial variables, like spreads, as we do in the New York Fed DSGE model and as Frank Schorfheide and I have done in our Handbook of Economic Forecasting chapter (and applied in an AEJ: Macro paper with Marc Giannoni as well). These variables do react rapidly to current events (see our work with Raiden Hasegawa), so they are useful sources of information for the DSGE econometrician. You can also try to incorporate information from expectations surveys. One problem with DSGE models is surely their misspecification (everything considered, they are simple models), but another problem is that the econometrician’s information set is often so limited. We need to deal with both issues. Coming back to “fudging,” policymakers never take the models’ forecasts as revealed truth. They are smart enough to fudge the forecasts themselves, that is, to incorporate judgmental elements if they feel that in some specific circumstances the model is missing something important.
ED: A DSGE model requires a lot more rigor than a purely statistical model; it requires a very specific theory after all. Doesn’t that give the DSGE model an unfair disadvantage in any forecasting horse race?
MDN: I don’t know that it is “unfair.” After all, all approaches have their pros and cons, and the rigidity of DSGEs is definitely a con in the short run. If you think something is happening to exchange rates, for instance, and your model is a closed-economy model, you are pretty stuck. At the New York Fed, the DSGE group has teamed up with the Bayesian VAR folks (Domenico Giannone, Richard Crump, and Stefano Eusepi) to overcome this problem, at least for scenario analysis. This kind of methodological joint venture helps to study the risks to the forecast that events such as, say, Brexit may entail. A large BVAR by construction deals with a lot of observables, and so this approach can encompass most, if not all, of the variables that we see are impacted by the event, e.g., exchange rates or bond yields. We then use the BVAR to derive the implications for the variables that are in the DSGE information set. Finally, we employ the DSGE to understand what the event in question may mean for objects of interest that the VAR does not include, say, r*, or the output gap, as well as for possible monetary policy responses (a schematic version of this conditioning step is sketched below). In the long run, it is pretty obvious that if a variable is important for the forecasts, and for policy analysis, we should include it and change the model. But that’s what we are here for. Models are not static objects. As a recent popular Italian song goes, “whatever happens, panta rhei.”
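As a schematic illustration of that conditioning step, with invented numbers and a Gaussian forecast distribution (the actual New York Fed machinery is far richer): the joint forecast distribution from the VAR is conditioned on the event, and the implied paths of the DSGE observables are read off the conditional mean.

```python
import numpy as np

mu = np.array([0.0, 2.0, 1.5])          # forecast means: fx, gdp, spread (illustrative)
Sigma = np.array([[25.0, -1.5,  2.0],   # VAR-implied forecast-error covariance
                  [-1.5,  1.0, -0.4],   # fx is VAR-only; gdp and spread feed the DSGE
                  [ 2.0, -0.4,  0.3]])

# Scenario: a 10% exchange-rate depreciation. Condition the joint normal on it
# via the standard partitioned-Gaussian formula mu_r + S_re S_ee^{-1}(e - mu_e).
e_idx, r_idx = [0], [1, 2]
event = np.array([-10.0])
S_re = Sigma[np.ix_(r_idx, e_idx)]
S_ee = Sigma[np.ix_(e_idx, e_idx)]
mu_cond = mu[r_idx] + (S_re @ np.linalg.solve(S_ee, event - mu[e_idx]))
print(dict(zip(["gdp", "spread"], mu_cond.round(2))))
# The conditional gdp/spread paths are then fed into the DSGE, which maps them
# into objects the VAR does not contain (r*, the output gap) and policy responses.
```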
ED: Estimated DSGE and *VAR models have replaced a generation of forecasting models that grew to hundreds or even thousands of equations and became unwieldy. Recently, DSGE models have been increasing in size as well. Is this a good development?
MDN: I am not sure that DSGE and VAR models have “replaced” anything. If you think about it, the workhorse model for policy analysis in the Federal Reserve System is still the FRB/US model, a very large model to some extent in the Cowles Foundation tradition. And perhaps the problem with these Cowles Foundation models is not so much in their size, but in their “incredible identification” restrictions, as pointed out by Chris Sims almost forty years ago. And yet, “large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification,” and this is still true forty years later. I personally do not think that DSGEs should try to compete in size with Cowles Foundation-style models, but I am also not sure that growing in size is necessarily a bad thing. If you think about it, many of the aspects that the current batch of DSGEs are missing, like heterogeneity in households and firms, a more fleshed-out financial intermediation system, or an expectations formation mechanism more in line with the evidence from surveys, would most likely make the models larger. If there’s a good reason for building larger models, like incorporating more data, or relevant aspects of the economy, why not?
ED: An important aspect of DSGE models is that they help provide intuition about what is happening in the economy. Yet, as models grow, it becomes increasingly difficult to get that intuition. Is intuition a factor in forecasting or policy evaluations?
MDN: What you call “intuition” is definitely a plus for DSGE models in terms of forecasting. In a reduced-form model (say, in a factor model or in a VAR where you have not identified the shocks), if the forecasts change from one period to the next, you can only take a guess as to the reasons for the change. You know your forecast errors—perhaps the fact that you underpredicted output—but have no idea what caused them. Was it productivity, or demand, or monetary policy? DSGEs are identified models—meaning, they tell you exactly what shocks lie behind any forecast error, and any change in the forecasts, no matter the size of the model (a schematic statement of this point follows below). Of course, the stories a DSGE tells are only as good as the model that generates them. But these stories are also a way of testing the model. For instance, if a model suggests that the only fundamentals that drive inflation forecasts are price markup shocks, maybe you do not have such a great model of inflation. (Incidentally, contrary to popular belief, it is not true that in all DSGEs markup shocks are the only drivers of inflation; see the paper with Frank and Marc that I mentioned before, or the blog post by Andrea Tambalotti and Argia Sbordone on the FRBNY DSGE model’s interpretation of the Great Recession and its aftermath.) Now, Paul Romer and many others will critique these stories because the shocks in DSGEs aren’t really “structural.” To some extent that’s true. But this does not imply that these stories are completely uninformative. Let me give you an example. The model described in the paper with Frank and Marc says that “whatever shocks caused the increase in spreads during the Great Recession are also the reason why inflation has been persistently low afterwards.” Does the model tell you exactly what economic forces drove the spreads bananas after the Lehman crisis? Or what caused the crisis to start with? No. So, for a purist, that shock is bogus. Does that make the DSGE story completely uninformative? No. The story connects one event (the Lehman crisis) to another (low inflation). That seems valuable information to me (and by the way, it is not an ex post story—the model *forecasted* low inflation in the future as of 2008Q4). And it’s testable: if spreads increase again in the future, the DSGE will predict low inflation.
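In state-space language, the point can be stated compactly. This is a schematic statement for the square, invertible case, a simplification of mine rather than anything specific to the New York Fed model:

```latex
% With a structural moving-average representation y_t = \sum_{j \ge 0} \Theta_j \varepsilon_{t-j}
% and as many observables as structural shocks, the one-step-ahead forecast
% error pins down the shocks exactly:
\[
u_t \;\equiv\; y_t - \mathbb{E}_{t-1}\, y_t \;=\; \Theta_0 \varepsilon_t
\qquad\Longrightarrow\qquad
\varepsilon_t \;=\; \Theta_0^{-1} u_t ,
\]
% so every forecast error, and every forecast revision, has an exact shock decomposition.
```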
ED: Let’s talk about identification for a minute, since the Romer paper makes a big deal about it.
MDN: Let me start by saying that questioning identification, or whatever else one wants to question, in DSGE models is totally fair game. Identification of the shocks in DSGEs is definitely not “model free.” If you don’t buy the model, you don’t buy the identification either. I still think it’s a step forward relative to Cowles Foundation models (or to current large-scale models used in central banks, for that matter), where identification is achieved by placing ad hoc zero restrictions here and there. At least in DSGEs you can question identification by talking about economic assumptions. If Romer and others want to point out that these assumptions all too often go unquestioned, that seems fair—although that applies all over economics, not just to DSGEs. It’s important to understand that the issue we just discussed has absolutely nothing to do with parameter identification, that is, the “back to square one” point of Canova and Sala (2009). On that point, there is quite a bit of confusion. DSGE models are often estimated using Bayesian methods, and for good reasons. Do Bayesian methods “fix” whatever identification problem may be there? No way. Dale Poirier puts it as plainly as possible: “A Bayesian analysis of a nonidentified model is always possible if a proper prior on all the parameters is specified. There is, however, no Bayesian free lunch. The ‘price’ is that there exist quantities about which the data are uninformative, i.e., their marginal prior and posterior distributions are identical.” Even if the model is perfectly identified, priors will always matter in finite samples (which is what we have in macro). For a Bayesian, in fact, a model is not just the equilibrium conditions. Instead, it’s the combination of those *and* the prior distribution one puts on the parameters. Both should be questioned. As for the priors, there is a very useful paper by Ulrich Müller called “Measuring prior sensitivity and prior informativeness in large Bayesian models.” He proposes a method to check the extent to which the objects that you are reporting (say, impulse responses) are sensitive to the priors you are using. Basically, his paper moves the debate forward from “Is this DSGE model exactly identified or not?” to a more concrete and useful “How much do priors matter for the results that I am reporting?” (regardless of whether the DSGE is exactly identified).
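Poirier’s point can be written in one line. Suppose, as a simplified case, that the likelihood depends on a parameter block theta_1 only, so theta_2 is not identified; then the posterior for theta_2 is updated only through its prior dependence on theta_1:

```latex
% If p(y \mid \theta_1, \theta_2) = p(y \mid \theta_1), then
\[
p(\theta_2 \mid y) \;=\; \int p(\theta_2 \mid \theta_1)\, p(\theta_1 \mid y)\, d\theta_1 ,
\]
% and with prior independence of \theta_1 and \theta_2 the data are completely
% uninformative about \theta_2:  p(\theta_2 \mid y) = p(\theta_2).
```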
ED: When is it a good idea to use frequentist methods to estimate a DSGE model?
MDN: I guess that if you are a frequentist the answer is all the time! Seriously, I think there are a couple of reasons for using Bayesian methods. One is that there is information on the parameters outside of the macro data that you are using, e.g., from micro studies. In my view, that is one contribution of the calibration literature, namely to say: let’s use this information! Ideally you would estimate your model on both macro and micro data at the same time. When that’s too hard, you just incorporate the micro data information into your prior. I see that as a plus. Bayesian analysis also has something else in common with calibration: the approach to model validation. That’s not well appreciated or understood, but Bayesian marginal likelihoods describe the fit of the model when the parameters are generated from the prior (before seeing the data), not the posterior. The other reason, and that’s more philosophical, has to do with how you characterize uncertainty. I personally like the Bayesian approach: from our perspective, parameters (or other objects of interest) are random variables. We simply don’t know their true value, so we may as well report the full extent of our ignorance. But, you know, reasonable people can disagree on this one. Sometimes the full Bayesian approach is computationally infeasible: it’s just too hard to compute the likelihood. So one may want to employ a limited-information approach—say, use only a subset of moments to estimate the model. Other times it’s good to use a limited-information approach just because you can’t swallow the assumption that your DSGE is the data-generating process. Even so, there are Bayesian alternatives: the DSGE-VAR approach Frank Schorfheide and I have developed goes after this very idea. In addition, there are Bayesian limited-information methods. Ulrich Müller, whom I mentioned before, has done very nice work on this approach. In the end, you can be Bayesian all the time if you want. But if you don’t, that’s OK, too.
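For the record, this is the formula behind that remark about model validation: the marginal likelihood of a model M averages the likelihood over the prior, not the posterior, so it rewards models that fit before seeing the data, much in the spirit of calibration.

```latex
\[
p(y \mid \mathcal{M}) \;=\; \int p(y \mid \theta, \mathcal{M})\; p(\theta \mid \mathcal{M})\, d\theta .
\]
```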

Let me finish by thanking you for the opportunity to be interviewed. Lots of people have views on this stuff, and it’s been great to be able to give mine!

References:

Canova, F., and Sala, L., 2009. “Back to square one: Identification issues in DSGE models,” Journal of Monetary Economics, Elsevier, vol. 56(4), pages 431-449, May.

Del Negro, M., and Schorfheide, F., 2013. “DSGE Model-Based Forecasting,” Handbook of Economic Forecasting, vol. 2, Chapter 57, Elsevier.

Del Negro, M., Giannoni, M., and Schorfheide, F., 2015. “Inflation in the Great Recession and New Keynesian Models,” American Economic Journal: Macroeconomics, American Economic Association, vol. 7(1), pages 168-196, January.

Del Negro, M., Hasegawa, R., and Schorfheide, F., 2016. “Dynamic prediction pools: An investigation of financial frictions and forecasting performance,” Journal of Econometrics, Elsevier, vol. 192(2), pages 391-405.

Gabbani, F., 2017. Occidentali’s Karma, Italian entry to Eurovision Song Contest.

Müller, U., 2012. “Measuring prior sensitivity and prior informativeness in large Bayesian models,” Journal of Monetary Economics, Elsevier, vol. 59(6), pages 581-597.

Poirier, D., 1998. “Revising Beliefs in Nonidentified Models,” Econometric Theory, Cambridge University Press, vol. 14(4), pages 483-509, August.

Romer, P., 2016. “The Trouble with Macroeconomics,” manuscript, New York University.

Sims, C., 1980. “Macroeconomics and Reality,” Econometrica, Econometric Society, vol. 48(1), pages 1-48, January.

Tambalotti, A., and Sbordone, A., 2014. “Developing a Narrative: The Great Recession and Its Aftermath,” Liberty Street Economics, Federal Reserve Bank of New York.


SED: Letter from the President

Dear Friends:

I am looking forward to this year’s meeting in Edinburgh.  The School of Economics at the University of Edinburgh will host the meeting 22–24 June 2017.  The Local Organizing Committee, headed by Sevi Rodríguez Mora and Tim Worrall, has set up what I am sure will be a stimulating and enjoyable meeting.  The Program Committee, headed by Veronica Rappoport and Kim Ruhl, is putting the final touches on a high-quality program and has lined up an impressive set of plenary speakers:  Francesco Caselli, Erik Hurst, and Ayse Imrohoroglu.

At last year’s meeting in Toulouse, we had extensive discussions on how to accept a higher fraction of submissions to the annual meeting.  I am happy to report that Veronica and Kim, with the help of Sevi and Tim, have made some progress in this direction.  The most important step was to expand the number of simultaneous sessions during every time slot from 13 to 14.  (Remember that in Toulouse we had expanded from 12 to 13.)  Once again, we had a large number of submissions, 1638, falling just short of last year’s record of 1662.  Since we were able to increase the number of acceptances from 468 to 504, the acceptance rate increased from 28.2 percent in Toulouse to 30.8 percent in Edinburgh.  I understand, however, that our adjustments have not been enough to keep many loyal SED members happy.  Our program committee does not have the resources to screen papers as carefully as our journal does, and we are clearly rejecting some very good submissions.  We will continue to explore options for expanding the meeting when we meet in Edinburgh.

Last year, at the dinner in Toulouse and in my letter, I mentioned some of the options that we are exploring to increase the number of accepted papers:  elimination of one or more plenary sessions, elimination of the dinner, an increase in the number of parallel sessions, the addition of another day to the conference, and a reduction in the size of the Program Committee with a corresponding reduction in the number of invited sessions.  I have received a lot of feedback from SED members (and I thank you for it!).  There was some consensus that we should expand the number of parallel sessions, but not too much.  Doing so, as we are doing this year, requires some changes in the way we think about the dinner, at least at many of the venues that local organizers propose.  A formal dinner for six or seven hundred people requires facilities that are not available in many places, and it tends to be very expensive and not very good.  Most of you with whom I talked, or who wrote to me, thought that we should have some sort of social event like a dinner but that we could be flexible about it.  We will explore options in this direction in the future.  A reduction in the size of the Program Committee was also a popular option and is already something that we have been doing, although a number of you noted that many, if not most, of the papers included in invited sessions would have been included anyway.  Elimination of one or more plenary sessions is far more controversial, with some SED members strongly opposed and others strongly in favor.  Since I myself am in favor of keeping the plenary sessions, this is probably not an option that we will try out in the near future.  The least popular option that I proposed was adding another day or half day to the conference.  Because this option would be easy to try out at some proposed venues, however, it is probably something that we will experiment with.  When Antonio Merlo and I did this at the 1998 meeting in Alghero, it worked fairly well.  If we do it, I pledge that I will ask the Program Committee Co-Chairs, if they accept my paper, to put me in a session on Sunday morning or Wednesday afternoon, whichever slot we decide on.

One thing that we added to the meeting last year to make it easier for young people to participate was a poster session.  Christian Hellwig and Franck Portier did a great job putting together a session that was popular both with the presenters and with other meeting attendees.  Veronica and Kim are working with Sevi and Tim to put together an expanded poster session this year.

If you have any thoughts or suggestions on the issues of expanding our meetings, you can send them to me at [email protected] or tell me about them in Edinburgh.  The task is a difficult one, and we will continue to experiment.  For every SED member who is in favor of radical changes to expand the meetings, there is another SED member who feels that we are already doing things well.

The 2018 Meeting will be held in Mexico on 28–30 June 2018, possibly expanded to the afternoon of 27 June or the morning of 1 July.  It will be sponsored by the Instituto Tecnológico Autónomo de México with the support of the Banco de México.  The local organizing committee, headed by Diego Domínguez, Germán Rojas, and Carlos Urrutia, is already arranging what promises to be a fabulous meeting (And I mean FABULOUS). We have managed to recruit David Lagakos and Guillermo Ordoñez to be the Co-Chairs of the Program Committee.

We have had a number of changes in the Editorial Board of the Review of Economic Dynamics.  You can find all of the information elsewhere in this newsletter, but let me focus on three:  Matthias Doepke is ending his term as Coordinating Editor.  On behalf of the Society, I want to thank Matthias for his service.  If you take a look at the journal rankings on RePEc, you will see that the ranking of RED has been steadily improving.  Matthias started as Coordinating Editor in July 2013.  He will continue to serve as Editor on papers that were submitted during his term.  The SED Board of Directors has managed to recruit Jonathan Heathcote and Vincenzo Quadrini to serve as Coordinating Editors effective 1 May 2017.  Welcome, Jonathan and Vincenzo!  Finally, Richard Rogerson is ending his term as Associate Editor.  Richard served as Associate Editor 1997–2001 and 2007–2017 and as Editor 2001–2007.  He also served as President of the Society 2009–2012.  This is a truly impressive record of service to our Society.  Thanks, Richard!

I look forward to seeing those of you who were lucky enough to have your papers accepted in Edinburgh.  I hope that we can find additional ways to add more acceptances for the Mexico City meeting.

As I say, I am getting excited about the Edinburgh Meeting.  Looking at the conference website, I see a web page on the restaurants and pubs in Edinburgh.  I intend to sample these offerings extensively.  I hope that you can join me.

Tim Kehoe

President, Society for Economic Dynamics


Letter from the Coordinating Editors

Dear Friends:

We are excited and honored to be taking over from Matthias Doepke as joint Coordinating Editors at the Review of Economic Dynamics. Matthias has done a fantastic job during his time in charge, and we inherit a journal that is in great shape.

RED has two key strengths on which we want to build. First, RED belongs to the Society for Economic Dynamics, and this strong and loyal community makes the journal special. Second, RED has a fantastic set of Editors, Associate Editors, and referees, who together have established a reputation for handling submissions carefully and speedily. We look forward to building on the vision and hard work of Matthias and his predecessors, and to publishing lots of exciting and important papers.

Jonathan Heathcote and Vincenzo Quadrini
Coordinating Editors, Review of Economic Dynamics.
