Mikhail Golosov and Aleh Tsyvinski on New Dynamic Public Finance
Mikhail Golosov is Professor of Economics at Princeton University. His research interests lie in the impact of taxation and optimal dynamic contracts. Aleh Tsyvinski is Arthur M. Okun Professor of Economics at Yale University. He is interested in optimal fiscal policy and the role of government. Golosov’s RePEc/IDEAS profile and Tsyvinski’s RePEc/IDEAS profile.
One of the central questions in macroeconomics and public finance is how to design taxation and social insurance policy. Debates about how progressive taxes should be, how to reform the Social Security system, or how generous welfare programs should be, are consistently on the front pages of newspapers and at the top of policy agendas. The New Dynamic Public Finance (NDPF) is an approach to the design of optimal taxation and social insurance programs that lies at the intersection of macroeconomics and public economics, contributes to the theoretical understanding of how policy should be conducted, and provides practical recommendations that can be used by policymakers around the world.
Traditional approaches to optimal policy
We start with a short description of the NDPF approach and the main results derived with it. We will also contrast this approach with a Ramsey approach widely used in the macroeconomic literature and in policymaking.
The Ramsey approach studies the problem of funding a stream of government expenditures through taxation, operating under the assumption that only distortionary linear or proportional taxes can be used (see Chari and Kehoe 1999 for a comprehensive review). The main goal of the government is to minimize the social distortions arising from the assumed nature of taxes. If, instead, lump-sum taxes were allowed, then the unconstrained first-best optimum could be achieved. While Ramsey models have provided several insights into optimal policy (the zero capital taxation result, tax smoothing, time inconsistency of taxation), their well-understood limitation regarding the ad hoc nature of tax instruments may make interpreting their prescriptions problematic. Moreover, the focus of the Ramsey approach on funding government expenditures makes it less suitable for analyzing the important roles taxes and transfers play in a modern economy — the provision of redistribution and social insurance.
The Mirrlees approach (pioneered by Mirrlees 1971) moves the question of social insurance and redistribution to the forefront of the analysis. The starting point is that people in the economy differ in their income-generating abilities or, more generally, face risks. These abilities may be I.Q., physical stamina, or health — we generally refer to these abilities as skills. Society would like to provide insurance for the people who are less advantaged than others by providing redistribution from the more fortunate ones. The problem with the provision of such redistribution or insurance is that, if income-generating abilities are private information, agents may pretend to be those to whom redistribution is directed. For example, a progressive income tax designed to redistribute to those who are poor because of low income-generating ability would create a disincentive for agents with high skills to work as hard. Therefore, the optimal taxation or insurance program must balance the desire to redistribute with the provision of incentives.
The New Dynamic Public Finance
The New Dynamic Public Finance is a recent literature that analyzes the Mirrlees framework in dynamic settings. Many taxes, such as a capital tax, and social insurance programs, such as Social Security in the U.S., feature incentive-insurance tradeoffs that are dynamic, or intertemporal. These dynamic tradeoffs are the focus of the NDPF. Specifically: (1) the NDPF focuses on social insurance and redistribution, an important goal of policymaking; (2) NDPF policy predictions do not rely on ad hoc restrictions on tax instruments but on the fundamental tradeoffs between insurance and incentives; (3) the NDPF allows for a richer set of optimal tax instruments and social insurance benefits such as the ones used in practice (progressive taxes, means-tested social insurance programs, etc.).
In Golosov, Kocherlakota and Tsyvinski (GKT, 2003) we re-examine the two central conclusions of the taxation literature — that capital should not be taxed and that consumption goods should be taxed uniformly — using a dynamic model with a general process of private information. Each individual is subject to potentially serially correlated labor income shocks and wishes to obtain insurance against adverse realizations of these shocks. We establish a very general result that, if the evolution of skills is stochastic, the celebrated uniform commodity taxation theorem of public finance continues to hold, but the zero capital income taxation theorem does not necessarily apply. Because of the dynamic nature of the incentives with which agents need to be provided, incentive compatibility requires a wedge relative to the decentralized consumption plans, and this translates into a positive implicit tax on capital.
The theoretical results in GKT not only answer some important questions, but also raise a number of issues. First, one would like to know how important these taxes might be in realistic social insurance problems. Second, it is necessary to consider methods of implementing these optimal policies in a simple manner. We address these questions in Golosov and Tsyvinski (2006) by studying the problem of optimal disability insurance. We show that we can use the new methods developed in GKT and combine them with the data to answer questions about the design of optimal disability insurance. This is a relevant problem both because it provides a simple and tractable example of the general setup in GKT and because disability insurance is one of the most important social programs, providing insurance against a very important and life-changing event. Disability insurance is one of the largest social insurance programs in the United States, considerably larger than, for example, unemployment insurance. We provide a simple decentralization scheme for implementing the optimal disability insurance policy. Our proposed implementation scheme involves an asset test in which a person receives a disability payment only if his assets are below a certain level. Many social programs use such asset tests, even though they are not currently used in Social Security Disability Insurance. We use the available data to show that the introduction of an asset test would provide significant benefits to the economy.
We build on these results in Golosov and Tsyvinski (2007) to investigate the theoretically more challenging question of whether there is a role for the government in designing social insurance programs when individuals can also engage in private insurance using competitive markets. In dynamic optimal taxation environments with informational frictions it is often assumed that a government is the sole provider of insurance. However, in many circumstances, private insurance companies can and do provide insurance against various shocks, ranging from unemployment to health shocks and bankruptcy. The presence of competitive insurance markets may significantly change optimal policy prescriptions regarding the desirability and extent of social insurance policies. In this paper we allow a rich set of competitive insurance markets, the structure of which is endogenously affected by informational constraints and by government policy. This latter feature is particularly important, since it emphasizes that government and private provision of insurance are coupled and need to be studied together. We show that while the markets can provide a significant amount of insurance, there is still a role for welfare improving distortionary taxes or subsidies imposed by the government. The reason for this is that private insurance companies cannot fully internalize pecuniary externalities that arise from the dynamic nature of the incentive problems that they are facing.
Path forward: from theory to policy
This research agenda has now reached a stage at which it is able to analyze the design of social insurance programs and optimal taxation in rich environments that can be closely matched to microeconomic data.
In Golosov, Troshkin, and Tsyvinski (2016) we study a rich lifecycle economy with individuals who are ex ante heterogeneous in their abilities and experience idiosyncratic shocks to their skills over time.
We first derive a novel decomposition that allows us to isolate key economic forces determining the optimal labor distortions in lifecycle economies with unobservable idiosyncratic shocks and to provide their characterization. We show that the labor distortion in a given period is driven by two components: an intratemporal component that provides insurance against new shocks in that period, and an intertemporal component that relaxes incentive constraints and reduces the costs of insurance provision against shocks in the previous periods. The intratemporal component depends on the elasticity of labor supply, the hazard rate of the current period shock conditional on past information, and the welfare gain from providing insurance against that shock. The intertemporal component depends on past distortions, a specific form of a likelihood ratio of the shock realization, and the marginal utility of consumption.
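Schematically, and with notation introduced here purely for illustration (the paper’s exact formulas contain additional terms), the decomposition of the labor wedge can be written as:

```latex
\frac{\tau^{L}_{t}}{1-\tau^{L}_{t}}
\;=\;
\underbrace{\mathcal{A}_{t}}_{\substack{\text{intratemporal:}\\ \text{insurance against period-}t\text{ shocks}}}
\;+\;
\underbrace{\mathcal{B}_{t}}_{\substack{\text{intertemporal:}\\ \text{relaxing past incentive constraints}}}
```

where, as described above, the intratemporal term combines the labor supply elasticity, the hazard rate of the current shock conditional on past information, and the welfare gain from insuring that shock, while the intertemporal term combines the previous period’s distortion, a likelihood ratio over shock realizations, and the marginal utility of consumption.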
This decomposition then implies that the behavior of the optimal distortion is quite different for the top and the bottom of the income distribution. The labor distortions for high-productivity shocks are determined by the labor elasticity and the higher moments of the shock process. The labor distortions for low shocks depend on the persistence, the past history, and the growth rate of consumption, and are generally increasing in age.
We then use newly available high-quality administrative data on labor earnings (see Guvenen, Ozkan and Song (2014) and Guvenen et al. (2015)) and the U.S. tax code to estimate the stochastic process for skills and quantify the implications for the optimal distortions. Like the earnings themselves, the estimated process for the shocks is highly persistent and leptokurtic. We find that the optimal labor distortions are approximately U-shaped as a function of current labor earnings, with the dip in the distortions around the level of earnings in the previous period. The optimal savings distortions generally increase in labor earnings. The distortions are fairly large in magnitude, especially in the right tail: the labor distortions approach 75 percent, while savings distortions approach 2 percent of the return to savings. We also show that the welfare losses from using simple affine policies instead of the optimal policy are around 2 to 4 percent of consumption. Moreover, the optimal labor distortions differ significantly from those in a model with lognormal shocks, both qualitatively and quantitatively, and imply higher welfare gains from non-linear, history-dependent policies. These findings (both the U-shape and the relatively high welfare gains from non-linear, history-dependent taxation) are largely driven by the high kurtosis found in the labor earnings process in the data. This suggests that a system of progressive taxes and history-dependent transfers that are phased out relatively quickly with income can capture most of the welfare gains in this economy.
The discussion above shows that it is challenging to develop a theory of taxation that both allows for sufficiently rich tax functions and provides transparent, intuitive insights about the effect of taxes. In Golosov, Tsyvinski, and Werquin (2016) we develop an alternative variational approach to the analysis of the effects of taxation that both preserves the transparency of the Ramsey approach and allows us to handle more complicated, non-linear tax systems. Instead of solving for a constrained optimum and then backing out the implied optimal taxes that decentralize it, we develop a method to optimize with respect to the tax function directly. This method builds on the perturbation ideas that Piketty (1997) and Saez (2001) apply to a static economy. Our paper finds sufficient conditions for the more rigorous application of that approach and extends it to more general dynamic settings. First, we apply it to optimal taxation problems and show how it recovers the hallmark results on optimal linear commodity taxation of Diamond (1975) and non-linear labor taxation in the static model of Mirrlees (1971), both of which are special cases of our general environment. Our formulas emphasize the insight that the same general principle underlies the two models, namely that more sophisticated (in this case, non-linear) tax instruments allow the government to better target the distortions associated with higher tax rates toward the segments of the population that either have relatively small behavioral responses or contain relatively few affected individuals. We then show that this fundamental principle can be generalized and applies to broader classes of environments. In particular, we derive several novel predictions, such as the optimality conditions for the optimal non-linear capital income tax, or for the optimal labor tax on the joint income of couples.
We also show how this approach can be used beyond optimal taxation, as we apply it to analyze tax reforms and welfare gains from increased sophistication of tax systems. We sequentially decompose the welfare gains of reforming existing, not necessarily optimal, tax systems as the tax instruments become more sophisticated. We show the effects of taking into account individuals’ intertemporal optimization decisions, of allowing for age- and history-dependence, and of joint conditioning on labor and capital income. This sequential decomposition of increasingly sophisticated tax systems shows that the welfare effects of general tax reforms depend on aggregate measures of three key elements: the government’s redistributive objective; the labor and capital income elasticities and income effect parameters with respect to the marginal income tax rates, which capture the behavioral effects of taxes; and the properties of the labor and capital income distributions, namely the hazard rates of the marginal and joint distributions. Finally, we show how one can use available empirical moments of income distributions and elasticities to quantify the welfare effects of small tax reforms. Unlike the traditional approach to measuring welfare gains, which requires solving often difficult maximization problems to find the optimum, our method is very transparent and can be done almost “by hand.”
More broadly, the variational approach that studies the effects of tax reforms is complementary to the New Dynamic Public Finance literature, which studies environments in which taxes are restricted only by explicit restrictions on the government’s information set. If we restrict attention to the classes of taxes considered in the NDPF literature, we obtain an alternative characterization of optimality conditions in terms of elasticities. More generally, it is easy to use our approach to analyze tax systems with restricted tax instruments, e.g., taxes that are non-linear but separable from labor income taxes, and to quantify welfare gains from switching to more sophisticated taxes, e.g., gains from introducing joint taxation of capital and labor.
The dynamic public finance literature has achieved significant progress in a relatively short period of time. What started as an abstract optimal dynamic contracting framework has now become an active research agenda that delivers theoretical, quantitative, and empirical results that are increasingly relevant to policy. In our opinion, there are three primary directions in which the literature may move. First, most of the empirically and policy-relevant problems still require major theoretical and quantitative effort to be analyzed. Progress in developing new tools is needed to ease the analytical burden and to lower the barriers to entry for more applied researchers. Second, in many cases, the problem of implementation with simple tax systems is as laborious (and hence interesting) as the problem of finding the optimum. We have discussed one alternative — a variational approach that starts with the tax system and changes it directly and hence does not need to separately consider the policy and implementation problems. Progress in developing a simple yet general tax implementation of the optimum in a variety of settings, and in connecting it to the variational approach, is important. Third, a large number of more applied public finance and macroeconomic questions can be addressed with this framework — from the design of specific taxes or elements of the tax code to a variety of social insurance and redistribution programs.
V. V. Chari and Patrick Kehoe. 1999. “Optimal Fiscal and Monetary Policy,” in: John Taylor and Michael Woodford (editors), Handbook of Macroeconomics, vol. 1, ch. 26, pages 1671-1745.
Johannes Stroebel is Associate Professor of Finance at the New York University Stern School of Business. His research interests lie in the relationship between real estate and macroeconomics. Stroebel’s RePEc/IDEAS profile.
EconomicDynamics: The U.S. has seen large swings in house prices over the past 15 years. What were the effects of these house price movements on the broader economy?
Johannes Stroebel: This is a very interesting question that we are still trying to fully understand. It is by now clear that there are a number of different channels through which the large boom-bust cycle in house prices affected the economy. Many of these channels are directly or indirectly related to household balance sheets. The idea is that when house prices go up, households can withdraw their increased home equity and use the additional resources to consume. On the flip side, when house prices decline and the value of housing assets drops, households’ ability to consume out of home equity disappears and they might have to default. Through this channel, house price movements have the potential to affect aggregate demand. Much of the early research on this household balance sheet mechanism was pioneered by Atif Mian and Amir Sufi. In Mian and Sufi (2011), they document that during the 2002 to 2006 house price boom, households extracted more home equity, and indeed used much of the money to increase consumption. They also show that the resulting increase in household leverage subsequently led to higher mortgage defaults. Similarly, in Mian, Rao and Sufi (2013), they document a large marginal propensity to consume (MPC) out of housing wealth during the housing bust between 2006 and 2009. They find this MPC to be particularly large in areas with poorer households, suggesting disproportionately large effects on household demand in those regions. This line of empirical research has put households and their balance sheets squarely at the center of the macroeconomic narrative of the housing boom-bust cycle. More recently, there have been research efforts to better understand how models of optimal household behavior can generate the large observed MPCs out of housing wealth (e.g., Berger, Guerrieri, Lorenzoni, and Vavra, 2015; Kaplan, Mitman, and Violante, 2015).
Researchers have also explored other macroeconomic effects of house price increases. For example, Giroud and Mueller (2016) show that the negative local demand response to house price declines led firms to reduce employment, with particularly strong effects at highly-levered firms (see also Mian and Sufi, 2014). Charles, Hurst, and Notowidigdo (2015) show that house price increases in the early 2000s led to a significant decline in the college enrollment of individuals who were, for example, drawn into the construction sector. This highlights how temporary swings in house prices can have large and permanent effects on the economy.
In my own work with Joe Vavra, we analyze how local retail prices and markups respond to house-price-induced local demand shocks. We use item-level price data from thousands of retail stores to construct price indices at the zip code and city level. We show that local retail prices strongly respond to local house price movements, with elasticities of retail prices to house prices of about 15%-20% across both housing booms and busts. We document that this elasticity is driven by changes in markups rather than by changes in local costs. We also find that these elasticities are much larger in areas where there are mainly homeowners compared to areas with mainly renters. Our interpretation is that markups rise with house prices, particularly in high homeownership locations, because greater housing wealth reduces homeowners’ demand elasticity, and firms raise markups in response. When we look at shopping data, we find corroborating evidence for this. In particular, we document that when house prices go up, homeowners become less price sensitive (for example, they buy fewer goods on sale or with a coupon), while renters become more price sensitive.
ED: Would you say the large increases in global house prices in the early 2000s were the result of a housing bubble?
JS: That very much depends on what you mean by a housing bubble. The workhorse model of bubbles in macroeconomics is based on a failure of the transversality condition that requires the present value of a payment occurring infinitely far in the future to be zero. Such a bubble is often called a classic rational bubble (see Blanchard and Watson, 1982, and Tirole, 1982, 1985). In joint work with Stefano Giglio and Matteo Maggiori, we provide the first direct test of this classic rational bubble. Our analysis focuses on housing markets in the U.K. and Singapore since the early 1990s. In both countries, property ownership takes the form of either a leasehold or a freehold. Leaseholds are temporary, pre-paid, and tradable ownership contracts with initial maturities ranging from 99 to 999 years, while freeholds are perpetual ownership contracts. By comparing the price of 999-year leaseholds with that of freeholds, we can get a direct estimate of the present value of owning the property in 999 years. Models of classic rational bubbles would predict a large price difference between these two contracts. When we look in the data, we find no price difference, even when we zoom in on the periods and regions with the largest house price increases. Put differently, at no point do we find evidence for a classic rational bubble in these housing markets.
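To see why this comparison is so clean a test, consider a back-of-the-envelope calculation (the constant discount rate here is my simplifying assumption, not the paper’s estimation strategy):

```python
# Why comparing 999-year leaseholds with freeholds tests for a classic
# rational bubble: at any positive discount rate, the fundamental value
# of ownership beyond year 999 is negligible, so absent a bubble the two
# contracts should trade at (almost) the same price. A classic rational
# bubble grows at the discount rate, attaches only to the
# infinite-maturity freehold, and would show up as a freehold premium.

def pv_factor(r: float, years: int) -> float:
    """Present value today of $1 received `years` from now,
    at a constant annual discount rate r."""
    return 1.0 / (1.0 + r) ** years

# Illustrative discount rates (assumed values, not estimates):
for r in (0.01, 0.02, 0.05):
    print(f"r = {r:.0%}: PV of $1 in 999 years = {pv_factor(r, 999):.2e}")
```

Even at a discount rate of only 1% per year, the present value of a dollar 999 years out is on the order of 10^-5, so finding no price gap between the contracts rules out the classic rational bubble while remaining silent on behavioral deviations from fundamentals.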
However, just because house prices did not display the features of a classic rational bubble does not mean that they did not deviate from fundamental values. Other, more behavioral models of bubbles do not require a failure of the transversality condition. Differentiating between these models is important, since the positive and normative implications of models with bubbles depend crucially on the exact type of bubble that is considered. So empirical research should not just investigate whether there was a deviation of prices from fundamental values. It is as important to understand the exact sources of those deviations.
ED: What are examples of such forces that could explain deviations of house prices from fundamental value?
JS: A lot of evidence points towards an important role played by the way that potential homebuyers form their expectations about future house price growth. For example, there is mounting evidence that when households think about where house prices will go in the future, they extrapolate from their own past experiences: households that experienced larger past price growth expect faster price growth going forward. Kuchler and Zafar (2016) provide strong evidence for this type of extrapolative expectations in the housing market. Guren (2016), Glaeser and Nathanson (2015), and Barberis, Greenwood, Jin, and Shleifer (2015) explore the implications of such extrapolation for price dynamics. The take-away is that extrapolative expectations can quite easily cause prices to deviate from fundamental values. Another force that I have recently investigated together with Mike Bailey, Rachel Cao, and Theresa Kuchler is the role of social interactions in driving housing market expectations and investments. To conduct our analysis, we combine anonymized social network information from Facebook with housing transaction data and a survey. Our research design exploits the fact that we can observe variation in the geographic spread of different people’s social networks. For example, when you compare two individuals living in Los Angeles, one might have more friends in Boston, and the other might have more friends in Miami. At any point in time, house prices in Boston might either appreciate more or less than house prices in Miami. We find that in years following disproportionate increases in Miami house prices, Los Angeles-based renters with more friends in Miami are more likely to buy a house. They also buy larger houses, and pay more for a given house. Similarly, when homeowners’ friends experience less positive recent house price changes, these homeowners are more likely to become renters, and more likely to sell their property at a lower price.
We show that these relationships cannot be explained by common shocks to individuals and their friends. Instead, they are driven by the effect of social interactions on individuals’ housing market expectations. We arrive at this conclusion after analyzing responses to a housing expectation survey. We find that individuals whose geographically distant friends experienced larger recent house price increases consider local property a more attractive investment, with bigger effects for individuals who regularly discuss such investments with their friends. This provides a link between social interactions and behavior that goes through expectations. It also provides some evidence as to the sources of disagreement about future house price growth: two people looking at the same local housing market might disagree because their heterogeneous social networks have recently had very different price experiences.
This evidence is very consistent with Bob Shiller’s narrative of the housing boom in the 2000s, which he described as a “social epidemic of optimism for real estate.” Indeed, it does appear that after interacting with individuals who have recently experienced house prices going up, you yourself become more optimistic about housing market investments in your own local housing market. So one promising direction for future research is to better understand the role of social dynamics in forming expectations and ultimately prices, even beyond housing markets. Burnside, Eichenbaum, and Rebelo (2016) have made important progress in how to model such interactions, but there are many interesting unresolved empirical and theoretical questions.
ED: Price formation depends also on how future cash flows are discounted. You recently provided evidence about long-term discount rates.
JS: So the problem we analyzed for that project was to understand how households discount payments that will only materialize very far in the future, say 100 years or more. This horizon is very relevant for a number of important intergenerational public policy issues, such as the question of how much to invest in climate change abatement. But there were essentially no data points on actual discount rates used by households for payments that far into the future. The reason is that the finite maturity assets necessary to estimate those discount rates usually do not extend more than 30 or at most 50 years into the future. Here is where my research with Stefano Giglio and Matteo Maggiori came in. We used the same leasehold/freehold contract structure for real estate that I described above to back out households’ discount rates over these very long horizons. Our approach was to compare transaction prices for two otherwise identical properties, one trading as a 100-year leasehold and the other as a freehold. Our insight was that any price differences would capture the present value of owning the freehold in 100 years, and would therefore be informative about discount rates that households used over 100-year horizons. When we looked at the data, we found that in both the U.K. and Singapore, freeholds were trading at a premium of about 10% relative to 100-year leaseholds on otherwise identical properties. This implied very low annual discount rates over this horizon, at levels of about 2.6%.
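The arithmetic behind that number can be sketched as follows; this is my simplified version assuming flat rents and a constant discount rate, whereas the paper’s estimate of about 2.6% also accounts for rent growth:

```python
# Back out the long-run discount rate implied by a ~10% freehold premium
# over otherwise identical 100-year leaseholds. Simplifying assumptions
# (mine, for illustration): constant rents and a constant discount rate r,
# so that  leasehold = freehold * (1 - (1+r)**-maturity).

def implied_rate(premium: float, maturity: int) -> float:
    """Discount rate r consistent with a freehold trading at `premium`
    over a `maturity`-year leasehold under constant rents."""
    # premium = freehold/leasehold - 1 = 1/(1 - x) - 1,
    # where x = (1+r)**-maturity is the PV of the year-`maturity` reversion.
    x = 1.0 - 1.0 / (1.0 + premium)
    return x ** (-1.0 / maturity) - 1.0

r = implied_rate(premium=0.10, maturity=100)
print(f"implied annual discount rate ≈ {r:.1%}")  # about 2.4% under flat rents
```

Allowing for positive rent growth, as the actual estimation does, shifts the implied rate up toward the reported 2.6%; either way, the striking result is how low the rate is at a 100-year horizon.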
In follow-up work with Andreas Weber we are exploring what these low long-run discount rates for housing can tell us about the appropriate discount rates to use for climate change abatement. Two of our insights from that project are as follows: First, when we combine the low long-run discount rate with estimates of the average return to housing, we find evidence for a strongly-declining term structure of discount rates for housing. This can help find appropriate discount rates at a wide variety of horizons. Second, since housing is a risky asset (i.e., it will pay off in good states of the world), but climate change abatement is a hedge (i.e., it will pay off during climate disasters, which are bad states of the world), the low long-run discount rates for housing are actually an upper bound on the appropriate discount rate one should apply for investments in climate change abatement. This suggests that such investments will have much higher present values than what is commonly assumed.
We are currently accepting submissions through Conference Maker. The deadline for submission is February 15th, 2017. We are looking forward to many exciting submissions for the academic program, and we hope to see you next year in Edinburgh!
The Toulouse Meeting followed past meetings in setting new standards and in raising the bar even higher for future meeting organizers to try to clear. All of the details of the Meeting, including the slides and video from the plenary lectures, can be found on the Society’s web site. I personally want to thank all of the people that I have mentioned and all of you who participated in helping to make the 2016 Meeting of the SED one of the best conferences in economics last year!
Sevi, Tim, and their Local Organizing Committee have already made what I expect to be an important innovation. They brought two support staff members from the School of Economics at the University of Edinburgh, Janet Taylor and Hannah Chater, to Toulouse, where they met with the SED Board and with the support staff from the Toulouse School of Economics headed by Carolyne Lamy to go over their plans for the 2017 Meeting. We are constantly striving to make our meetings better, and I congratulate Sevi and Tim for coming up with this potentially major positive TFP shock. Let’s see how it works out!
I can imagine that a lot of you are thinking: With this year’s meeting looking to rival, or maybe beat, last year’s in quality, will I be lucky enough to get my paper on the schedule? I hope so, and let me tell you the sorts of things we are thinking about doing to make it more likely — that is, to increase the odds in your, and my, favor. At the SED Board Meeting in Toulouse, we discussed trying to increase the acceptance rate for contributed papers with Veronica Rappaport and Kim Ruhl, next year’s Program Committee co-chairs. We decided to continue some innovations made at last year’s meeting. Christian Hellwig, Franck Portier, and their team managed to increase the number of parallel sessions from 12 to 13, adding 36 presentation slots. We will do this again this year, and we are exploring the possibility of expanding to 14 parallel sessions. Last year, we experimented with having a poster session for PhD students and recent post docs. Given the success of this poster session, we will do this again in Edinburgh. There is yet another margin to add presentation slots that we can push on. Two years ago, for the Warsaw Meeting, the Program Committee had 50 members. We compensated each Program Committee member by letting him or her organize an invited session. This meant that 150 presentation slots went to invited submissions. Last year, Manuel and Pierre-Olivier managed to reduce the size of their committee to 44, which freed up 18, that is, 6 times 3, presentation slots for contributed, as opposed to invited, submissions. This year, Veronica and Kim think that they can get by with an even smaller committee, freeing up still more slots for contributed submissions.
For all those who are going to attend the 2017 ASSA Meetings in Chicago on 6–8 January 2017, the SED is sponsoring two sessions: the first at 2:30–4:30 pm on 6 January on “Housing Market Dynamics” and the second at 8:00–10:00 am on 7 January on “How Safe and Liquid Assets Impact Monetary and Financial Policy.” I served for six years as the Econometric Society representative on what is now called the ASSA Advisory Committee, and I know that there is intense competition for session slots. For a small society like ours, expanding the number of sessions depends on attendance. The SED would like to expand our presence at the ASSA Meetings, and I urge you to try to attend one or both of our sessions in Chicago.
David K. Backus, a distinguished economist and a loyal member of our Society, died on Sunday, 12 June 2016, after a short but intense battle with leukemia. I knew Dave for 41 years. We were classmates and roommates in graduate school at Yale. Dave was a professor at the NYU Stern School of Business for the past 25 years. He made fundamental contributions to international macroeconomics and finance. He was an Associate Editor and Editor of the Review of Economic Dynamics and a frequent participant in our meetings. Dave was instrumental in securing the funding for the 2002 SED Meeting at NYU. I posted the news of Dave’s death on my Facebook page during the night of Monday, 13 June 2016. More than 200 people reacted to this news. The most common comment was that Dave was one of the kindest and most generous people, maybe the nicest person in Economics. At a memorial ceremony for Dave at NYU immediately before last year’s Meeting in Toulouse, I was especially touched by the words of Stan Zin. Stan talked about what a big hole Dave’s death had left in his life, but he concluded that the best we can do to remember Dave is to strive to be like him in his kindness and generosity in helping others — students, colleagues, co-authors — to do the best that they can do. We can be honest and straightforward in our criticism, as Dave was, but we can strive to be kind and generous, as he was.
The Review of Economic Dynamics (RED) is the official journal of the Society for Economic Dynamics. The journal publishes meritorious original contributions to dynamic economics. The scope of the journal is intended to be broad and to reflect the view of the Society for Economic Dynamics that the field of economics is unified by the scientific approach. We publish contributions in any area of economics provided they meet the highest standards of scientific research. In particular, RED publishes original articles on applications of dynamic economic theory to a wide variety of problems in economics. Related measurement and empirical papers are also welcome.
RED strives to deliver fast and efficient turnaround of manuscripts, without compromising the quality of the refereeing process. Excluding desk rejections, virtually all submitted manuscripts receive two referee reports. In 2015, RED received 379 submissions (up from 337 in 2014). The mean processing time from submission to first decision was 57 days, continuing a steady trend towards faster turnaround in recent years (the average was 60 days in 2014 and 64 in 2012).
The table below describes the distribution of first decisions by type: desk reject, reject after review, and revise and resubmit (which includes both minor and major revisions requested).
Distribution of First Decision Times on 2015 Submissions
[Table: number of decisions made within 3 months, in 3 to 4 months, in 4 to 5 months, and in more than 5 months, with the average days since submission; figures not reproduced here]
Note that 83 percent of all submissions were dealt with within 4 months, and only 6 percent of all submissions took longer than 5 months.
Among all regular submissions with a final disposition in 2015, the acceptance rate was 10%. This acceptance rate, comparable to that of other top economics journals, reflects the fact that only submissions of the highest quality are selected for publication in the Review.
Upcoming Special Issues
RED relies predominantly on regular submissions of manuscripts. Throughout our history, we have also published special issues representing the frontier of academic research on topics which are of particular interest to members of the Society. Articles in special issues are usually selected from a broad call for papers, as well as through direct solicitations. They all go through a full refereeing process. Guest Editors Dean Corbae, Mariacristina de Nardi, and Lance Lochner are currently preparing a special issue on Human Capital and Economic Inequality, which is slated to be published in April 2017.