Sydney Ludvigson on Empirical Evaluation of Economic Theories of Risk Premia
Sydney Ludvigson is the William R. Berkley Term Associate Professor in the Department of Economics at New York University. She is interested in asset valuation, equity premia, and consumption smoothness. Ludvigson’s RePEc/IDEAS entry.
What explains the behavior of risk premia in stock and bond markets, both over time and cross-sectionally across classes of assets? For academic researchers, the progression of empirical evidence on these questions has presented a continuing challenge to asset pricing theory and an important road map for future inquiry. For investment professionals, finding practical answers to these questions is the fundamental purpose of financial economics, as well as its principal reward. To address these questions, economists need to develop theoretical models of risk. Once such models have been developed, formal statistical analysis is required to assess how well these models fit the data, and to provide independent estimates of the theories’ key parameters. In this essay, I describe work that aims to build our understanding of the ways in which modern-day asset pricing theories are related to asset pricing facts established from historical data, to estimate the models’ key parameters, and to formally evaluate the extent to which leading theories are successful in explaining the facts. The general approach is a multifaceted one that involves both formal econometric estimation and simulation analyses directed at particular questions of interest. The approach is summarized in three articles, numbered below for ease of reference:
 1. “Euler Equation Errors” (with Martin Lettau).
 2. “An Estimation of Economic Models with Recursive Preferences” (with Xiaohong Chen and Jack Favilukis).
 3. “Land of Addicts? An Empirical Investigation of Habit-Based Asset Pricing Models” (with Xiaohong Chen).
Relating Asset Pricing Theories to Asset Pricing Facts
Previous research shows that the standard, representative agent, consumption-based asset pricing theory based on constant relative risk aversion utility fails to explain the average returns of risky assets (see, for example, Hansen and Singleton (1982); Ferson and Constantinides (1991); Hansen and Jagannathan (1991); Cochrane (1996); Kocherlakota (1996)). One aspect of this failure, addressed in [1], is the large unconditional Euler equation errors that the model generates when evaluated on cross-sections of stock returns. Euler equation errors are statistical discrepancies between a theory’s prediction about the dynamic behavior of expected discounted asset returns and that implied by observable data. In [1], we present evidence on the size of these errors and show that they remain economically large even when preference parameters are freely chosen to maximize the model’s chances of fitting the data. Thus, unlike the equity premium puzzle of Mehra and Prescott (1985), the large Euler equation errors cannot be resolved with high values of risk aversion. To explain why the standard model fails, we need to develop alternative models that can rationalize its large Euler equation errors. Yet surprisingly little research has been devoted to assessing the extent to which newer consumption-based asset pricing theories, those specifically developed to address empirical limitations of the standard consumption-based model, can explain its large Euler equation errors. Unconditional Euler equation errors can be interpreted economically as pricing errors; thus we use the terms “Euler equation error” and “pricing error” interchangeably.
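The notion of an unconditional Euler equation error can be made concrete in a few lines of code. The sketch below is illustrative only, using simulated placeholder data rather than the data or estimator of the paper: it computes the pricing errors e_i = E[β(C_{t+1}/C_t)^(−γ)R_i] − 1 of the standard CRRA model and then chooses (β, γ) to minimize their sum of squares, mimicking the exercise of freely fitting the model’s parameters.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical simulated data standing in for quarterly observations:
# gross consumption growth and gross returns on two assets.
T = 200
cons_growth = np.exp(rng.normal(0.005, 0.01, T))       # C_{t+1}/C_t
returns = np.column_stack([
    np.exp(rng.normal(0.02, 0.08, T)),                 # a risky asset
    np.exp(rng.normal(0.003, 0.005, T)),               # a near-riskless asset
])

def euler_errors(params, g, R):
    """Unconditional Euler equation (pricing) errors of the CRRA model:
    e_i = E[beta * g^{-gamma} * R_i] - 1 for each asset i."""
    beta, gamma = params
    sdf = beta * g ** (-gamma)                         # stochastic discount factor
    return (sdf[:, None] * R).mean(axis=0) - 1.0

def objective(params, g, R):
    e = euler_errors(params, g, R)
    return e @ e                                       # sum of squared pricing errors

# Freely fit (beta, gamma) to make the pricing errors as small as possible.
res = minimize(objective, x0=[0.99, 5.0], args=(cons_growth, returns),
               method="Nelder-Mead")
print("fitted (beta, gamma):", res.x)
print("remaining pricing errors:", euler_errors(res.x, cons_growth, returns))
```

The point of [1] is that on historical data these minimized errors remain economically large for the CRRA model, whatever (β, γ) is chosen.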
The research in [1] makes three contributions. First, we show that leading consumption-based asset pricing theories resoundingly fail to explain the mispricing of the standard consumption-based model. Specifically, we investigate four models at the vanguard of consumption-based asset pricing and show that the benchmark specification of each of these theories counterfactually implies that the standard model has negligible Euler equation errors when its parameters are freely chosen to fit the data. This anomaly is striking because early empirical evidence that the standard model’s Euler equations were violated provided much of the original impetus for developing the newer models we investigate here.
Second, we show that the leading asset pricing models we study fail to explain the mispricing of the standard model because they fundamentally mischaracterize the joint behavior of consumption and asset returns in recessions, when aggregate consumption is falling. In the model economies, realized excess returns on risky assets are negative when consumption is falling, whereas in the data they are often positive.
Our third contribution is to suggest one specific direction along which the current models can be improved, based on a time-varying, state-dependent correlation between stockholder and aggregate consumption growth. Specifically, we show that a stylized model in which aggregate consumption growth and stockholder consumption growth are highly correlated most of the time, but have low or negative correlation in recessions, produces violations of the standard model’s Euler equations and departures from joint lognormality of aggregate consumption growth and asset returns that are remarkably similar to those found in the data.
Why should we care about the ability of leading consumption-based asset pricing models to explain the failure of the standard consumption-based model? To motivate the importance of these findings for consumption-based asset pricing theory, it is helpful to consider, by way of analogy, the literature on the value premium puzzle in financial economics. In this literature, the classic Capital Asset Pricing Model (CAPM) resoundingly fails to explain the high average excess returns of value stocks, resulting in a value premium puzzle (Fama and French (1992, 1993)). It is well accepted that a fully successful theoretical resolution to this puzzle must accomplish two things: (i) it must provide an alternative theory to the CAPM that explains the high average returns of value stocks, and (ii) it must explain the failure of the CAPM to rationalize those high returns.
Analogously, the large empirical Euler equation errors of the standard consumption-based model place additional restrictions on new consumption-based models: not only must such models have zero pricing errors when the Euler equation is correctly specified according to the model, they must also produce large pricing errors when the Euler equation is incorrectly specified using power utility and aggregate consumption. To understand why the classic consumption-based model is wrong, alternative theories must generate the same large Euler equation errors that we observe in the data for this model.
Our analysis employs simulated data from several contemporary consumption-based asset pricing theories expressly developed to address empirical limitations of the standard consumption-based model. Clearly, it is not possible to study an exhaustive list of all models that fit this description; thus we limit our analysis to four that both represent a range of approaches to consumption-based asset pricing, and have received significant attention in the literature. These are: the representative agent external habit-persistence paradigms of (i) Campbell and Cochrane (1999) and (ii) Menzly, Santos and Veronesi (2004), (iii) the representative agent long-run risk model based on recursive preferences of Bansal and Yaron (2004), and (iv) the limited participation model of Guvenen (2003). Each is an explicitly parameterized economic model calibrated to accord with the data, and each has proven remarkably successful in explaining a range of asset pricing phenomena that the standard model fails to explain.
We show that some of these models can explain why we obtain implausibly high estimates of risk aversion and the subjective rate of time-preference when freely fitting aggregate data to the Euler equations of the standard consumption-based model. But none can explain the large unconditional Euler equation errors associated with such estimates for plausibly calibrated sets of asset returns. Indeed, the asset pricing models we consider counterfactually imply that parameter values can be found for which the unconditional Euler equations of the standard consumption-based model are exactly satisfied.
The work in [1] diagnoses this result by showing that each of the four models studied satisfies sufficient conditions under which parameter values can always be found such that the Euler equations of the standard model will be exactly satisfied. The economically important condition satisfied by each model is that realized excess returns on risky assets are negative whenever consumption growth is sufficiently negative. We show that this condition is violated in the data.
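The condition is easy to check in any data set. A small Python sketch (with short hypothetical series for illustration, not the paper’s data) counts, among periods of sufficiently negative consumption growth, how often the realized excess return is nonetheless positive:

```python
import numpy as np

def sign_condition_violations(cons_growth, excess_returns, threshold):
    """Among periods in which gross consumption growth falls below
    `threshold`, count how often the realized excess return is positive.
    Under the sufficient condition satisfied by the models, this count
    should be zero; in the data it is not."""
    bad_times = cons_growth < threshold
    if bad_times.sum() == 0:
        return 0, 0
    violations = int((excess_returns[bad_times] > 0).sum())
    return violations, int(bad_times.sum())

# Illustrative (hypothetical) series: two quarters of falling consumption,
# both with positive realized excess returns, as often observed in the data.
g = np.array([1.01, 0.99, 1.02, 0.98, 1.01])
xr = np.array([0.03, 0.02, -0.01, 0.05, 0.01])
print(sign_condition_violations(g, xr, threshold=1.0))  # -> (2, 2)
```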
We close the paper by turning our attention to stylized models with limited stock market participation. When limited participation is combined with a time-varying, state-dependent correlation between stockholder and aggregate consumption, consumption-based asset pricing theories come much closer to rationalizing the large Euler equation errors of the standard paradigm that in large part motivated the search for newer models in the first place.
Econometric Evaluation of Asset Pricing Models
A large and growing body of theoretical work in macroeconomics and finance models the preferences of economic agents using a recursive utility function of the type explored by Epstein and Zin (1989, 1991) and Weil (1989) (see, for example, Campbell (1993, 1996); Tallarini (2000); Campbell and Viceira (2001); Bansal and Yaron (2004); Colacito and Croce (2005); Bansal, Dittmar and Kiku (2007); Campbell and Vuolteenaho (2004); Gomes and Michaelides (2005); Krueger and Kubler (2005); Hansen, Heaton and Li (2005); Kiku (2005); Malloy, Moskowitz and Vissing-Jorgensen (2005); Campanale, Castro and Clementi (2007); Croce (2006); Bansal, Dittmar and Lundblad (2005); Croce, Lettau and Ludvigson (2007); Hansen and Sargent (2006); Piazzesi and Schneider (2006)). One reason for the growing interest in such preferences is that they provide a potentially important generalization of the standard power utility model discussed above, first investigated in classic empirical studies by Hansen and Singleton (1982, 1983). The salient feature of this generalization is a greater degree of flexibility as regards attitudes towards risk and intertemporal substitution. Specifically, under the recursive representation, the coefficient of relative risk aversion need not equal the inverse of the elasticity of intertemporal substitution (EIS), as it must in time-separable expected utility models with constant relative risk aversion. This degree of flexibility is appealing in many applications because it is unclear why an individual’s willingness to substitute consumption across random states of nature should be so tightly linked to her willingness to substitute consumption deterministically over time, as it must in standard models of preferences. Despite the growing interest in recursive utility models, there has been relatively little econometric work aimed at estimating the relevant preference parameters and assessing the model’s fit with the data.
As a consequence, theoretical models are often calibrated with little econometric guidance as to the value of key preference parameters, the extent to which the model explains the data relative to competing specifications, or the implications of the model’s best-fitting specifications for other economic variables of interest, such as the return to the aggregate wealth portfolio or the return to human wealth. The purpose of [2] is to help fill this gap in the literature by undertaking a formal econometric evaluation of the Epstein-Zin-Weil (EZW) recursive utility model.
If recursive preferences are of growing interest, why has there been so little formal econometric work evaluating these models? In its most general form, the EZW model is extremely challenging to evaluate empirically. The EZW recursive utility function is a constant elasticity of substitution (CES) aggregator over current consumption and the expected discounted utility of future consumption. This structure makes estimation of the general model difficult because the intertemporal marginal rate of substitution is a function of the unobservable continuation value of the future consumption plan. The common approach in the literature is to make one of a number of simplifying assumptions that effectively reduce the continuation value function to an observable variable. For example, one approach to this problem, based on the insight of Epstein and Zin (1989), is to exploit the relation between the continuation value and the return on the aggregate wealth portfolio. To the extent that the return on the aggregate wealth portfolio can be measured or proxied, the unobservable continuation value can be substituted out of the marginal rate of substitution and estimation can proceed using only observable variables (e.g., Epstein and Zin (1991)). Unfortunately, the aggregate wealth portfolio represents a claim to future consumption and is itself unobservable. Moreover, given the potential importance of human capital and other nontradable assets in aggregate wealth, its return may not be well proxied by observable asset market returns.
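The CES recursive structure just described can be written, in standard notation (a textbook statement of Epstein-Zin-Weil preferences, not a quotation from the article), with β the subjective time-discount factor, γ the coefficient of relative risk aversion, and ψ the EIS:

```latex
V_t = \Big[(1-\beta)\,C_t^{\,1-1/\psi}
      + \beta\,\mathcal{R}_t\!\left(V_{t+1}\right)^{1-1/\psi}\Big]^{\frac{1}{1-1/\psi}},
\qquad
\mathcal{R}_t\!\left(V_{t+1}\right) \equiv
      \left(E_t\!\left[V_{t+1}^{\,1-\gamma}\right]\right)^{\frac{1}{1-\gamma}},

M_{t+1} = \beta\left(\frac{C_{t+1}}{C_t}\right)^{-1/\psi}
      \left(\frac{V_{t+1}}{\mathcal{R}_t\!\left(V_{t+1}\right)}\right)^{\frac{1}{\psi}-\gamma}.
```

When γ = 1/ψ the second factor in the marginal rate of substitution M collapses to one and the model reduces to standard power utility; otherwise the unobservable continuation value V (or a proxy for the wealth return that substitutes for it) enters the pricing kernel directly, which is precisely the source of the estimation difficulty.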
These difficulties can be overcome in specific cases of the EZW recursive utility model. For example, if the EIS is restricted to unity and consumption follows a loglinear time-series process, the continuation value has an analytical solution and is a function of observable consumption data (e.g., Hansen, Heaton and Li (2005)). Alternatively, if consumption and asset returns are assumed to be jointly lognormally distributed and homoskedastic (or if a second-order linearization is applied to the Euler equation), the risk premium of any asset can be expressed as a function of covariances of the asset’s return with current consumption growth and with news about future consumption growth (e.g., Restoy and Weil (1998), Campbell (2003)). In this case, the model’s cross-sectional asset pricing implications can be evaluated using observable consumption data and a model for expectations of future consumption.
While the study of these specific cases has yielded a number of important insights, there are several reasons why it may be desirable to allow for more general representations of the model, free from tight parametric or distributional assumptions. First, an EIS of unity implies that the consumption-wealth ratio is constant, contradicting statistical evidence that it varies considerably over time. Lettau and Ludvigson (2001a) argue that a cointegrating residual for log consumption, log asset wealth, and log labor income should be correlated with the unobservable log consumption-aggregate wealth ratio, and find evidence that this residual varies considerably over time and forecasts future stock market returns. See also recent evidence on the consumption-wealth ratio in Hansen, Heaton, Roussanov and Lee (2007) and Lustig, Van Nieuwerburgh and Verdelhan (2008). Moreover, even first-order expansions of the EZW model around an EIS of unity may not capture the magnitude of variability of the consumption-wealth ratio (Hansen, Heaton, Roussanov and Lee (2007)). Second, although aggregate consumption growth itself appears to be well described by a lognormal process, empirical evidence suggests that the joint distribution of consumption and asset returns exhibits significant departures from lognormality (Lettau and Ludvigson (2005)). Third, Kocherlakota (1990) points out that joint lognormality is inconsistent with an individual maximizing a utility function that satisfies the recursive representation used by Epstein and Zin (1989, 1991) and Weil (1989).
To overcome these difficulties, in [2] we employ a semiparametric estimation technique that allows us to conduct estimation and testing of the EZW recursive utility model without the need to find a proxy for the unobservable aggregate wealth return, without linearizing the model, and without placing tight parametric restrictions on either the law of motion or joint distribution of consumption and asset returns, or on the value of key preference parameters such as the EIS. We present estimates of all the preference parameters of the EZW model, evaluate the model’s ability to fit asset return data relative to competing asset pricing models, and investigate the implications of such estimates for the unobservable aggregate wealth return and human wealth return.
To avoid having to find a proxy for the return on the aggregate wealth portfolio, we explicitly estimate the unobservable continuation value of the future consumption plan. By assuming that consumption growth falls within a general class of stationary, dynamic models, we may identify the state variables over which the continuation value is defined. However, without placing tight parametric restrictions on the model, the continuation value is still an unknown function of the relevant state variables. Thus the key to our approach is that the unknown continuation value function is estimated nonparametrically, in effect allowing the data to dictate the shape of the function. The resulting empirical specification for investor utility is semiparametric in the sense that it contains both the finite dimensional unknown parameters that are part of the CES utility function (risk aversion, EIS, and subjective time-discount factor), as well as the infinite dimensional unknown continuation value function.
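The sieve idea behind this semiparametric approach can be sketched in a toy example (an illustration of the approximation principle, not the paper’s estimator): the unknown function is replaced by a flexible finite-dimensional expansion in basis functions, whose coefficients are estimated jointly with the structural parameters.

```python
import numpy as np

def sieve_basis(x, degree):
    """Polynomial sieve basis: phi_j(x) = x^j for j = 0..degree."""
    return np.vander(x, degree + 1, increasing=True)

def approx_function(coeffs, x):
    """Finite-dimensional approximation sum_j coeffs[j] * phi_j(x),
    standing in for an unknown function such as the continuation value."""
    return sieve_basis(x, len(coeffs) - 1) @ coeffs

# Toy illustration with hypothetical data: recover a smooth "unknown"
# function from noisy observations by estimating the sieve coefficients
# (here by least squares; the actual SMD estimator instead fits
# conditional moment restrictions).
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)
y = np.exp(0.5 * x) + rng.normal(0.0, 0.01, 500)   # hypothetical target
coeffs, *_ = np.linalg.lstsq(sieve_basis(x, 4), y, rcond=None)
max_err = np.max(np.abs(approx_function(coeffs, x) - np.exp(0.5 * x)))
print("max approximation error:", max_err)
```

In the actual estimation the basis is defined over the state variables that drive consumption growth, and the number of basis terms grows with the sample so that, in effect, the data dictate the shape of the function.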
Using quarterly data on consumption growth, asset returns, and instruments, our empirical results indicate that the estimated relative risk aversion parameter is high, ranging from 17 to 60, with higher values for the representative agent version of the model than for the representative stockholder version. The estimated elasticity of intertemporal substitution is typically above one and differs considerably from the inverse of the coefficient of relative risk aversion. In addition, the estimated aggregate wealth return is found to be weakly correlated with the CRSP value-weighted stock market return and much less volatile, implying that the return to human capital is negatively correlated with the aggregate stock market return. This latter finding is consistent with results in Lustig and Van Nieuwerburgh (2005), discussed further below. In data from 1952 to 2005, we find that a sieve minimum distance (SMD) estimated EZW recursive utility model can explain a cross-section of size and book-to-market sorted portfolio equity returns better than the time-separable, constant relative risk aversion power utility model and better than the Lettau and Ludvigson (2001b) scaled consumption CAPM model, but not as well as purely empirical models based on financial factors, such as the Fama and French (1993) three-factor model. These results are encouraging for the recursive utility framework, because they suggest that the model’s ability to fit the data is in a comparable range with other models that have shown particular success in explaining the cross-section of expected stock returns.
A similar semiparametric approach is taken in [3] to study an entirely different class of asset pricing models, namely those in which investors are presumed to have a consumption “habit.” According to these theories of aggregate stock market behavior, assets are priced as if there were a representative investor whose utility is a power function of the difference between aggregate consumption and a “habit” level, where the habit is some function of lagged and (possibly) contemporaneous consumption. Unfortunately, theory does not provide precise guidelines about the parametric functional relationship between the habit and aggregate consumption. As a consequence, there is substantial divergence across theoretical models in how the habit stock is specified to vary with aggregate consumption. As with the EZW model, the fundamental problem is the unobservability of some function that is crucial to the success of the model, in this case the habit function. [3] both develops and applies the formal econometric techniques required to estimate the habit function and to formally test important aspects of habit-based models. While many theoretical papers have offered calibrated versions of the habit, the econometric estimation and testing of these models that we propose is new. If habit formation is actually present in the manner suggested by these many influential theoretical papers, then estimating it freely should produce a theoretically plausible functional form. [3] studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. Instead of testing a particular model of habit formation, our semiparametric approach allows us to treat the functional form of the habit as unknown, and to estimate it along with the rest of the model’s parameters.
This approach allows us to empirically evaluate a number of interesting hypotheses about the specification of habit-based asset pricing models that have not been previously investigated, and to formally test the framework’s ability to explain stock return data relative to other models that have proven empirically successful.
One hypothesis concerns whether the habit is better described as a linear or a nonlinear function of current and past consumption. We develop a statistical test of the hypothesis of linearity and find that the functional form of the habit is better described as nonlinear rather than linear.
A second hypothesis concerns the distinction between “internal” and “external” habit formation. About half of the theoretical papers cited above investigate models of internal habit formation, in which the habit is a function of the agent’s own past consumption. The rest investigate models of external habit formation, in which the habit depends on the consumption of some exterior reference group, typically per capita aggregate consumption. Abel (1990) calls external habit formation “catching up with the Joneses.” Determining which form of habit formation is more empirically plausible is important because the two specifications can have dramatically different implications for optimal tax policy and welfare analysis (Ljungqvist and Uhlig (2000)), and for whether habit models can explain long-standing asset-allocation puzzles in the international finance literature (Shore and White (2002)). To address this issue, we derive a conditional moment restriction that nests the internal and external nonlinear habit function, under the assumption that both functions are specified over current and lagged consumption with the same finite lag length. Our empirical results indicate that the data are better described by internal habit formation than external habit formation.
The SMD approach also allows us to assess the quantitative importance of the habit in the power utility specification. Our empirical results suggest that the habit is a substantial fraction of current consumption, about 97 percent on average, echoing the specification of Campbell and Cochrane (1999) in which the steady-state habit-consumption ratio exceeds 94 percent. The SMD-estimated habit function is concave and generates a positive intertemporal marginal rate of substitution in consumption. The SMD-estimated subjective time-discount factor is around 0.99, and the estimated power utility curvature parameter is about 0.80 for three different combinations of instruments and asset returns.
Finally, we undertake a statistical model comparison analysis. Because our habit-based asset pricing model makes some parametric assumptions that may not be fully accurate (e.g., it maintains the power utility specification), and because the SMD-estimated nonparametric habit function contains lagged consumption of only finite lag length, the implied stochastic discount factor (SDF) is best viewed as a proxy for the true, unknown SDF. Thus, we evaluate the SMD-estimated habit model and several competing asset pricing models by employing the model comparison distance metrics recommended in Hansen and Jagannathan (1997) (the so-called HJ distance and HJ+ distance), where all the models are treated as SDF proxies for the unknown truth. In particular, the SMD-estimated internal habit model is compared to (i) the SMD-estimated external habit model, (ii) the three-factor asset pricing model of Fama and French (1993), (iii) the “scaled” consumption Capital Asset Pricing Model (CAPM) of Lettau and Ludvigson (2001b), (iv) the classic CAPM of Sharpe (1964) and Lintner (1965), and (v) the classic consumption CAPM of Breeden (1979) and Breeden and Litzenberger (1978). Doing so, we find that an SMD-estimated internal habit model can better explain a cross-section of size and book-to-market sorted equity returns, both economically and in a statistically significant way, than the other five competing models. These results are particularly encouraging for the internal habit specification, since the Fama and French (1993) three-factor model and the Lettau and Ludvigson (2001b) scaled consumption CAPM have previously displayed relative success in explaining the cross-section of stock market portfolio returns.
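The HJ distance itself has a simple closed form. A minimal Python sketch (an illustration of the metric with hypothetical numbers, not the paper’s implementation), for a candidate SDF series m and a matrix of gross returns R:

```python
import numpy as np

def hj_distance(m, R):
    """Hansen-Jagannathan distance between an SDF proxy m (length-T series)
    and the set of admissible SDFs, given a T x N matrix of gross returns R:
    dist = sqrt(g' G^{-1} g), with pricing errors g = E[m R] - 1 and
    second-moment matrix G = E[R R']."""
    T = R.shape[0]
    g = (m[:, None] * R).mean(axis=0) - 1.0   # pricing errors, one per asset
    G = R.T @ R / T                           # second moments of returns
    q = g @ np.linalg.solve(G, g)
    return float(np.sqrt(max(q, 0.0)))        # guard against tiny negative rounding

# Illustrative check with hypothetical numbers: a proxy that prices both
# assets exactly (E[m R_i] = 1 for each i) has (numerically) zero distance.
R = np.array([[2.0, 1.0],
              [0.5, 1.0]])
m_exact = np.array([2.0 / 3.0, 4.0 / 3.0])
print(hj_distance(m_exact, R))        # approximately 0
print(hj_distance(np.ones(2), R))     # positive: pricing errors remain
```

Model comparison then amounts to computing this distance for each candidate SDF proxy and testing whether the differences are statistically significant.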
The Future Research Agenda
I am currently working on several projects that extend the analyses of equity markets described above to study housing markets, risk premia in housing assets, and the relationship of these variables to aggregate consumer spending. One line of work (joint with Christopher Mayer of Columbia University) concerns the role of risk premia in U.S. housing markets. Existing empirical work in the housing literature has assumed that housing risk premia are constant. Yet there are plenty of reasons to question whether this assumption is plausible. Indeed, the unprecedented surge in U.S. house prices that preceded the recent mortgage crisis appears, anecdotally, to have been driven by a decline in market participants’ assessment of the riskiness of these assets. We are currently investigating whether risk premia in the U.S. housing market vary across metropolitan areas and assessing the extent to which particular models of risk can account for that variation. A closely related theoretical project with Stijn Van Nieuwerburgh of the Stern School at NYU and Jack Favilukis of the London School of Economics explores the effects of changing collateral constraints tied to housing assets on aggregate consumer spending. It is often presumed that changes in such collateral constraints will have a large effect on aggregate consumer spending, especially if financial markets are incomplete and agents cannot perfectly insure idiosyncratic shocks to their labor income or housing wealth. In this work we show that, even when markets are incomplete, the theoretical basis for such a large and direct linkage between consumer spending and housing wealth is unclear.
Although fluctuations in housing collateral constraints do affect some households’ ability to borrow and consume, in general equilibrium fluctuations in housing collateral affect households’ ability to share risks with one another, and therefore affect the cross-sectional distribution of consumption, but may have very little effect on the size of the overall consumption pie, that is, on aggregate consumption. We develop and solve a general equilibrium model to measure the theoretical marginal propensity to consume out of housing wealth and to assess the impact of changing collateral constraints on aggregate consumption. One important question we intend to address is the extent to which risk premia adjust rather than quantities (aggregate consumption). We solve for the optimal portfolio decisions of heterogeneous households who face housing collateral constraints, and determine the equilibrium housing returns to which they give rise.
Q&A: James Heckman and Flávio Cunha on Skill Formation and Returns to Schooling
James Heckman is the Henry Schultz Distinguished Service Professor of Economics at The University of Chicago. His recent research deals with such issues as evaluation of social programs, econometric models of discrete choice and longitudinal data, the economics of the labor market, and alternative models of the distribution of income. Flávio Cunha is Assistant Professor of Economics at the University of Pennsylvania. He is interested in inequality and skill formation. Heckman’s RePEc/IDEAS entry, Cunha’s RePEc/IDEAS entry.
EconomicDynamics: In recent work, you highlight that it is essential to understand skill formation in childhood as a multiperiod problem. Why is it so?
James Heckman, Flávio Cunha: There are different reasons. First, different types of skills are produced at different periods of childhood. Economists, social scientists, and policy makers alike tend to focus on cognitive skill. Although it is necessary for success in life, it is not enough for many aspects of performance in social life. Furthermore, the evidence shows that cognitive skills crystallize early in life, so it is difficult to change them at later stages. Until recently, few economists considered noncognitive skills, although Marxist economists considered them very early on. Of course, psychologists have studied them, in addition to some work by sociologists. These skills play an important role in determining many socioeconomic measures, as shown in Heckman, Stixrud and Urzua (2006). They affect crime participation, teenage pregnancy, education, and many other important outcomes. More importantly, there is ample evidence from neuroscience showing that the prefrontal cortex, which is believed to be the neural focal point for noncognitive skills, matures later. This is consistent with the view that the way we motivate ourselves and the way we control our impulses are malleable into later ages. Second, early advantages reinforce each other. Skills that are developed at one stage of life serve as an input to produce skills at other stages of life. For example, if we are given a paragraph that describes an algebra problem, it is necessary to know how to read so we can construct the correct equations from the description in the text. Once we have written down the equations, we can use our algebra skills to solve them and get the correct answer. Children who cannot read will have a hard time developing their algebra skills, because they will fail in the first stage of this task, namely decoding the information from the text into equations. This example is consistent with our work on public job training programs, which attempt to remediate the skills of dropouts.
Many do not know how to read or write well. These programs reflect the current view of public policy that it is possible to make up for 17 years of neglect. Our work in this area has shown that the success rate is really low. Creating the foundation of early skills is important.
Based on this evidence, we developed models of skill formation that allow investments at different stages to complement each other. The multiperiod formulation is a crucial feature of these models because (1) some skills may be formed more easily at certain periods, and (2) some skills produced at earlier stages may serve as an input in the production of later skills.
ED: In related work, you argue that a substantial fraction of ex post returns to schooling are predictable by the agents, but not necessarily by the econometrician. Market structure is important here to identify deep parameters. How is it then possible that you obtain similar results across different market structures?
JH, FC: In Cunha, Heckman, and Navarro (2005) we developed a framework that can be used by economists to separate heterogeneity from uncertainty in life cycle earnings by looking at economic choices made by agents. Given preferences and market structure, we show how one can use an educational choice model to generate restrictions that allow one to recover the distributions of predictable heterogeneity and uncertainty separately. In that paper, we specified a complete-markets environment and found that almost 50% of the variance of unobservable components in returns to schooling is known and acted on by individuals when making schooling choices. This framework was extended in Cunha and Heckman (2007), where we showed that a large fraction of the increase in inequality in recent years is due to the increase in the variance of unforecastable components. Cunha, Heckman, and Navarro (2004) and Navarro (2005) extended the model to an incomplete-markets environment. These papers also consider asset accumulation data, which, given a market structure, imposes identifying assumptions on preferences. There is an open question: if we have data on choices (education decisions, consumption, etc.), outcomes of choices (earnings in a given education group), and asset accumulation, how far can we go in nonparametrically identifying information sets, preferences, and market structures? This is a question we first stated in the original Cunha, Heckman, and Navarro (2005) paper, and the literature has not yet settled on a definite answer.
The question you raise is a good one. But remember: only one market structure generates the data. If, for example, a full-insurance model characterized the data, an econometric model that allowed for constraints on transfers across states would show that the constraints are not binding, and would reproduce the complete-markets model. Some version of this story seems to be at work in our estimates.
ED: Given your results, where should the policy focus be? In particular, how does your work distinguish itself in this regard from other work in macroeconomics?
JH, FC: Current work in economics emphasizes the importance of market incompleteness with respect to shocks agents experience once they are already adults: for example, there are many theoretical and applied papers discussing the allocations and welfare losses associated with uninsurable income shocks faced during adulthood. In Bewley-type economies, Huggett (1993) and Aiyagari (1994) show that the inability of agents to transfer resources across states of nature and over time distorts allocations and generates welfare losses, which tend to be larger the more persistent the shocks. In our work (see Cunha and Heckman, 2007), we have tried to point out to the profession the lifetime importance of the shock represented by the accident of birth. It is clearly a very persistent shock, since children born into a disadvantaged family will spend many years in a disadvantaged environment, with terrible consequences for their skill acquisition and opportunities in life, as we show in Cunha, Heckman, and Schennach (2008). If markets were complete, children would be able to buy insurance against these shocks and would use the resources to improve the environment of the family in which they grow up. Clearly, this is not feasible: at the very least, such markets would require children to start making allocation decisions as soon as they are born, something they clearly are not ready to do. It therefore becomes imperative for our society to devise mechanisms or policies that implement an allocation as close as possible to the first best even if markets are incomplete. Such intervention can be justified on efficiency or fairness grounds, as laid out in Heckman and Masterov (2007). Our work shows that a possible answer to the policy question lies in early childhood intervention programs.
Recent work by Heckman, Moon, Pinto, and Yavitz (2008) shows that these programs are successful along a number of dimensions: they promote education, they reduce participation in crime for boys, and they reduce teenage pregnancy for girls, all of which represent large costs to society. The basic idea underlying these programs is to provide children with an environment resembling the one they would have had if they had not been born into disadvantaged families.
It is important to emphasize that our work does not imply that early intervention programs are sufficient. The intertemporal complementarity of investments that we estimate in our joint work with Susanne Schennach implies that early investments must be followed up with late investments: high-quality early programs, such as the Perry preschool, do not substitute for good schools and prepared teachers. At the same time, the complementarity indicates that children from very disadvantaged households will not be able to extract the full benefits from school unless they receive early investments that make them school-ready.
S. Rao Aiyagari, 1994. “Uninsured Idiosyncratic Risk and Aggregate Saving,” The Quarterly Journal of Economics, MIT Press, vol. 109(3), pages 659-684, August.
Flavio Cunha & James Heckman, 2007. “The Technology of Skill Formation,” American Economic Review, American Economic Association, vol. 97(2), pages 31-47, May.
Flavio Cunha & James Heckman, 2008. “Formulating, Identifying and Estimating the Technology of Cognitive and Noncognitive Skill Formation,” Journal of Human Resources, forthcoming.
Flavio Cunha, James Heckman & Salvador Navarro, 2005. “Separating uncertainty from heterogeneity in life cycle earnings,” Oxford Economic Papers, Oxford University Press, vol. 57(2), pages 191-261, April.
Flavio Cunha, James Heckman & Susanne Schennach, 2008. “Estimating the Elasticity of Intertemporal Substitution in the Formation of Cognitive and Non-Cognitive Skills,” unpublished mimeo, University of Chicago.
James Heckman & Dimitriy Masterov, 2007. “The Productivity Argument for Investing in Young Children,” Review of Agricultural Economics, vol. 29(3), pages 446-493.
James Heckman, Jora Stixrud & Sergio Urzua, 2006. “The Effects of Cognitive and Noncognitive Abilities on Labor Market Outcomes and Social Behavior,” Journal of Labor Economics, University of Chicago Press, vol. 24(3), pages 411-482, July.
James Heckman, Seong Hyeok Moon, Rodrigo Pinto & Adam Yavitz, 2008. “A Reanalysis of the Perry Preschool Program,” unpublished mimeo, University of Chicago, first draft 2006, revised 2008.
Mark Huggett, 1993. “The risk-free rate in heterogeneous-agent incomplete-insurance economies,” Journal of Economic Dynamics and Control, Elsevier, vol. 17(5-6), pages 953-969.
Salvador Navarro, 2005. “Understanding Schooling: Using Observed Choices to Infer Agent’s Information in a Dynamic Model of Schooling Choice when Consumption Allocation is Subject to Borrowing Constraints,” PhD dissertation, University of Chicago.
We continue the process of streamlining the organization. We have a new server for the website and are systematizing the software used for registration and so forth. Many thanks are due to our Secretary, Christian, who has put in a great deal of hard work to make this all happen. Hopefully this means that we will continue to have a smooth process with minimal fuss.
The big news, of course, is the upcoming SED conference in Cambridge, July 10-12. Our organizers, Marios Angeletos, Ariel Burstein, Mike Golosov, and Christian Hellwig, have done a fabulous job of pulling things together. We had an exceptionally strong set of submissions, 1020 in all, a new record. The quality of the papers was enormously high, so good work everyone, and also an apology to those who made strong submissions that did not make the program.
If you are coming to the meetings, we have great plenary talks lined up: James Poterba, José Scheinkman, and Per Krusell will be our speakers. The Federal Reserve Bank of Boston has kindly agreed to sponsor our reception.
Next year we are going to Istanbul – dates to be announced soon.
Modelling, especially under rational expectations, assumes that agents know the true underlying structure of the economy, including the parameter values; so does the researcher. What if the researcher is wrong, and knows that he could be wrong? Model misspecification is a recognized but often neglected problem in econometrics, and it is mostly ignored in theoretical work. It becomes particularly important when policy prescriptions differ significantly across model specifications.
Hansen and Sargent tackle this “fear of model misspecification” using recent advances in control theory, namely robust control. The book is not only an introduction to robust control for economists; it also extends robust control to areas particularly relevant to economic theory, for instance discounting, multiple agents, and the calibration of the fear of misspecification.
The basic tool is relative entropy, a measure of the distance between the approximating model and an alternative one. Misspecification is represented as a set of perturbations to the approximating model. Hansen and Sargent apply this to a standard linear-quadratic dynamic programming problem with a max-min objective: the decision maker maximizes over strategies while a fictitious adversary chooses the worst-case perturbation, subject to an entropy penalty.
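In the recursive “multiplier” formulation (sketched here in stylized notation, not exactly as the book writes it), the perturbation is a likelihood-ratio distortion \(m\) of the approximating model, and its relative entropy is penalized at rate \(\theta\):

```latex
% Stylized multiplier formulation of the robust control problem.
% m_{t+1} \ge 0 with E_t[m_{t+1}] = 1 distorts the approximating model;
% \theta > 0 penalizes its relative entropy E_t[m_{t+1} \log m_{t+1}].
V(x_t) = \max_{u_t}\ \min_{\substack{m_{t+1}\ge 0 \\ \mathbb{E}_t[m_{t+1}]=1}}
\left\{ r(x_t,u_t) + \beta\,\mathbb{E}_t\!\left[ m_{t+1}\,V(x_{t+1}) \right]
      + \beta\,\theta\,\mathbb{E}_t\!\left[ m_{t+1}\log m_{t+1} \right] \right\}.
```

As \(\theta \to \infty\), distortions become infinitely costly and the standard expected-utility problem is recovered; a smaller \(\theta\) expresses a greater fear of misspecification.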
This is an incredibly rich book. It covers a lot of material with plenty of examples. Obviously, as with any pioneering work, digesting it is somewhat challenging, but the returns are high.
“Robustness” is published by Princeton University Press.