Theory Ahead of Identification, by Marcus Hagedorn and Iourii Manovskii
Marcus Hagedorn is Professor of Economics at the University of Oslo. Iourii Manovskii is Associate Professor of Economics at the University of Pennsylvania. Their research interests are at the intersection of macroeconomics and labor economics. Hagedorn’s RePEc/IDEAS profile and Manovskii’s RePEc/IDEAS profile.
The empirical research methodology in economics is based, with a few exceptions, on one of two approaches. In one, the economic theory underlying the analysis is left unspecified or is used to loosely “inspire” the measurement strategy. In the other, the full model structure is imposed on the data. Our research strategy is based on the view that answers to many substantive questions can be obtained using a methodology that is less extreme. In particular, we use theory to prove identification, i.e., to establish a precise mapping between the available data and the answer to the substantive question of interest. As this mapping is often independent of the formulation of many elements of the model, it can be measured without having to specify and estimate the full model structure, preserving some of the flexibility of the non-structural approaches. Of course, the theoretical challenge is to identify the minimal set of model elements needed to achieve identification, which depends both on the research question and on the nature of the available data. This strategy is less simple than the two extreme ones, as it requires both a theoretical identification proof and a careful data analysis. Yet, the answers it delivers are more general, enabling faster scientific progress.
Below we describe four substantive research agendas that we are currently working on and where we expect substantial scientific advances to be made in the near future.
1. Labor Market Sorting
Suppose one day all workers in the U.S. were randomly reallocated across all existing jobs. Would there be any effect on output? It seems likely that the answer is “yes”. As worker and firm attributes observable in the data account for only some 30% of the variation in wages, the answer will likely remain “yes” even if we condition the experiment on all such characteristics. This is indicative of complementarities between workers and jobs and of sorting of workers across jobs based on unobserved (to the economist) characteristics.
In Hagedorn, Law, and Manovskii (2012) we show that, using only routinely available matched employer-employee data sets, it is possible to fully non-parametrically recover the unobserved worker and firm productivities and the production function, i.e., the consequences for output and productivity of moving any worker to any firm in the economy.
Of course, this can be done only in the context of an explicit sorting theory. A typical starting point for thinking about assignment problems in heterogeneous agent economies is the model of Becker (1973). It was extended by Shimer and Smith (2000) to allow for time consuming search between heterogeneous workers and firms. We further extend the model to allow for search on the job, which is a key feature of the data, and stochastic match quality of individual matches. Importantly, we do not place any assumptions on the production function, except for being smoothly increasing in worker and firm types.
The identification is achieved in three steps. First, we rank workers. We show that workers can be ranked based on their wages within firms (potentially observed with an error). Workers who change firms provide links between the partial rankings inside the firms they work at. This enables us to solve a rank aggregation problem which effectively maximizes the likelihood of the correct global ranking. Second, we rank firms. To do so we show that the value of a vacant job is increasing in firm type. We assume that wage bargaining is such that when the match surplus increases, both parties benefit. This implies that more productive firms deliver more to the workers they hire relative to the reservation wage of those workers. These are statistics based on wages that can be easily computed to obtain the ranking of firms. Third, we recover the production function. The observed wages of a match between a particular worker and firm are a function of the match output and the outside options of the worker and the firm. The outside option of the worker is the value of receiving the measured reservation wage. The outside option of the firm is the value of the vacancy computed using wage data in the previous step. Thus, the wage equation can simply be inverted for output. Finally, although each worker is typically observed working at only a few firms, we estimate his output in other firms by considering how much similarly ranked workers who work at those firms produce.
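The rank-aggregation step can be illustrated with a toy example. The sketch below (with made-up wages; a simple Borda-style net-wins tally, not the authors' likelihood-based estimator) ranks workers from within-firm wage comparisons, with movers linking the partial rankings across firms:

```python
# Toy illustration of the worker-ranking step: within each firm, a higher
# wage suggests a higher rank; movers (here w2, w3, w4) link the partial
# rankings across firms. This is a simple Borda-style tally, NOT the
# authors' estimator. All wages are made up.
from collections import defaultdict

# firm -> list of (worker, average wage observed at that firm)
observations = {
    "firm_A": [("w1", 10.0), ("w2", 12.0), ("w3", 15.0)],
    "firm_B": [("w3", 14.0), ("w4", 16.0)],
    "firm_C": [("w2", 11.0), ("w4", 18.0), ("w5", 20.0)],
}

wins = defaultdict(int)  # net within-firm pairwise "wins" per worker
workers = set()
for obs in observations.values():
    for wi, wage_i in obs:
        workers.add(wi)
        for wj, wage_j in obs:
            if wage_i > wage_j:
                wins[wi] += 1
            elif wage_i < wage_j:
                wins[wi] -= 1

# aggregate into a global ranking (best first; ties broken by name)
ranking = sorted(workers, key=lambda w: (-wins[w], w))
print(ranking)
```

In real data the aggregation is a harder statistical problem (wages are observed with error), but the toy example conveys how local, within-firm comparisons combine into a global ordering.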
Implementing this simple algorithm in the data, we are able to provide coherent answers to classic questions in economics. We list some of the applications that we are working on. It is well known that the college premium increased substantially in the US and many other countries starting in the 1980s. One potential explanation is skill-biased technical change. Alternatively, the sorting of workers into the right jobs might have improved, e.g., due to the spread of new information technologies. Our method allows us to tell these potential explanations apart in that we can explicitly measure the change in technology and in sorting patterns. Similar issues also arise in the trade literature. We can not only disentangle why firms engaged in trade have higher output per hour and pay higher wages (because they have better workers or because they are inherently more productive), but also understand the effects of trade liberalizations on technology adoption and worker reallocation. We can also address the perennial question of whether differences in unobserved worker quality can explain why firms of a particular size, or belonging to different industries, pay persistently different wages. Having identified the production function in the data, we can also readily solve for the optimal (e.g., output-maximizing) assignment of individual workers to individual firms. This not only yields an aggregate estimate of the cost of mismatch, but also allows us to assess the impact of various policies on productivity and matching patterns at a very detailed level. It also allows us to study the dynamics of misallocation at the micro level, which is relevant for the macro literature because misallocation typically reduces total factor productivity, with a potentially important impact on, e.g., income differences across time and across countries.
What sets our method apart is that it does not impose restrictions on the production function, such as the global restrictions on the sign of the cross-partial that are typical in the theoretical literature. Because we do not impose such an assumption, we are able to show that it fails in the data sets we have worked with. Moreover, we show that even the assumption of monotonicity in worker and firm types that we made above can be relaxed. In addition, our method has excellent small-sample properties given the size limitations of the available data sets and is robust to the presence of large measurement error in wages. It is somewhat more computationally intensive than estimating regressions with worker and firm fixed effects, as is typical in the empirical literature. But it is by now well understood that the estimates from such regressions have no economic interpretation in the context of the standard theory of assortative matching.
Thus, we view this research agenda as being extremely promising. It represents a very active area of research with important recent contributions by Pieter Gautier, Philipp Kircher, Rasmus Lentz, Jeremy Lise, Rafael Lopes de Melo, Jean-Marc Robin and Coen Teulings. There are clear limitations to what we can currently do that are mainly due to the underdeveloped theory of assortative matching in more complex but realistic environments. We are hopeful that many such limitations can be overcome, however.
2. Unemployment Insurance and Unemployment
While there is an enormous literature devoted to understanding the elasticity of labor supply, the research on the responsiveness of labor demand to macroeconomic policies is almost non-existent. Yet, in the modern theory of frictional labor markets, it is the labor demand response of firms that drives the response to shocks and policy changes.
In Hagedorn, Karahan, Manovskii, and Mitman (2013) we provide indirect evidence on this elasticity. Specifically, we evaluate the impact of the unprecedented unemployment benefit extensions on unemployment during the Great Recession. To put the labor supply and demand responses into perspective, note that the probability of finding a job for an unemployed worker depends on how hard this individual searches and how many jobs are available:
Chance of Finding Job = Search Effort x Job Availability
Both the search effort of the unemployed and job creation decisions by employers are potentially affected by unemployment benefit extensions.
The recent empirical literature focused exclusively on evaluating the effect of benefit extensions on search effort. The ideal experiment this literature is trying to implement given the available data is as follows. Compare two observationally identical unemployed individuals (same age, gender, occupation, location, etc.) who have different durations of benefits available. Then ask whether the individual with more weeks of benefits remaining is less likely to find a job in a given week. The existing empirical literature finds that the difference is very small. This result suggests that search effort is little affected by benefit duration. On the basis of this finding, the literature concluded that extending benefits has no negative effect on employment and unemployment. This conclusion is unwarranted: suppose both individuals are willing to accept the same jobs, but employers cut job creation in response to benefit extensions. Then both individuals are equally less likely to find a job. The experiment, by its very design, is incapable of capturing the effect of a decrease in job creation.
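The blind spot of this experiment is easy to see with arithmetic, using the decomposition above. The numbers in the sketch below are purely illustrative assumptions, not estimates:

```python
# Illustrative numbers only. The decomposition from the text:
#   chance of finding a job = search effort x job availability
def job_finding_prob(effort, availability):
    return effort * availability

effort = 0.8                 # identical for both workers, by assumption

avail_before = 0.5           # job availability before the benefit extension
avail_after = 0.4            # employers cut job creation in response

p_before = job_finding_prob(effort, avail_before)
p_after = job_finding_prob(effort, avail_after)

# Both workers' job-finding probabilities fall from 0.40 to 0.32, so the
# within-pair difference is zero before and after the extension. The
# comparison of the two workers is blind to the job-creation channel,
# even though the aggregate effect is sizable.
print(p_before, p_after)
```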
We instead measure the effect of benefit extensions on unemployment and find it to be quite large. In fact, our estimates imply that the unprecedented benefit extensions can account for much of the persistently high unemployment following the Great Recession. The modern theory of the labor market, due to Mortensen and Pissarides, provides one possible explanation. Unemployment benefit extensions improve workers’ well-being when unemployed. This puts upward pressure on the wages they demand. If wages go up, holding worker productivity constant, the amount left to cover the cost of job creation by firms declines, leading to a decline in job creation. As a consequence, unemployment rises and employment falls.
Our empirical approach is based on the pioneering work by Holmes (1998) and involves comparing pairs of counties that border each other but belong to different states. As unemployment insurance extensions are implemented at the state level, there is a large amount of variation in benefit durations across counties within each pair. By comparing how unemployment, job vacancies, employment, and wages respond to changes in the differences in benefit durations, we uncover the effects of benefit durations on these economic variables. The key assumption underlying this measurement approach is that while policies are discontinuous at state borders, economic shocks evolve continuously and do not stop when reaching a state border. We explicitly verify the validity of this assumption. Of course, different states have different policies, industrial composition, housing markets, etc., and as a consequence respond differently to the same aggregate shocks. Our measurement approach accounts for this. Finally, the estimator we developed in the paper takes into account that workers and firms are forward looking, so that expectations about the future may affect job creation and search effort decisions today.
Thus, methodologically our paper is entirely structural in the sense that all results are clearly interpreted in the context of standard equilibrium models. Yet, it is as general as a reduced-form paper in that we do not impose any unnecessary structure on the data. We use cutting-edge econometric techniques that are appropriate for our environment. We prove identification of the policy effects and verify the validity of the identifying assumptions. We do this not only in the data, but also by simulating explicit equilibrium models to verify the good properties of our methods.
Our findings surprised many economists, whose views are well summarized by Chetty (2013): “Consider the politically charged question of whether extending unemployment benefits increases unemployment rates by reducing workers’ incentives to return to work. Nearly a dozen economic studies have analyzed this question by comparing unemployment rates in states that have extended unemployment benefits with those in states that do not. These studies approximate medical experiments in which some groups receive a treatment — in this case, extended unemployment benefits — while “control” groups don’t. These studies have uniformly found that a 10-week extension in unemployment benefits raises the average amount of time people spend out of work by at most one week. This simple, unassailable finding implies that policy makers can extend unemployment benefits to provide assistance to those out of work without substantially increasing unemployment rates.” Indeed, classic research based on large benefit extensions during the recessions of the 1980s reached consensus estimates that a one-week increase in benefit duration increases the average duration of unemployment spells by 0.1 to 0.2 weeks (although there are plenty of highly respected studies that find even larger effects). But consider the implications of these estimates. During the Great Recession unemployment benefits were extended by 73 weeks, from 26 to 99 weeks. Thus, these estimates imply an increase in unemployment duration of between 7.3 and 14.6 weeks, i.e., the duration at least doubles. But a doubling of duration implies that the exit rate from unemployment fell by a factor of two. This would then imply roughly a doubling of the unemployment rate, as can be seen from, e.g., the basic steady-state relationship that balances flows in and out of unemployment. This is a substantially larger effect than the one we find.
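The back-of-the-envelope logic of this paragraph can be made explicit with the steady-state flow relation u = s / (s + f), where s is the separation rate and f the job-finding (exit) rate. In the sketch below, the extension length and the 0.1-0.2 consensus estimates come from the text, while s and f are assumed illustrative monthly rates:

```python
# Back-of-the-envelope check of what the consensus estimates imply.
# Numbers for the extension and the 0.1-0.2 range come from the text;
# the separation rate s and exit rate f are assumed illustrative values.
extension = 99 - 26                      # 73 extra weeks of benefits
extra_low = 0.1 * extension              # roughly 7.3 extra weeks of unemployment
extra_high = 0.2 * extension             # roughly 14.6 extra weeks

# If a typical spell lasts on the order of 7-15 weeks, even the low
# estimate implies duration roughly doubles, i.e., the exit rate f halves.
s, f = 0.02, 0.40                        # assumed monthly rates

def steady_state_u(s, f):
    # balance of flows in and out of unemployment: s*(1-u) = f*u
    return s / (s + f)

u_before = steady_state_u(s, f)          # about 4.8 percent
u_after = steady_state_u(s, f / 2)       # about 9.1 percent: roughly a doubling
print(u_before, u_after)
```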
One could worry that these findings were based on the records of UI recipients and that non-recipients might react differently. But this was shown to be not the case by Rothstein (2011). Indeed, his contribution was to show that the job finding rate of ineligible workers responds as much as that of the eligible ones to benefit extensions.
Some may also hypothesize that the recessions of the 1980s were somehow fundamentally different. Consider the experience of New Jersey that awarded 13 extra weeks of benefits to those whose regular 26 weeks of benefits expired between June and November of 2006. Card and Levine (2000) estimate that this temporary unemployment benefit extension led to a 17% decline in the exit rate from unemployment. This suggests a much larger effect on unemployment than what our estimates imply for such a small and transitory extension. Further, we find that the effects of benefit extensions during the Great Recession are similar to the effects of the extensions following the 2001 recession.
Thus, the available evidence suggests a large effect of unemployment benefit extensions on unemployment. How can one reconcile the robust evidence of very large effects of benefit extensions on unemployment with a small or non-existent response of worker search effort? Standard economic theory on which much of macroeconomics is based provides a clear answer: it is driven by the equilibrium response of job creation by firms.
Of course, given the central role the distinction between labor supply and demand effects plays in guiding the design of macro models and in shaping the understanding of business cycle fluctuations and the effects of policies, additional evidence is sorely needed. Twenty years ago Daniel Hamermesh passionately appealed to his colleagues: “One bit of evidence for the neglect of labor demand by mainstream labor economists is a recent monograph on empirical labor economics that is divided into “halves” dealing with supply and demand (Devine and Kiefer, 1991). The second “half” takes up 14 pages of the 300-page book!” The profession needs the answer!
3. Demand Stimulus and Inflation
An important motivation for unemployment benefit extensions in recessions is that they represent a demand stimulus. Indeed, since the beginning of the Great Recession, the Council of Economic Advisors and the Congressional Budget Office have issued a number of statements predicting hundreds of thousands of jobs created if benefits are extended due to this effect.
More generally, the effectiveness of fiscal policy as a macroeconomic stabilization policy depends on the size of the fiscal multiplier. In standard New Keynesian models with sticky prices the multiplier is large whenever nominal interest rates are not responsive to inflation. The model mechanism underlying this result is particularly clear in Farhi and Werning (2013). Government spending increases marginal costs which leads firms to increase prices, resulting in higher inflation. With fixed nominal interest rates — e.g., in a liquidity trap with a nominal interest rate of zero — the increase in inflation translates one-for-one into a reduction in the real interest rate, boosting current private spending. This increase in demand leads to further increases in inflation, and so on, explaining the quantitatively important feedback loop.
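The feedback loop can be illustrated with a deliberately stylized static system. The reduced forms and parameter values below are our illustrative assumptions, not the model in Farhi and Werning (2013):

```python
# Stylized static version of the New Keynesian feedback loop at a
# pegged nominal rate. Assumed reduced forms (illustration only):
#   y  = g - sigma * (i - pi)   demand: a lower real rate raises output
#   pi = kappa * y              pricing: higher output raises inflation
# With i fixed (a liquidity trap), substitution gives
#   y = g / (1 - sigma * kappa), a multiplier above one when sigma*kappa < 1.
sigma, kappa = 1.0, 0.5      # assumed parameters
i = 0.0                      # nominal interest rate pegged at zero
g = 1.0                      # government spending impulse

y = 0.0
for _ in range(200):         # iterate the spending -> inflation -> spending loop
    pi = kappa * y
    y = g - sigma * (i - pi)

closed_form = g / (1.0 - sigma * kappa)
print(y, closed_form)        # the loop converges to the closed form
```

Each pass through the loop is one round of the feedback: higher demand raises inflation, which at a fixed nominal rate lowers the real rate and raises demand again, with geometrically shrinking increments.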
Thus, the link between fiscal stimulus and inflation is at the heart of the theory. Surprisingly, it has not been assessed in the literature. In Hagedorn, Handbury, and Manovskii (2014) we fill this gap by once again exploiting policy discontinuities at state borders. We use New Keynesian theory to derive a relationship between unemployment benefit generosity and inflation. The specification is quite general in that it applies to, e.g., models with time- and state-dependent pricing as well as to models with complete and incomplete markets.
We use the Kilts Nielsen Retail Scanner Data to construct county-level inflation measures and control for demand spillovers using the Consumer Panel data. We find that exogenous increases in unemployment benefit generosity do not have a statistically significant effect on inflation, with the point estimate being negative but very small. In contrast, the effect on price levels is significantly positive, as implied by the theory in which prices are flexible and the observed stickiness is not allocative. The apparent conclusion is that either the stimulative effects of such policies are indeed tiny or they arise due to a different mechanism from the one at the foundation of the standard New Keynesian theory.
We think that additional research using alternative sources of exogenous variation in firms’ marginal costs, presumably in the micro data, would be very helpful for building consensus on the usefulness of this class of models.
4. Identifying Neutral Technology Shocks
The objective of this research is to develop a method to identify neutral labor-augmenting technology shocks in the data. Classic results, starting with Uzawa (1961), establish that these shocks drive long-run economic behavior along the balanced growth path. They are also the key driving force inducing fluctuations in real business cycle (RBC) models pioneered by Kydland and Prescott (1982) and Long and Plosser (1983), and they play a quantitatively important role in New Keynesian models, e.g., Smets and Wouters (2007). Moreover, the relationships between various economic variables and neutral technology shocks identified in the data are routinely used to assess model performance and to distinguish between competing models.
However, the methods used in the literature to identify technology shocks are not designed to measure neutral technology shocks. Clearly, the Solow residual combines neutral and non-neutral technology changes, such as shocks to the relative productivity or substitutability of various inputs. These shocks are often considered important in, e.g., accounting for the evolution of the skill premium over time. These shocks, if persistent, will also affect, e.g., labor productivity (output per hour) in the long run and will thus be captured by structural vector autoregressions identified with long-run restrictions. Yet, while the models typically have clear predictions on the response of variables to neutral shocks, the responses of variables such as hours worked or credit spreads to non-neutral shocks are ambiguous. Thus, while we are sympathetic to the idea of measuring technology shocks in the data using a set of assumptions satisfied by wide classes of models and assessing the models by their ability to match conditional impulse responses of endogenous variables to identified shocks, the shocks identified using the existing methods are of limited use for this purpose.
In Bocola, Hagedorn, and Manovskii (2014) we propose a method for estimating neutral technology shocks. To do so, we assume a constant returns to scale aggregate production function and exploit the rich implications of Uzawa’s characterization of neutral technology on a balanced growth path. We do not assume the economy to be on a balanced growth path but instead use a weak conditional form of this assumption. We only require that the impulse responses to a permanent neutral technology shock have the standard balanced growth properties in the long run. This is sufficient to identify the neutral technology shock because we are able to prove that no other shock (to non-neutral technology, preferences, etc.) satisfies these restrictions.
To implement this identification strategy we use a state-space model collecting various macroeconomic time series and estimate it with filtering/smoothing techniques. Since we do not treat the technology shock as a residual, our method does not require us to specify an explicit function that aggregates heterogeneous labor and capital inputs. Instead, all this information is summarized in the unobserved states, which we identify without the need to specify the structure behind the dynamics of these states. Moreover, our method does not require the parameters of the production function to be invariant over time. The identification is achieved conditional on a testable assumption on a time series process for neutral technology and other unobserved states. This process is only required to provide a good statistical approximation and does not have to be consistent with a structural model, since we do not need to assign a structural interpretation to the other shocks affecting the economy.
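The filtering idea can be illustrated in miniature. The sketch below uses a univariate local-level model, far simpler than the multivariate system in the paper, and assumed noise variances: an unobserved random-walk state is observed with noise, and the Kalman filter recovers it from the data:

```python
# Minimal filtering illustration: a univariate local-level model.
# An unobserved state x_t follows a random walk; we observe y_t = x_t + noise.
# Q and R are assumed variances; this illustrates the filtering idea only,
# not the multivariate state-space system in the paper.
import random
random.seed(0)

Q, R = 0.1, 1.0              # state-innovation and observation noise variances
T = 300

x, states, obs = 0.0, [], []
for _ in range(T):           # simulate the unobserved state and the data
    x += random.gauss(0.0, Q ** 0.5)
    states.append(x)
    obs.append(x + random.gauss(0.0, R ** 0.5))

m, P = 0.0, 1.0              # prior mean and variance of the state
filtered = []
for y in obs:
    P += Q                   # predict: the random walk adds variance Q
    K = P / (P + R)          # Kalman gain
    m += K * (y - m)         # update with the new observation
    P *= 1.0 - K
    filtered.append(m)

# the filtered estimates track the state far better than raw observations
mse_raw = sum((y - s_) ** 2 for y, s_ in zip(obs, states)) / T
mse_filtered = sum((m_ - s_) ** 2 for m_, s_ in zip(filtered, states)) / T
print(mse_raw, mse_filtered)
```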
We assess the small sample properties of the proposed method in a Monte Carlo study using samples drawn from estimated benchmark business cycle models. We consider the RBC and the New Keynesian models with worker heterogeneity. We find that the proposed method is successful in identifying neutral technology shocks in the model-generated data and does not confound neutral technology with other disturbances such as non-neutral technology innovations, preference shifts, wage markup shocks, etc. We are currently working on applying this method to the data in the hope of finally providing conclusive answers to some of the classic questions in macroeconomics.
Dynamic General Equilibrium theory and its application to macroeconomics are part of mainstream research in economics, but they are not easily accessible to laypeople and policymakers. It is a literature that requires tooling up, and a typical paper is difficult to follow for anyone who does not solve for equilibria himself. The few graduate textbooks are very technical. The very few undergraduate textbooks that touch the subject can only cover extremely simplified models and need to cut corners to make the theory somewhat digestible to students.
If standard macroeconomic theory is to make a broader impact on policy and the educated public, it needs a serious translation into plain English. This is the goal of this book. There is not a single equation or figure in it. Indeed, Athreya is very careful, for example, to slowly and clearly build up the definition of an equilibrium as used in macroeconomics. The book goes in great detail through the Arrow-Debreu-McKenzie model, explaining its assumptions and intuition. It expands from this model to get to what modern macroeconomists do in their research.
This approach makes it understandable to the layman what typical macro models assume and what they leave out. Why the obsession with the Walrasian equilibrium? What about institutional arrangements? But the book also covers where the latest literature tries to go further, in particular in light of the economic events of recent years: incomplete markets, search, overlapping generations.
This book should be a great read for advanced undergraduates who have already had exposure to simple dynamic models and want to learn what the “pros” do. Graduate students and faculty, including those not specializing in the field, may also find it interesting for deepening their understanding of modern macro. Policy makers and economic journalists, or anybody else interested in what is going on in macroeconomics who is not tooled up to read the papers, should also benefit greatly from this book.
Big Ideas in Macroeconomics is published by MIT Press.
Our Toronto meeting is the 25th meeting of the Society for Economic Dynamics, a long way since, as Randy Wright likes to say, ‘we lost Control’, an event difficult to place in time, though the consensus among historians puts it at the 1990 Minneapolis meeting, with Thomas Sargent as Program Chair. The vision was that dynamic economics, and dynamic macroeconomics in particular, needed its own society in control of its own journal. It then took some work to get the Review of Economic Dynamics out in 1998, with Thomas Cooley as its first managing editor. A vision shared by a few that has benefited many of us, and more to come. Among these few were, in addition to the two Toms, the other two first presidents of the society: Edward Prescott and Dale Mortensen.
Dale’s passing has been a great loss, not only to his family, friends, colleagues, students and, in general, to the economics profession, but also to our Society for Economic Dynamics. It is for this reason that we wanted to pay tribute to him in the Newsletter, by giving the floor to his coauthor and friend Kenneth Burdett, and by planning some events at the Toronto meeting.
On June 27, during the lunch break, there will be a Panel on SED@25, an occasion to celebrate, remember, and also pay tribute to Dale. Panelists will include other members of the original few, long-time friends and supporters of the SED, as well as a few others whose work was close to Dale’s. But mainly it should be an occasion for all of us to look ahead for the SED, as Dale would have done…
Economic Dynamics is our mandate and we like to discuss the best research being done. This is what our meetings are all about, as is our Review of Economic Dynamics. It is for this reason that in Toronto there will also be a couple of contributed sessions, organized by the program chairs Marina Azzimonti and Veronica Guerrieri, on current research related to Dale’s work. Meanwhile, Matthias Doepke, RED’s co-ordinating editor, and special guest editors Guido Menzio and Randy Wright are working on a ‘symposium’ in the journal on Dale’s contributions.
Society for Economic Dynamics: A Tribute to Dale Mortensen
Kenneth Burdett is the James Joo-Jin Kim Professor of Economics at the University of Pennsylvania. His work concentrates on understanding labor markets within a search equilibrium context. Burdett’s RePEc/IDEAS profile.
Dale T. Mortensen died on January 9, 2014, in Wilmette, Illinois. He was a past president of the Society for Economic Dynamics and one of the founding editors of the Review of Economic Dynamics. On a personal note, we had been friends for forty years. I have wonderful memories of working hard all day with him on economics, returning to his home in the evening to enjoy great food, good wine, and stimulating conversation with Dale, his wife Bev, and perhaps one or more of his three children.
A lot has been written in recent years about Dale’s work. He wrote many influential papers in a variety of areas, but mainly in labor economics, macroeconomics, and growth theory. I will focus here on his influence on labor. This does not imply that his contributions to macroeconomics, or to growth theory via research and development, are any less important. Indeed, they may turn out to be more important. I do think, however, that his contributions to labor economics, and his influence on others, changed the nature of research into the subject. Dale developed a body of work and was the leader of a movement that led to this paradigm shift. To get a feel for his many contributions I suggest Mortensen (1982a, 1982b, 2010), Mortensen and Pissarides (1994, 1999), Burdett and Mortensen (1978, 1998), Lentz and Mortensen (2005, 2008), and his most recent work on non-steady-state dynamics, Coles and Mortensen (2014).
I don’t think Dale was unique in feeling during the late sixties and early seventies that there was a problem with labor economics, in that the standard models could not address the questions many wanted to ask. Such questions as:
(a) Why did it take so long for an unemployed worker to find a job?
(b) Why was unemployment so persistent?
(c) Why did seemingly identical workers receive different wages?
(d) How can unemployment and vacancies coexist?
At the time, the economic analysis of labor markets was typically performed within the framework of a perfectly competitive market, or some simple variant of it. Within this setting the labor services traded were homogeneous, and both workers and firms had complete information. The wage and the exact quality of the labor services being traded were known, as were the locations of all participants. This institutional structure, the preferences of workers, and the costs faced by firms are all that one needs to determine the equilibrium, defined by a wage that equates supply and demand. If unemployment existed, it was blamed on some form of wage stickiness, or on government policy such as a mandated minimum wage. The standard model had very little to say about the questions posed above. A classic example is provided by Lipsey (1960), who struggles mightily to explain the Phillips curve data within the context of a standard labor market model.
If the labor market is not centralized and information is not perfect it follows that the participants must spend both time and resources obtaining information. Workers spend time obtaining information about where firms are located, which firms will hire them, and what wage they offer. Firms spend time and resources looking for potential employees as well as trying to evaluate the quality of workers. No theory was available in which these topics could be addressed.
As I said, Dale was certainly not the only economist who recognized the problem. What separated Dale from most others is that he (with a few co-authors) constructed new, original, and rigorous models of the labor market where the focus was on information accrual through time by the participants. To accomplish such a goal, he based his theories on flows in the labor market and not stocks.
A simple example illustrates how the flow approach changes things. As typically defined, any individual occupies one of three states (employment, unemployment, or nonparticipation) at a particular moment in time. Through time, individuals move between these states, and the numbers of movers are large. For example, firms hire about 5 million workers in a typical month; this only fell to about 4 million at the height of the recent recession. This flow view of the labor market turned out to be most fruitful from a theoretical, empirical, and policy point of view. To illustrate, suppose the flows between states are constant through time and the labor market is in a steady state where the inflow into each state equals the outflow. In such a market, a government that cut the stock of unemployed in half by finding them jobs would have no long-term effect on unemployment if the flows stayed the same. The stock of unemployed would increase through time until it reached the original steady state. The only way to reduce unemployment in the long run is to change the flows between the states. Changing stocks does nothing in the long run.
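The flow logic of this example can be written down directly. The transition rates in the sketch below are illustrative assumptions, not measured values:

```python
# Flow logic of the stock-vs-flow example, with assumed monthly rates:
#   u' = u + s*(1 - u) - f*u
# where s is the inflow (separation) rate and f the outflow (job-finding)
# rate. The steady state balances the flows: u* = s / (s + f).
s, f = 0.02, 0.30            # assumed illustrative transition rates

def step(u):
    return u + s * (1.0 - u) - f * u

u_star = s / (s + f)         # steady-state unemployment rate (6.25% here)

u = u_star / 2               # a one-off policy halves the stock of unemployed...
for _ in range(120):         # ...but with unchanged flows the stock
    u = step(u)              # drifts right back over time

print(u_star, u)             # u has returned to the original steady state
```

Only a permanent change in s or f moves the long-run unemployment rate; shifting the stock alone is undone by the flows.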
Not only did Dale start the search and matching literature, he has for the last thirty years been at the heart of its development, and is universally recognized as its leader. This developing body of work changed dramatically the way economists view the labor market. Few would have the courage today to explain unemployment utilizing supply and demand arguments.
As you can see, I have not focused on the particular contributions Dale made. I have done this because I think that Dale’s legacy will be greater than his many individual contributions. His influence on the work of many economists and the development of the search and matching literature pay tribute to this legacy.
I cannot finish without mentioning one of my favorite papers by Dale, published in the AER in 1982. I think it is wonderful, and it captures all of Dale’s strengths. In this study, he introduces several new possible research topics. He flits between subjects, deriving important results and then moving on to new areas. The development of the search and matching literature over the last 30 years, including the many papers on marriage markets, monetary economics, housing markets, and welfare analysis in matching markets, all flows from this very original study.