Economic Dynamics Newsletter

Volume 3, Issue 2 (April 2002)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

Peter Howitt on Schumpeterian Growth Theory
Q&A: Urban Jermann on Asset Pricing
Society for Economic Dynamics: 2002 Meetings
Bossaerts' The Paradox of Asset Pricing

Peter Howitt is the Charles Pitts Robinson and John Palmer Barstow Professor of Economics at Brown University. He has published extensively on growth theory and monetary theory. Here he reports on his latest research on growth theory. Peter Howitt’s RePEc/IDEAS entry.

Over the past 15 years, much of my time has been spent developing a new generation of endogenous growth theory, together with Philippe Aghion. Our original contribution was Aghion and Howitt (1992). We have since generalized the simple model of that paper considerably and applied it to a variety of different questions. Most of what we have done is contained in our recent book (Aghion and Howitt, 1998a). Our theory is based on Schumpeter’s concept of “creative destruction.” It portrays a free enterprise economy that is constantly being disturbed by technological innovations from which some people gain and others lose, an economy in which competition is a Darwinian struggle whose survivors are those that succeed in creating, adopting and improving new technologies.

Schumpeterian theory differs fundamentally from the earlier AK versions of endogenous growth theory, in which technological progress was portrayed as just another form of capital accumulation. In AK theory, the mainspring of growth was thrift, an essentially private process involving no interpersonal conflicts. Schumpeterian theory recognizes that, on the contrary, technological change is a social process, and that ever since the start of the Industrial Revolution, people’s skills, capital equipment and technological knowledge have been rendered obsolete and destroyed by the same inventions that have created fortunes for others. Our new theory treats innovation as a separate activity from saving, and it is explicit about who gains from it, who loses, how the gains and losses depend on social arrangements, and how such arrangements affect society’s willingness and ability to progress. The rest of this essay discusses some of the insights that the theory provides into four different issues: competition, patent policy, cross-country income differences and technological revolutions.
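In its simplest form, the 1992 model reduces to one equation. The following sketch uses standard textbook notation, which is our addition rather than anything in this newsletter:

```latex
% Minimal sketch of the growth equation in Aghion and Howitt (1992),
% in standard textbook notation (our addition). Innovations arrive at
% the Poisson rate \lambda n, where n is labor devoted to research, and
% each innovation multiplies productivity by a factor \gamma > 1. The
% average long-run growth rate is then
\[
  g \;=\; \lambda\, n \,\ln \gamma .
\]
% Each innovation also destroys the monopoly rents of the previous
% innovator -- the "creative destruction" referred to in the text.
```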

Competition and Economic Growth

The earliest Schumpeterian growth models predicted that competition should reduce growth through a well-known appropriability effect; that is, by reducing the prospective monopoly rents that spur innovation. The available evidence, however, seems to contradict this prediction. This evidence has sent us back to the drawing board, and the result of this rethinking has been a more sophisticated version of Schumpeterian theory containing a variety of channels through which competition might in fact spur economic growth. The simplest of these involves barriers to entry. To the extent that these barriers raise the cost to outside firms of introducing new technologies, they reduce the incentive to perform R&D, thus reducing the long-run growth rate.

Consider next the role of agency costs that allow managers to operate businesses in their own interests rather than maximizing the owners’ profits. Aghion et al (1999) have shown that when these costs are severe, competition can act as a stimulus to growth. In their model, each firm is controlled by a manager who is interested primarily in minimizing effort, but who wants the firm to remain solvent in order to continue enjoying the non-monetary benefits of control. Since innovation takes effort, the manager will innovate only as often as needed to remain solvent. To the extent that an increase in competition reduces the firm’s flow of profits, it reduces the scope for managerial slack and forces managers to innovate more often.

We explore another channel in Aghion et al (2001), which takes into account not just the absolute level of profits obtained by a successful innovator but the incremental profits; that is, the difference between the profits of a firm that innovates and one that does not. In the basic first-generation Schumpeterian model such a distinction did not arise because in equilibrium all important innovations were made by outside firms, owing to the replacement effect first analyzed by Arrow. In this paper we assume there are decreasing returns to R&D at the firm level, as the evidence suggests there are; this means that incumbent firms will engage in at least some R&D despite the Arrow effect. We show that although an increase in the intensity of competition will tend to reduce the absolute level of profits realized by a successful innovator, it will tend to reduce the profits of an unsuccessful innovator by even more. Therefore competition can have a positive overall effect on the rate of innovation because firms will try to innovate in order to escape competition. Thus we have a variety of theoretical reasons for doubting that the commonly accepted tradeoff between static efficiency and growth exists. Ongoing econometric investigations that we are undertaking with Bloom, Blundell and Griffith (Aghion et al., 2002) provide strong support for a non-linear relationship in which competition has a positive effect up to a certain point, beyond which it retards growth, as in the framework of Aghion et al (2001).
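A toy numerical example may help make the incremental-profit logic concrete. The profit functions below are illustrative assumptions of ours, not those of Aghion et al (2001):

```python
# Toy illustration of the escape-competition effect. The linear profit
# functions are illustrative assumptions, not from Aghion et al (2001).
# Let eps in [0, 1] index the intensity of product market competition.
# Competition is assumed to erode neck-and-neck profits faster than the
# profits of a firm that has innovated ahead of its rival.
for eps in (0.2, 0.5, 0.8):
    pi_lead = 1 - eps / 2    # profit after a successful innovation (assumed)
    pi_neck = 1 - eps        # profit of a neck-and-neck firm (assumed)
    incremental = pi_lead - pi_neck   # what innovation actually buys the firm
    print(f"eps={eps:.1f}  pi_lead={pi_lead:.2f}  "
          f"pi_neck={pi_neck:.2f}  incremental={incremental:.2f}")
# Both profit levels fall as eps rises, but the incremental profit rises:
# more competition can strengthen the incentive to innovate.
```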

Patent Policy

Schumpeterian growth theory has shown that the case for stronger patent protection is not as clear cut as it might seem. For example, the above-mentioned analysis of Aghion et al (2001) shows that stronger patent protection can in some cases reduce the overall pace of technological change, through a “composition effect.” We consider a world with a large number of industries, each of which has two incumbent firms, each with its own technology that is improved from time to time by random innovations. Innovation takes place at the greatest rate in those industries where the two firms are technologically neck-and-neck, because this is where the incentive to escape competition is the greatest. If patent laws were weakened, the incentive to innovate of a firm with any given lead would indeed be blunted, but the steady-state distribution of lead sizes would also change; specifically, more firms would be forced into neck-and-neck competition by a rival’s successful imitation. As a result, a little bit of imitation almost always has the overall effect of raising the economy’s long-run rate of technological progress and therefore of raising the long-run growth rate.

Grossman and Helpman (1991) used Schumpeterian growth theory to show that strengthening international patent protection in the South can even weaken the incentive to perform R&D in the North. This happens through a rise in Northern wages: as fewer products get imitated, more of them remain in production in the North, which raises the demand for Northern labor, leading to an increase in wages and hence drawing labor out of R&D and into manufacturing. The overall result is thus a decrease in the rate of growth not just in the South but also in the North.

Cross-Country Income Differences

Cross-country comparisons of per-capita GDP have constituted the testing ground of growth theories in recent years. No theories have fared well in these tests. The AK model implies that differences in per-capita GDP among countries should be widening over time. But Evans (1996) has shown that this prediction is clearly refuted by postwar OECD data. Similar refutations of early endogenous growth theories have come from the growth regressions showing conditional beta-convergence.

Some have argued that these results support neoclassical theory; that what accounts for differences in income between rich and poor nations is differences in capital accumulation, not differences in technological progress. However, the neoclassical model founders on the fact that convergence appears limited to a select group of rich countries. That is, the data tend to support a theory of “club convergence.”

One model that fits all this evidence is the multi-country Schumpeterian model of Howitt (2000). In this model, each time a firm in one sector of one country innovates by inventing a new intermediate product, the productivity of that intermediate product is determined by a world-wide technology frontier that grows as a result of innovations throughout the world. As long as a country maintains enough incentives that some domestic innovation takes place, it will join the convergence club, and its growth rate will ultimately converge to that of all the other members.

The mechanism through which convergence occurs in this model is technology transfer. The growth rate of productivity equals the product of the frequency and the size of innovations. A country that spends little on R&D may temporarily grow more slowly than the rest of the convergence club, but in the long run the technology in use in almost all its industries will lie very far behind the world frontier. Each innovation, when it occurs, will therefore represent a relatively large improvement over the technology already in place in that industry. In other words, a low frequency of innovations will ultimately generate such a large size of innovations that the product of frequency and size converges to the common world growth rate. In this same model, countries in which conditions are so unfavorable to R&D as to shut down domestic innovation entirely will not grow at all, because R&D is a necessary channel of technology transfer. These countries will stagnate, falling further and further behind the others. Thus the world distribution of per-capita GDP will show the emerging “twin peaks” that Quah claims to have found in the data.
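To see how a low frequency of innovations is offset by a large size, consider the following back-of-the-envelope simulation. The parameter values and the jump-to-frontier rule are our simplifications for illustration, not Howitt's (2000) specification:

```python
import numpy as np

# Sketch of the convergence mechanism in Howitt (2000). Parameters and
# the jump-to-frontier rule are illustrative assumptions of ours.
rng = np.random.default_rng(0)
g = 0.02                          # growth rate of the world frontier
T = 2000                          # periods simulated
frontier = np.exp(g * np.arange(T))

def simulate(mu):
    """Productivity path of a country innovating with frequency mu.

    Each innovation jumps productivity to the current world frontier,
    so a country that innovates rarely takes larger steps when it does."""
    A = np.empty(T)
    A[0] = 1.0
    for t in range(1, T):
        A[t] = frontier[t] if rng.random() < mu else A[t - 1]
    return A

for mu in (0.5, 0.1, 0.02):
    A = simulate(mu)
    half = T // 2                 # measure growth after the transition
    growth = np.log(A[-1] / A[half]) / (T - 1 - half)
    print(f"mu={mu:.2f}: long-run growth ~ {growth:.4f} (frontier g = {g})")
# Any mu > 0 yields growth converging to g (frequency x size = g), while
# mu = 0 shuts down technology transfer and long-run growth is zero.
```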

Whether this multi-country Schumpeterian theory bears up under further empirical investigation remains to be seen. Some initial empirical support for the theory is provided by the results of Coe and Helpman (1995) and Coe, Helpman and Hoffmaister (1997), who show that the international R&D spillovers on which the theory is based are indeed substantial. Also, Feyrer (2001) has shown that the emergence of twin peaks in the world income distribution is largely accounted for by emerging twin peaks in productivity, as would be the case in this model.

General Purpose Technology

The destructive side of creative destruction is not just a microeconomic phenomenon. Indeed, the whole economy can suffer, at least during a transitional period, as a result of widespread technological change. This is especially true when that technological change involves the introduction of a new “General Purpose Technology” (GPT); that is, a new technology that is used throughout the economy, has a profound effect on the way economic life is organized, and gives rise to a wave of complementary innovations associated with its increasing use. In the long run our standard of living has been greatly enhanced by the succession of GPTs that have been introduced since the first Industrial Revolution. However, the period during which a new GPT is being introduced can be a period of wrenching adjustment, not just at the level of the individual firm but for the economy as a whole.

There are many aspects to this adjustment cost. Helpman and Trajtenberg (1998) emphasize the lost output that occurs because the GPT does not arrive ready to use but requires the invention of a set of complementary components. During the period when the components are being developed, the new GPT will not yet be in use. Meanwhile, the labor that is drawn into developing new components will be drawn out of producing final output. The result will be a fall in the overall level of output.

Others have pointed out a variety of additional channels through which the cost of adjusting to a new GPT can show up at the macroeconomic level. Greenwood and Yorukoglu (1997) argue that real resources are used up in learning to use the new GPT. Aghion and Howitt (1998b) point out that the process of reallocating labor from sectors using older technologies to those using the new GPT may involve a rise in unemployment, for the same reason that any large reallocation of labor often entails unemployment in a less than frictionless economic system. Howitt (1998) calibrates to U.S. data a Schumpeterian model with capital-embodied technological change, and shows numerically that the introduction of a new GPT that raises the productivity of R&D by 50% until overall productivity has doubled will reduce the level of per-capita GDP below the path it would otherwise have followed, for a period of about two decades, through induced obsolescence of human and physical capital. Thus it seems that Schumpeterian growth theory may have something to say about the productivity slowdown that occurred between the mid-1970s and the mid-1990s.

The results of Howitt (1998) exemplify an important general aspect of the dynamics of Schumpeterian growth models. In the short run, as in the neoclassical model of Solow and Swan, the growth rate of output per person can be decomposed into two components, one depending on the rate of capital deepening (the increase in capital per efficiency unit of labor) and the other depending on the rate of technological progress. Technological progress is the only component that matters in the long run, because the amount of capital per efficiency unit of labor will stop growing as it approaches its long-run equilibrium value. But capital deepening is quantitatively the component that dominates the economy’s transitional dynamics, often for long periods of time, and it very often goes in the opposite direction to technological progress. The presence of such long lags makes the theory difficult to estimate and test using time-series data, but Zachariadis (2001) has shown how to overcome these difficulties using cross-sectional evidence.
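For concreteness, here is that decomposition in symbols. This is standard neoclassical bookkeeping, written in our own notation rather than Howitt's (1998):

```latex
% Standard growth decomposition (our notation). With Cobb-Douglas
% technology and A_t the index of technological progress,
\[
  Y_t = K_t^{\alpha}(A_t L_t)^{1-\alpha}, \qquad
  \tilde{k}_t \equiv \frac{K_t}{A_t L_t}, \qquad
  y_t \equiv \frac{Y_t}{L_t} = A_t\,\tilde{k}_t^{\alpha},
\]
% growth in output per person splits into technological progress plus
% capital deepening:
\[
  \frac{\dot{y}_t}{y_t} \;=\; \frac{\dot{A}_t}{A_t}
  \;+\; \alpha\,\frac{\dot{\tilde{k}}_t}{\tilde{k}_t}.
\]
% In the long run \tilde{k}_t settles at its steady state and the second
% term vanishes; when a new GPT accelerates \dot{A}/A, \tilde{k}_t falls
% for a time, so the two terms move in opposite directions in transition.
```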

References

Aghion, Philippe, Nicholas Bloom, Richard Blundell, Rachel Griffith and Peter Howitt 2002. “Competition and Innovation: An Inverted U Relationship,” unpublished.
Aghion, Philippe, Mathias Dewatripont, and Patrick Rey 1999. “Competition, Financial Discipline and Growth.” Review of Economic Studies. Vol. 66, pages 825-52.
Aghion, Philippe, Christopher Harris, Peter Howitt, and John Vickers 2001. “Competition, Imitation and Growth with Step-by-Step Innovation.” Review of Economic Studies. Vol. 68, pages 467-92.
Aghion, Philippe, and Peter Howitt 1992. “A Model of Growth through Creative Destruction.” Econometrica. Vol. 60, pages 323-51.
Aghion, Philippe, and Peter Howitt 1998a. Endogenous Growth Theory. Cambridge, MA: MIT Press.
Aghion, Philippe, and Peter Howitt 1998b. “On the Macroeconomic Effects of Major Technological Change.” In General Purpose Technologies and Economic Growth, edited by Elhanan Helpman, 121-44. Cambridge, MA: MIT Press.
Coe, David T., and Elhanan Helpman 1995. “International R&D Spillovers.” European Economic Review. Vol. 39, pages 859-87.
Coe, David T., Elhanan Helpman, and Alexander W. Hoffmaister 1997. “North-South R&D Spillovers.” Economic Journal. Vol. 107, pages 134-49.
Evans, Paul 1996. “Using Cross-Country Variances to Evaluate Growth Theories.” Journal of Economic Dynamics and Control. Vol. 20, pages 1027-49.
Feyrer, James 2001. “Convergence by Parts.” Unpublished, Brown University.
Greenwood, Jeremy, and Mehmet Yorukoglu 1997. “1974.” Carnegie-Rochester Conference Series on Public Policy. Vol. 46, pages 49-95.
Grossman, Gene M., and Elhanan Helpman 1991. “Quality Ladders and Product Cycles.” Quarterly Journal of Economics. Vol. 106, pages 557-86.
Helpman, Elhanan, and Manuel Trajtenberg 1998. “A Time to Sow and a Time to Reap: Growth Based on General Purpose Technologies.” In General Purpose Technologies and Economic Growth, edited by Elhanan Helpman. Cambridge, MA: MIT Press.
Howitt, Peter 1998. “Measurement, Obsolescence, and General Purpose Technologies.” In General Purpose Technologies and Economic Growth, edited by Elhanan Helpman, 219-51. Cambridge, MA: MIT Press.
Howitt, Peter 2000. “Endogenous Growth and Cross-Country Income Differences.” American Economic Review. Vol. 90, pages 829-46.
Zachariadis, Marios 2001. “R&D, Innovation and Technological Progress: A Test of the Schumpeterian Framework without Scale Effects.” Unpublished, Louisiana State University.

Q&A: Urban Jermann on Asset Pricing

Urban Jermann is Associate Professor of Finance at the Wharton School, University of Pennsylvania. His general fields of interest are international macroeconomics and asset pricing. In this interview, he talks about various aspects of his recent research. Urban Jermann’s RePEc/IDEAS entry.
EconomicDynamics: In recent work with Fernando Alvarez, you find that the permanent component of the pricing kernel has to be very large to be consistent with the low returns on long-term bonds relative to equity. You also find the permanent component of consumption to be lower than that of the pricing kernel. Do you see a parallel with the quest to find an endogenous propagation mechanism in business cycle models?
Urban Jermann: I am reasonably confident about our estimate of the size of the permanent component of the marginal utility of wealth. However, our estimate of the size of the permanent component of consumption, based on standard statistical techniques, shows large standard errors. Thus any interpretation of the comparison of these two findings is somewhat tentative.

To answer your question: yes, I see a parallel to the quest of finding endogenous propagation mechanisms in business cycle models. Our result suggests the need to depart from the standard time-separable utility specification, because such a specification would not imply any difference in the size of the permanent components of the marginal utility of wealth and of consumption. Using non-time-separable preference specifications clearly has the potential to propagate shocks through time. For instance, some recent studies by Fuhrer or McCallum and Nelson have shown that habit-formation utility leads to improved dynamic behavior of their macroeconomic models.

So far, we haven’t explored systematically the quantitative implications of specific utility functions for the size of the permanent components of the implied marginal utility compared to consumption. We have, however, a general result in our paper for utility functions of the type proposed by Epstein, Zin and Weil that allow for nonseparability across time and states of nature. Specifically, we show that even if consumption has no permanent component, the marginal utility of wealth always has a permanent component.
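For reference, here is a sketch of the decomposition underlying this line of work, as we understand it from Alvarez and Jermann (2002); the notation and the discounting constant δ are our own:

```latex
% Sketch of the pricing-kernel decomposition in Alvarez and Jermann (2002);
% notation, and the constant \delta chosen so the limit exists, are ours.
\[
  M_t = M_t^{P}\, M_t^{T}, \qquad
  M_t^{P} \;\equiv\; \lim_{k \to \infty}
      \frac{E_t\!\left[M_{t+k}\right]}{\delta^{\,t+k}}, \qquad
  M_t^{T} \;\equiv\; \frac{M_t}{M_t^{P}} .
\]
% By the law of iterated expectations M^P is a martingale: it is the
% permanent component. Long-term bond prices pin down the transitory
% component, which is why low long-term bond returns relative to equity
% imply that the permanent component must be large.
```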

ED: In other work with Fernando Alvarez, you use asset prices to determine the cost of business cycles. As in other studies using aggregate data or representative-agent constructs, the cost is small. Yet there is substantial evidence that the cost is distributed very unevenly across households. How, then, does your work represent a step forward in the determination of the cost of business cycles?
UJ: The “cost of business cycles” is an answer to the question of what is an upper bound on the welfare gains associated with macroeconomic stabilization policies such as monetary and fiscal policies. If these gains are small, then it seems hard to justify incurring significant costs to achieve them. If the costs of economic fluctuations are unevenly spread, addressing them would require efforts along other dimensions; for instance, institutions such as bankruptcy laws that directly affect the scope for cross-sectional risk sharing.

My reading of the literature on the cost of business cycles is that earlier studies, by using various utility functions, have reported a wide variety of different estimates. Many came up with small numbers, but some, in particular those that required their utility function to be able to replicate the equity premium, came up with considerably larger numbers. Our estimate of the cost of business cycles is based directly on asset prices, without the tricky intermediate step of specifying and calibrating a utility function. Our finding that the cost of business cycles is smaller than half a percent of lifetime consumption forced an update of some of my priors, because the framework we use is also consistent with the historical equity premium of more than six percent.

Our work also has something to say about the distinction between the costs of consumption fluctuations at business cycle frequencies and at all frequencies. While we find the cost of business cycles to be smaller than one half of a percent of lifetime consumption, we also find that eliminating consumption uncertainty at all frequencies would be worth a lot more, easily topping the equity premium. That is, the representative asset-pricing agent seems to require very large compensation to bear the low-frequency risk components that are in consumption.
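For readers unfamiliar with the underlying welfare measure: roughly speaking, it is the compensating fraction ω in a Lucas-style comparison like the one below (our notation); the contribution of Alvarez and Jermann (2000) is to infer ω from asset prices rather than from a specified utility function u:

```latex
% Lucas-style marginal cost of consumption fluctuations (our notation).
% C_t is the actual consumption process, \bar{C}_t the stabilized path,
% and \omega the fraction of lifetime consumption that makes the agent
% indifferent between the two:
\[
  E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\!\big((1+\omega)\,C_t\big)
  \;=\;
  E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\!\big(\bar{C}_t\big).
\]
% Alvarez and Jermann value claims to C_t and \bar{C}_t with market
% prices, bypassing the calibration of u.
```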

ED: In his recent book, Peter Bossaerts argues that current asset pricing models are routinely rejected by the data, yet they continue to form the basis of theory. What is your stand on this? Should we still use CAPM and APT?
UJ: I believe it takes a model to beat a model. Until we have models that do not get rejected anymore, our answers to concrete questions will have to be based on whatever the best available tools are.

While a lot of our attention, not surprisingly, is focused on puzzles and rejections, it is good to remind ourselves of some of the successes of economic models. For instance, the basic idea of no-arbitrage has led to the development of powerful models for pricing derivatives. Currently, derivatives form some of the most active segments of the global financial markets, with transaction volumes of many billions of dollars per day. These markets would not be able to function the way they do without asset pricing models based on no-arbitrage principles. In the two papers we have discussed here, my co-author and I have tried to apply some of these ideas to macroeconomic issues.

ED: You have also worked on the recent stock market boom, with Vincenzo Quadrini. Your point here is that prospects of productivity gains generated immediate productivity gains, as financing became easier for firms. How can this be reconciled with theories proposing that the technological progress was a result of improvements that started several decades ago?
UJ: The ideas this work is based on can themselves be seen as resulting from the improvements that started several decades ago.

In our work, we show that the mere prospect of a New Economy, in which productivity would be growing at a higher rate, can have immediate consequences not only for stock market valuations but also for measured aggregate labor productivity. This happens because financial constraints are looser for firms with a promising future and, thus, these firms will hire more workers. The increased labor demand drives up wages and forces a reallocation of workers as all firms strive to increase the marginal product of labor. Measured aggregate labor productivity increases even without technological improvement at the firm level.
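A stripped-down illustration of the wage channel (our construction, meant only to convey the flavor of the mechanism, not the Jermann-Quadrini model itself): with decreasing returns and competitive hiring, measured labor productivity equals w/α at every firm, so a wage increase driven by optimism about some firms raises measured productivity economy-wide while firm-level technology stays fixed.

```python
# Toy version of the wage channel (our construction, not the model in
# Quadrini and Jermann 2002). Output is y = z * n**alpha; firms hire
# until the marginal product alpha*z*n**(alpha-1) equals the wage w.
alpha, z = 0.7, 1.0   # illustrative parameters; z never changes below

def labor_demand(w, z=z, alpha=alpha):
    """Employment at which the marginal product of labor equals w."""
    return (alpha * z / w) ** (1 / (1 - alpha))

for w in (0.70, 0.80, 0.90):   # New Economy optimism bids the wage up
    n = labor_demand(w)
    y = z * n ** alpha
    print(f"w={w:.2f}  n={n:.2f}  y/n={y / n:.3f}  (= w/alpha)")
# Measured labor productivity y/n = w/alpha rises with the wage even
# though the technology parameter z is constant.
```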

In our story, optimism about future growth rates of firm level productivity is the driving force. It would seem to me that it was necessary to see some of the successes in information and telecommunication technologies in order for a widespread belief in a New Economy to be possible.

References

Alvarez, Fernando, and Urban Jermann 2000. “Using Asset Prices to Measure the Cost of Business Cycles.” NBER Working Paper 7978.
Alvarez, Fernando, and Urban Jermann 2002. “Using Asset Prices to Measure the Persistence of the Marginal Utility of Wealth.” Mimeo, University of Pennsylvania.
Bossaerts, Peter 2002. The Paradox of Asset Pricing. Princeton, NJ: Princeton University Press.
Epstein, Larry, and Stan Zin 1989. “Substitution, Risk Aversion and the Temporal Behavior of Consumption and Asset Returns: A Theoretical Framework.” Econometrica. Vol. 57, pages 937-69.
Fuhrer, Jeffrey 2000. “Optimal Monetary Policy in a Model with Habit Formation.” Federal Reserve Bank of Boston Working Paper 00-5.
McCallum, Bennett, and Edward Nelson 1999. “Nominal Income Targeting in an Open-Economy Optimizing Model.” Journal of Monetary Economics. Vol. 43, pages 553-78.
Quadrini, Vincenzo, and Urban Jermann 2002. “Stock Market Boom and the Productivity Gains of the 1990s.” Mimeo, New York University.
Weil, Philippe 1990. “Nonexpected Utility in Macroeconomics.” Quarterly Journal of Economics. Vol. 105, pages 29-42.

Society for Economic Dynamics: 2002 Meetings

The deadline for submissions to the 2002 meetings of the Society for Economic Dynamics, June 28-30 in New York City, has now passed, and the program is mostly finalized. Details are available at www.minneapolisfed.org/sed/. Plenary sessions will be presented by Orazio Attanasio, Lee Ohanian and Wolfgang Pesendorfer. The conference cocktail party will be held on Saturday at the Guggenheim, and the conference dinner will be at the Tavern on the Green in Central Park on Sunday. As usual, the Contractions will be in town, playing on Friday night.

If you plan on attending, do not delay reserving your accommodation. Two options are available: the Sheraton New York Hotel & Towers for $160 a night (deadline June 5) or the NYU dormitories for $45-65 a night per person (deadline April 30). Reservations should be made online through the conference web site. Note that conference registrations made before May 15 receive a discount.

Bossaerts’ The Paradox of Asset Pricing

Several asset pricing models are held in high esteem both in the research literature and among practitioners. For example, the CAPM is routinely used to guide investment strategy. Yet there is little empirical support for it, and dynamic models do not fare much better. Peter Bossaerts argues that empirical tests assume much more than the theory does: the models themselves do not require the efficient markets hypothesis (EMH), yet empirical tests do. The book shows that one can do away with the EMH, and that historical data then give more convincing results.

The book brings forward two interesting points. The first is that the EMH can be replaced by the hypothesis of efficiently learning markets: markets update their beliefs correctly, but their priors may be wrong, whereas under the EMH the priors are correct as well. Empirical testing along these lines is still in its infancy, though, and is not that simple. Here Bossaerts raises a second point: controlled experiments in the laboratory can be used to test the theory. By setting up the experiment appropriately, one can impose the same assumptions in the theory and in the test; theories like the CAPM then hold up well, while others (such as instantaneous equilibration) do not.

While venturing into a very complex literature, the book is kept at a level that advanced undergraduates should be able to follow. Bossaerts gives a rapid outline of the major theories, emphasizing assumptions while bypassing the difficult proofs. A good read, even if you are already familiar with asset pricing theory.

“The Paradox of Asset Pricing” was published by Princeton University Press in February 2002.