This article analyzes the theoretical implications of biased technological change, first within a general framework and then in the specialized factor-augmenting case. Two-factor production functions are used in the analysis, but the main results are generalized to the n-factor case. The results point to the necessity of separating technological change per se from factor substitution when analyzing how production methods evolve.
The results form a structure that takes biased technological change into account, and the three main definitions, according to Hicks, Harrod, and Solow, are explicitly derived and related to one another.
Let us assume it is possible to define a macroeconomic relationship between output and a set of inputs — the aggregate production function. Such a production function, defined in close analogy to the microeconomic production function, can be characterized by four parameters that describe its "abstract technology," that is, the efficiency parameter, the scale parameter, the intensity parameter, and the substitution parameter.
A change in "abstract technology" can be associated with a number of causes, such as a greater possibility of substitution between production factors, economies or diseconomies of scale, improvements in education and training, intersectoral shifts of resources, organizational changes, etc. While some of these causes may be directly associated with changes in the stock of knowledge, usually resulting from research and development, others may not. This is the case, for example, with training, which does not actually increase the stock of knowledge but may accelerate its diffusion and, consequently, lead to a shift in the production function.
The aggregate production function should not be interpreted as a relationship that describes the most efficient production techniques but rather as a representation of the average level of production possibilities available to producers at any given moment in time. Interpreted this way, the aggregate production function is not entirely an exogenous element in economic theory, dependent solely on a technologically determined stock of knowledge, but an endogenous variable, jointly determined by both economic and non-economic considerations. Interpreting the production function as a purely technologically determined concept obscures these broader characteristics, especially in a dynamic context. The broader view, in contrast, is compatible, for instance, with the theory of induced innovation, which posits that the adoption of technological changes depends on the relative prices of factors, even though, physically, the new production techniques may be known in advance.
1. Problems in Analyzing Technological Change
Hicks (1932) made one of the most stimulating observations in the history of economic science when he wrote:
"A change in the relative prices of production factors is, in itself, an incentive to invention, and invention of a specific type — aimed at economizing the use of the factor that has become relatively expensive."
This observation introduced a new perspective in the study of technological change, as, until then, economists had considered technical evolution as outside the realm of economics. From that moment, economists began to question whether economic variables could influence the nature of technological change, making it an endogenous, rather than exogenous, variable in economic models.
In the 1950s and 1960s, American experience showed that despite the rapid increase in the capital-to-labor ratio, the shares of both factors in output had remained constant, a fact that appeared paradoxical to economists. The paradox is easily explained by assuming either a constant unitary elasticity of substitution, implicit in a Cobb-Douglas function, or a production function with less-than-unitary elasticity of substitution combined with a labor-saving bias large enough to offset the increase in labor's share resulting from the rising capital-to-labor ratio.
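The algebra behind this explanation can be sketched with the linearly homogeneous CES function (assumed here only for illustration). Under competitive factor pricing, the ratio of the capital share to the labor share is

```latex
\frac{s_K}{s_L} = \frac{F_K K}{F_L L}
= \frac{\delta}{1-\delta}\left(\frac{K}{L}\right)^{-\rho},
\qquad \rho = \frac{1-\sigma}{\sigma}.
```

If \(\sigma = 1\) (\(\rho = 0\)), the shares are constant for any capital-to-labor ratio, which is the Cobb-Douglas case. If \(\sigma < 1\) (\(\rho > 0\)), a rising \(K/L\) raises labor's relative share, and only a sufficiently strong labor-saving drift in the technology can hold the observed shares constant.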
Such assertions require a clear understanding of the economic theory behind the concept of technological change. Any observed changes in the proportion of factor usage and in their relative shares of output must be decomposed into one component resulting from the ordinary substitution of factors along an isoquant and another component derived from non-neutral, or biased, shifts of the isoquant. Additionally, as shown in Albuquerque (1985a), it is necessary to isolate the non-homothetic effect of isoquant shifts, i.e., those shifts due to production scale effects.
Fellner (1971), Resek (1963), and Kendrick and Sato (1963), among others, suggested and tested important hypotheses about technological change without actually estimating the values of the technological parameters. Based on observations and estimates of capital-labor ratios, labor productivity, prices, factor shares, total factor productivity, substitution elasticities, and marginal rates of technical substitution, they successfully tested, under certain assumptions, hypotheses regarding the direction of technological progress.
These studies generally involved intricate reasoning and somewhat counterintuitive causal deductions, despite the commonly made simplifying assumptions, such as a two-factor production function and linear homogeneity, or, as in Kendrick and Sato (1963), assuming technological change to be neutral. Assuming, as Hicks (1932) suggested, that technological change is endogenous, the question arises of how economic variables influence changes in production methods.
The initial interest in biased technological change and models of induced innovation stemmed from income distribution concerns; essentially, attempts were made to predict the effects of the inherent labor-saving bias in industrialized societies.
Binswanger (1978) showed that the literature on induced innovation contains two basic models: Ahmad’s (1966), which postulates the existence of an innovation possibility curve that has so far defied mathematical treatment and thus has limited econometric applications; and Kennedy’s (1964), which incorporates factor-augmenting technological changes and assumes an innovation possibility frontier — a trade-off frontier between capital-augmenting and labor-augmenting rates. This approach has led to a rich theoretical literature and empirically requires the econometric measurement of factor-augmenting rates.
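Kennedy's approach is usually formalized as follows (the symbols below are ours, used only for illustration). Technological change is written in factor-augmenting form, and the two augmentation rates are constrained by a frontier:

```latex
Y = F\big(a(t)\,K,\; b(t)\,L\big), \qquad
\hat{a} = \frac{\dot{a}}{a}, \quad \hat{b} = \frac{\dot{b}}{b}, \qquad
\hat{a} = \phi(\hat{b}), \quad \phi' < 0, \;\; \phi'' < 0.
```

The concave, downward-sloping frontier \(\phi\) expresses the trade-off between the capital-augmenting rate \(\hat{a}\) and the labor-augmenting rate \(\hat{b}\); measuring these rates is the econometric task referred to in the text.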
This work is, therefore, an attempt to understand the theory of technological change and seeks to demonstrate the impossibility of simultaneously determining biases and factor substitution elasticities without the help of econometric models.
Additionally, the two-factor hypothesis, in both its general and factor-augmenting forms, is generalized to the case of 'n' factors.
Unless econometric models are used, highly simplifying assumptions become necessary to make assertions about technological change, and the task becomes almost impossible if more than two production factors are involved, as shown in Appendix 1. This explains the customary use of highly restrictive functional forms, such as Cobb-Douglas or Constant Elasticity of Substitution (CES), and highlights the need to derive less restrictive and more powerful functional forms.
2. The Theory of Technological Change
Technological change, at the aggregate level, can be represented by an index that specifies shifts in the production function, generating an entire family of these functions. The index itself is a function of all the factors that cause technological change, such as economic, technological, cultural, climatic, and other factors. As a simplification, we will associate this index, which we will call "t," with time. This interpretation of the "t" parameter assumes that shifts in the aggregate production function occur uniformly and continuously, although we must be aware that, at the microeconomic level, technological change occurs irregularly, sometimes progressing and sometimes regressing.
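Formally, the family of production functions indexed by "t" can be written, together with the growth decomposition it implies (a standard accounting identity under constant returns and competitive pricing, stated here for reference):

```latex
Y = F(K, L;\, t), \qquad
\hat{Y} = s_K \hat{K} + s_L \hat{L} + T, \qquad
T \equiv \frac{\partial \ln F}{\partial t},
```

where hats denote proportional growth rates, \(s_K\) and \(s_L\) are the factor shares, and \(T\) is the rate of shift of the production function, i.e., the rate of growth of total factor productivity.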
Technological Change and Definitions of Neutrality
Technological changes can be classified as capital-using (or labor-saving) or labor-using (or capital-saving), depending on how they affect the relative shares of factors in output. Specifically, a technological change is defined as capital-using when the capital share increases and labor-using when the labor share increases.
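Under competitive pricing, this classification can be stated with the Hicksian bias measure (notation ours): evaluate, at a constant capital-labor ratio, the rate of change of the relative marginal products,

```latex
D \equiv \left.\frac{\partial}{\partial t}\,
\ln\!\left(\frac{F_K}{F_L}\right)\right|_{K/L\ \text{const}}
= \left.\frac{\partial}{\partial t}\,
\ln\!\left(\frac{s_K}{s_L}\right)\right|_{K/L\ \text{const}},
```

so that technological change is capital-using (labor-saving) when \(D > 0\), labor-using (capital-saving) when \(D < 0\), and Hicks-neutral when \(D = 0\). The second equality holds because, at a constant \(K/L\), the share ratio \(s_K/s_L = F_K K / F_L L\) moves one-for-one with the ratio of marginal products.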
Hicks neutrality requires the relative shares of the factors to remain constant along a path where the capital-to-labor ratio is constant. This definition seeks to analyze a short-term situation in which the availability of capital and labor is fixed.
Harrod’s definition considers long-term adjustments in factor availability but imposes the restriction of a constant rate of return on capital. This hypothesis aligns with the "neo-Keynesian" view, which suggests that capitalists in mature economies determine their average rate of profit, which ceases to be an exogenously determined variable.
Finally, Solow’s definition is consistent with an underdeveloped economy where the wage rate, presumably at the subsistence level, cannot be reduced and is not allowed to rise due to the mechanisms known in models of unlimited labor supply.
Originally, these three classifications of technological change were not explicitly defined in terms of factor shares. Hicks neutrality required a constant marginal rate of substitution at a given capital-labor ratio under full employment; Harrod required a constant capital-output ratio at a given interest rate; and Solow stipulated a constant labor-output ratio, given a fixed wage rate.
As Beckmann and Sato (1968) demonstrated in their attempt to define new types of technical progress, when factor shares are introduced as the quantities held invariant with respect to the capital-labor, capital-output, and labor-output ratios, the resulting definitions coincide with those of Hicks, Harrod, and Solow, respectively. In fact, this can be observed quite directly.
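Indeed, with a linearly homogeneous kernel \(f\), the three definitions correspond to the familiar functional forms (notation ours):

```latex
\text{Hicks:}\;\; Y = A(t)\,f(K, L); \qquad
\text{Harrod:}\;\; Y = f\big(K,\; B(t)\,L\big); \qquad
\text{Solow:}\;\; Y = f\big(A(t)\,K,\; L\big).
```

In the Hicks case, for example, \(s_K = F_K K / Y = f_K K / f\) depends only on the capital-labor ratio, so at a given \(K/L\) the shares are unaffected by \(A(t)\); analogous arguments at a constant capital-output ratio (Harrod) and a constant labor-output ratio (Solow) yield the other two equivalences.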