Sacred Cow Chips

Tag Archives: Substitutability

The Coexistence of Labor and AI-Augmented Capital

30 Friday Jan 2026

Posted by Nuetzel in Artificial Intelligence, Labor Markets

≈ 4 Comments

Tags

AI-Augmented Capital, Artificial Intelligence, Brian Albrecht, ChatGPT, Comparative advantage, Corner Solution, Deployment Risks, Dwarkesh Patel, Elasticity of Substitution, Erik Schiskin, Factor Intensity, Grok, Labor Demand, Marginal Product, Opportunity cost, Perfect Complements, Perfect Substitutes, Philip Trammell, Reciprocal Advantages, Ronald W. Jones, Scarcity, Substitutability, Technology Shocks

I’m an AI enthusiast, and while I have econometric experience and some knowledge of machine learning techniques, I’m really just a user without deep technical expertise in AI. However, I use it frequently for research related to my hobbies and to navigate the kinds of practical issues we all encounter day-to-day. With one good question, AI can transform what used to require a series of groping web searches into a much more efficient process and a more informative result. In small ways like this and in much greater ways, AI will bring dramatically improved levels of productivity and prosperity to the human race.

Still, the fear that AI will be catastrophic for human workers is widely accepted. Some claim it’s already happening in the workplace, but the evidence is thin (and see here and here). While it’s certain that some workers will be displaced from their jobs by AI, ultimately new opportunities (and some old ones) will be available.

I’ve written several posts (here, here, here, and here) in which I asserted that a pair of phenomena would ensure continuing employment opportunities for humans in the presence of AI: an ongoing scarcity of resources relative to human wants, and the principle of comparative advantage. Unfortunately, the case I’ve made for the latter was flawed in one critical way: reciprocal comparative advantages across different factors of production are not guaranteed. In trade relationships, trading partners have reciprocal cost advantages with respect to the goods they exchange, and I extended the same principle to factors of production in different sectors. But that analogy with trade does not always hold up, in part because the owners of productive inputs don’t fully engage in direct trade with one another.

Thus, so-called reciprocity of opportunity costs cannot guarantee future employment for humans in a world with AI-augmented capital. Nevertheless, there is a strong case that reciprocity of comparative advantages will exist, whether labor and capital are (less than perfect) complements or substitutes. This is likely to hold up even though human labor and AI-augmented capital could well become more substitutable in the future.

Below, I’ll start by reviewing the principles of scarcity, opportunity costs, and input selection. Then I’ll turn to a couple of other rationales for a more sanguine outlook for human jobs in a world with widely dispersed AI in production. Finally, I’ll provide more detail on whether reciprocal input opportunity costs are likely to exist in a world with AI-augmented capital, and the implications for continued human employment.

Scarcity and Advantages

Scarcity must exist for a resource to carry a positive price. That price is itself a measure of the resource’s degree of scarcity as determined by both demand and supply. And ultimately an input’s price reflects its opportunity cost, or the reward foregone on its next-best use.

Labor and capital are both scarce inputs. Successful integration of AI into the capital stock will make capital more productive, but it will not eliminate the fundamental scarcity of capital. There will always be more use cases than available capital, and particular uses will always have positive opportunity costs.

If capital is more productive than labor in a particular use, then capital has an absolute advantage over labor in that line of production. If capital and labor are perfect substitutes in producing good X, then capital can be substituted for labor at a constant rate, say 1 unit of capital for every 2 units of labor, in a straight line without any change in output.

One might expect the producer in this scenario to choose to employ only capital in production. That’s the general argument put forward by AI pessimists. They appeal to a presumed, future absolute advantage of AI (or AI combined with robotics) in each and every line of production. In fact, the pessimists treat the AI robots of the future as perfect substitutes for labor. That’s not a foregone conclusion, however, and even if it were, absolute advantages are not reliable guides to economic decision-making.

Physical tradeoffs in a line of production are one thing, but opportunity costs are another, as they depend on rewards in other lines of production. In the example above, if a unit of capital costs slightly less than two units of labor, then it would indeed be rational to employ all capital and zero labor in producing good X. Then, capital has not just an absolute advantage in X, but also a sufficient cost advantage over labor (or else labor would be more highly valued elsewhere). In this example, the labor share of income from producing X is zero. The capital share is 100%.
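To make that arithmetic concrete, here’s a minimal sketch of the cost-minimization logic. The prices and the one-for-two substitution rate are purely illustrative assumptions:

```python
# Perfect substitutes: 1 unit of capital replaces 2 units of labor at a constant
# rate, so cost minimization picks whichever input is cheaper per unit of output,
# a "corner solution" rather than a mix. All prices below are illustrative.

def cheapest_input_mix(price_capital, price_labor, capital_per_unit=1.0, labor_per_unit=2.0):
    """Return the cost-minimizing (capital, labor) requirement per unit of good X."""
    cost_all_capital = price_capital * capital_per_unit
    cost_all_labor = price_labor * labor_per_unit
    if cost_all_capital < cost_all_labor:
        return {"capital": capital_per_unit, "labor": 0.0, "unit_cost": cost_all_capital}
    return {"capital": 0.0, "labor": labor_per_unit, "unit_cost": cost_all_labor}

# A unit of capital costs slightly less than two units of labor:
print(cheapest_input_mix(price_capital=19.0, price_labor=10.0))
# -> all capital, zero labor; the labor share of income from X is zero.
```

Reverse the price relationship and the corner flips to all labor; with perfect substitutes, a mixed solution is never strictly cheaper than the better corner.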

Income Shares

The simple case just described is the same as the one examined by Brian Albrecht in his recent analysis of “Capital In the 22nd Century”, an essay by Philip Trammell and Dwarkesh Patel. The controversial conclusion in the latter essay is that capital taxation will be necessary in a world of strong AI, because labor’s share of income will approach zero.

Albrecht is rightfully skeptical. He examines the case of capital and labor as perfect substitutes, as above, and the “corner solution” with all capital and no labor in production.

Albrecht notes that empirical estimates show that capital and labor are not even close to perfect substitutes. In fact, on an economy-wide basis, capital and labor have a fairly high degree of complementarity. But this varies across sectors, and Albrecht acknowledges that substitutability might increase in a world of strong AI.

Without getting ahead of myself, I’ll note here again that AI is likely to dramatically enhance human productivity across tasks. In cases of less than perfect substitution, automation increases the marginal product of labor. In addition, humans benefit from the high degree of complementarities across many tasks, which create limits on deployment opportunities and scaling of AI.
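For readers who like to see the mechanics, here is a minimal CES sketch of that point, with all parameter values chosen purely for illustration. With an elasticity of substitution below one (complements), making capital more productive raises the marginal product of labor:

```python
# CES production: Y = [a*(A_K*K)**rho + (1-a)*L**rho]**(1/rho),
# where rho = (sigma - 1)/sigma and sigma is the elasticity of substitution.
# With sigma < 1 (complements), raising capital productivity A_K raises the
# marginal product of labor. Parameter values here are illustrative only.

def ces_output(K, L, A_K, a=0.4, sigma=0.5):
    rho = (sigma - 1.0) / sigma
    return (a * (A_K * K) ** rho + (1.0 - a) * L ** rho) ** (1.0 / rho)

def marginal_product_of_labor(K, L, A_K, a=0.4, sigma=0.5, dL=1e-6):
    # Simple finite-difference approximation to dY/dL.
    return (ces_output(K, L + dL, A_K, a, sigma) - ces_output(K, L, A_K, a, sigma)) / dL

K, L = 100.0, 100.0
print(marginal_product_of_labor(K, L, A_K=1.0))  # ~0.60 at baseline
print(marginal_product_of_labor(K, L, A_K=2.0))  # ~0.94 with AI-augmented capital
```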

Returns To AI Capital

Albrecht covers a second avenue through which AI-augmented capital could displace labor: rapid growth in the capital stock fueled by stubbornly high returns to capital. While Albrecht’s main interest is in whether capital taxation will one day be necessary, his analysis is obviously a useful reference for thinking about whether labor will be completely displaced by AI-augmented capital.

Again, capital is a scarce resource. For it to grow unbounded in AI-augmented forms, its real yield (and marginal product) must always and forever resist diminishing returns while also exceeding rates of time preference. It also must stay ahead of depreciation on an ever-expanding stock of existing capital. Albrecht is of the opinion that AI-augmented capital might be especially prone to rapid obsolescence. For that matter, it remains to be seen whether the many moving parts of humanoid robots will be highly vulnerable to wear and tear in the field. Perhaps the use of AI in materials research and robotics design can ease those physical constraints.

There are other obstacles to complete AI dominance in the labor market. Institutions of almost all kinds will always face AI deployment risks. On this point, an interesting piece is “Persuasion of Humans Is the Bottleneck”. The author, Erik Schiskin, says that in addition to investment in physical capital:

“AI deployment is capital-intensive in a different way: admissibility—what institutions can rely on, defend, insure, audit, and appeal without taking unbounded tail risk.”

Of course, this too increases the cost of AI deployment.

A “One-Good” Analysis Is Inadequate

Albrecht essentially confines his analysis of inputs and income shares to a world in which there is only one kind of final output, and yet he makes the following assertion:

“Remember this is a model of the whole economy, so that would mean there’s not a single thing produced that humans have a comparative advantage.”

That kind of aggregation is not possible in a world with comparative advantages, however. A mental model with only one good cannot describe a world with opportunity costs. Capital and labor are both scarce resources. Their alternate uses cannot be buried within a single aggregation without appealing to the “idle state” as an alternative use.

With more than one good, the opportunity cost of using an additional unit of capital to produce good X is what must be foregone when that unit of capital is not deployed to its next-best use producing some other good.

And to return to our earlier example, if capital is the exclusive input to the production of Good X, that’s because 1) capital is perfectly substitutable for labor in that line of production; 2) capital is more productive than labor in producing good X; and 3) capital’s relative cost for producing good X is sufficiently low to favor its use.

Factor Intensities

Now I’ll revisit my earlier rationale for labor’s continuing role in a world with AI-augmented capital. I began to have doubts about how input substitutability might play out as AI is deployed (see other views here and here, as well as Albrecht’s post). So I enlisted the assistance of two AI tools, Grok and ChatGPT, to help identify relevant economic literature bearing on the durability of the “reciprocity” phenomenon given a technology shock. The two tools reached different conclusions when certain embedded assumptions were overlooked or not initially made plain. Considerable push-back against these analyses by yours truly helped to align the conclusions. I’ll be skipping over lots of gory details, but I’d welcome any and all feedback from readers with insight into the issues, or with deeper knowledge of this type of economic research.

Reciprocal input opportunity costs (and comparative advantages) depend on parameters that help determine factor intensity and income shares. A paper by Ronald W. Jones in 1965 helped delineate conditions that preserve the relative rankings of factor intensities across sectors in a closed economy. Those conditions can be extended to the context of reciprocal input opportunity costs. I’ll briefly discuss those conditions in the next couple of sections.

For now, it’s adequate to say that when capital’s comparative advantage in one sector is offset to some degree by a reciprocal comparative advantage for labor in another, we need not conclude that human labor will become obsolete given a positive shock to the productivity of capital. Again, however, in earlier posts I mistakenly asserted that this kind of reciprocity was a more general phenomenon. It is not, and I should have known that. That said, the specifics of the conditions are of interest in the context of AI-augmented capital.

Non-Reciprocity

First, let’s cover cases that are the least conducive to reciprocal opportunity costs: when capital and labor are perfect substitutes in the production of all goods, and when they are perfect complements in the production of all goods. While there are many cases in which inputs are used in fixed proportions in the short run, or where one input can easily be substituted for another in a particular task, it’s still safe to say we don’t generally live in either of those worlds. Nevertheless, they are instructive to consider as extreme cases.

The case of perfect substitutes was discussed above in connection with Albrecht’s post. Then cost minimization yields corner solutions involving 100% capital and zero labor for both goods if capital is everywhere more productive (relative to its cost) than labor. There is no reciprocity of input opportunity costs across goods except by coincidence, and labor will be unemployed.

The other case certain to have non-reciprocal opportunity costs (except by coincidence) is when capital and labor are perfect complements. Then, the rigidity of resource pairings leads to indeterminate input prices and an inability to absorb unemployed resources. However, note that if perfect complementarity were to persist under strong AI, as unlikely as that seems, it would not lead to a capital share of 100%.
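A minimal fixed-proportions sketch illustrates the absorption problem; the per-unit input requirements are arbitrary choices for illustration:

```python
# Perfect complements (fixed proportions): output = min(K / k_req, L / l_req).
# Inputs are useful only in a fixed pairing; units beyond the binding proportion
# add nothing, so surplus labor (or capital) cannot be absorbed.
# The per-unit requirements are arbitrary, for illustration only.

def leontief_output(K, L, k_req=1.0, l_req=2.0):
    return min(K / k_req, L / l_req)

print(leontief_output(K=50, L=100))  # 50.0: inputs in balance
print(leontief_output(K=50, L=150))  # 50.0: the extra labor sits idle
print(leontief_output(K=80, L=100))  # 50.0: the extra capital sits idle
```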

A Paradox of Substitutability

Now I turn to more plausible ranges of substitutability. There’s a notion that capital with AI enhancements will become more substitutable for labor than it has been historically. And if that’s the case, there’s a fear that humans will be out of work and labor’s share of income will fall to zero. This same line of thinking holds that future prospects for human employment and labor income are better if capital and labor remain somewhat complementary.

That framing of the future of work and its dependence on complementarity vs. substitutability is fairly intuitive. Paradoxically, however, a higher degree of substitutability might not have any impact on human comparative advantages, or might even strengthen them, as long as the elasticity of substitution is not highly asymmetric across sectors.

The Ronald Jones paper referenced above shows that under certain conditions, factor intensities for different goods will retain their relative rankings after a shock to factor prices or technology. By implication, comparative advantages will be preserved as well. So if capital has a greater intensity in producing X than in producing Y, that ranking must be preserved after a shock if capital and labor are to retain their reciprocal comparative advantages. Jones shows this is satisfied when the inputs in both sectors are equally substitutable, or when changes in substitutability across sectors are equal. If those changes are not greatly different, then reversals in factor intensity are unlikely and reciprocity is usually preserved. Therefore, if augmenting capital with AI increases the elasticity of substitution between capital and labor broadly, there is a good chance that many reciprocal comparative advantages will be preserved.

Another general guide implied by the Jones paper is that factor intensities and reciprocal comparative advantages are more likely to be preserved when production technologies differ, input proportions are stable, and differences in substitutability are similar or differ only moderately.

Empirically, elasticities of substitution between capital and labor vary across industries but are typically well within a range of complementarity (0.3 to 0.7). Starting from these positions, and given increases in substitutability via AI-augmented capital, factor proportions aren’t likely to change drastically, and rankings of capital intensity aren’t likely to be altered greatly, thus preserving comparative advantages for most sectors.
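To put the ranking logic in rough numerical terms, here is a sketch using the standard CES cost-minimization result; all parameter values are illustrative assumptions rather than estimates. Equal changes in substitutability across sectors leave the capital-intensity ranking intact, while a sharply asymmetric change can reverse it:

```python
# Under CES technology, a sector's cost-minimizing capital/labor ratio is
#   k_i = (alpha_i / (1 - alpha_i))**sigma_i * (w / r)**sigma_i,
# where alpha_i is the capital distribution parameter, sigma_i the elasticity
# of substitution, w the wage, and r the rental rate. Values are illustrative.

def capital_intensity(alpha, sigma, w_over_r):
    return (alpha / (1.0 - alpha)) ** sigma * w_over_r ** sigma

w_over_r = 2.0
alpha_X, alpha_Y = 0.6, 0.4  # sector X starts out more capital-intensive

# A broad, equal rise in substitutability preserves the ranking:
for sigma in (0.5, 1.2):
    kX = capital_intensity(alpha_X, sigma, w_over_r)
    kY = capital_intensity(alpha_Y, sigma, w_over_r)
    print(f"sigma={sigma}: X more capital-intensive? {kX > kY}")  # True both times

# A sharply asymmetric rise (only sector Y becomes highly substitutable) can
# reverse the ranking, i.e., a factor-intensity reversal:
kX = capital_intensity(alpha_X, 0.5, w_over_r)
kY = capital_intensity(alpha_Y, 2.5, w_over_r)
print(f"asymmetric shock: X more capital-intensive? {kX > kY}")  # False
```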

Restating the Last Section

Starting from a world in which inputs have reciprocal comparative advantages (and reciprocal opportunity costs), a technological advance like the augmentation of capital via AI might or might not preserve reciprocity. The return on capital will increase, and widespread capital deepening is likely to drive up the rental rate of a unit of capital relative to its higher marginal product. If capital intensities increase in all sectors, but relative rankings of capital intensities are preserved, then labor will retain comparative advantages despite the possible absolute advantages of AI-augmented capital. Labor’s share of income will certainly not fall to zero.

If the substitutability of capital and labor increases, ongoing reciprocity can be preserved if the change in substitutability does not differ greatly across sectors. This is true even for large, but broad, increases in substitutability. However, should large increases in substitutability be concentrated in some sectors but not others, reciprocity could fail more broadly. If greater substitutability implies greater dispersion in substitutabilities, then reciprocity is likely to be less stable.

Of course, regardless of the considerations above, there are certain to be disruptions in the labor market. Classes of workers will be forced to leverage AI themselves, find new occupations, or reprice their services. Nevertheless, once the dynamics have played out, labor will still have a significant role in production.

Recapping the Whole Post

Here are a few things we know:

Capital will remain scarce, even more so if its return reaches impressive heights via AI augmentation. Another way of saying this is that interest rates would have to rise in order to induce saving. Depreciation and obsolescence of capital will reinforce that scarcity, and there are now and always will be too many valued uses for capital to become a free good.

Capital and labor are not perfect substitutes in most tasks and probably won’t be, even given strong AI-augmentation.

Capital and labor are not perfect complements, though they have been complementary historically. Their complementarity might well be moderated by AI.

Besides capital scarcity, there will be a continuing series of bottlenecks to AI deployment, some of which will demand human involvement.

We start our transition to a world of AI-augmented capital with different inputs having comparative advantages in producing some goods and not others. In general, at the outset, there is a reciprocity of input comparative advantages and opportunity costs across sectors, much as reciprocal opportunity costs exist in cross-country trade relationships.

A technological shock like the introduction of strong AI will alter these relationships. However, as long as factor intensities in different sectors maintain their rank ordering, reciprocal opportunity costs will still exist.

If substitutability increases with the introduction of AI-augmented capital, reciprocal opportunity costs will be preserved as long as the changes in the degree of substitutability do not differ greatly across sectors.

My earlier contention that reciprocal opportunity costs were the rule was incorrect. However, it’s safe to say that reciprocity will persist to one degree or another, even if more weakly, as the transition to AI goes forward. That means labor will still have a role in production, despite many areas in which AI-augmented capital will have an absolute advantage. And we haven’t even discussed preferences for “the human touch” and the likelihood that AI will spawn new opportunities for human labor as yet unimagined.

On Noah Smith’s Take Re: Human/AI Comparative Advantage

13 Thursday Jun 2024

Posted by Nuetzel in Artificial Intelligence, Comparative advantage, Labor Markets

≈ 4 Comments

Tags

Absolute Advantage, Agentic AI, Alignment, Andrew Mayne, Artificial Intelligence, Comparative advantage, Compute, Decreasing Costs, Dylan Matthews, Fertility, Floating Point Operations Per Second, Generative AI, Harvey Specter, Inequality, National Security, Noah Smith, Opportunity cost, Producer Constraints, Substitutability, Superabundance, Tyler Cowen

As of February 2026, I’m adding this short preamble to a few older posts on the subject of AI and future prospects for human labor. In the original post below (and a few others), I overstated the case that the law of comparative advantage would assure a continued role for humans in production. I still think the case is strong, mind you, but now I’m convinced that the outcome depends on elasticities of input substitution and how those elasticities might shift given the advent of AI-augmented capital. You can read my most recent thoughts on the matter here.

____________________________________________

I was happy to see Noah Smith’s recent post on the graces of comparative advantage and the way it should mediate the long-run impact of AI on job prospects for humans. However, I’m embarrassed to have missed his post when it was published in March (and I also missed a New York Times piece about Smith’s position).

I said much the same thing as Smith in my post two weeks ago about the persistence of a human comparative advantage, but I wondered why the argument hadn’t been made prominently by economists. I discussed it myself about seven years ago. But alas, I didn’t see Smith’s post until last week!

I highly recommend it, though I quibble on one or two issues. Primarily, I think Smith qualifies his position based on a faulty historical comparison. Later, he doubles back to offer a kind of guarantee after all. Relatedly, I think Smith mischaracterizes the impact of energy costs on comparative advantages, and more generally the impact of the resources necessary to support a human population.

We Specialize Because…

Smith encapsulates the underlying phenomenon that will provide jobs for humans in a world of high automation and generative AI: “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” He tells technologists “… it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now …”

… often, but probably transformed in fundamental ways by AI, and also doing many other new kinds of work that can’t be foreseen at present. Tyler Cowen believes the most important macro effects of AI will be from “new” outputs, not improvements in existing outputs. That emphasis doesn’t necessarily conflict with Smith’s narrative, but again, Smith thinks people will do many of the same jobs as today in a world with advanced AI.

Smith’s Non-Guarantee

Smith hedges, however, in a section of his post entitled “‘Possible’ doesn’t mean guaranteed”. This despite his later assertion that superabundance would not eliminate jobs for humans. That might seem like a separate issue, but it’s strongly intertwined with the declining AI cost argument at the basis of his hedge. More on that below.

On his reluctance to “guarantee” that humans will have jobs in an AI world, Smith links to a 2013 Tyler Cowen post on “Why the theory of comparative advantage is overrated”. For example, Cowen says, why do we ever observe long-term unemployment if comparative advantage rules the day? Of course there are many reasons why we observe departures from the predicted results of comparative advantage. Incentives are often manipulated by governments and people differ drastically in their capacities and motivation.

But Cowen cites a theoretical weakness of comparative advantage: that inputs are substitutable (or complementary) by degrees, and the degree might change under different market conditions. An implication is that “comparative advantages are endogenous to trade”, specialization, and prices. Fair enough, but one could say the same thing about any supply curve. And if equilibria exist in input markets it means these endogenous forces tend toward comparative advantages and specializations balancing the costs and benefits of production and trade. These processes might be constrained by various frictions and interventions, and their dynamics might be complex and lengthy, but that doesn’t invalidate their role in establishing specializations and trade.

The Glue Factory

Smith concerns himself mainly with another one of Cowen’s “failings of comparative advantage”: “They do indeed send horses to the glue factory, so to speak.” The gist here is that when a new technology, motorized transportation, displaced draft horses, there was no “wage” low enough to save the jobs performed by horses. Smith says horses were too costly to support (feed, stables, etc…), so their comparative advantage at “pulling things” was essentially worthless.

True, but comparing outmoded draft horses to humans in a world of AI is not quite appropriate. First, feedstock to a “glue factory” better not be an alternative use for humans whose comparative advantages become worthless. We’ll have to leave that question as an imperative for the alignment community.

Second, horses do not have versatile skill sets, so the comparison here is inapt due to their lack of alternative uses as capital assets. Yes, horses can offer other services (racing, riding, nostalgic carriage rides), but sadly, the vast bulk of work horses were “one-trick ponies”. Most draft horses probably had an opportunity cost of less than zero, given the aforementioned costs of supporting them. And it should be obvious that a single-use input has a comparative advantage only in its single use, and only when that use happens to be the state-of-the-art, or at least opportunity-cost competitive.

The drivers, on the other hand, had alternatives, and saw their comparative advantage in horse-driving occupations plunge with the advent of motorized transport. With time it’s certain many of them found new jobs; perhaps some went on to drive motorized vehicles. The point is that humans have alternatives, the number depending only on their ability to learn new crafts and perhaps move to a new location. Thus, as Smith says, “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” But not draft horses in a motorized world, and not square pegs in a world of round holes.

AI Producer Constraints

That brings us to the topic of what Smith calls producer-specific constraints, which place limits on the amount and scope of an input’s productivity. For example, in my last post, there was only one super-talented Harvey Specter, so he’s unlikely to replace you and keep doing his own job. Thus, time is a major constraint. For Harvey or anyone else, the time constraint affects the slope of the tradeoff (and opportunity costs) between one type of specialization versus another.

Draft horses operated under the constraints of land, stable, and feed requirements, which can all be viewed as long-run variable costs. The alternative use for horses at the glue factory did not have those costs.

Humans reliant on wages must feed and house themselves, so those costs also represent constraints, but they probably don’t change the shape of the tradeoff between one occupation and another. That is, they probably do not alter human comparative advantages. Granted, some occupations come with strong expectations among associates or clients regarding an individual’s lifestyle, but this usually represents much more than basic life support. At the other end of the spectrum, displaced workers will take actions along various margins: minimize living costs; rely on savings; avail themselves of charity or any social safety net as might exist; and ultimately they must find new positions at which they maintain comparative advantages.

The Compute Constraint

In the case of AI agents, the key constraint cited by Smith is “compute”, or computer resources like CPUs or GPUs. Advancements in compute have driven the AI revolution, allowing AI models to train on increasingly large data sets with increasing levels of compute. In fact, by one measure of compute, floating point operations per second (FLOPs), compute has become drastically cheaper, with FLOPs per dollar almost doubling every two years. Perhaps I misunderstand him, but Smith seems to assert the opposite: that compute costs are increasing. Regardless, compute is scarce, and will always be scarce because advancements in AI will require vast increases in training. This author explains that while lower compute costs will be more than offset by exponential increases in training requirements, there nevertheless will be an increasing trend in capabilities per unit of compute.
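As a rough back-of-the-envelope illustration of that tension, suppose (purely for illustration) that FLOPs per dollar doubles every two years while the compute used to train frontier models quadruples every two years:

```python
# Back-of-the-envelope compute arithmetic (assumed growth rates, for illustration).
# Assume FLOPs per dollar doubles every 2 years, while the training compute
# demanded by frontier models quadruples every 2 years.

years = 10
periods = years / 2  # number of two-year doubling periods

flops_per_dollar_gain = 2 ** periods       # ~32x more FLOPs per dollar
training_flops_growth = 4 ** periods       # ~1024x more FLOPs required
net_training_cost_growth = training_flops_growth / flops_per_dollar_gain

print(flops_per_dollar_gain, training_flops_growth, net_training_cost_growth)
# -> 32.0 1024.0 32.0: each FLOP gets far cheaper, yet training budgets still balloon.
```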

Every AI agent will require compute, and while advancements are enabling explosive growth in AI capabilities, scarce compute places constraints on the kinds of AI development and deployment that some see as a threat to human jobs. In other words, compute scarcity can change the shape of the tradeoffs between various AI applications and thus, comparative advantages.

The Energy Constraint

Another producer constraint on AI is energy. Certainly highly complex applications, perhaps requiring greater training, physical dexterity, manipulation of materials, and judgement, will require a greater compute and energy tradeoff against simpler applications. Smith, however, at one point dismisses energy as a differential producer constraint because “… humans also take energy to run.” That is a reference to absolute energy requirements across inputs (AI vs. human), not differential requirements for an input across different outputs. Only the latter impinge on tradeoffs or opportunity costs facing an input. Then, the input having the lowest opportunity cost for a particular output has a comparative advantage for that output. However, it’s not always clear whether an energy tradeoff across outputs for humans will be more or less skewed than for AI, so this might or might not influence a human comparative advantage.

Later, however, Smith speculates that AI might bid up the cost of energy so high that “humans would indeed be immiserated en masse.” That position seems inconsistent. In fact, if AI energy demands are so intensive, it’s more likely to dampen the growth in demand for AI agents as well as increase the human comparative advantage because the most energy-intensive AI applications will be disadvantaged.

And again, there is Smith’s caution regarding the energy required for human life support. Is that a valid long-run variable cost associated with comparative advantages possessed by humans? It’s not wrong to include fertility decisions in the long-run aggregate human labor supply function in some fashion, but it doesn’t imply that energy requirements will eliminate comparative advantages. Those will still exist.

Hype, Or Hyper-Growth?

AI has come a long way over the past two years, and while its prospective impact strikes some as hyped thus far, it has the potential to bring vast gains across a number of fields within just a few years. According to this study, explosive economic growth on the order of 30% annually is a real possibility within decades, as generative AI is embedded throughout the economy. “Unprecedented” is an understatement for that kind of expansive growth. Dylan Matthews in Vox surveys the arguments as to how AI will lead to super-exponential economic growth. This is the kind of scenario that would give rise to superabundance.

I noted above that Smith, despite his unwillingness to guarantee that human jobs will exist in a world of generative AI, asserts (in an update) at the bottom of his post that a superabundance of AI (and abundance generally) would not threaten human comparative advantages. This superabundance is a case of decreasing costs of compute and AI deployment. Here Smith says:

“The reason is that the more abundant AI gets, the more value society produces. The more value society produces, the more demand for AI goes up. The more demand goes up, the greater the opportunity cost of using AI for anything other than its most productive use. 

“As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there’s massively huge amounts of AI in the world, is a world that’s rich and prosperous to a degree that we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at its least productive uses. 

“Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.”

This seems as if Smith is backing off his earlier hedge. Some of that spending will be in the form of fabulous investment projects of the kinds I mentioned in my post, and smaller ones as well, all enabled by AI. But the key point is that comparative advantages will not go away, and that means human inputs will continue to be economically useful.

I referenced Andrew Mayne in my last post. He contends that the income growth made possible by AI will ensure that plenty of jobs are available for humans. He mentions comparative advantage in passing, but he centers his argument around applications in which human workers and AI will be strong complements in production, as will sometimes be the case.

A New Age of Worry

The economic success of AI is subject to a number of contingencies. Most important is that AI alignment issues are adequately addressed. That is, the “self-interest” of any agentic AI must align with the interests of human welfare. Do no harm!

The difficulty of universal alignment is illustrated by the inevitability of competition among national governments for AI supremacy, especially in the area of AI-enabled weaponry and espionage. The national security implications are staggering.

A couple of Smith’s biggest concerns are the social costs of adjusting to the economic disruptions AI is sure to bring, as well as its implications for inequality. Humans will still have comparative advantages, but there will be massive changes in the labor market and transitions that are likely to involve spells of unemployment and interruptions to incomes for some. The speed and strength of the AI revolution may well create social upheaval. That will create incentives for politicians to restrain the development and adoption of AI, and indeed, we already see the stirrings of that today.

Finally, Smith worries that the transition to AI will bring massive gains in wealth to the owners of AI assets, while workers with few skills are likely to languish. I’m not sure that’s consistent with his optimism regarding income growth under AI, and inequality matters much less when incomes are rising generally. Still, the concern is worthy of a more detailed discussion, which I’ll defer to a later post.

Major Mistake: The Minimum Opportunity Wage

06 Saturday Jun 2015

Posted by Nuetzel in Price Controls

≈ 1 Comment

Tags

Alan Krueger, Brian Doherty, competition, Coyote Blog, David Card, Don Boudreaux, Economic justice, Fast food robots, Mark Perry, Minimum Wage, Monopsony, Reason Magazine, Rise of the Machines, Robert Reich, Robot replacements, Show-Me Institute, Steve Chapman, Substitutability, Tim Worstall, Unintended Consequences, Wage compression, Warren Meyer


City leaders in St. Louis and Kansas City are the latest to fantasize that market manipulation can serve as a pathway to “economic justice”. They want to raise the local minimum wage to $15 by 2020, following similar actions in Los Angeles, Oakland and Seattle. They will harm the lowest-skilled workers in these cities, not to mention local businesses, their own local economies and their own city budgets. Like many populists on the national level with a challenged understanding of market forces (such as Robert Reich), these politicians won’t recognize the evidence when it comes in. If they do, they won’t find it politically expedient to own up to it. A more cynical view is that the hike’s gradual phase-in may be a deliberate attempt to conceal its negative consequences.

There are many reasons to oppose a higher minimum wage, or any minimum wage for that matter. Prices (including wages) are rich with information about demand conditions and scarcity. They provide signals for owners and users of resources that guide them toward the best decisions. Price controls, such as a wage floor like the minimum wage, short-circuit those signals and are notorious for their disastrous unintended (but very predictable) consequences. Steve Chapman at Reason Magazine discusses the mechanics of such distortions here.

Supporters of a higher minimum wage usually fail to recognize the relationship between wages and worker productivity. That connection is why the imposition of a wage floor leads to a surplus of low-skilled labor. Those with the least skills and experience are the most likely to lose their jobs, work fewer hours or not be hired. In another Reason article, Brian Doherty explains that this is a thorny problem for charities providing transitional employment to workers with low skills or limited employability. He also notes the following:

“All sorts of jobs have elements of learning or training, especially at the entry level. Merely having a job at all can have value down the line worth enormously more than the wage you are currently earning in terms of a proven track record of reliable employability or moving up within a particular organization.”

The negative employment effects of a higher wage floor are greater if the employer cannot easily pass higher costs along to customers. That’s why firms in highly competitive markets (and their workers) are more vulnerable. This detriment is all the worse when a higher wage floor is imposed within a single jurisdiction, such as the city of St. Louis. Bordering municipalities stand to benefit from the distorted wage levels in the city, but the net effect will be worse than a wash for the region, as adjustments to the new, artificial conditions are not costless. Again, it is likely that the least capable workers and least resourceful firms will be harmed the most.

The negative effects of a higher wage floor are also greater when substitutes for low-skilled labor are available. Here is a video on the robot solution for fast food order-taking. In fact, today there are robots capable of preparing meals, mopping floors, and performing a variety of other menial tasks. Alternatively, more experienced workers may be asked to perform more menial tasks or work longer hours. Either way, the employer takes a hit. Ultimately, the best alternative for some firms will be to close.

The impact of the higher minimum on the wage rates of more skilled workers is likely to be muted. A correspondent of mine mentioned the consequences of wage compression. From the link:

“In some cases, compression (or inequity) increases the risk of a fight or flee phenomonon [sic]–disgruntlement culminating in union organizing campaigns or, in the case of flee, higher turnover as the result of employees quitting. … all too often, companies are forced to address the problem by adjusting their entire compensation systems–usually upward and across-the-board. .. While wage adjustments may sound good for those who do not have to worry about profits and losses, the real impact for a company typically means it must either increase productivity or lay people off.“

For those who doubt the impact of the minimum wage hike on employment decisions, consider this calculation by Mark Perry:

“The pending 67% minimum wage hike in LA (from $9 to $15 per hour by 2020), which is the same as a $6 per hour tax (or $12,480 annual tax per full-time employee and more like $13,500 per year with increased employer payroll taxes…)….“
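Perry’s figures check out under a standard 2,080-hour full-time year and an employer payroll tax share of roughly 7.65% (both my assumptions for illustration, not his stated inputs):

```python
# Checking the quoted LA figures (assumed inputs noted below).
wage_hike_per_hour = 15.0 - 9.0      # the $6-per-hour increase cited by Perry
full_time_hours = 2080               # assumption: 40 hours/week * 52 weeks
employer_payroll_tax = 0.0765        # assumption: employer FICA share

annual_cost = wage_hike_per_hour * full_time_hours
annual_cost_with_payroll_taxes = annual_cost * (1 + employer_payroll_tax)

print(annual_cost)                            # 12480.0, matching "$12,480 annual tax"
print(round(annual_cost_with_payroll_taxes))  # 13435, roughly the "$13,500 per year"
```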

Don Boudreaux offers another interesting perspective, asking whether a change in the way the minimum wage is enforced might influence opinion:

“... if these policies were enforced by police officers monitoring workers and fining those workers who agreed to work at hourly wages below the legislated minimum – would you still support minimum wages?”

Proponents of a higher minimum wage often cite a study from 1994 by David Card and Alan Krueger purporting to show that a higher minimum wage in New Jersey actually increased employment in the fast food industry. Tim Worstall at Forbes discussed a severe shortcoming of the Card/Krueger study (HT: Don Boudreaux): Card and Krueger failed to include more labor-intensive independent operators in their analysis, instead focusing exclusively on employment at fast-food chain franchises. The latter were likely to benefit from the failure of independent competitors.

Another common argument put forward by supporters of higher minimum wages is that economic theory predicts positive employment effects if employers have monopsony power in hiring labor, or power to influence the market wage. This is a stretch: it describes labor market conditions in very few localities. Of course, any employer in an unregulated market is free to offer noncompetitive wages, but they will suffer the consequences of taking less skilled and less experienced hires, higher labor turnover and ultimately a competitive disadvantage. Such forces lead rational employers to offer competitive wages for the skills levels they require.

Minimum wages are also defended as an anti-poverty program, but this is a weak argument. A recent post at Coyote Blog explains “Why Minimum Wage Increases are a Terrible Anti-Poverty Program”. Among other points:

“Most minimum wage earners are not poor. The vast majority of minimum wage jobs are held as second jobs or held by second earners in a household or by the kids of affluent households. …

Most people in poverty don’t make the minimum wage. In fact, the typically [sic] hourly income of the poor appears to be around $14 an hour. The problem is not the hourly rate, the problem is the availability of work. The poor are poor because they don’t get enough job hours. …

Many young workers or poor workers with a spotty work record need to build a reliable work history to get better work in the future…. Further, many folks without much experience in the job market are missing critical skills — by these I am not talking about sophisticated things like CNC machine tool programming. I am referring to prosaic skills you likely take for granted (check your privilege!) such as showing up reliably each day for work, overcoming the typical frictions of working with diverse teammates, and working to achieve management-set goals via a defined process.”

Some of the same issues are highlighted by the Show-Me Institute, a Missouri think tank, in “Minimum Wage Increases Not Effective at Fighting Poverty”.

A higher minimum wage is one of those proposals that “sound good” to the progressive mind, but are counter-productive in the extreme. The cities of St. Louis and Kansas City would do well to avoid market manipulation that is likely to backfire.
