I was happy to see Noah Smith’s recent post on the graces of comparative advantage and the way it should mediate the long-run impact of AI on job prospects for humans. However, I’m embarrassed to have missed his post when it was published in March (and I also missed a New York Times piece about Smith’s position).

I said much the same thing as Smith in my post two weeks ago about the persistence of a human comparative advantage, but I wondered why the argument hadn’t been made prominently by economists. I discussed it myself about seven years ago. But alas, I didn’t see Smith’s post until last week!

I highly recommend it, though I quibble on one or two issues. Primarily, I think Smith qualifies his position based on a faulty historical comparison. Later, he doubles back to offer a kind of guarantee after all. Relatedly, I think Smith mischaracterizes the impact of energy costs on comparative advantages, and more generally the impact of the resources necessary to support a human population.

We Specialize Because…

Smith encapsulates the underlying phenomenon that will provide jobs for humans in a world of high automation and generative AI: “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” He tells technologists “… it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now …”

… often, but probably transformed in fundamental ways by AI, and also doing many other new kinds of work that can’t be foreseen at present. Tyler Cowen believes the most important macro effects of AI will be from “new” outputs, not improvements in existing outputs. That emphasis doesn’t necessarily conflict with Smith’s narrative, but again, Smith thinks people will do many of the same jobs as today in a world with advanced AI.
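
To make the mechanics concrete, here is a minimal numerical sketch, with productivity figures invented purely for illustration. It shows why an absolute advantage at every task does not confer a comparative advantage at every task:

```python
# Hypothetical hourly output rates: the AI is absolutely better at both
# tasks, yet the human retains a comparative advantage in one of them.
rates = {
    "AI":    {"widgets": 10.0, "reports": 20.0},
    "human": {"widgets": 1.0,  "reports": 4.0},
}

for producer, r in rates.items():
    oc_widget = r["reports"] / r["widgets"]  # reports forgone per widget
    oc_report = r["widgets"] / r["reports"]  # widgets forgone per report
    print(f"{producer}: 1 widget costs {oc_widget} reports; "
          f"1 report costs {oc_report} widgets")

# AI:    1 widget costs 2.0 reports; 1 report costs 0.5 widgets
# human: 1 widget costs 4.0 reports; 1 report costs 0.25 widgets
```

The human’s opportunity cost of a report (0.25 widgets) is lower than the AI’s (0.5 widgets), so the human holds the comparative advantage in reports despite being slower at everything. Gains from trade follow: the AI specializes in widgets, the human in reports.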

Smith’s Non-Guarantee

Smith hedges, however, in a section of his post entitled “‘Possible’ doesn’t mean guaranteed”. This despite his later assertion that superabundance would not eliminate jobs for humans. That might seem like a separate issue, but it’s strongly intertwined with the declining-AI-cost argument underlying his hedge. More on that below.

On his reluctance to “guarantee” that humans will have jobs in an AI world, Smith links to a 2013 Tyler Cowen post, “Why the theory of comparative advantage is overrated”. For example, Cowen asks, why do we ever observe long-term unemployment if comparative advantage rules the day? Of course there are many reasons why we observe departures from the predicted results of comparative advantage: incentives are often manipulated by governments, and people differ drastically in their capacities and motivation.

But Cowen also cites a theoretical weakness of comparative advantage: inputs are substitutable (or complementary) by degrees, and the degree might change under different market conditions. An implication is that “comparative advantages are endogenous to trade”, specialization, and prices. Fair enough, but one could say the same thing about any supply curve. And if equilibria exist in input markets, these endogenous forces tend toward comparative advantages and specializations that balance the costs and benefits of production and trade. These processes might be constrained by various frictions and interventions, and their dynamics might be complex and lengthy, but that doesn’t invalidate their role in establishing specializations and trade.

The Glue Factory

Smith concerns himself mainly with another one of Cowen’s “failings of comparative advantage”: “They do indeed send horses to the glue factory, so to speak.” The gist here is that when a new technology, motorized transportation, displaced draft horses, there was no “wage” low enough to save the jobs performed by horses. Smith says horses were too costly to support (feed, stables, etc…), so their comparative advantage at “pulling things” was essentially worthless.

True, but comparing outmoded draft horses to humans in a world of AI is not quite appropriate. First, feedstock for a “glue factory” had better not be an alternative use for humans whose comparative advantages become worthless. We’ll have to leave that question as an imperative for the alignment community.

Second, horses do not have versatile skill sets, so the comparison here is inapt due to their lack of alternative uses as capital assets. Yes, horses can offer other services (racing, riding, nostalgic carriage rides), but sadly, the vast bulk of work horses were “one-trick ponies”. Most draft horses probably had an opportunity cost of less than zero, given the aforementioned costs of supporting them. And it should be obvious that a single-use input has a comparative advantage only in its single use, and only when that use happens to be the state-of-the-art, or at least opportunity-cost competitive.
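
In hypothetical numbers (mine, not Smith’s or Cowen’s), the draft horse’s predicament is easy to state: with only one marketable output, a negative net value cannot be bid away by any wage.

```python
# Invented figures for a single-use input whose only output's market
# value has collapsed below its upkeep.
pulling_value_per_year = 200   # market value of "pulling" after the automobile
upkeep_cost_per_year = 500     # feed, stabling, care

net_value = pulling_value_per_year - upkeep_cost_per_year
print(net_value)  # -300: no "wage" low enough saves the job
```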

The drivers, on the other hand, had alternatives, and saw their comparative advantage in horse-driving occupations plunge with the advent of motorized transport. With time, it’s certain many of them found new jobs; perhaps some went on to drive motorized vehicles. The point is that humans have alternatives, the number depending only on their ability to learn new crafts and perhaps move to a new location. Thus, as Smith says, “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” But not draft horses in a motorized world, and not square pegs in a world of round holes.

AI Producer Constraints

That brings us to the topic of what Smith calls producer-specific constraints, which place limits on the amount and scope of an input’s productivity. For example, in my last post, there was only one super-talented Harvey Specter, so he’s unlikely to replace you and keep doing his own job. Thus, time is a major constraint. For Harvey or anyone else, the time constraint affects the slope of the tradeoff (and opportunity costs) between one type of specialization versus another.
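
In the simplest textbook formalization (my gloss, not anything specific from Smith’s post), a fixed time budget pins down that tradeoff directly. The rates below are hypothetical:

```python
# A linear production-possibility frontier under a time budget.
T = 40.0        # hours available per week
rate_a = 5.0    # units of task A produced per hour
rate_b = 10.0   # units of task B produced per hour

# Feasible outputs satisfy x_a/rate_a + x_b/rate_b = T, so each extra
# unit of A costs rate_b/rate_a units of B:
opportunity_cost_of_a = rate_b / rate_a
print(opportunity_cost_of_a)  # 2.0 units of B forgone per unit of A
```

Here the relative rates set the slope of the tradeoff, while the time budget caps how much can be produced in total.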

Draft horses operated under the constraints of land, stable, and feed requirements, which can all be viewed as long-run variable costs. The alternative use for horses at the glue factory did not have those costs.

Humans reliant on wages must feed and house themselves, so those costs also represent constraints, but they probably don’t change the shape of the tradeoff between one occupation and another. That is, they probably do not alter human comparative advantages. Granted, some occupations come with strong expectations among associates or clients regarding an individual’s lifestyle, but that usually represents much more than basic life support. At the other end of the spectrum, displaced workers will take actions along various margins: minimize living costs; rely on savings; avail themselves of charity or whatever social safety net exists; and ultimately find new positions in which they hold comparative advantages.

The Compute Constraint

In the case of AI agents, the key constraint cited by Smith is “compute”, or computing resources like CPUs and GPUs. Advancements in compute have driven the AI revolution, allowing AI models to train on increasingly large data sets with ever-greater amounts of computation. In fact, by one measure, floating point operations (FLOPs) per dollar, compute has become drastically cheaper, almost doubling every two years. Perhaps I misunderstand him, but Smith seems to assert the opposite: that compute costs are increasing. Regardless, compute is scarce, and will remain scarce, because advancements in AI will require vast increases in training. This author explains that falling compute costs will be more than offset by exponential increases in training requirements, even as capabilities per unit of compute continue to trend upward.
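
To see how falling unit costs and exploding training requirements can coexist, here is a back-of-the-envelope sketch. The two-year doubling in FLOPs per dollar comes from the discussion above; the four-fold annual growth in frontier training requirements is purely an assumed figure for illustration:

```python
# Relative trends, indexed to 1.0 at year 0.
doubling_years = 2.0    # FLOPs per dollar doubles every two years (from text)
training_growth = 4.0   # assumed 4x/year growth in training FLOPs (hypothetical)

for t in range(0, 11, 2):
    cost_per_flop = 0.5 ** (t / doubling_years)   # falls steadily
    training_flops = training_growth ** t         # grows much faster
    training_cost = cost_per_flop * training_flops
    print(f"year {t:2d}: cost/FLOP x{cost_per_flop:.3f}, "
          f"training FLOPs x{training_flops:,.0f}, "
          f"training cost x{training_cost:,.0f}")

# By year 10, cost per FLOP has fallen ~97%, yet total training cost is
# up ~33,000x -- the sense in which compute stays scarce.
```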

Every AI agent will require compute, and while advancements are enabling explosive growth in AI capabilities, scarce compute places constraints on the kinds of AI development and deployment that some see as a threat to human jobs. In other words, compute scarcity can change the shape of the tradeoffs between various AI applications and thus, comparative advantages.

The Energy Constraint

Another producer constraint on AI is energy. Highly complex applications, perhaps requiring greater training, physical dexterity, manipulation of materials, and judgement, will certainly involve a steeper compute-and-energy tradeoff against simpler applications. Smith, however, at one point dismisses energy as a differential producer constraint because “… humans also take energy to run.” That is a reference to absolute energy requirements across inputs (AI vs. human), not differential requirements for an input across different outputs. Only the latter impinge on the tradeoffs or opportunity costs facing an input. The input with the lowest opportunity cost for a particular output then has a comparative advantage in that output. However, it’s not always clear whether the energy tradeoff across outputs will be more or less skewed for humans than for AI, so this might or might not influence a human comparative advantage.
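
A small numerical illustration of the absolute-versus-differential distinction, with all energy figures invented: what matters for comparative advantage is each input’s ratio of energy requirements across outputs, not which input uses more energy overall.

```python
# Hypothetical energy (kWh) required to produce one unit of each output.
energy = {
    "AI":    {"simple": 1.0, "complex": 50.0},
    "human": {"simple": 5.0, "complex": 25.0},
}

for producer, e in energy.items():
    skew = e["complex"] / e["simple"]
    print(f"{producer}: a complex unit costs {skew:.0f}x a simple one")

# AI:    a complex unit costs 50x a simple one
# human: a complex unit costs 5x a simple one
```

In this example the AI uses less absolute energy on the simple task, yet its steeper tradeoff across outputs is the margin that can tilt the comparative advantage in complex work toward the human.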

Later, however, Smith speculates that AI might bid up the cost of energy so high that “humans would indeed be immiserated en masse.” That position seems inconsistent. In fact, if AI’s energy demands are that intensive, they are more likely to dampen the growth in demand for AI agents and to strengthen the human comparative advantage, because the most energy-intensive AI applications will be disadvantaged.

And again, there is Smith’s caution regarding the energy required for human life support. Is that a valid long-run variable cost associated with comparative advantages possessed by humans? It’s not wrong to include fertility decisions in the long-run aggregate human labor supply function in some fashion, but it doesn’t imply that energy requirements will eliminate comparative advantages. Those will still exist.

Hype, Or Hyper-Growth?

AI has come a long way over the past two years, and while its prospective impact strikes some as hyped thus far, it has the potential to bring vast gains across a number of fields within just a few years. According to this study, explosive economic growth on the order of 30% annually is a real possibility within decades, as generative AI is embedded throughout the economy. “Unprecedented” is an understatement for that kind of expansive growth. Dylan Matthews in Vox surveys the arguments as to how AI will lead to super-exponential economic growth. This is the kind of scenario that would give rise to superabundance.

I noted above that Smith, despite his unwillingness to guarantee that human jobs will exist in a world of generative AI, asserts (in an update) at the bottom of his post that a superabundance of AI (and abundance generally) would not threaten human comparative advantages. This superabundance is a case of decreasing costs of compute and AI deployment. Here Smith says:

The reason is that the more abundant AI gets, the more value society produces. The more value society produces, the more demand for AI goes up. The more demand goes up, the greater the opportunity cost of using AI for anything other than its most productive use. 

As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there’s massively huge amounts of AI in the world, is a world that’s rich and prosperous to a degree that we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at its least productive uses. 

Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.

Here, Smith seems to be backing off his earlier hedge. Some of that spending will be in the form of fabulous investment projects of the kinds I mentioned in my post, and smaller ones as well, all enabled by AI. But the key point is that comparative advantages will not go away, and that means human inputs will continue to be economically useful.

I referenced Andrew Mayne in my last post. He contends that the income growth made possible by AI will ensure that plenty of jobs are available for humans. He mentions comparative advantage in passing, but he centers his argument around applications in which human workers and AI will be strong complements in production, as will sometimes be the case.

A New Age of Worry

The economic success of AI is subject to a number of contingencies. Most important is that AI alignment issues are adequately addressed. That is, the “self-interest” of any agentic AI must align with the interests of human welfare. Do no harm!

The difficulty of universal alignment is illustrated by the inevitability of competition among national governments for AI supremacy, especially in the area of AI-enabled weaponry and espionage. The national security implications are staggering.

A couple of Smith’s biggest concerns are the social costs of adjusting to the economic disruptions AI is sure to bring, as well as its implications for inequality. Humans will still have comparative advantages, but there will be massive changes in the labor market, and transitions are likely to involve spells of unemployment and interrupted incomes for some. The speed and strength of the AI revolution may well create social upheaval. That will create incentives for politicians to restrain the development and adoption of AI, and indeed, we already see the stirrings of that today.

Finally, Smith worries that the transition to AI will bring massive gains in wealth to the owners of AI assets, while workers with few skills are likely to languish. I’m not sure that’s consistent with his optimism regarding income growth under AI, and inequality matters much less when incomes are rising generally. Still, the concern is worthy of a more detailed discussion, which I’ll defer to a later post.