There’s a hopeful narrative making the rounds that artificial intelligence will prove to be such a boon to the economy that we need not worry about high levels of government debt. AI investment is already having a substantial economic impact. Jason Thomas of Carlyle says that AI capital expenditures on such things as data centers, hardware, and supporting infrastructure account for about a third of second quarter GDP growth (preliminarily a 3% annual rate). Furthermore, he says relevant orders are growing at an annual rate of about 40%. The capex boom may continue for a number of years before leveling off. In the meantime, we’ll begin to see whether AI is capable of boosting productivity more broadly.
Unfortunately, even with this kind of investment stimulus, there’s no assurance that AI will create adequate economic growth and tax revenue to end federal deficits, let alone pay down the $37 trillion public debt. That thinking puts too much faith in a technology that is unproven as a long-term economic engine. It also reflects a naive attitude toward managing debt that now carries an annual interest cost of almost $1 trillion, accounting for about half of the federal budget deficit.
Boom Times?
Predictions of AI’s long-term macro impact are all over the map. Goldman Sachs estimates a boost in global GDP of 7% over 10 years, which is not exactly aggressive. Daron Acemoglu has been even more conservative, estimating a gain of 0.7% in total factor productivity over 10 years. Tyler Cowen has been skeptical about the impact of AI on economic growth. For an even more pessimistic take, see these comments.
In July, however, Seth Benzell of the Stanford Digital Economy Lab discussed some simulations showing impressive AI-induced growth (see chart at top). The simulations project additional U.S. GDP growth of between 1% and 3% annually over the next 75 years! The largest boost in growth occurs from now through the 2050s. This would produce a major advance in living standards. It would also eliminate the federal deficit and cure our massive entitlement insolvency, but the result comes with heavy qualifications. In fact, Benzell ultimately throws cold water on the notion that AI growth will be strong enough to reduce or even stabilize the public debt-to-GDP ratio.
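To get a feel for the magnitude, here’s a minimal back-of-the-envelope sketch of how that extra growth compounds. The 2% baseline growth rate and the 75-year horizon are my own assumptions for illustration, not outputs of Benzell’s model.

```python
# Purely illustrative: the compound effect of additional annual growth over 75 years.
# The 2% baseline growth rate is an assumption, not a model output.
baseline = 0.02   # assumed baseline real GDP growth rate
horizon = 75      # years, per the simulations cited above

for extra in (0.01, 0.03):  # 1% and 3% of additional annual growth
    ratio = ((1 + baseline + extra) / (1 + baseline)) ** horizon
    print(f"+{extra:.0%} extra growth for {horizon} years -> GDP about {ratio:.1f}x the baseline path")
# Roughly 2.1x the baseline path at +1% extra growth, and nearly 9x at +3%.
```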
The Scarcity Spoiler
The big hitch has to do with the scarcity of capital, which I’ve described as an impediment to widespread AI application. Competition for capital will drive interest rates up (3% – 4%, according to Benzell’s model). Ongoing needs for federal financing intensify that effect. But it might not be so bad, according to Benzell, if climbing rates are accompanied by heightened productivity powered by AI. Then, tax receipts just might keep up with or exceed the explosion in the government’s interest obligations.
A further complication cited by Benzell lurks in insatiable demands for public spending, and politicians who simply can’t resist the temptation to buy votes via public largesse. Indeed, as we’ve already seen, government will try to get in on the AI action, channeling taxpayer funds into projects deemed to be in the public interest. And if there are segments of the work force whose jobs are eliminated by AI, there will be pressure for public support. So even if AI succeeds in generating large gains in productivity and tax revenue, there’s very little chance we’ll see a contagion of fiscal discipline in Washington DC. This will put more upward pressure on interest rates, giving rise to the typical crowding out phenomenon, curtailing private investment in AI.
Playing Catch-Up
The capex boom must precede much of the hoped-for growth in productivity from AI. Financing comes first, which means that rates are likely to rise sooner than productivity gains can be expected. And again, competition from government borrowing will crowd out some private AI investment, slowing potential AI-induced increases in tax revenue.
There’s no chance of the converse: that AI investment will crowd out government borrowing! That kind of responsiveness is not what we typically see from politicians. It’s more likely that ballooning interest costs and deficits generally will provoke even more undesirable policy moves, such as money printing or rate ceilings.
The upshot is that higher interest rates will cause deficits to balloon before tax receipts can catch up. And as for tax receipts, the intangibility of AI will create opportunities for tax flight to more favorable jurisdictions, a point well understood by Benzell. As attorneys Bradford S. Cohen and Megan Jones put it:
“Digital assets can be harder to find and more easily shifted offshore, limiting the tax reach of the U.S. government.”
AI Growth Realism
Benzell’s trepidation about our future fiscal imbalances is well founded. However, I also think Benzell’s modeled results, which represent a starting point in his analysis of AI and the public debt, are too optimistic an assessment of AI’s potential to boost growth. As he says himself,
“… many of the benefits from AI may come in the form of intangible improvements in digital consumption goods. … This might be real growth, that really raises welfare, but will be hard to tax or even measure.”
This is unlikely to register as an enhancement to productivity. Yet Benzell somehow buys into the argument that AI will lead to high levels of unemployment. That’s one of his reasons for expecting higher deficits.
My view is that AI will displace workers in some occupations, but it is unlikely to put large numbers of humans permanently out of work and into state support. That’s because the opportunity cost of many AI applications is and will remain quite high. It will have to compete for financing not only with government and more traditional capex projects, but with various forms of itself. This will limit both the growth we are likely to reap from AI and losses of human jobs.
Sovereign Wealth Fund
I have one other bone to pick with Benzell’s post. That’s in regard to his eagerness to see the government create a sovereign wealth fund. Here is his concluding paragraph:
“Instead of contemplating a larger debt, we should instead be talking about a national sovereign wealth fund, that could ‘own the robots on behalf of the people’. This would both boost output and welfare, and put the welfare system on an indefinitely sustainable path.”
Whether the government sells federal assets or collects booty from other kinds of “deals”, the very idea of accumulating risk assets in a sovereign wealth fund undermines the objective to reduce debt. It will be a struggle for a sovereign wealth fund to consistently earn cash returns to compensate for interest costs and pay down the debt. This is especially unwise given the risk of rising rates. Furthermore, government interests in otherwise private concerns will bring cronyism, displacement of market forces by central planning, and a politicization of economic affairs. Just pay off the debt with whatever receipts become available. This will free up savings for investment in AI capital and hasten the hoped-for boom in productivity.
Summary
AI’s contribution to economic growth probably will be inadequate and come too late to end government budget deficits and reduce our burgeoning public debt. To think otherwise seems far fetched in light of our historical inability to restrain the growth of federal spending. Interest on the federal debt already accounts for about half of the annual budget deficit. Refinancing the existing public debt will entail much higher costs if AI capex continues to grow aggressively, pushing interest rates higher. These dynamics make it pretty clear that AI won’t provide an easy fix for federal deficits and debt. In fact, ongoing federal borrowing needs will sop up savings needed for AI development and diffusion, even as the capital needed for AI drives up the cost of funds to the government. It’s a shame that AI won’t be able to crowd out government.
Every now and then I grind my axe against the proposition that AI will put humans out of work. It’s a very fashionable view, along with the presumed need for government to impose “robot taxes” and provide everyone with a universal basic income for life. The thing is, I sense that my explanations for rejecting this kind of narrative have been a little abstruse, so I’m taking another crack at it now.
Will Human Workers Be Obsolete?
The popular account envisions a world in which AI replaces not just white-collar technocrats, but by pairing AI with advanced robotics, it replaces workers in the trades as well as manual laborers. We’ll have machines that cure, litigate, calculate, forecast, design, build, fight wars, make art, fix your plumbing, prune your roses, and replicate. They’ll be highly dextrous, strong, and smart, capable of solving problems both practical and abstract. In short, AI capital will be able to do everything better and faster than humans! The obvious fear is that we’ll all be out of work.
I’m here to tell you it will not happen that way. There will be disruptions to the labor market, extended periods of joblessness for some individuals, and ultimately different patterns of employment. However, the chief problem with the popular narrative is that AI capital will require massive quantities of resources to produce, train, and operate.
Even without robotics, today’s AIs require vast flows of energy and other resources, and that includes a tremendous amount of expensive compute. The needed resources are scarce and highly valued in a variety of other uses. We’ll face tradeoffs as a society and as individuals in allocating resources both to AI and across various AI applications. Those applications will have to compete broadly and amongst themselves for priority.
AI Use Cases
There are many high-value opportunities for AI and robotics, such as industrial automation, customer service, data processing, and supply chain optimization, to name a few. These are already underway to a significant extent. To that, however, we can add medical research, materials research, development of better power technologies and energy storage, and broad deployment in delivering services to consumers and businesses.
In the future, with advanced robotics, AI capital could be deployed in domains that carry high risks for human labor, such as the construction of high-rise buildings and underwater structures, or rescue operations. This might include such things as construction of solar platforms and large transports in space, or the preparation of space habitats for humans on other worlds.
Scarcity
There is no end to the list of potential applications of AI, but neither is there an end to the list of potential wants and aspirations of humanity. Human wants are insatiable, which sometimes provokes ham-fisted efforts by many governments to curtail growth. We have a long way to go before everyone on the planet lives comfortably. But even then, people’s needs and desires will evolve once previous needs are satisfied, or as technology changes lifestyles and practices. New approaches and styles drive fashions and aesthetics generally. There are always individuals who will compete for resources to experiment and to try new things. And the insatiability of human wants extends beyond the strictly private level. Everyone has an opinion about unsatisfied needs in the public sphere, such as infrastructure, maintenance, the environment, defense, space travel, and other dimensions of public activity.
Futurists have predicted that the human race will seek to become a so-called Type I civilization, capable of harnessing all of the energy on our planet. Then there will be the quest to harness all the energy within our solar system (a Type II civilization). Ultimately, we’ll seek to go beyond that by attempting to exploit all the energy in the Milky Way galaxy. Such an expansion of our energy demands would demonstrate how our wants always exceed the resources we have the ability to exploit.
In other words, scarcity will always be with us. The necessity of facing tradeoffs won’t ever be obviated, and prices will always remain positive. The question of dedicating resources to any particular application of AI will bring tradeoffs into sharper relief. The opportunity cost of many “lesser” AI and robotics applications will be quite high relative to their value to investors. Simply put, many of those applications will be rejected because there will be better uses for the requisite energy and other resources.
Tradeoffs
Again, it will be impossible for humans to accomplish many of the tasks that AIs will perform, or to match the sheer productivity of AIs in doing so. Therefore, AI will have an absolute advantage over humans in all of those tasks.
However, there are many potential applications of AI that are of comparatively low value. These include a variety of low-skill tasks, but also tasks that require some dexterity or continuous judgement and adjustment. Operationalizing AI and robots to perform all these tasks, and diverting the necessary capital and energy away from other uses, would have a tremendously high opportunity cost. Human opportunity costs will not be so high. Thus, people will have a comparative advantage in performing the bulk if not all of these tasks.
Sure, there will be novelty efforts and test cases to train robots to do plumbing or install burglar alarm systems, and at some point buyers might wish to have robots prune their roses. Some people are already amenable to having humanoid robots perform sex work. Nevertheless, humans will remain competitive at these tasks due to the comparatively high opportunity costs faced by AI capital.
There will be many other domains in which humans will remain competitive. Once more, that’s because the opportunity costs for AI capital and other resources will be high. This includes many of the skilled trades, caregivers, and a great many management functions, especially at small companies. Their productivity will be enhanced by AI tools, but those jobs will not be decimated.
The key here is understanding that 1) capital and resources generally are scarce; 2) high value opportunities for AI are plentiful; and 3) the opportunity cost of funding AI in many applications will be very high. Humans will still have a comparative advantage in many areas.
Who’s the Boss?
There are still other ways in which human labor will always be required. One in particular involves the often complementary nature of AI and human inputs. People will have roles in instructing and supervising AIs, especially in tasks requiring customization and feedback. A key to assuring AI alignment with the objectives of almost any pursuit is human review. These kinds of roles are likely to be compensated in line with the complexity of the task. This extends to the necessity of human leadership of any organization.
That brings me to the subject of agentic and fully autonomous AI. No matter how sophisticated they get, AIs will always be the product of machines. They’ll be a kind of capital for which ownership should be confined to humans or organizations representing humans. We must be their masters. Disclaiming ownership and control of AIs, and granting agentic AIs the same rights and freedoms as people (as many have imagined) is unnecessary and possibly dangerous. AIs will do much productive work, but that work should be on behalf of human owners, and human labor will be deployed to direct and assess that work.
AIs (and People) Needing People
The collaboration between AIs and humans described above will manifest more broadly than anything task-specific, or anything we can imagine today. This is typical of technological advance. First-order effects often include job losses as new innovations enhance productivity or replace workers outright, but typically new jobs are created as innovations generate new opportunities for complementary products and services both upstream in production and downstream among ultimate users. In the case of AI, while much of this work might be performed by other AIs, at a minimum these changes will require guidance and supervision by humans.
In addition, consumers tend to have an aesthetic preference for goods and services produced by humans: craftsmen, artists, and entertainers. For example, if you’ve ever shopped for an oriental rug, you know that hand-knotted rugs are more expensive than machine-woven rugs. Durability is a factor as well as uniqueness, the latter being a hallmark of human craftspeople. AI might narrow these differences over time, but the “human touch” will always have value relative to “comparable” AI output, even at a significant disadvantage in terms of speed and uncertainty regarding performance. The same is true of many other forms, such as sports, dance, music, and the visual arts. People prefer to be entertained by talented people, rather than highly engineered machines. The “human touch” also has advantages in customer-facing transactions, including most forms of service and high-level sales/financial negotiations.
Owning the Machines
Finally, another word about AI ownership. An extension of the fashionable narrative that AIs will wholly replace human workers is that government will be called upon to tax AI and provide individuals with a universal basic income (UBI). Even if human labor were to be replaced by AIs, I believe that a “classic” UBI would be the wrong approach. Instead, all humans should have an ownership stake in the capital stock. This is wealth that yields compound growth over time and produces returns that make humans less reliant on streams of labor income.
Savings incentives (and disincentives to consumption) are a big step in encouraging more widespread ownership of capital. However, if direct intervention is necessary, early endowments of capital would be far preferable to a UBI because they would largely be saved, fostering economic growth, and they would create better incentives than a UBI. Along those lines, President Trump’s Big Beautiful Bill, which is now law, has established “Baby Bonds” for all American children born in 2025 – 2028, initially funded by the federal government with $1,000. Of course, this is another unfunded federal obligation on top of the existing burden of a huge public debt and ongoing deficits. Given my doubts about the persistence of AI-induced job losses, I reject government establishment of both a UBI and universal endowments of capital.
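To see why even a small endowment can matter, here’s a minimal compounding sketch. The rates of return are my own assumptions, not anything specified in the law.

```python
# Illustrative compounding of a one-time $1,000 endowment; the return assumptions are mine.
endowment = 1_000.0

for annual_return in (0.05, 0.07):      # assumed real annual returns
    for years in (18, 40, 65):
        value = endowment * (1 + annual_return) ** years
        print(f"{annual_return:.0%} for {years} years: ${value:,.0f}")
# At 7%, for example, $1,000 grows to roughly $15,000 after 40 years and over $80,000 after 65.
```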
Summary
Capital and energy are scarce, so the tremendous resource requirements of AI and robotics mean that the real-world opportunity costs of many AI applications will remain impractically high. The tradeoffs will be so steep that they’ll leave humans with comparative advantages in many traditional areas of employment. Partly, these will come down to a difference in perceived quality owing to a preference for human interaction and human performance in a variety of economic interactions, including patronage of the art and athleticism of human beings. In addition, AIs will open up new occupations never before contemplated. We won’t be out of work. Nevertheless, it’s always a good idea to accumulate ownership in productive assets, including AI capital, and public policy should do a better job of supporting the private initiative to do so.
In this case, the “A” stands for Altman. Now Sam Altman is no slouch, but he’s taken a few ill-considered positions on public policy. Altman, the CEO of OpenAI, wrote a blog post back in 2021 entitled “Moore’s Law For Everything” in which he predicted that AI will feed an explosion of economic growth. He also said AI will put a great many people out of work and drive down the price of certain kinds of labor. Furthermore, he fears that the accessibility of AI will be heavily skewed against the lowest socioeconomic classes. In later interviews (see here and here), Altman is somewhat more circumspect about those predictions, but the general outline is the same: despite exceptional growth of GDP and wealth, he envisions job losses, an underclass of AI-illiterates, and a greater degree of income and wealth inequality.
Not Quite Like That
We’ve yet to see an explosion of growth, but it’s still very early in the AI revolution. The next several years will be telling. AI holds the potential to vastly increase our production possibilities over the course of the next few decades. For that and other reasons, I don’t buy the more dismal aspects of Altman’s scenario, as my last two posts make clear (here and here).
There will be plenty of jobs for people because humans will have comparative advantages in various areas of production. AI agents might have absolute advantages across most or even all jobs, but a rational deployment would have AI agents specialize only where they have a comparative advantage.
Scarcity will not be the sort of anachronism envisioned by some AI futurists, Altman included, and scarcity of AI agents (and their inputs) will necessitate their specialization in certain tasks. The demand for AI agents will be quite high, and their energy and “compute” requirements will be massive. AI agents will face extremely high opportunity costs in other tasks, leaving many occupations open for human labor, to say nothing of abundant opportunities for human-AI collaboration.
However, I don’t dismiss the likelihood of disruptions in markets for certain kinds of labor if the AI revolution proceeds as rapidly as Altman thinks it will. Many workers would be displaced, and it would take time, training, and a willingness to adapt for them to find new opportunities. But new kinds of jobs for people will emerge with time as AI is embedded throughout the economy.
Altman’s Rx
Altman’s somewhat pessimistic outlook for human employment and inequality leads him to make a couple of recommendations:
1) Ownership of capital must be more broadly distributed.
2) Capital and land must be taxed, potentially replacing income taxes, but primarily to fund equity investments for all Americans.
Here I agree with the spirit of #1. Broad ownership of capital is desirable. It allows greater participation in the capitalist system, which fosters political and economic stability. And wider access to capital, whether owned or not, allows a greater release of entrepreneurial energy. It also diversifies incomes and reduces economic dependency.
Altman proposes the creation of an American Equity Fund (AEF) to hold the proceeds of taxes on land and corporate assets for the benefit of all Americans. I’ll get to the taxes in a moment, but in discussing the importance of educating the public on the benefits of compounding, Altman seems to imply that assets in AEF would be held in individual accounts, as opposed to a single “public” account controlled by the federal government. Individual accounts would be far preferable, but it’s not clear how much control Altman would grant individuals in managing their accounts.
To Kill a Golden Goose
Taxes on capital are problematic. Capital can only be accumulated over time by saving out of income. Thus, as Michael Munger points out, as a general proposition under an income tax, all capital has already been taxed once. And we tax the income from capital at both the corporate and individual level. So corporate income is already double taxed: corporate profits are taxed along with dividend payments to shareholders.
Altman proposed in his 2021 blog post to levy a tax of 2.5% on the market value of publicly traded corporations each year. The tax would be payable in cash or in corporate shares to be placed into the AEF. The latter would establish a kind of UnLiquidated Tax Reserve Account (ULTRA), which Munger discusses in the article linked above (my bracketed x% in the quote here):
“Instead of taking [x%] of the liquidated value of the wealth, the state would simply take ownership of the wealth, in place. An ULTRA is a ‘notional equity interest.’ The government literally takes a portion of the value of the asset; that value will be paid to the state when the asset is sold. Now, it is only a ‘notional’ stake, in the sense that no shared right of control or voting rights exists. But for those who advocate for ULTRAs, in any situation where tax agencies are authorized to tax an asset today, but cannot because there is no evaluation event, the taxpayer could be made to pay with an ULTRA rather than with cash.”
This solves all sorts of administrative problems associated with wealth taxes, but it is draconian nevertheless. Munger quotes an example of a successful, privately-held business subject to a 2% wealth tax every year in the form of an ULTRA. After 20 years, the government owns more than a third of the company’s value. That represents a substantial penalty for success! However, the incidence of such a tax might fall more on workers and customers and less on business owners. And Altman would tax corporations more heavily than in Munger’s example.
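Here’s a minimal sketch of the arithmetic behind Munger’s example, assuming the 2% levy is taken in equity and compounds against the remaining private share each year (that compounding assumption is mine).

```python
# A rough reproduction of Munger's ULTRA example: a 2% wealth tax taken in equity each year,
# assumed to compound against the share of the firm still privately owned.
rate = 0.02
years = 20
private_share = (1 - rate) ** years          # fraction of the firm still privately owned
print(f"Government stake after {years} years: {1 - private_share:.1%}")   # about 33%
```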
A tax on wealth essentially penalizes thrift, reduces capital accumulation, and diminishes productivity and real wages. But another fundamental reason that taxes on capital should be low is that the supply of capital is elastic. A tax on capital discourages saving and encourages capital flight. The use of avoidance schemes will proliferate, and there will be intense pressure to carve out special exemptions.
A Regressive Dimension
Another drawback of a wealth tax is its regressivity with respect to returns on capital. To see this, we can convert a tax on wealth to an equivalent income tax on returns. Here is Chris Edwards on that point:
“Suppose a person received a pretax return of 6 percent on corporate equities. An annual wealth tax of 2 percent would effectively reduce that return to 4 percent, which would be like a 33 percent income tax—and that would be on top of the current federal individual income tax, which has a top rate of 37 percent.”
“… The effect is to impose lower effective tax rates on higher-yielding assets, and vice versa. If equities produced returns of 8 percent, a 2 percent wealth tax would be like a 25 percent income tax. But if equities produced returns of 4 percent, the wealth tax would be like a 50 percent income tax. People with the lowest returns would get hit with the highest tax rates, and even people losing money would have to pay the wealth tax.”
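The conversion in Edwards’ example is just the wealth tax rate divided by the pretax return. A quick sketch of the arithmetic:

```python
# Edwards' conversion: a wealth tax restated as an equivalent tax on the asset's pretax return.
wealth_tax = 0.02                            # 2% annual wealth tax

for pretax_return in (0.04, 0.06, 0.08):
    equivalent_rate = wealth_tax / pretax_return
    print(f"Pretax return {pretax_return:.0%}: equivalent income tax rate {equivalent_rate:.0%}")
# 4% -> 50%, 6% -> 33%, 8% -> 25%: the lowest returns bear the highest effective rates.
```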
Edwards notes the extreme inefficiency of wealth taxes demonstrated by the experience of a number of OECD countries. There are better ways to increase revenue and the progressivity of taxes. The best alternative is a tax on consumption, which rewards saving and capital accumulation, promoting higher wages and economic growth. Edwards dedicates a lengthy section of his paper to the superiority of a consumption tax.
Is a Wealth Tax Constitutional?
The constitutionality of a wealth tax is questionable as well. Steven Calabresi and David Schizer (C&S) contend that a federal wealth tax would qualify as a direct tax subject to the rule of apportionment, which would also apply to a federal tax on land. That is, under the U.S. Constitution, these kinds of taxes would have to be the same amount per capita in every state. Thus, higher tax rates would be necessary in less wealthy states.
C&S also note a major distinction between taxes on the value of wealth relative to income, excise, import, and consumption taxes. The latter are all triggered by transactions entered into voluntarily. They are avoidable in that sense, but not wealth taxes. Moreover, C&S believe the founders’ intent was to rely on direct taxes only as a backstop during wartime.
The recent Supreme Court decision in Moore v. United States raised questions as to whether the Court had set a precedent in favor of a potential wealth tax. According to earlier precedent, the Constitution forbade the “laying of taxes” on “unrealized” income or changes in wealth. However, in Moore, the Court ruled that undistributed profits from an ownership interest in a foreign business are taxable under the mandatory repatriation tax, signed into law by President Trump in 2017 as part of his tax overhaul package. But Justice Kavanaugh, who wrote the majority opinion, stated that the ruling was based on the foreign company’s status as a pass-through entity. The Wall Street Journal says of the decision:
“Five Justices open the door to taxing unrealized gains in assets. Democrats will walk through it.”
In a brief post, Calabresi laments Justice Ketanji Brown Jackson’s expansive view of the federal government’s taxing authority under the Sixteenth Amendment, which might well be shared by the Biden Administration. But the Wall Street Journal piece also describes Kavanaugh’s admonition regarding any expectation of a broader application of the Moore opinion:
“Justice Kavanaugh does issue a warning that ‘the Due Process Clause proscribes arbitrary attribution’ of undistributed income to shareholders. And he writes that his opinion should not ‘be read to authorize any hypothetical congressional effort to tax both an entity and its shareholders or partners on the same undistributed income realized by the entity.’”
Growth Is the Way, Not Taxes
AI growth will lead to rapid improvements in labor productivity and real wages in many occupations, despite a painful transition for some workers requiring occupational realignment and periods of unemployment and training. However, people will retain comparative advantages over AI agents in a number of existing occupations. Other workers will find that AI allows them to shift their efforts toward higher-value or even new aspects of their jobs. Along the same lines, there will be a huge variety of new occupations made possible by AI of which we’re only now catching the slightest glimpse. Michael Strain has emphasized this aspect of technological diffusion, noting that 60% of the jobs performed in 2018 did not exist in 1940. In fact, few of those “new” jobs could have been imagined in 1940.
AI entrepreneurs and AI investors will certainly capture a disproportionate share of gains from an AI revolution. Of course, they’ll have created a disproportionate share of that wealth. It might well skew the distribution of wealth in their favor, but that does not reflect negatively on the market process driving the outcome, especially because it will also give rise to widespread gains in living standards.
Altman goes wrong in proposing tax-funded redistribution of equity shares. Those taxes would slow AI development and deployment, reduce economic growth, and produce fewer new opportunities for workers. The surest way to effect a broader distribution of equity capital, and of equity in AI assets, is to encourage innovation, economic growth, and saving. Taxing capital more heavily is a very bad way to do that, whether from heavier taxes on income from capital, new taxes on unrealized gains, or (worst of all) from taxes on the value of capital, including ULTRA taxes.
Altman is right, however, to bemoan the narrow ownership of capital. As I mentioned above, he’s also on-target in saying that most people do not fully appreciate the benefits of thrift and the miracle of compounding. That represents both a failure of education and our calamitously high rate of time preference as a society. Perhaps the former can be fixed! However, thrift is a decision best left in private hands, especially to the extent that AI stimulates rapid income growth.
Killer Regulation
Altman also supports AI regulation, and I’ll cut him some slack by noting that his motives might not be of the usual rent-seeking variety. Maybe. Anyway, he’ll get some form of his wish, as legislators are scrambling to draft a “roadmap” for regulating AI. Some are calling for billions of federal outlays to “support” AI development, with a likely and ill-advised effort to “direct” that development as well. That is hardly necessary given the level of private investment AI is already attracting. Other “roadmap” proposals call for export controls on AI and protections for the film and recording industries.
These proposals are fueled by fears about AI, which run the gamut from widespread unemployment to existential risks to humanity. Considerable attention has been devoted to the alignment of AI agents with human interests and well being, but this has emerged largely within the AI development community itself. There are many alignment optimists, however, and still others who decry any race between tech giants to bring superhuman generative AI to market.
The Biden Administration stepped in last fall with an executive order on AI under emergency powers established by the Defense Production Act. The order ranges more broadly than national defense might necessitate, and it could have damaging consequences. Much of the order is redundant with respect to practices already followed by AI developers. It requires federal oversight over all so-called “foundation models” (e.g., ChatGPT), including safety tests and other “critical information”. These requirements are to be followed by the establishment of additional federal safety standards. This will almost certainly hamstring investment and development of AI, especially by smaller competitors.
Patrick Hedger discusses the destructive consequences of attempts to level the competitive AI playing field via regulation and antitrust actions. Traditionally, regulation tends to entrench large players who can best afford heavy compliance costs and influence regulatory decisions. Antitrust actions also impose huge costs on firms and can result in diminished value for investors in AI start-ups that might otherwise thrive as takeover targets.
Conclusion
Sam Altman’s vision of funding a redistribution of equity capital via taxes on wealth suffers from serious flaws. For one thing, it seems to view AI as a sort of exogenous boon to productivity, wholly independent of investment incentives. Taxing capital would inhibit investment in new capital (and in AI), diminish growth, and thwart the very goal of broad ownership Altman wishes to promote. Any effort to tax capital at a global level (which Altman supports) is probably doomed to failure, and that’s a good thing. The burden of taxes on capital at the corporate level would largely be shifted to workers and consumers, pushing real wages down and prices up relative to market outcomes.
Low taxes on income and especially on capital, together with light regulation, promote saving, capital investment, economic growth, higher real wages, and lower prices. For AI, like all capital investment, public policy should focus on encouraging “aligned” development and deployment of AI assets. A consumption tax would be far more efficient than wealth or capital taxes in that respect, and more effective in generating revenue. Policies that promote growth are the best prescription for broadening the distribution of capital ownership.
I was happy to see Noah Smith’s recent post on the graces of comparative advantage and the way it should mediate the long-run impact of AI on job prospects for humans. However, I’m embarrassed to have missed his post when it was published in March (and I also missed a New York Times piece about Smith’s position).
I said much the same thing as Smith in my post two weeks ago about the persistence of a human comparative advantage, but I wondered why the argument hadn’t been made prominently by economists. I discussed it myself about seven years ago. But alas, I didn’t see Smith’s post until last week!
I highly recommend it, though I quibble on one or two issues. Primarily, I think Smith qualifies his position based on a faulty historical comparison. Later, he doubles back to offer a kind of guarantee after all. Relatedly, I think Smith mischaracterizes the impact of energy costs on comparative advantages, and more generally the impact of the resources necessary to support a human population.
We Specialize Because…
Smith encapsulates the underlying phenomenon that will provide jobs for humans in a world of high automation and generative AI: “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” He tells technologists “… it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now …”
… often, but probably transformed in fundamental ways by AI, and also doing many other new kinds of work that can’t be foreseen at present. Tyler Cowen believes the most important macro effects of AI will be from “new” outputs, not improvements in existing outputs. That emphasis doesn’t necessarily conflict with Smith’s narrative, but again, Smith thinks people will do many of the same jobs as today in a world with advanced AI.
Smith’s Non-Guarantee
Smith hedges, however, in a section of his post entitled “‘Possible’ doesn’t mean guaranteed”. This despite his later assertion that superabundance would not eliminate jobs for humans. That might seem like a separate issue, but it’s strongly intertwined with the declining AI cost argument at the basis of his hedge. More on that below.
On his reluctance to “guarantee” that humans will have jobs in an AI world, Smith links to a 2013 Tyler Cowen post on “Why the theory of comparative advantage is overrated”. For example, Cowen says, why do we ever observe long-term unemployment if comparative advantage rules the day? Of course there are many reasons why we observe departures from the predicted results of comparative advantage. Incentives are often manipulated by governments, and people differ drastically in their capacities and motivation.
But Cowen cites a theoretical weakness of comparative advantage: that inputs are substitutable (or complementary) by degrees, and the degree might change under different market conditions. An implication is that “comparative advantages are endogenous to trade”, specialization, and prices. Fair enough, but one could say the same thing about any supply curve. And if equilibria exist in input markets it means these endogenous forces tend toward comparative advantages and specializations balancing the costs and benefits of production and trade. These processes might be constrained by various frictions and interventions, and their dynamics might be complex and lengthy, but that doesn’t invalidate their role in establishing specializations and trade.
The Glue Factory
Smith concerns himself mainly with another one of Cowen’s “failings of comparative advantage”: “They do indeed send horses to the glue factory, so to speak.” The gist here is that when a new technology, motorized transportation, displaced draft horses, there was no “wage” low enough to save the jobs performed by horses. Smith says horses were too costly to support (feed, stables, etc…), so their comparative advantage at “pulling things” was essentially worthless.
True, but comparing outmoded draft horses to humans in a world of AI is not quite appropriate. First, feedstock to a “glue factory” better not be an alternative use for humans whose comparative advantages become worthless. We’ll have to leave that question as an imperative for the alignment community.
Second, horses do not have versatile skill sets, so the comparison here is inapt due to their lack of alternative uses as capital assets. Yes, horses can offer other services (racing, riding, nostalgic carriage rides), but sadly, the vast bulk of work horses were “one-trick ponies”. Most draft horses probably had an opportunity cost of less than zero, given the aforementioned costs of supporting them. And it should be obvious that a single-use input has a comparative advantage only in its single use, and only when that use happens to be the state-of-the-art, or at least opportunity-cost competitive.
The drivers, on the other hand, had alternatives, and saw their comparative advantage in horse-driving occupations plunge with the advent of motorized transport. With time, many of them surely found new jobs; perhaps some went on to drive motorized vehicles. The point is that humans have alternatives, the number depending only on their ability to learn new crafts and perhaps move to a new location. Thus, as Smith says, “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” But not draft horses in a motorized world, and not square pegs in a world of round holes.
AI Producer Constraints
That brings us to the topic of what Smith calls producer-specific constraints, which place limits on the amount and scope of an input’s productivity. For example, in my last post, there was only one super-talented Harvey Specter, so he’s unlikely to replace you and keep doing his own job. Thus, time is a major constraint. For Harvey or anyone else, the time constraint affects the slope of the tradeoff (and opportunity costs) between one type of specialization versus another.
Draft horses operated under the constraints of land, stable, and feed requirements, which can all be viewed as long-run variable costs. The alternative use for horses at the glue factory did not have those costs.
Humans reliant on wages must feed and house themselves, so those costs also represent constraints, but they probably don’t change the shape of the tradeoff between one occupation and another. That is, they probably do not alter human comparative advantages. Granted, some occupations come with strong expectations among associates or clients regarding an individual’s lifestyle, but this usually represents much more than basic life support. At the other end of the spectrum, displaced workers will take actions along various margins: minimize living costs; rely on savings; avail themselves of charity or whatever social safety net exists; and ultimately they must find new positions at which they maintain comparative advantages.
The Compute Constraint
In the case of AI agents, the key constraint cited by Smith is “compute”, or computer resources like CPUs or GPUs. Advancements in compute have driven the AI revolution, allowing AI models to train on increasingly large data sets at ever higher levels of compute. In fact, by one measure, floating point operations per second (FLOPs), compute has become drastically cheaper, with FLOPs per dollar almost doubling every two years. Perhaps I misunderstand him, but Smith seems to assert the opposite: that compute costs are increasing. Regardless, compute is scarce, and will always be scarce because advancements in AI will require vast increases in training. This author explains that while falling compute costs will be more than offset by exponential increases in training requirements, capabilities per unit of compute will nevertheless trend upward.
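For a rough sense of what that doubling rate implies, here’s a minimal sketch; the horizons are my own choices for illustration.

```python
# The arithmetic behind "FLOPs per dollar almost doubling every two years".
doubling_period = 2.0    # years per doubling of FLOPs per dollar

for years in (4, 10, 20):
    multiple = 2 ** (years / doubling_period)
    print(f"After {years} years: roughly {multiple:.0f}x more FLOPs per dollar")
# About 4x in 4 years, 32x in 10, and 1,024x in 20; yet training budgets have grown even
# faster, which is why compute remains scarce.
```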
Every AI agent will require compute, and while advancements are enabling explosive growth in AI capabilities, scarce compute places constraints on the kinds of AI development and deployment that some see as a threat to human jobs. In other words, compute scarcity can change the shape of the tradeoffs between various AI applications and thus, comparative advantages.
The Energy Constraint
Another producer constraint on AI is energy. Certainly highly complex applications, perhaps requiring greater training, physical dexterity, manipulation of materials, and judgement, will require a greater compute and energy tradeoff against simpler applications. Smith, however, at one point dismisses energy as a differential producer constraint because “… humans also take energy to run.” That is a reference to absolute energy requirements across inputs (AI vs. human), not differential requirements for an input across different outputs. Only the latter impinge on tradeoffs or opportunity costs facing an input. Then, the input having the lowest opportunity cost for a particular output has a comparative advantage for that output. However, it’s not always clear whether an energy tradeoff across outputs for humans will be more or less skewed than for AI, so this might or might not influence a human comparative advantage.
Later, however, Smith speculates that AI might bid up the cost of energy so high that “humans would indeed be immiserated en masse.” That position seems inconsistent. In fact, if AI energy demands are so intensive, it’s more likely to dampen the growth in demand for AI agents as well as increase the human comparative advantage because the most energy-intensive AI applications will be disadvantaged.
And again, there is Smith’s caution regarding the energy required for human life support. Is that a valid long-run variable cost associated with comparative advantages possessed by humans? It’s not wrong to include fertility decisions in the long-run aggregate human labor supply function in some fashion, but it doesn’t imply that energy requirements will eliminate comparative advantages. Those will still exist.
Hype, Or Hyper-Growth?
AI has come a long way over the past two years, and while its prospective impact strikes some as hyped thus far, it has the potential to bring vast gains across a number of fields within just a few years. According to this study, explosive economic growth on the order of 30% annually is a real possibility within decades, as generative AI is embedded throughout the economy. “Unprecedented” is an understatement for that kind of expansive growth. Dylan Matthews in Vox surveys the arguments as to how AI will lead to super-exponential economic growth. This is the kind of scenario that would give rise to superabundance.
I noted above that Smith, despite his unwillingness to guarantee that human jobs will exist in a world of generative AI, asserts (in an update) at the bottom of his post that a superabundance of AI (and abundance generally) would not threaten human comparative advantages. This superabundance is a case of decreasing costs of compute and AI deployment. Here Smith says:
“The reason is that the more abundant AI gets, the more value society produces. The more value society produces, the more demand for AI goes up. The more demand goes up, the greater the opportunity cost of using AI for anything other than its most productive use.
“As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there’s massively huge amounts of AI in the world, is a world that’s rich and prosperous to a degree that we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at its least productive uses.
“Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.”
This seems as if Smith is backing off his earlier hedge. Some of that spending will be in the form of fabulous investment projects of the kinds I mentioned in my post, and smaller ones as well, all enabled by AI. But the key point is that comparative advantages will not go away, and that means human inputs will continue to be economically useful.
I referenced Andrew Mayne in my last post. He contends that the income growth made possible by AI will ensure that plenty of jobs are available for humans. He mentions comparative advantage in passing, but he centers his argument around applications in which human workers and AI will be strong complements in production, as will sometimes be the case.
A New Age of Worry
The economic success of AI is subject to a number of contingencies. Most important is that AI alignment issues are adequately addressed. That is, the “self-interest” of any agentic AI must align with the interests of human welfare. Do no harm!
The difficulty of universal alignment is illustrated by the inevitability of competition among national governments for AI supremacy, especially in the area of AI-enabled weaponry and espionage. The national security implications are staggering.
A couple of Smith’s biggest concerns are the social costs of adjusting to the economic disruptions AI is sure to bring, as well as its implications for inequality. Humans will still have comparative advantages, but there will be massive changes in the labor market and transitions that are likely to involve spells of unemployment and interruptions to incomes for some. The speed and strength of the AI revolution may well create social upheaval. That will create incentives for politicians to restrain the development and adoption of AI, and indeed, we already see the stirrings of that today.
Finally, Smith worries that the transition to AI will bring massive gains in wealth to the owners of AI assets, while workers with few skills are likely to languish. I’m not sure that’s consistent with his optimism regarding income growth under AI, and inequality matters much less when incomes are rising generally. Still, the concern is worthy of a more detailed discussion, which I’ll defer to a later post.
You might know someone so smart and multi-talented that they are objectively better at everything than you. Let’s call him Harvey Specter. Harvey’s prospects on the labor market are very good. Economists would say he has an absolute advantage over you in every single pursuit! What a bummer! But obviously that doesn’t mean Harvey can or should do everything, while you do nothing.
Fears of Human Obsolescence
That’s the very situation many think awaits workers with the advent of artificial general intelligence (AGI), and especially with the marriage of AGI and advanced robotics (also see here). Any job a human can do, AGI or AGI robots of various kinds will be able to do better, faster, and in far greater quantity. The humanoid AGI robots will be like your talented acquaintance Harvey, but exponentiated. They won’t need much “sleep” or downtime, and treating wear and tear on their “health” will be a simple matter of replacing components. AGI and its robotic manifestations will have an absolute advantage in every possible endeavor.
But even with the existence of super-human AGI robots, I claim that work will be available to you if you want or need it. You won’t face the same set of pre-AGI opportunities, but there will be many opportunities for humans nonetheless. How can that be if AGI robots can do everything better? Won’t they be equipped to meet all of our material needs and wants?
Specter of the Super Productive
Let’s return to the example of you and Harvey, your uber-talented acquaintance. You’ll each have an area of specialization, but on what basis? Harvey has his pick of very lucrative and stimulating opportunities. You, however, are limited to a less dazzling array of prospects. There might be some overlap, and hard work or luck can make up for large differences, but chances are you’ll specialize in something that requires less talent than Harvey. You might wind up in the same profession, but Harvey will be a star.
Where will you end up? The answer is you and Harvey will find your respective areas of specialization based on comparative advantages, not absolute advantages. Relative opportunity cost is the key here, or its inverse: how much do you expect to gain from a certain area of specialization relative to the rewards you must forego.
For example, Harvey doesn’t sacrifice much by shunning less challenging areas of specialization. That is, he faces a low opportunity cost, while his chosen area offers great rewards for his talent.
You, on the other hand, might not have much to gain in Harvey’s line of work, if you can get it. You might be a flop if you do! Realistically, you forego very little if you instead pursue more achievable success in a less daunting area. You’ll be better off choosing an option for which your relative gains are highest, or said differently, where your relative opportunity cost is low.
A Quick Illustration
If you’re unwilling to slog through a simple numerical example, skip this section and the graph below. The graph was produced the old-fashioned way: by a human being with a pencil, paper, ruler, and smartphone camera.
Here goes: Harvey can produce up to 100 units of X per period or 100 units of Y, or some linear combination of the two. Harvey’s opportunity costs are constant along this tradeoff between X and Y because it’s a straight line. It costs him one unit of Y output to produce every additional unit of X, and vice versa.
You, on the other hand, cannot produce X or Y as well as Harvey in an absolute sense. At most, you can produce up to 50 units of X per period, 20 units of Y, or some combination of the two along your own constant cost (straight line) tradeoff. You sacrifice 50/20 = 2.5 units of X to produce each unit of Y, so Harvey has the lower opportunity cost and a comparative advantage for Y. But it only costs you 20/50 = 0.4 units of Y to produce each additional unit of X, so you have a comparative advantage over Harvey in X production.
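Here’s the same example as a short script, just to make the opportunity-cost arithmetic explicit:

```python
# The example above as arithmetic: opportunity costs computed from each producer's maximum outputs.
producers = {"Harvey": {"X": 100, "Y": 100},
             "You":    {"X": 50,  "Y": 20}}

for name, m in producers.items():
    cost_of_x = m["Y"] / m["X"]   # units of Y given up per unit of X
    cost_of_y = m["X"] / m["Y"]   # units of X given up per unit of Y
    print(f"{name}: one X costs {cost_of_x:.1f} Y; one Y costs {cost_of_y:.1f} X")

# Harvey: one X costs 1.0 Y, one Y costs 1.0 X.
# You:    one X costs 0.4 Y, one Y costs 2.5 X.
# Your cost of producing X (0.4 Y) is below Harvey's (1.0 Y), so you hold the comparative
# advantage in X, and Harvey holds it in Y, despite his absolute advantage in both goods.
```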
Reciprocal Advantages
In the end, you and Harvey specialize in the respective areas for which each has their lowest relative opportunity cost and a comparative advantage. If he has a comparative advantage in one area of production, and unless your respective tradeoffs have identical slopes (unlikely), the reciprocal nature of opportunity costs dictates that you have a comparative advantage in the other area of production.
Obviously, Harvey’s formidable absolute advantage over you in everything doesn’t impinge on these choices. In the real world, of course, comparative advantages play out across many dimensions of output, but the principle is the same. And once we specialize, we can trade with one another to mutual advantage.
No Such Thing As a Free AGI Robot
That brings us back to AGI and AGI robots. Like Harvey, they might well have an absolute advantage in every area of specialization, or they can learn quickly to achieve such an advantage, but that doesn’t mean they should do everything!
Just as in times preceding earlier technological breakthroughs, we cannot even imagine the types of jobs that will dominate the human and AGI work forces in the future. We already see complementarity between humans and AGI in many applications. AGI makes those workers much more productive, which leads to higher wages.
However, substitution of AGIs for human labor is a dominant theme of the many AGI “harm” narratives. In fact, substitution is already a reality in many occupations, like coding, and substitution is likely to broaden and intensify as the marriage of AGI and robotics gains speed. But that will occur only in industries for which the relative opportunity costs of AGIs, including all of the ancillary resources needed to produce them, are favorable. Among other things, AGI will require a gigantic expansion in energy production and infrastructure, which necessitates a massive exploitation of resources. Relative opportunity costs in the use of these resources will not always favor the dominance of AGIs in production. Like Harvey, AGIs and their ancillary resources cannot do everything because they cannot have comparative advantages without reciprocal comparative disadvantages.
Super-Abundance vs. Scarcity
Some might insist that AGIs will lead to such great prosperity that humans will no longer need to work. All of our material wants will be met in a new age of super-abundance. Despite the foregoing, that might suggest to some that AGIs will do everything! But here I make another claim: our future demands on resources will not be satisfied by whatever abundance AGIs make possible. We will still want to do more, whether we choose to construct fusion reactors, megastructures in space (like Dyson spheres or ring worlds), terraform Mars, undertake interstellar travel, perfect asteroid defense, battle disease, extend longevity, or improve our lives in ways now imagined or unimagined.
As a result, scarcity will remain a major force. To that extent, resources will have competing uses, they will face opportunity costs, and they will have comparative advantages vis-à-vis alternative uses to which they can be put. Scarcity is a reality that governs opportunity costs, and that means humans will always have roles to play in production.
Concluding Remarks
I wrote about human comparative advantages once before, about seven years ago. I think I was groping along the right path. The only other article I’ve seen to explicitly mention a comparative advantage of human labor vs. AGIs in the correct context is by Andrew Mayne in the most recent issue of Reason Magazine. It’s almost a passing reference, but it deserves more because it is foundational.
Harvey Specter shouldn’t occupy his scarce time performing tasks that compromise his ability to deliver his most rewarding services. Likewise, before long it will become apparent that highly productive AGI assets, and the resources required to build and operate them, should not be tied up in activities that humans can perform at lesser sacrifice. That’s a long way of saying that humans will still have productive roles to play, even when AGI achieves an absolute advantage in everything. Some of the roles played by humans will be complementary to AGIs in production, but human labor will also be valuable as a substitute for AGI assets in other applications. As long as AGI assets have any comparative advantages, humans will have reciprocal comparative advantages as well.
Recent advances in artificial intelligence (AI) are giving hope to advocates of central economic planning. Perhaps, they think, the so-called “knowledge problem” (KP) can be overcome, making society’s reliance on decentralized market forces “unnecessary”. The KP is the barrier faced by planners in collecting and using information to direct resources to their most valued uses. KP is at the heart of the so-called “socialist calculation debate”, but it applies also to the failures of right-wing industrial policies and protectionism.
Apart from raw political motives, run-of-the-mill government incompetence, and poor incentives, the KP is an insurmountable obstacle to successful state planning, as emphasized by Friedrich Hayek and many others. In contrast, market forces spontaneously harness all sources of information on preferences, incentives, and resources, as well as existing and emergent technologies, in allocating resources efficiently. In addition, the positive sum nature of mutually beneficial exchange makes the market by far the greatest force for voluntary social cooperation known to mankind.
Nevertheless, the hope kindled by AI is that it would put planners on an equal footing with markets, allowing them to intervene in ways that would be “optimal” for society. This technocratic dream has been astir for years along with advances in computer technology and machine learning. I guess it’s nice that at least a few students of central planning understood the dilemma all along, but as explained below, their hopes for AI are terribly misplaced. AI will never allow planners to allocate resources in ways that exceed or even approximate the efficiency of the market mechanism’s “invisible hand”.
Michael Munger recently described the basic misunderstanding about the information or “data” that markets use to solve the KP. Markets do not rely on a given set of prices, quantities, and production relationships. They do not take any of those as givens with respect to the evolution of transactions, consumption, production, investment, or search activity. Instead, markets generate this data based on unobservable and co-evolving factors such as the shape of preferences across goods, services, and time; perceptions of risk and its cost; the full breadth of technologies; shifting resource availabilities; expectations; locations; perceived transaction costs; and entrepreneurial energy. Most of these factors are “tacit knowledge” that no central database will ever contain.
At each moment, dispersed forces are applied by individual actions in the marketplace. The market essentially solves for the optimal set of transactions subject to all of those factors. These continuously derived solutions are embodied in data on prices, quantities, and production relationships. Opportunity costs and incentives are both outcomes of market processes and driving forces that shape the transactional footprint. And then those trades are complete. Attempts to impose the same set of data upon new transactions in some repeated fashion, freezing the observable components of incentives and other requirements, would prevent the market from responding to changing conditions.
Thus, the KP facing planners isn’t really about “calculating” anything. Rather, it’s the impossibility of matching or replicating the market’s capacity to generate these data and solutions. There will never be an AI with sufficient power to match the efficiency of the market mechanism because it’s not a matter of mere “calculation”. The necessary inputs are never fully observable and, in any case, are unknown until transactions actually take place such that prices and quantities can be recorded.
In my 2020 post “Central Planning With AI Will Still Suck”, I reviewed a paper by Jesús Fernández-Villaverde (JFV), who was skeptical of AI’s powers to achieve better outcomes via planning than under market forces. His critique of the “planner position” anticipated the distinction highlighted by Munger between “market data” and the market’s continuous generation of transactions and their observable footprints.
JFV emphasized three reasons for the ultimate failure of AI-enabled planning: impossible data requirements; the endogeneity of expectations and behavior; and the knowledge problem. Again, the discovery and collection of “data” is a major obstacle to effective planning. If that were the only difficulty, then planners would have a mere “calculation” problem, which shouldn’t be conflated with the broader KP. That is, observable “data” is a narrow category relative to the arrays of unobservables and the simultaneous generation of inputs and outcomes that takes place in markets. And these solutions are found by market processes subject to an array of largely unobservable constraints.
An interesting obstacle to AI planning cited by JFV is the endogeneity of expectations. It too can be considered part of the KP. From my 2020 post:
“Policy Change Often Makes the Past Irrelevant: Planning algorithms are subject to the so-called Lucas Critique, a well known principle in macroeconomics named after Nobel Prize winner Robert Lucas. The idea is that policy decisions based on observed behavior will change expectations, prompting responses that differ from the earlier observations under the former policy regime. … If [machine learning] is used to “plan” certain outcomes desired by some authority, based on past relationships and transactions, the Lucas Critique implies that things are unlikely to go as planned.”
Again, note that central planning and attempts at “calculation” are not solely in the province of socialist governance. They are also required by protectionist or industrial policies supported at times by either end of the political spectrum. Don Boudreaux offers this wisdom on the point:
“People on the political right typically assume that support for socialist interventions comes uniquely from people on the political left, but this assumption is mistaken. While conservative interventionists don’t call themselves “socialists,” many of their proposed interventions – for example, industrial policy – are indeed socialist interventions. These interventions are socialist because, in their attempts to improve the overall performance of the economy, proponents of these interventions advocate that market-directed allocations of resources be replaced with allocations carried out by government diktat.”
The hope that non-market planning can be made highly efficient via AI is a fantasy. In addition to substituting the arbitrary preferences of planners and politicians for those of private agents, the multiplicity of forces bearing on individual decisions will always be inaccessible to AIs. Many of these factors are deeply embedded within individual minds, and often in varying ways. That is why the knowledge problem emphasized by Hayek is much deeper than any sort of “calculation problem” fit for exploitation via computer power.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note: The image at the top of this post is attributed by Bing to the CATO Institute-sponsored website Libertarianism.org and an article that appeared there in 2013, though that piece, by Jason Kuznicki, no longer seems to feature that image.
Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s a very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. As with many new technologies, however, many find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.
Leaps and Bounds
The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.
Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!
As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.
Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.
Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model achieving artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, which is often described as the “foom” event.
Team Uh-Oh
Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.
As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.
The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.
Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries that are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if regulation “succeeds”, it will leave us with a technology that falls short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.
Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of alignment issues.
Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unalignment (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models get easily off-track into indeterminacy when attempting to optimize toward an objective.
But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.
White Collar Wipeout
Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:
“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”
A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!
Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.
Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or for blue collar workers who are vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). This treatment of Hanson’s idea will be inadequate, but he suggests a kind of insurance or contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. Sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
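For concreteness, here is a minimal sketch of the payoff logic behind such a contract. The asset value, the event probability, and the pricing rule are all hypothetical assumptions on my part, not Hanson’s own specification.

```python
# A minimal sketch of the payoff logic in a Hanson-style job-loss contract.
# All figures are hypothetical; this is not Hanson's own specification.

ASSET_VALUE = 100.0   # value at maturity of the AI-insensitive underlying asset
P_EVENT = 0.10        # market-assessed probability of the defined job-loss event

# Risk-neutral pricing: each side pays the expected value of its claim,
# so the worker leg is cheap whenever the event is judged unlikely.
worker_leg_price = P_EVENT * ASSET_VALUE
investor_leg_price = (1.0 - P_EVENT) * ASSET_VALUE

def payout(job_loss_event_occurred: bool) -> dict:
    """Split the underlying asset according to whether the trigger event occurred."""
    if job_loss_event_occurred:
        return {"worker": ASSET_VALUE, "investor": 0.0}
    return {"worker": 0.0, "investor": ASSET_VALUE}

print(f"worker pays {worker_leg_price:.2f}, investor pays {investor_leg_price:.2f}")
print("automation event occurs:", payout(True))
print("no event:               ", payout(False))
```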
The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.
Human Capital
Current incarnations of AI are not just a threat to employment. One might add the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Allowing machines to do essentially all of the thinking, research, and planning won’t redound to the cognitive strength of the human race, especially over several generations. Already, many people struggle to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not-too-distant past. In other words, AI could exacerbate the “dumbing down” of the populace, a rather undesirable prospect.
Fraud and Privacy
AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.
AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting specific portions of the on-line terrain upon which it is trained, including skewed model weights applied to information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.
The Sky-Already-Fell Crowd
Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists to supporters of the “Pause Letter”:
“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”
So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps for the sovereign power of states to allow a laissez faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation between AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and competition could bring us better solutions as other dilemmas posed by AI reveal themselves.
Imagining AI Catastrophes
What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so either by tricking us into doing something stupid, hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!
The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.
Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Or humanity could be an aid to solving an AGI’s optimization problem in ways that are dangerous for us. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?
Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.
An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.
Yudkowsky offers at least one fairly concrete example of existential AGI risk:
“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled as a prelude to triggering meltdowns. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.
Back To Earth
Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, I’m inclined to view the AI doomsters as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.
Ben Hayum is very concerned about the dangers of AI, but, writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.
James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.
In another piece, Robin Hanson expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:
“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”
As for the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:
“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”
As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.
The kill switch is a favorite response of those who think AGI fears are exaggerated. Just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in an AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control are incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.
One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.
Letting It Roll
There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with a pause, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.
Nevertheless, AI issues are complex for all private and public institutions. Without doubt, it will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out issues at a high level.
Artificial intelligence (AI) or machine learning (ML) will never make central economic planning a successful reality. Jesús Fernández-Villaverde of the University of Pennsylvania has written a strong disavowal of AI’s promise in central planning, and on the general difficulty of using ML to design social and economic policies. His paper, “Simple Rules for a Complex World with Artificial Intelligence”, was linked last week by Tyler Cowen at Marginal Revolution. Note that the author isn’t saying “digital socialism” won’t be attempted. Judging by the attention it’s getting, and given the widespread acceptance of the scientism of central planning, there is no question that future efforts to collectivize will involve “data science” to one degree or another. But Fernández-Villaverde, who is otherwise an expert and proponent of ML in certain applications, is simply saying it won’t work as a curative for the failings of central economic planning: the “simple rules” of the market will always produce superior social outcomes.
The connection between central planning and socialism should be obvious. Central planning implies control over the use of resources, and therefore ownership by a central authority, whether or not certain rents are paid as a buy-off to the erstwhile owners of those resources. By “digital socialism”, Fernández-Villaverde means the use of ML to perform the complex tasks of central planning. The hope among its cheerleaders is that adaptive algorithms can discern the optimal allocation of resources within some “big data” representation of resource availability and demands, and that this is possible on an ongoing, dynamic basis.
Fernández-Villaverde makes the case against this fantasy on three fronts or barriers to the use of AI in policy applications: data requirements; the endogeneity of expectations and behavior; and the knowledge problem.
The Data Problem: ML requires large data sets to do anything. And impossibly large data sets are required for ML to perform the task of planning economic activity, even for a small portion of the economy. Today, those data sets do not exist except in certain lines of business. Can they exist more generally, capturing the details of all economic transactions? Can the data remain current? Only at great expense, and ML must be trained to recognize whether data should be discarded as it becomes stale over time due to shifting demographics, tastes, technologies, and other changes in the social and physical environment.
Policy Change Often Makes the Past Irrelevant: Planning algorithms are subject to the so-called Lucas Critique, a well known principle in macroeconomics named after Nobel Prize winner Robert Lucas. The idea is that policy decisions based on observed behavior will change expectations, prompting responses that differ from the earlier observations under the former policy regime. A classic case involves the historical tradeoff between inflation and unemployment. Can this tradeoff be exploited by policy? That is, can unemployment be reduced by a policy that increases the rate of inflation (by printing money at a faster rate)? In this case, the Lucas Critique is that once agents expect a higher rate of inflation, they are unlikely to confuse higher prices with a more profitable business environment, so higher employment will not be sustained. If ML is used to “plan” certain outcomes desired by some authority, based on past relationships and transactions, the Lucas Critique implies that things are unlikely to go as planned.
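A toy simulation helps illustrate the mechanism. The expectations-augmented Phillips-curve form and the parameter values below are purely illustrative assumptions; the point is only that a relationship estimated under one policy regime vanishes once expectations adapt to the new regime.

```python
# Toy illustration of the Lucas Critique with an expectations-augmented
# Phillips curve: u = u_natural - a * (inflation - expected_inflation).
# All parameter values are illustrative.

u_natural, a = 5.0, 1.0

def unemployment(inflation, expected_inflation):
    return u_natural - a * (inflation - expected_inflation)

# Historical regime: inflation averages 2% and agents expect 2%, so random
# surprises trace out what looks like an exploitable tradeoff in the data.
history = [(infl, unemployment(infl, expected_inflation=2.0)) for infl in (1.0, 2.0, 3.0)]
print("observed 'tradeoff':", history)

# A planner (or an ML model trained on that history) concludes it can buy
# lower unemployment by running 6% inflation permanently...
print("planner's forecast:", unemployment(6.0, expected_inflation=2.0))

# ...but once the new policy is announced and sustained, expectations adapt,
# and the relationship the planner tried to exploit disappears.
print("actual outcome:    ", unemployment(6.0, expected_inflation=6.0))
```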
The Knowledge Problem: Impossibly large data sets would be required for economic planning with ML, as noted above, but that’s not the worst of it. To match the success of markets in satisfying unlimited wants given scarce resources, the required information is impossible to collect or even to know. This is what Friedrich Hayek called the “knowledge problem”. Just imagine the difficulty of arranging a data feed on the shifting preferences of many different individuals across a huge number of products and services, and the way preference orderings would change across the range of possible prices. The data must have immediacy, not simply a historical record. Add to this the required information on shifting supplies and opportunity costs of resources needed to produce those things. And the detailed technological relationships between production inputs and outputs, including time requirements, and the dynamics of investment in future productive capacity. And don’t forget to consider the variety of risks agents face, their degree of risk aversion, and the ways in which risks can be mitigated or hedged. Many of these things are simply unknowable to a central authority. The information is hopelessly dispersed. The task of collecting even the knowable pieces is massive beyond comprehension.
The market system, however, is able to process all of this information in real time, the knowable and the unknowable, in ways that balance preferences with the true scarcity of resources. No one actor or authority need know it all. It is the invisible hand. Among many other things, it ensures the deployment of ML only where it makes economic sense. Here is Fernández-Villaverde:
“The only reliable method we have found to aggregate those preferences, abilities, and efforts is the market because it aligns, through the price system, incentives with information revelation. The method is not perfect, and the outcomes that come from it are often unsatisfactory. Nevertheless, like democracy, all the other alternatives, including ‘digital socialism,’ are worse.”
Later, he says:
“… markets work when we implement simple rules, such as first possession, voluntary exchange, and pacta sunt servanda. This result is not a surprise. We did not come up with these simple rules thanks to an enlightened legislator (or nowadays, a blue-ribbon committee of academics ‘with a plan’). … The simple rules were the product of an evolutionary process. Roman law, the Common law, and Lex mercatoria were bodies of norms that appeared over centuries thanks to the decisions of thousands and thousands of agents.”
These simple rules represent good private governance. Beyond reputational enforcement, the rules require only trust in the system of property rights and a private or public judicial authority. Successfully replacing private arrangements in favor of a central plan, however intricately calculated via ML, will remain a pipe dream. At best, it would suspend many economic relationships in amber, foregoing the rational adjustments private agents would make as conditions change. And ultimately, the relationships and activities that planning would sanction would be shaped by political whim. It’s a monstrous thing to contemplate — both fruitless and authoritarian.
In advanced civilizations the period loosely called Alexandrian is usually associated with flexible morals, perfunctory religion, populist standards and cosmopolitan tastes, feminism, exotic cults, and the rapid turnover of high and low fads---in short, a falling away (which is all that decadence means) from the strictness of traditional rules, embodied in character and inforced from within. -- Jacques Barzun