Sacred Cow Chips

Tag Archives: Artificial Intelligence

AI Won’t Repeal Scarcity, Tradeoffs, Or Jobs

04 Monday Aug 2025

Posted by Nuetzel in Artificial Intelligence, Labor Markets

≈ 1 Comment

Tags

Absolute Advantage, AI Capital, Artificial Intelligence, Baby Bonds, Comparative advantage, Complementary Inputs, Human Touch, Opportunity cost, Robotics, Scarcity, Tradeoffs, Type I Civilization, Universal Basic Income, Universal Capital Endowments

Every now and then I grind my axe against the proposition that AI will put humans out of work. It’s a very fashionable view, along with the presumed need for government to impose “robot taxes” and provide everyone with a universal basic income for life. The thing is, I sense that my explanations for rejecting this kind of narrative have been a little abstruse, so I’m taking another crack at it now.

Will Human Workers Be Obsolete?

The popular account envisions a world in which AI replaces not just white-collar technocrats but, paired with advanced robotics, workers in the trades and manual laborers as well. We’ll have machines that cure, litigate, calculate, forecast, design, build, fight wars, make art, fix your plumbing, prune your roses, and replicate. They’ll be highly dextrous, strong, and smart, capable of solving problems both practical and abstract. In short, AI capital will be able to do everything better and faster than humans! The obvious fear is that we’ll all be out of work.

I’m here to tell you it will not happen that way. There will be disruptions to the labor market, extended periods of joblessness for some individuals, and ultimately different patterns of employment. However, the chief problem with the popular narrative is that AI capital will require massive quantities of resources to produce, train, and operate.

Even without robotics, today’s AIs require vast flows of energy and other resources, and that includes a tremendous amount of expensive compute. The needed resources are scarce and highly valued in a variety of other uses. We’ll face tradeoffs as a society and as individuals in allocating resources both to AI and across various AI applications. Those applications will have to compete broadly and amongst themselves for priority.

AI Use Cases

There are many high-value opportunities for AI and robotics, such as industrial automation, customer service, data processing, and supply chain optimization, to name a few. These are already underway to a significant extent. To that, however, we can add medical research, materials research, development of better power technologies and energy storage, and broad deployment in delivering services to consumers and businesses.

In the future, with advanced robotics, AI capital could be deployed in domains that carry high risks for human labor, such as construction of high-rise buildings and underwater structures, as well as rescue operations. This might include such things as construction of solar platforms and large transports in space, or the preparation of space habitats for humans on other worlds.

Scarcity

There is no end to the list of potential applications of AI, but neither is there an end to the list of potential wants and aspirations of humanity. Human wants are insatiable, which sometimes provokes ham-fisted efforts by many governments to curtail growth. We have a long way to go before everyone on the planet lives comfortably. But even then, people’s needs and desires will evolve once previous needs are satisfied, or as technology changes lifestyles and practices. New approaches and styles drive fashions and aesthetics generally. There are always individuals who will compete for resources to experiment and to try new things. And the insatiability of human wants extends beyond the strictly private level. Everyone has an opinion about unsatisfied needs in the public sphere, such as infrastructure, maintenance, the environment, defense, space travel, and other dimensions of public activity.

Futurists have predicted that the human race will seek to become a so-called Type I civilization, capable of harnessing all of the energy on our planet. Then there will be the quest to harness all the energy within our solar system (a Type II civilization). Ultimately, we’ll seek to go beyond that by attempting to exploit all the energy in the Milky Way galaxy. Such an expansion of our energy demands would demonstrate how our wants always exceed the resources we have the ability to exploit.

In other words, scarcity will always be with us. The necessity of facing tradeoffs won’t ever be obviated, and prices will always remain positive. The question of dedicating resources to any particular application of AI will bring tradeoffs into sharper relief. The opportunity cost of many “lesser” AI and robotics applications will be quite high relative to their value to investors. Simply put, many of those applications will be rejected because there will be better uses for the requisite energy and other resources.

Tradeoffs

Again, it will be impossible for humans to accomplish many of the tasks that AIs will perform, or to match the sheer productivity of AIs in doing so. Therefore, AI will have an absolute advantage over humans in all of those tasks.

However, there are many potential applications of AI that are of comparatively low value. These include a variety of low-skill tasks, but also tasks that require some dexterity or continuous judgement and adjustment. Operationalizing AI and robots to perform all these tasks, and diverting the necessary capital and energy away from other uses, would have a tremendously high opportunity cost. Human opportunity costs will not be so high. Thus, people will have a comparative advantage in performing the bulk if not all of these tasks.

Sure, there will be novelty efforts and test cases to train robots to do plumbing or install burglar alarm systems, and at some point buyers might wish to have robots prune their roses. Some people are already amenable to having humanoid robots perform sex work. Nevertheless, humans will remain competitive at these tasks due to the comparatively high opportunity costs faced by AI capital.

There will be many other domains in which humans will remain competitive. Once more, that’s because the opportunity costs for AI capital and other resources will be high. This includes many of the skilled trades, caregiving, and a great many management functions, especially at small companies. The productivity of these workers will be enhanced by AI tools, but those jobs will not be decimated.

The key here is understanding that 1) capital and resources generally are scarce; 2) high value opportunities for AI are plentiful; and 3) the opportunity cost of funding AI in many applications will be very high. Humans will still have a comparative advantage in many areas.
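
To make the logic concrete, here’s a minimal numerical sketch in Python. The dollar figures are purely hypothetical assumptions of mine, not estimates from any source; the point is only that comparative advantage turns on opportunity cost, not on who is better in absolute terms.

```python
# Toy numbers, not estimates: one unit of scarce AI capacity versus one unit
# of human labor, each able to produce high-value "discovery" work or
# low-value "pruning" work per period.
value = {
    "ai":    {"discovery": 500.0, "pruning": 50.0},
    "human": {"discovery": 120.0, "pruning": 40.0},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """Dollars of other_task forgone per dollar produced in task."""
    return value[agent][other_task] / value[agent][task]

for agent in ("ai", "human"):
    oc = opportunity_cost(agent, "pruning", "discovery")
    print(f"{agent}: each $1 of pruning forgoes ${oc:.2f} of discovery")

# ai: each $1 of pruning forgoes $10.00 of discovery
# human: each $1 of pruning forgoes $3.00 of discovery
```

The AI is better at both tasks (an absolute advantage), but its opportunity cost of pruning is more than triple the human’s, so pruning, and tasks like it, remain human work.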

Who’s the Boss?

There are still other ways in which human labor will always be required. One in particular involves the often complementary nature of AI and human inputs. People will have roles in instructing and supervising AIs, especially in tasks requiring customization and feedback. A key to assuring AI alignment with the objectives of almost any pursuit is human review. These kinds of roles are likely to be compensated in line with the complexity of the task. This extends to the necessity of human leadership of any organization.

That brings me to the subject of agentic and fully autonomous AI. No matter how sophisticated they get, AIs will always be the product of machines. They’ll be a kind of capital for which ownership should be confined to humans or organizations representing humans. We must be their masters. Disclaiming ownership and control of AIs, and granting agentic AIs the same rights and freedoms as people (as many have imagined), would be unnecessary and possibly dangerous. AIs will do much productive work, but that work should be on behalf of human owners, and human labor will be deployed to direct and assess that work.

AIs (and People) Needing People

The collaboration between AIs and humans described above will manifest more broadly than anything task-specific, or anything we can imagine today. This is typical of technological advance. First-order effects often include job losses as new innovations enhance productivity or replace workers outright, but typically new jobs are created as innovations generate new opportunities for complementary products and services, both upstream in production and downstream among ultimate users. In the case of AI, while much of this work might be performed by other AIs, at a minimum these changes will require guidance and supervision by humans.

In addition, consumers tend to have an aesthetic preference for goods and services produced by humans: craftsmen, artists, and entertainers. For example, if you’ve ever shopped for an oriental rug, you know that hand-knotted rugs are more expensive than machine-woven rugs. Durability is a factor, as is uniqueness, the latter being a hallmark of human craftspeople. AI might narrow these differences over time, but the “human touch” will always have value relative to “comparable” AI output, even when the human version is slower to produce and less certain in its performance. The same is true of many other forms, such as sports, dance, music, and the visual arts. People prefer to be entertained by talented people, rather than highly-engineered machines. The “human touch” also has advantages in customer-facing transactions, including most forms of service and high-level sales/financial negotiations.

Owning the Machines

Finally, another word about AI ownership. An extension of the fashionable narrative that AIs will wholly replace human workers is that government will be called upon to tax AI and provide individuals with a universal basic income (UBI). Even if human labor were to be replaced by AIs, I believe that a “classic” UBI would be the wrong approach. Instead, all humans should have an ownership stake in the capital stock. This is wealth that yields compound growth over time and produces returns that make humans less reliant on streams of labor income.

Savings incentives (and disincentives to consumption) are a big step in encouraging more widespread ownership of capital. However, if direct intervention is necessary, early endowments of capital would be far preferable to a UBI because they would largely be saved, fostering economic growth, and they would create better incentives than a UBI. Along those lines, President Trump’s Big Beautiful Bill, which is now law, has established “Baby Bonds” for all American children born in 2025 – 2028, initially funded by the federal government with $1,000. Of course, this is another unfunded federal obligation on top of the existing burden of a huge public debt and ongoing deficits. Given my doubts about the persistence of AI-induced job losses, I reject government establishment of both a UBI and universal endowments of capital.

Summary

Capital and energy are scarce, so the tremendous resource requirements of AI and robotics mean that the real-world opportunity costs of many AI applications will remain impractically high. The tradeoffs will be so steep that they’ll leave humans with comparative advantages in many traditional areas of employment. Partly, these will come down to a difference in perceived quality owing to a preference for human interaction and human performance in a variety of economic interactions, including patronage of the art and athleticism of human beings. In addition, AIs will open up new occupations never before contemplated. We won’t be out of work. Nevertheless, it’s always a good idea to accumulate ownership in productive assets, including AI capital, and public policy should do a better job of supporting the private initiative to do so.

On Noah Smith’s Take Re: Human/AI Comparative Advantage

13 Thursday Jun 2024

Posted by Nuetzel in Artificial Intelligence, Comparative advantage, Labor Markets

≈ 3 Comments

Tags

Absolute Advantage, Agentic AI, Alignment, Andrew Mayne, Artificial Intelligence, Comparative advantage, Compute, Decreasing Costs, Dylan Matthews, Fertility, Floating Point Operations Per Second, Generative AI, Harvey Specter, Inequality, National Security, Noah Smith, Opportunity cost, Producer Constraints, Substitutability, Superabundance, Tyler Cowen

I was happy to see Noah Smith’s recent post on the graces of comparative advantage and the way it should mediate the long-run impact of AI on job prospects for humans. However, I’m embarrassed to have missed his post when it was published in March (and I also missed a New York Times piece about Smith’s position).

I said much the same thing as Smith in my post two weeks ago about the persistence of a human comparative advantage, but I wondered why the argument hadn’t been made prominently by economists. I discussed it myself about seven years ago. But alas, I didn’t see Smith’s post until last week!

I highly recommend it, though I quibble on one or two issues. Primarily, I think Smith qualifies his position based on a faulty historical comparison. Later, he doubles back to offer a kind of guarantee after all. Relatedly, I think Smith mischaracterizes the impact of energy costs on comparative advantages, and more generally the impact of the resources necessary to support a human population.

We Specialize Because…

Smith encapsulates the underlying phenomenon that will provide jobs for humans in a world of high automation and generative AI: “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” He tells technologists “… it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now …”

… often, but probably transformed in fundamental ways by AI, and also doing many other new kinds of work that can’t be foreseen at present. Tyler Cowen believes the most important macro effects of AI will be from “new” outputs, not improvements in existing outputs. That emphasis doesn’t necessarily conflict with Smith’s narrative, but again, Smith thinks people will do many of the same jobs as today in a world with advanced AI.

Smith’s Non-Guarantee

Smith hedges, however, in a section of his post entitled “‘Possible’ doesn’t mean guaranteed”. This despite his later assertion that superabundance would not eliminate jobs for humans. That might seem like a separate issue, but it’s strongly intertwined with the declining AI cost argument underlying his hedge. More on that below.

On his reluctance to “guarantee” that humans will have jobs in an AI world, Smith links to a 2013 Tyler Cowen post on “Why the theory of comparative advantage is overrated”. For example, Cowen asks, why do we ever observe long-term unemployment if comparative advantage rules the day? Of course, there are many reasons why we observe departures from the predicted results of comparative advantage. Incentives are often manipulated by governments, and people differ drastically in their capacities and motivation.

But Cowen cites a theoretical weakness of comparative advantage: that inputs are substitutable (or complementary) by degrees, and the degree might change under different market conditions. An implication is that “comparative advantages are endogenous to trade”, specialization, and prices. Fair enough, but one could say the same thing about any supply curve. And if equilibria exist in input markets, it means these endogenous forces tend toward comparative advantages and specializations that balance the costs and benefits of production and trade. These processes might be constrained by various frictions and interventions, and their dynamics might be complex and lengthy, but that doesn’t invalidate their role in establishing specializations and trade.

The Glue Factory

Smith concerns himself mainly with another one of Cowen’s “failings of comparative advantage”: “They do indeed send horses to the glue factory, so to speak.” The gist here is that when a new technology, motorized transportation, displaced draft horses, there was no “wage” low enough to save the jobs performed by horses. Smith says horses were too costly to support (feed, stables, etc…), so their comparative advantage at “pulling things” was essentially worthless.

True, but comparing outmoded draft horses to humans in a world of AI is not quite appropriate. First, feedstock to a “glue factory” had better not be an alternative use for humans whose comparative advantages become worthless. We’ll have to leave that question as an imperative for the alignment community.

Second, horses do not have versatile skill sets, so the comparison here is inapt due to their lack of alternative uses as capital assets. Yes, horses can offer other services (racing, riding, nostalgic carriage rides), but sadly, the vast bulk of work horses were “one-trick ponies”. Most draft horses probably had an opportunity cost of less than zero, given the aforementioned costs of supporting them. And it should be obvious that a single-use input has a comparative advantage only in its single use, and only when that use happens to be the state-of-the-art, or at least opportunity-cost competitive.

The drivers, on the other hand, had alternatives, and saw their comparative advantage in horse-driving occupations plunge with the advent of motorized transport. With time, it’s certain many of them found new jobs; perhaps some went on to drive motorized vehicles. The point is that humans have alternatives, the number depending only on their ability to learn a craft and perhaps move to a new location. Thus, as Smith says, “… everyone — every single person, every single AI, everyone — always has a comparative advantage at something!” But not draft horses in a motorized world, and not square pegs in a world of round holes.

AI Producer Constraints

That brings us to the topic of what Smith calls producer-specific constraints, which place limits on the amount and scope of an input’s productivity. For example, as I noted in my last post, there is only one super-talented Harvey Specter, so he’s unlikely to replace you and keep doing his own job. Thus, time is a major constraint. For Harvey or anyone else, the time constraint affects the slope of the tradeoff (and opportunity costs) between one type of specialization versus another.

Draft horses operated under the constraints of land, stable, and feed requirements, which can all be viewed as long-run variable costs. The alternative use for horses at the glue factory did not have those costs.

Humans reliant on wages must feed and house themselves, so those costs also represent constraints, but they probably don’t change the shape of the tradeoff between one occupation and another. That is, they probably do not alter human comparative advantages. Granted, some occupations come with strong expectations among associates or clients regarding an individual’s lifestyle, but this usually represents much more than basic life support. At the other end of the spectrum, displaced workers will take actions along various margins: minimize living costs; rely on savings; avail themselves of charity or any social safety net as might exist; and ultimately they must find new positions at which they maintain comparative advantages.

The Compute Constraint

In the case of AI agents, the key constraint cited by Smith is “compute”, or computing resources such as CPUs and GPUs. Advancements in compute have driven the AI revolution, allowing AI models to train on increasingly large data sets at ever higher levels of compute. In fact, by one measure of compute, floating point operations per second (FLOPs), compute has become drastically cheaper, with FLOPs per dollar almost doubling every two years. Perhaps I misunderstand him, but Smith seems to assert the opposite: that compute costs are increasing. Regardless, compute is scarce, and will always be scarce because advancements in AI will require vast increases in training. This author explains that while lower compute costs will be more than offset by exponential increases in training requirements, there nevertheless will be an increasing trend in capabilities per compute.
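
The arithmetic here is worth a quick sketch in Python. The two-year doubling of FLOPs per dollar comes from the paragraph above; the fourfold annual growth in frontier training compute is purely my own illustrative assumption:

```python
# If FLOPs per dollar double every two years, the cost per FLOP falls about
# 29% per year. Total training cost still explodes if training compute grows
# faster than compute gets cheap. The 4x/year growth figure is hypothetical.
years = 10
flops_per_dollar_growth = 2.0 ** (1.0 / 2.0)   # ~1.41x per year
training_compute_growth = 4.0                  # assumed multiple per year

cost_multiple = (training_compute_growth / flops_per_dollar_growth) ** years
print(f"FLOPs per dollar grow ~{flops_per_dollar_growth:.2f}x per year")
print(f"Frontier training cost after {years} years: ~{cost_multiple:,.0f}x today's")
# => ~32,768x
```

Under those assumptions, compute gets radically cheaper even as the total bill for frontier training rises by orders of magnitude, which is the sense in which compute remains scarce.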

Every AI agent will require compute, and while advancements are enabling explosive growth in AI capabilities, scarce compute places constraints on the kinds of AI development and deployment that some see as a threat to human jobs. In other words, compute scarcity can change the shape of the tradeoffs between various AI applications and thus, comparative advantages.

The Energy Constraint

Another producer constraint on AI is energy. Certainly, highly complex applications, perhaps requiring greater training, physical dexterity, manipulation of materials, and judgement, will require a greater compute and energy tradeoff against simpler applications. Smith, however, at one point dismisses energy as a differential producer constraint because “… humans also take energy to run.” That is a reference to absolute energy requirements across inputs (AI vs. human), not differential requirements for an input across different outputs. Only the latter impinge on the tradeoffs or opportunity costs facing an input. The input having the lowest opportunity cost for a particular output has a comparative advantage in that output. However, it’s not always clear whether an energy tradeoff across outputs for humans will be more or less skewed than for AI, so this might or might not influence a human comparative advantage.

Later, however, Smith speculates that AI might bid up the cost of energy so high that “humans would indeed be immiserated en masse.” That position seems inconsistent. In fact, if AI energy demands are so intensive, it’s more likely to dampen the growth in demand for AI agents as well as increase the human comparative advantage because the most energy-intensive AI applications will be disadvantaged.

And again, there is Smith’s caution regarding the energy required for human life support. Is that a valid long-run variable cost associated with comparative advantages possessed by humans? It’s not wrong to include fertility decisions in the long-run aggregate human labor supply function in some fashion, but it doesn’t imply that energy requirements will eliminate comparative advantages. Those will still exist.

Hype, Or Hyper-Growth?

AI has come a long way over the past two years, and while its prospective impact strikes some as hyped thus far, it has the potential to bring vast gains across a number of fields within just a few years. According to this study, explosive economic growth on the order of 30% annually is a real possibility within decades, as generative AI is embedded throughout the economy. At that rate, output would double roughly every two and a half years. “Unprecedented” is an understatement for that kind of expansive growth. Dylan Matthews in Vox surveys the arguments as to how AI will lead to super-exponential economic growth. This is the kind of scenario that would give rise to superabundance.

I noted above that Smith, despite his unwillingness to guarantee that human jobs will exist in a world of generative AI, asserts (in an update) at the bottom of his post that a superabundance of AI (and abundance generally) would not threaten human comparative advantages. This superabundance is a case of decreasing costs of compute and AI deployment. Here Smith says:

“The reason is that the more abundant AI gets, the more value society produces. The more value society produces, the more demand for AI goes up. The more demand goes up, the greater the opportunity cost of using AI for anything other than its most productive use. 

“As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there’s massively huge amounts of AI in the world, is a world that’s rich and prosperous to a degree that we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at its least productive uses. 

“Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.”

Here, it seems Smith is backing off his earlier hedge. Some of that spending will be in the form of fabulous investment projects of the kinds I mentioned in my post, and smaller ones as well, all enabled by AI. But the key point is that comparative advantages will not go away, and that means human inputs will continue to be economically useful.

I referenced Andrew Mayne in my last post. He contends that the income growth made possible by AI will ensure that plenty of jobs are available for humans. He mentions comparative advantage in passing, but he centers his argument around applications in which human workers and AI will be strong complements in production, as will sometimes be the case.

A New Age of Worry

The economic success of AI is subject to a number of contingencies. Most important is that AI alignment issues are adequately addressed. That is, the “self-interest” of any agentic AI must align with the interests of human welfare. Do no harm!

The difficulty of universal alignment is illustrated by the inevitability of competition among national governments for AI supremacy, especially in the area of AI-enabled weaponry and espionage. The national security implications are staggering.

A couple of Smith’s biggest concerns are the social costs of adjusting to the economic disruptions AI is sure to bring, as well as its implications for inequality. Humans will still have comparative advantages, but there will be massive changes in the labor market and transitions that are likely to involve spells of unemployment and interruptions to incomes for some. The speed and strength of the AI revolution may well create social upheaval. That will create incentives for politicians to restrain the development and adoption of AI, and indeed, we already see the stirrings of that today.

Finally, Smith worries that the transition to AI will bring massive gains in wealth to the owners of AI assets, while workers with few skills are likely to languish. I’m not sure that’s consistent with his optimism regarding income growth under AI, and inequality matters much less when incomes are rising generally. Still, the concern is worthy of a more detailed discussion, which I’ll defer to a later post.

The Scary Progress and Hairy Promise of AI

18 Tuesday Apr 2023

Posted by Nuetzel in Artificial Intelligence, Existential Threats, Growth

≈ Leave a comment

Tags

Agentic Behavior, AI Bias, AI Capital, AI Risks, Alignment, Artificial Intelligence, Ben Hayum, Bill Gates, Bryan Caplan, ChatGPT, Clearview AI, Dumbing Down, Eliezer Yudkowsky, Encryption, Existential Risk, Extinction, Foom, Fraud, Generative Intelligence, Greta Thunberg, Human capital, Identity Theft, James Pethokoukis, Jim Jones, Kill Switch, Labor Participation Insurance, Learning Language Models, Lesswrong, Longtermism, Luddites, Mercatus Center, Metaculus, Nassim Taleb, Open AI, Over-Employment, Paul Ehrlich, Pause Letter, Precautionary Principle, Privacy, Robert Louis Stevenson, Robin Hanson, Seth Herd, Synthetic Media, TechCrunch, TruthGPT, Tyler Cowen, Universal Basic Income

Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. As with many new technologies, however, many people find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.

Leaps and Bounds

The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.

Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!

As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.

Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.

Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model having artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, which is often described as the “foom” event.

Team Uh-Oh

Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.

As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.

The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.

Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries who are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. Regulation of the evolution of AI will likely fail. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if it “succeeds”, it will leave us with a technology that falls short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.

Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of alignment issues.

Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unalignment (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models easily get off track into indeterminacy when attempting to optimize toward an objective.

But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.

White Collar Wipeout

Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:

“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”

A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!

Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.

Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or for blue collar workers who are vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). This treatment of Hanson’s idea will be inadequate, but he suggests a kind of insurance or contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. Sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
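
As I read Hanson’s proposal, the payoff structure is roughly that of a binary contingent claim on the underlying assets. Here’s a minimal sketch in Python; the event probability and dollar figures are entirely hypothetical:

```python
# Hanson-style automation insurance, as I read it: the underlying asset pays
# out to workers if a defined aggregate job-loss event occurs, and otherwise
# to the investors taking the other side of the bet.
def contract_payout(job_loss_event: bool, asset_value: float) -> tuple[float, float]:
    """Return (worker payout, investor payout) of the underlying asset."""
    if job_loss_event:
        return asset_value, 0.0
    return 0.0, asset_value

# Pricing: if the market assesses probability p that automation pushes
# aggregate job losses past the defined threshold, the worker side of a
# contract on a $10,000 asset should trade near p * $10,000.
p_event = 0.05                 # hypothetical market-assessed probability
asset_value = 10_000.0
print(f"Worker-side premium: ~${p_event * asset_value:,.0f}")   # ~$500
print(contract_payout(job_loss_event=False, asset_value=asset_value))
```

The contract price itself would then serve as a running market forecast of AI-driven job loss, and the worker side would be cheap so long as the market judges the event unlikely, just as described above.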

The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.

Human Capital

Current incarnations of AI are not just a threat to employment. One might add the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Essentially, allowing machines to do all the thinking, research, and planning will do nothing for the cognitive strength of the human race, especially over several generations. Already people suffer from an inability to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not too distant past. In other words, AI could exacerbate a process of “dumbing down” the populace, a rather undesirable prospect.

Fraud and Privacy

AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.

AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting specific portions of the on-line terrain upon which it is trained, including skewed model weights applied to information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.

The Sky-Already-Fell Crowd

Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists to supporters of the “Pause Letter”:

“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”

So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps against the sovereign power of states to allow a laissez faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation between AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and it could bring us better solutions as other dilemmas posed by AI reveal themselves.

Imagining AI Catastrophes

What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so either by tricking us into doing something stupid, hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!

The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.

Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Alternatively, humanity could be an “aid” to solving an AGI’s optimization problem in ways that are dangerous. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?

Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.

An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.

Yudkowsky offers at least one fairly concrete example of existential AGI risk:

“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled ahead of steps to trigger a meltdown. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.

Back To Earth

Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, I’m inclined to view the AI doomsters as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.

Ben Hayum is very concerned about the dangers of AI, but writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.

James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.

In another Robin Hanson piece, he expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:

“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”

As for the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:

“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”

As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.

The kill switch is a favorite response of those who think AGI fears are exaggerated. Just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in an AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control are incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.

One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.

Letting It Roll

There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with a pause, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.

Nevertheless, AI issues are complex for all private and public institutions. Without doubt, it will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out issues at a high-level.

Fix TikTok? Or Nix It? The Authoritarian RESTRICT Act

08 Saturday Apr 2023

Posted by Nuetzel in anti-Semitism, Big Government, Liberty, Technology

≈ 1 Comment

Tags

AI, Artificial Intelligence, Attention Span, ByteDance, CATO Institute, Caveat Emptor, ChatGPT, Community Standards, Data Privacy, Elon Musk, First Amendment, Free Speech, Hate Speech, L. Frank Baum, Munger Test, National Security, Open Source, PATRIOT Act, People’s Republic of China, Philip Hamburger, Protectionism, RESTRICT Act, Scott Lincicome, Separation of Powers, The Land of Oz, TikTok, Twitter

There’s justifiable controversy surrounding TikTok, the social media app. I find much to dislike about TikTok but also much to dislike about the solutions some have proposed, such as a complete ban on the app in the United States. Such proposals would grant the federal executive branch powers that most of us wouldn’t grant to our worst enemy (i.e., they fail the “Munger test”).

Congressional Activity

The proposed RESTRICT Act (Restricting the Emergence of Security Threats that Risk Information and Communications Technology) is a bipartisan effort to eliminate the perceived threats to national security posed by technologies like TikTok. That would include a ban on the app. Proponents of a ban go further than national security concerns, arguing that TikTok represents a threat to the health and productivity of users. However, an outright ban on the app would be a drastic abridgment of free speech rights, and it would limit Americans’ access to a popular platform for creativity and entertainment. In addition, the proposed legislation would authorize intrusions into the privacy of Americans and extend new executive authority into the private sphere, such as tampering with trade and commerce in ways that could facilitate protectionist actions. In fact, so intrusive is the RESTRICT Act that it’s been called a “Patriot Act for the digital age.” From Scott Lincicome and several coauthors at CATO:

“… the proposal—at least as currently written—raises troubling and far‐reaching concerns for the First Amendment, international commerce, technology, privacy, and separation of powers.”

Bad Company

TikTok is owned by a Chinese company, ByteDance, and there is understandable concern about the app’s data collection practices and the potential for the Chinese government to access user data for nefarious purposes. The Trump administration cited these concerns when it attempted to ban TikTok in 2020, and while the ban was ultimately blocked by a federal judge, the Biden administration has also expressed concerns about the app’s data security.

TikTok has also been accused of promoting harmful content, including hate speech, misinformation, and sexually explicit material. Critics argue that the app’s algorithm rewards provocative and controversial content, which can lead to the spread of harmful messages and the normalization of inappropriate behavior. Of course, those are largely value judgements, including labels like “provocative”, “inappropriate”, and many interpretations of content as “hate speech”. With narrow exceptions, such content is protected under the First Amendment.

Unlike L. Frank Baum’s Tik-Tok machine in the land of Oz, the TikTok app might not always qualify as a “faithful servant”. There are some well-founded health and performance concerns related to TikTok, however. Some experts have expressed reservations about the effects of the app on attention span. The short-form videos typical of TikTok, and endless scrolling, suggest that the app is designed to be addictive, though I’m not aware of studies that purport to prove its “addictive nature”. Of course, it can easily become a time sink for users, but so can almost all social media platforms. Nevertheless, some experts contend that heavy use of TikTok may lead to a decrease in attention span and an increase in distraction, which can have negative implications for productivity, learning, and mental health.

Bad Government

The RESTRICT Act, or a ban on TikTok, would drastically violate free speech rights and limit Americans’ access to a popular platform for creativity and self-expression. TikTok has become a cultural phenomenon, with millions of users creating and sharing content on the app every day. This is particularly true of more youthful individuals, who are less likely to be persuaded by their elders’ claims that the content available on TikTok is “inappropriate”. And they’re right! At the very least, “appropriateness” depends on an individual’s age, and it is generally not an area over which government should have censorship authority, “community standards” arguments notwithstanding. Furthermore, allowing access for children is a responsibility best left in the hands of parents, not government.

Likewise, businesses should be free to operate without undue interference from government. The RESTRICT Act would violate these principles, as it would limit individual choice and potentially harm innovation within the U.S. tech industry.

A less compelling argument against banning TikTok is that it could harm U.S.-China relations and have broader economic consequences. China has already warned that a TikTok ban could prompt retaliation, and such a move could escalate tensions between the two countries. That’s all true to one degree or another, but China has already demonstrated a willingness and intention to harm U.S.-China relations. As for economic repercussions, do business with China at your own risk. According to this piece, U.S. investment in the PRC’s tech industry has fallen by almost 80% since 2018, so the private sector is already taking strong steps to reduce that risk.

Like it or not, however, many software companies are subject to at least partial Chinese jurisdiction. This means the RESTRICT Act would do far more than simply ban TikTok in the U.S. First, it would subject on-line activity to much greater scrutiny. Second, it would threaten users of a variety of information or communications products and services with severe penalties for speech deemed to be "unsafe". According to Columbia Law Professor Philip Hamburger:

“Under the proposed statute, the commerce secretary could therefore take ‘any mitigation measure to address any risk’ arising from the use of the relevant communications products or services, if the secretary determines there is an ‘undue or unacceptable risk to the national security of the United States or the safety of United States persons.’

We live in an era in which dissenting speech is said to be violence. In recent years, the Federal Bureau of Investigation has classified concerned parents and conservative Catholics as violent extremists. So when the TikTok bill authorizes the commerce secretary to mitigate communications risks to ‘national security’ or ‘safety,’ that means she can demand censorship.”

A Lighter Touch

The RESTRICT Act is unreasonably broad and intrusive and an outright ban of TikTok is unnecessarily extreme. There are less draconian alternatives, though all may involve some degree of intrusion. For example, TikTok could be compelled to allow users to opt out of certain types of data collection, and to allow independent audits of its data handling practices. TikTok could also be required to store user data within the U.S. or in other countries that have strong data privacy laws. While this option would represent stronger regulation of TikTok, it could also be construed as strengthening the property rights of users.

To address concerns about TikTok's ownership by a Chinese company, its U.S. operations could be required to partner with a U.S. company. Perhaps this could be satisfied by allowing a U.S. company to acquire a stake in TikTok, or by having TikTok spin off its U.S. operations into a separate company that is majority-owned by a U.S. entity.

Finally, perhaps political or regulatory pressure could persuade TikTok to switch to using open-source software, as Elon Musk has done with Twitter. Then, independent developers would have the ability to audit code and identify security vulnerabilities or suspicious data handling practices. From there, it’s a matter of caveat emptor.

Restrain the Restrictive Impulse

The TikTok debate raises important questions about the role of government in regulating technology and free speech. Rather than impulsively harsh legislation like the RESTRICT Act or an outright ban on TikTok, an enlightened approach would encourage transparency and competition in the tech industry. That, in turn, could help address concerns about data security and promote innovation. Additionally, individuals should take personal responsibility for their use of technology by being mindful of the content they consume and what they reveal about themselves on social media. That includes parental responsibility and supervision of the use of social media by children. Ultimately, the TikTok debate highlights tensions between national security, technological innovation, and individual liberty, and it's important to find a balance that protects all three.

Note: The first draft of this post was written by ChatGPT, based on an initial prompt and sequential follow-ups. It was intended as an experiment in preparation for a future post on artificial intelligence (AI). While several vestiges of the first draft remain, what appears above bears little resemblance to what ChatGPT produced. There were many deletions, rewrites, and supplements in arriving at the final draft.

My first impression of the ChatGPT output was favorable. It delineated a few of the major issues surrounding a TikTok ban, but later I was struck by its repetition of bland generalities and its lack of information on more recent developments like the RESTRICT Act. The latter shortfall was probably due to my use of ChatGPT 3.5 rather than 4.0. On the whole, the exercise was fascinating, but I will limit my use of AI tools like ChatGPT to investigation of background on certain questions.

Stealth Hiring Quotas Via AI

24 Monday Oct 2022

Posted by Nuetzel in Discrimination, Diversity, Quotas, Uncategorized

≈ Leave a comment

Tags

AI, AI Bill of Rights, Algorithmic Bias, Algorithms, American Data Privacy and Protection Act, Artificial Intelligence, DEI, Disparate impact, Diversity Equity Inclusion, EEOC, Hiring Quotas, Machine Learning, Neural Networks, Protected Classes, Stealth Quotas, Stewart Baker, Volokh Conspiracy

Hiring quotas are of questionable legal status, but for several years, some large companies have been adopting quota-like “targets” under the banner of Diversity, Equity and Inclusion (DEI) initiatives. Many of these so-called targets apply to the placement of minority candidates into “leadership positions”, and some targets may apply more broadly. Explicit quotas have long been viewed negatively by the public. Quotas have also been proscribed under most circumstances by the Supreme Court, and the EEOC’s Compliance Manual still includes rigid limits on when the setting of minority hiring “goals” is permissible.

Yet large employers seem to prefer the legal risks posed by aggressive DEI policies to the risk of lawsuits by minority interests, unrest among minority employees and “woke” activists, and “disparate impact” inquiries by the EEOC. Now, as Stewart Baker writes in a post over at the Volokh Conspiracy, employers have a new way of improving — or even eliminating — the tradeoff they face between these risks: “stealth quotas” delivered via artificial intelligence (AI) decisioning tools.

Skynet Smiles

A few years ago I discussed the extensive use of algorithms to guide a range of decisions in “Behold Our Algorithmic Overlords“. There, I wrote:

“Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these ‘use-cases’ are already happening to one extent or another.”

That post dealt primarily with the use of algorithms by large tech companies to suppress information and censor certain viewpoints, a danger still of great concern. However, the use of AI to impose de facto quotas in hiring is a phenomenon that will unequivocally reduce the efficiency of the labor market. But exactly how does this mechanism work to the satisfaction of employers?

Machine Learning

As Baker explains, AI algorithms are "trained" to find optimal solutions to problems via machine learning techniques, such as neural networks, applied to large data sets. These techniques are not as straightforward as more traditional modeling approaches such as linear regression, which more readily lend themselves to intuitive interpretation of model results. Baker uses the example of lung x-rays showing varying degrees of abnormalities, which range from the appearance of obvious masses in the lungs to apparently clear lungs. Machine learning algorithms sometimes accurately predict the development of lung cancer in individuals based on clues that are completely non-obvious to expert evaluators. This, I believe, is a great application of the technology. It's too bad that the intuition behind many such algorithmic decisions is often impossible to discern. And the application of AI decisioning to social problems is troubling, not least because it necessarily reduces the richness of individual qualities to a set of data points, and in many cases, defines individuals based on group membership.

When it comes to hiring decisions, an AI algorithm can be trained to select the “best” candidate for a position based on all encodable information available to the employer, but the selection might not align with a hiring manager’s expectations, and it might be impossible to explain the reasons for the choice to the manager. Still, giving the AI algorithm the benefit of the doubt, it would tend to make optimal candidate selections across reasonably large sets of similar, open positions.

Algorithmic Bias

A major issue with respect to these algorithms has been called "algorithmic bias". Here, I limit the discussion to hiring decisions. Ironically, "bias" in this context is a rather slanted description, but what's meant is that the algorithms tend to select fewer candidates from "protected classes" than their proportionate shares of the general population. This is more along the lines of so-called "disparate impact", as opposed to "bias" in the statistical sense. Baker discusses the attacks this has provoked against algorithmic decision techniques. In fact, a privacy bill pending before Congress, the American Data Privacy and Protection Act (ADPPA), contains provisions to address "AI bias". Baker is highly skeptical of claims regarding AI bias, both because he believes they have little substance and because "bias" probably means that AIs sometimes make decisions that don't please DEI activists. Baker elaborates on these developments:

“The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House energy and commerce committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.”

What the hell are the Republicans thinking? Whether or not it becomes a matter of law, misplaced concern about AI bias can be addressed in a practical sense by introducing the “right” constraints to the algorithm, such as a set of aggregate targets for hiring across pools of minority and non-minority job candidates. Then, the algorithm still optimizes, but the constraints impinge on the selections. The results are still “optimal”, but in a more restricted sense.
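
To make the mechanics concrete, here is a minimal sketch of how such a constraint might be bolted onto an otherwise score-maximizing selection. Everything here is hypothetical: the data, the scores, and the simple greedy method stand in for what commercial hiring tools do in far more elaborate and less transparent ways.

```python
# Illustrative only: score-maximizing hire selection with an aggregate
# demographic floor bolted on. All candidates and scores are made up.

def select(candidates, slots, floors):
    """candidates: list of (name, group, score); floors: min hires per group."""
    pool = sorted(candidates, key=lambda c: c[2], reverse=True)
    chosen = []
    # First, satisfy each group's floor with that group's top scorers.
    for group, n_min in floors.items():
        chosen.extend([c for c in pool if c[1] == group][:n_min])
    # Then fill the remaining slots purely by score.
    remaining = [c for c in pool if c not in chosen]
    chosen.extend(remaining[:slots - len(chosen)])
    return chosen

candidates = [("A", "x", 91), ("B", "x", 88), ("C", "y", 84),
              ("D", "x", 82), ("E", "y", 75)]
unconstrained = sorted(candidates, key=lambda c: c[2], reverse=True)[:3]
constrained = select(candidates, slots=3, floors={"y": 2})
print([c[0] for c in unconstrained])  # ['A', 'B', 'C'] -- pure score ranking
print([c[0] for c in constrained])    # ['C', 'E', 'A'] -- the floor binds
```

Candidate B is displaced even though B outscores E by a wide margin, and nothing in the output reveals why. That is the "stealth" in stealth quotas.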

Stealth Quotas

As Baker says, these constraints on algorithmic tools would constitute a way of imposing quotas on hiring that employers won't really have to explain to anyone. That's because: 1) the decisioning rationale is so opaque that it can't readily be explained; and 2) the decisions are perceived as "fair" in the aggregate due to the absence of disparate impacts. As to #1, however, the vendors who create hiring algorithms, and specific details regarding algorithm development, might well be subject to regulatory scrutiny. In the end, the chief concern of these regulators is the absence of disparate impacts, which is cinched by #2.

About a month ago I posted about the EEOC’s outrageous and illegal enforcement of disparate impact liability. Should I welcome AI interventions because they’ll probably limit the number of enforcement actions against employers by the EEOC? After all, there is great benefit in avoiding as much of the rigamarole of regulatory challenges as possible. Nonetheless, as a constraint on hiring, quotas necessarily reduce productivity. By adopting quotas, either explicitly or via AI, the employer foregoes the opportunity to select the best candidate from the full population for a certain share of open positions, and instead limits the pool to narrow demographics.

Demographics are dynamic, and therefore stealth quotas must be dynamic to continue to meet the demands of zero disparate impact. But what happens as an increasing share of the population is of mixed race? Do all mixed race individuals receive protected status indefinitely, gaining preferences via algorithm? Does one’s protected status depend solely upon self-identification of racial, ethnic, or gender identity?

For that matter, do Asians receive hiring preferences? Sometimes they are excluded from so-called protected status because, as a minority, they have been “too successful”. Then, for example, there are issues such as the classification of Hispanics of European origin, who are likely to help fill quotas that are really intended for Hispanics of non-European descent.

Because self-identity has become so critical, quotas present massive opportunities for fraud. Furthermore, quotas often put minority candidates into positions at which they are less likely to be successful, with damaging long-term consequences to both the employer and the minority candidate. And of course there should remain deep concern about the way quotas violate the constitutional guarantee of equal protection to many job applicants.

The acceptance of AI hiring algorithms in the business community is likely to depend on the nature of the positions to be filled, especially when they require highly technical skills and/or the pool of candidates is limited. Of course, there can be tensions between hiring managers and human resources staff over issues like screening job candidates, but HR organizations are typically charged with spearheading DEI initiatives. They will be only too eager to adopt algorithmic selection and stealth quotas for many positions and will probably succeed, whether hiring departments like it or not.

The Death of Merit

Unfortunately, quotas are socially counter-productive, and they are not a good way around the dilemma posed by the EEOC's aggressive enforcement of disparate impact liability. The latter can be solved only when Congress acts to more precisely define the bounds of illegal discrimination in hiring. Meanwhile, stealth quotas cede control over important business decisions to external vendors selling algorithms that are often unfathomable. Quotas discard judgements as to relevant skills in favor of awarding jobs based on essentially superficial characteristics. This creates an unnecessary burden on producers, even if it goes unrecognized by those very firms and is self-inflicted. Even worse, once these algorithms and stealth quotas are in place, they are likely to become heavily regulated and manipulated in order to achieve political goals.

Baker sums up a most fundamental objection to quotas thusly:

“Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.”

Quotas, and stealth quotas, substitute overt discrimination against individuals in non-protected classes, and sometimes against individuals in protected classes as well, for the imagined sin of a disparate impact that might occur when the best candidate is hired for a job. AI algorithms with protection against “algorithmic bias” don’t satisfy this objection. In fact, the lack of accountability inherent in this kind of hiring solution makes it far worse than the status quo.

Central Planning With AI Will Still Suck

23 Sunday Feb 2020

Posted by Nuetzel in Artificial Intelligence, Central Planning, Free markets

≈ Leave a comment

Tags

Artificial Intelligence, central planning, Common Law, Data Science, Digital Socialism, Friedrich Hayek, Jesús Fernández-Villaverde, Machine Learning, Marginal Revolution, Property Rights, Robert Lucas, Roman Law, Scientism, The Invisible Hand, The Knowledge Problem, The Lucas Critique, Tyler Cowen

 

Artificial intelligence (AI) or machine learning (ML) will never make central economic planning a successful reality. Jesús Fernández-Villaverde of the University of Pennsylvania has written a strong disavowal of AI's promise in central planning, and on the general difficulty of using ML to design social and economic policies. His paper, "Simple Rules for a Complex World with Artificial Intelligence", was linked last week by Tyler Cowen at Marginal Revolution. Note that the author isn't saying "digital socialism" won't be attempted. Judging by the attention it's getting, and given the widespread acceptance of the scientism of central planning, there is no question that future efforts to collectivize will involve "data science" to one degree or another. But Fernández-Villaverde, who is otherwise an expert and proponent of ML in certain applications, is simply saying it won't work as a curative for the failings of central economic planning — that the "simple rules" of the market will always produce superior social outcomes.

The connection between central planning and socialism should be obvious. Central planning implies control over the use of resources, and therefore ownership by a central authority, whether or not certain rents are paid as a buy-off to the erstwhile owners of those resources. By “digital socialism”, Fernández-Villaverde means the use of ML to perform the complex tasks of central planning. The hope among its cheerleaders is that adaptive algorithms can discern the optimal allocation of resources within some “big data” representation of resource availability and demands, and that this is possible on an ongoing, dynamic basis.

Fernández-Villaverde makes the case against this fantasy on three fronts or barriers to the use of AI in policy applications: data requirements; the endogeneity of expectations and behavior; and the knowledge problem.

The Data Problem: ML requires large data sets to do anything. And impossibly large data sets are required for ML to perform the task of planning economic activity, even for a small portion of the economy. Today, those data sets do not exist except in certain lines of business. Can they exist more generally, capturing the details of all economic transactions? Can the data remain current? Only at great expense, and ML must be trained to recognize whether data should be discarded as it becomes stale over time due to shifting demographics, tastes, technologies, and other changes in the social and physical environment. 

Policy Change Often Makes the Past Irrelevant: Planning algorithms are subject to the so-called Lucas Critique, a well known principle in macroeconomics named after Nobel Prize winner Robert Lucas. The idea is that policy decisions based on observed behavior will change expectations, prompting responses that differ from the earlier observations under the former policy regime. A classic case involves the historical tradeoff between inflation and unemployment. Can this tradeoff be exploited by policy? That is, can unemployment be reduced by a policy that increases the rate of inflation (by printing money at a faster rate)? In this case, the Lucas Critique is that once agents expect a higher rate of inflation, they are unlikely to confuse higher prices with a more profitable business environment, so higher employment will not be sustained. If ML is used to “plan” certain outcomes desired by some authority, based on past relationships and transactions, the Lucas Critique implies that things are unlikely to go as planned.  
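
As a purely illustrative sketch of that logic (my toy model, not anything from the paper), suppose unemployment responds only to inflation surprises. A planner who fits a tradeoff to historical data, then tries to exploit it by setting permanently higher inflation, will find the tradeoff evaporates once expectations catch up:

```python
# Toy illustration of the Lucas Critique; all parameters are made up.
# True model: unemployment responds only to inflation *surprises*,
#   u = u_star - a * (pi - pi_expected) + noise
import random

random.seed(42)
U_STAR, A = 5.0, 2.0

def unemployment(pi, pi_expected):
    return U_STAR - A * (pi - pi_expected) + random.gauss(0, 0.1)

# Historical regime: inflation wanders; expectations lag one period.
history, pi_expected = [], 2.0
for _ in range(200):
    pi = random.uniform(0, 4)
    history.append((pi, unemployment(pi, pi_expected)))
    pi_expected = pi  # adaptive expectations

# A naive regression on the old data "finds" a tradeoff (slope near -2).
n = len(history)
mean_pi = sum(p for p, _ in history) / n
mean_u = sum(u for _, u in history) / n
slope = (sum((p - mean_pi) * (u - mean_u) for p, u in history)
         / sum((p - mean_pi) ** 2 for p, _ in history))
print(f"estimated tradeoff on old data: {slope:.2f}")

# New policy regime: permanently high inflation. Expectations adjust,
# surprises vanish, and unemployment drifts back to u_star.
u_new = sum(unemployment(8.0, 8.0) for _ in range(200)) / 200
print(f"unemployment at permanent 8% inflation: {u_new:.2f}")  # near 5.0
```

The historical relationship was real, but it was conditional on the old policy regime. That is exactly the trap awaiting any ML planner trained on past transactions.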

The Knowledge Problem: Not only are impossibly large data sets required for economic planning with ML, as noted above; the information needed to match the market's success in satisfying unlimited wants with scarce resources is impossible to collect or even to know. This is what Friedrich Hayek called the "knowledge problem". Just imagine the difficulty of arranging a data feed on the shifting preferences of many different individuals across a huge number of products and services, and the way preference orderings will change across the range of possible prices. The data must have immediacy, not simply a historical record. Add to this the required information on shifting supplies and opportunity costs of resources needed to produce those things. And the detailed technological relationships between production inputs and outputs, including time requirements, and the dynamics of investment in future productive capacity. And don't forget to consider the variety of risks agents face, their degree of risk aversion, and the ways in which risks can be mitigated or hedged. Many of these things are simply unknowable to a central authority. The information is hopelessly dispersed. The task of collecting even the knowable pieces is massive beyond comprehension.

The market system, however, is able to process all of this information in real time, the knowable and the unknowable, in ways that balance preferences with the true scarcity of resources. No one actor or authority need know it all. It is the invisible hand. Among many other things, it ensures the deployment of ML only where it makes economic sense. Here is Fernández-Villaverde:

“The only reliable method we have found to aggregate those preferences, abilities, and efforts is the market because it aligns, through the price system, incentives with information revelation. The method is not perfect, and the outcomes that come from it are often unsatisfactory. Nevertheless, like democracy, all the other alternatives, including ‘digital socialism,’ are worse.”

Later, he says:

“… markets work when we implement simple rules, such as first possession, voluntary exchange, and pacta sunt servanda. This result is not a surprise. We did not come up with these simple rules thanks to an enlightened legislator (or nowadays, a blue-ribbon committee of academics ‘with a plan’). … The simple rules were the product of an evolutionary process. Roman law, the Common law, and Lex mercatoria were bodies of norms that appeared over centuries thanks to the decisions of thousands and thousands of agents.” 

These simple rules represent good private governance. Beyond reputational enforcement, the rules require only trust in the system of property rights and a private or public judicial authority. Successfully replacing private arrangements in favor of a central plan, however intricately calculated via ML, will remain a pipe dream. At best, it would suspend many economic relationships in amber, foregoing the rational adjustments private agents would make as conditions change. And ultimately, the relationships and activities that planning would sanction would be shaped by political whim. It’s a monstrous thing to contemplate — both fruitless and authoritarian.

The Tyranny of the Job Saviors

17 Monday Jul 2017

Posted by Nuetzel in Automation, Free markets, Technology

≈ Leave a comment

Tags

Artificial Intelligence, Automation, Capital-Labor Substitution, Creative Destruction, Dierdre McCloskey, Don Boudreaux, Frederic Bastiat, James Pethokoukas, Opportunity Costs, Robert Samuelson, Robot Tax, Seen and Unseen, Technological Displacement, Universal Basic Income

Many jobs have been lost to technology over the last few centuries, yet more people are employed today than ever before. Despite this favorable experience, politicians can't resist the temptation to cast aspersions on certain production technologies, constantly advocating intervention in markets to "save jobs". Today, some serious anti-tech policy proposals and legislative efforts are underway: regional bans on autonomous vehicles, "robot taxes" (advocated by Bill Gates!!), and even continuing legal resistance to technology-enabled services such as ride sharing and home sharing. At the link above, James Pethokoukas expresses trepidation about one legislative proposal taking shape, sponsored by Senator Maria Cantwell (D-WA), to create a federal review board with the potential to throttle innovation and the deployment of technology, particularly artificial intelligence.

Last week I mentioned the popular anxiety regarding automation and artificial intelligence in my post on the Universal Basic Income. This anxiety is based on an incomplete accounting of the "seen" and "unseen" effects of technological advance, to borrow the words of Frederic Bastiat, and of course it is unsupported by historical precedent. Deirdre McCloskey reviews the history of technological innovations and their positive impact on dynamic labor markets:

“In 1910, one out of 20 of the American workforce was on the railways. In the late 1940s, 350,000 manual telephone operators worked for AT&T alone. In the 1950s, elevator operators by the hundreds of thousands lost their jobs to passengers pushing buttons. Typists have vanished from offices. But if blacksmiths unemployed by cars or TV repairmen unemployed by printed circuits never got another job, unemployment would not be 5 percent, or 10 percent in a bad year. It would be 50 percent and climbing.

Each month in the United States—a place with about 160 million civilian jobs—1.7 million of them vanish. Every 30 days, in a perfectly normal manifestation of creative destruction, over 1 percent of the jobs go the way of the parlor maids of 1910. Not because people quit. The positions are no longer available. The companies go out of business, or get merged or downsized, or just decide the extra salesperson on the floor of the big-box store isn’t worth the costs of employment.“

Robert Samuelson discusses a recent study that found that technological advance consistently improves opportunities for labor income. This is caused by cost reductions in the innovating industries, which are subsequently passed through to consumers, business profits, and higher pay to retained workers whose productivity is enhanced by the improved technology inputs. These gains consistently outweigh losses to those who are displaced by the new capital. Ultimately, the gains diffuse throughout society, manifesting in an improved standard of living.

In a brief, favorable review of Samuelson’s piece, Don Boudreaux adds some interesting thoughts on the dynamics of technological advance and capital-labor substitution:

“… innovations release real resources, including labor, to be used in other productive activities – activities that become profitable only because of this increased availability of resources.  Entrepreneurs, ever intent on seizing profitable opportunities, hire and buy these newly available resources to expand existing businesses and to create new ones.  Think of all the new industries made possible when motorized tractors, chemical fertilizers and insecticides, improved food-packaging, and other labor-saving innovations released all but a tiny fraction of the workforce from agriculture.

Labor-saving techniques promote economic growth not so much because they increase monetary profits that are then spent but, instead, because they release real resources that are then used to create and expand productive activities that would otherwise be too costly.”

Those released resources, having lower opportunity costs than in their former, now obsolete uses, can find new and profitable uses provided they are priced competitively. Some displaced resources might only justify use after undergoing dramatic transformations, such as recycling of raw components or, for workers, education in new fields or vocations. Indeed, some of those transformations are unforeseeable prior to the innovations, and might well add more value than was lost via displacement. But that is how the process of creative destruction often unfolds.

A government that seeks to intervene in this process can do only harm to the long-run interests of its citizens. “Saving a job” from technological displacement surely appeals to the mental and emotive mindset of the populist, and it has obvious value as a progressive virtue-signalling tool. These reactions, however, demonstrate a perspective limited to first-order, “seen” changes. What is less obvious to these observers is the impact of politically-induced tech inertia on consumers’ standard of living. This is accompanied by a stultifying impact on market competition, long-run penalization of the most productive workers, and a degradation of freedom from restraints on private decision-makers. As each “visible” advance is impeded, the negative impact compounds with the loss of future, unseen, but path-dependent advances that cannot ever occur.

Sell the Interstates and Poof — Get a Universal Basic Income

11 Tuesday Jul 2017

Posted by Nuetzel in Automation, Universal Basic Income

≈ 3 Comments

Tags

Artificial Intelligence, Basic Income, James P. Murphy, Jesse Walker, Minimum Wage, Opportunity cost, Private Infrastructure, Private Roads, Public Lands, Rainy Day Funds, Universal Basic Income, Vernon Smith, work incentives

Proposals for a universal basic income (UBI) seem to come up again and again. Many observers uncritically accept the notion that robots and automation will eliminate labor as a factor of production in the not-too-distant future. As a result, they cannot imagine how traditional wage earners, and even many salary earners, will get along in life without the helping hand of government. Those who own capital assets — machines, buildings and land — will have to be taxed to support UBI payments, according to this logic.

Even with artificial intelligence added to the mix, I view robot anxiety as overblown, but it makes for great headlines. The threat is likely no greater than the substitution of capital for labor that's been ongoing since the start of the industrial revolution, and which ultimately led to the creation of more jobs in occupations that were never before imagined. See below for more on my skepticism about robot dystopia. For now, I'll stipulate that human obsolescence will happen someday, or that a great many workers will be displaced by automation over an extended period. How will society manage with minimal rewards for labor? The distribution of goods and services would then depend more exclusively on the ownership of capital, or else on charity and/or government redistribution.

The UBI, as typically framed, is an example of the latter. However, a UBI needn't require government to tax and redistribute income on an ongoing basis. Nobel Prize winner Vernon Smith observes that the government owns salable assets sufficient to fund a permanent UBI. He proposes privatizing the interstate highway system and selling off federal lands in the West. The proceeds could then be invested in a variety of assets to generate growth and income. Every American would receive a dividend check each year under this plan.

Why a UBI?

Given the stipulation that human labor will become obsolete, the UBI is predicated on the presumption that the ownership of earning capital cannot diffuse through society to the working class in time to provide for them adequately. Working people who save are quite capable of accumulating assets, though government does them no favors via tax policy and manipulation of interest rates. But accumulating assets takes time, and it is fair to say that today’s distribution of capital would not support the current distribution of living standards without opportunities to earn labor income.

Still, a UBI might not be a good reason to auction public assets. That question depends more critically on the implicit return earned by those assets via government ownership relative to the gains from privatization, including the returns to alternative uses of the proceeds from a sale.

Objections to the UBI often center on the generally poor performance of government in managing programs, the danger of entrusting resources to the political process, and the corrosive effect of individual dependency. However, if government can do anything well at all, one might think it could at least cut checks. But even if we lay aside the simple issue of mismanagement, politics is a different matter. Over time, there is every chance that a UBI program will be modified as the political winds shift, that exceptions will be carved out, and that complex rules will be established. And that brings us back to the possibility of mismanagement. Even worse, it creates opportunities for rent seekers to skim funds or benefit indirectly from the program. In the end, these considerations might mean that the UBI will yield a poor return for society on the funds placed into the program, much as returns on major entitlements like Social Security are lousy.

Another area of concern is that policy should not discourage work effort while jobs still exist for humans. After all, working and saving is traditionally the most effective route to accumulating capital. Recipients of a UBI would not face the negative marginal work incentives associated with means-tested transfer payments because the UBI would not (should not) be dependent on income. It would go to the rich and poor alike. A UBI could still have a negative impact on labor supply via an income effect, however, depending on how individuals value incremental leisure versus consumption at a higher level of money income. On the whole, the UBI does not impart terrible incentive effects, but that is hardly a rationale for a UBI, let alone a reason to sell public assets.

Funding the UBI

We usually think of funding a UBI via taxes, and it’s well known that taxes harm productive incentives. If the trend toward automation is a natural response to a high return on capital, taxes on capital will retard the transition and might well inhibit the diffusion of capital ownership into lower economic strata. If your rationale for a UBI is truly related to automation and the obsolescence of labor, then funding a UBI should somehow take advantage of the returns to private capital short of taxing those returns away. This makes Smith’s idea more appealing as a funding mechanism.

Will there be a private investment appetite for highways and western land? Selling these assets would take time, of course, and it is difficult to know what bids they could attract. There is no question that toll roads can be profitable. Robert P. Murphy provides an informative discussion of private roads and takes issue with arguments against privatization, such as the presumptions of monopoly pricing and increased risk to drivers. Actually, privatization holds promise as a way of improving the efficiency of infrastructure use and upkeep. In fact, government mispricing of roads is a primary cause of congestion, and private operators have incentives to maintain and improve road safety and quality. Public land sales in the West are complex to the extent that existing mineral and grazing rights could be subject to dispute, and those sales might be unpopular with other landowners.

Once the assets are sold to investors, who will manage the UBI fund? Whether managed publicly or privately, the best arrangement would be no active trading management. Nevertheless, the appropriate mix of investments would be the subject of endless political debate. Every market downturn would bring new calls for conservatism. The level of distributions would also be a politically contentious issue. Dividend yields and price appreciation are not constant, and so it is necessary to determine a sustainable payout rate as well as if and when adjustments are needed. Furthermore, there must be some allowance to assure fund growth over time so that population growth, whatever the source, will not diminish the per capita payout.
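
For a rough sense of the arithmetic, here is a back-of-envelope sketch. All figures are invented for illustration; neither Smith's proposal nor anything above supplies them. The key point is that a payout rate near the real return minus the population growth rate keeps the per-capita dividend from eroding:

```python
# Hypothetical figures for illustration only.
proceeds = 4.0e12      # assumed auction proceeds, in dollars
real_return = 0.05     # assumed long-run real return on the fund
pop_growth = 0.01      # assumed population growth rate
population = 330e6     # approximate U.S. population

# Paying out (return - growth) holds fund value per capita constant.
sustainable_rate = real_return - pop_growth
annual_payout = proceeds * sustainable_rate
print(f"sustainable payout rate: {sustainable_rate:.0%}")          # 4%
print(f"per-capita dividend: ${annual_payout / population:,.0f}")  # ~$485
```

Under these invented numbers, the dividend is modest, which underscores why the level of distributions would be so politically contentious.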

Jesse Walker has a good retrospective on the history of "basic income" proposals and programs over time. He demonstrates that economic windfalls have frequently been the impetus for the establishment of "rainy day" programs. Alaska, enabled by oil revenue, is unique in establishing a fund paying dividends to residents:

“From time to time a state will find itself awash in riches from natural resources. Some voices will suggest that the government not spend the new money at once but put some away for a rainy day. Some fraction of those voices will suggest it create a sovereign wealth fund to invest the windfall. And some fraction of that fraction will want the fund to pay dividends.

Now, there are all sorts of potential problems with government-run investment portfolios, as anyone who has followed California’s pension troubles can tell you. If you’re wary about mismanagement, you’ll be wary about states playing the market; they won’t all invest as conservatively as Alaska has.

Still, several states have such funds already—the most recent additions to the list are North Dakota and West Virginia—and the number may well grow. None has followed Juneau’s example and started paying dividends, but it is hardly unimaginable that someone else will eventually adopt an Alaska-style system.”

Human-Machine Collaboration

A world without human labor is unlikely to evolve. Automation, for the foreseeable future, can improve existing processes such as line tasks in manufacturing, order taking in fast food outlets, and even burger flipping. Declines in retail employment can also be viewed in this context, as internet sales have grown as a share of consumer spending. However, innovation itself cannot be automated. In today’s applications, the deployment and ongoing use of robots often requires human collaboration. Like earlier increases in capital intensity, automation today spurs the creation of new kinds of jobs. Operational technology now exists alongside information technology as an employment category.

I have addressed concerns about human obsolescence several times in the past (most recently here, and also here). Government must avoid policies that hasten automation, like drastic hikes in the minimum wage (see here and here). U.S. employment is at historic highs even though the process of automation has been underway in industry for a very long time. Today there are almost 6.4 million job vacancies in the U.S., so plenty of work is available. Again, new technologies certainly destroy some jobs, but they tend to create new jobs that were never before imagined and that often pay more than the jobs lost. Human augmentation will also provide an important means through which workers can add to their value in the future. And beyond the new technical opportunities, there will always be roles available in personal service. The human touch is often desired by consumers, and it might even be desirable on a social-psychological level.

Opportunity Costs

Finally, is a UBI the best use of the proceeds of public asset sales? That's doubtful unless you truly believe that human labor will be obsolete. It might be far more beneficial to pay down the public debt. Doing so would reduce interest costs and allow taxpayer funds to flow to other programs (or allow tax reductions), and it would give the government greater borrowing capacity going forward. Another attractive alternative is to spend the proceeds of asset sales on educational opportunities, especially vocational instruction that would enhance worker value in the new world of operational technology. Then again, the public assets in question have been funded by taxpayers over many years. Some would therefore argue that the proceeds of any asset sale should be returned to taxpayers immediately and, to the extent possible, in proportion to past taxes paid. The UBI just might rank last.

Embracing the Robots

03 Friday Mar 2017

Posted by Nuetzel in Automation, Labor Markets, Technology

≈ 1 Comment

Tags

3-D Printing, Artificial Intelligence, Automation, David Henderson, Don Boudreaux, Great Stagnation, Herbert Simon, Human Augmentation, Industrial Revolution, Marginal Revolution, Mass Unemployment, Matt Ridley, Russ Roberts, Scarcity, Skills Gap, Transition Costs, Tyler Cowan, Wireless Internet


Machines have always been regarded with suspicion as a potential threat to the livelihood of workers. That is still the case, despite the demonstrated power of machines to make life easier and goods cheaper. Today, the automation of jobs in manufacturing and even service jobs has raised new alarm about the future of human labor, and the prospect of a broad deployment of artificial intelligence (AI) has made the situation seem much scarier. Even the technologists of Silicon Valley have taken a keen interest in promoting policies like the Universal Basic Income (UBI) to cushion the loss of jobs they expect their inventions to precipitate. The UBI is an idea discussed in last Sunday's post on Sacred Cow Chips. In addition to the reasons for rejecting that policy cited in that post, however, we should question the premise that automation and AI are unambiguously job killing.

The same stories of future joblessness have been told for over two centuries, and they have been wrong every time. The vulnerability in our popular psyche with respect to automation is four-fold: 1) the belief that we compete with machines, rather than collaborate with them; 2) our perpetual inability to anticipate the new and unforeseeable opportunities that arise as technology is deployed; 3) our tendency to undervalue new technologies for the freedoms they create for higher-order pursuits; and 4) the heavy discount we apply to the ability of workers and markets to anticipate and adjust to changes in market conditions.

Despite the technological upheavals of the past, employment has not only risen over time, but real wages have as well. Matt Ridley writes of just how wrong the dire predictions of machine-for-human substitution have been. He also disputes the notion that “this time it’s different”:

“The argument that artificial intelligence will cause mass unemployment is as unpersuasive as the argument that threshing machines, machine tools, dishwashers or computers would cause mass unemployment. These technologies simply free people to do other things and fulfill other needs. And they make people more productive, which increases their ability to buy other forms of labour. ‘The bogeyman of automation consumes worrying capacity that should be saved for real problems,’ scoffed the economist Herbert Simon in the 1960s.“

As Ridley notes, the process of substituting capital for labor has been more or less continuous over the past 250 years, and there are now more jobs, and at far higher wages, than ever. Automation has generally involved replacement of strictly manual labor, but it has always required collaboration with human labor to one degree or another.

The tools and machines we use in performing all kinds of manual tasks become ever-more sophisticated, and while they change the human role in performing those tasks, the tasks themselves largely remain or are replaced by new, higher-order tasks. Will the combination of automation and AI change that? Will it make human labor obsolete? Call me an AI skeptic, but I do not believe it will have broad enough applicability to obviate a human role in the production of goods and services. We will perform tasks much better and faster, and AI will create new and more rewarding forms of human-machine collaboration.

Tyler Cowen believes that AI and  automation will bring powerful benefits in the long run, but he raises the specter of a transition to widespread automation involving a lengthy period of high unemployment and depressed wages. Cowen points to a 70-year period for England, beginning in 1760, covering the start of the industrial revolution. He reports one estimate that real wages rose just 22% during this transition, and that gains in real wages were not sustained until the 1830s. Evidently, Cowen views more recent automation of factories as another stage of the “great stagnation” phenomenon he has emphasized. Some commenters on Cowen’s blog, Marginal Revolution, insist that estimates of real wages from the early stages of the industrial revolution are basically junk. Others note that the population of England doubled during that period, which likely depressed wages.

David Henderson does not buy into Cowen's pessimism about transition costs. For one thing, a longer perspective on the industrial revolution would undoubtedly show that average growth in the income of workers was dismal or nonexistent prior to 1760. Henderson also notes that Cowen hedges his description of the evidence of wage stagnation during that era. It should also be mentioned that the share of the U.S. work force engaged in agricultural production was 40% in 1900, but is only 2% today, and the rapid transition away from farm jobs in the first half of the 20th century did not itself lead to mass unemployment or declining wages (HT: Russ Roberts). Cowen cites more recent data on stagnant median income, but Henderson warns that even recent inflation adjustments are fraught with difficulties, that average household size has changed, and that immigration, by adding households and bringing labor market competition, has had at least some depressing effect on the U.S. median wage.

Even positive long-run effects and a smooth transition in the aggregate won’t matter much to any individual whose job is easily automated. There is no doubt that some individuals will fall on hard times, and finding new work might require a lengthy search, accepting lower pay, or retraining. Can something be done to ease the transition? This point is addressed by Don Boudreaux in another context in “Transition Problems and Costs“. Specifically, Boudreaux’s post is about transitions made necessary by changing patterns of international trade, but his points are relevant to this discussion. Most fundamentally, we should not assume that the state must have a role in easing those transitions. We don’t reflexively call for aid when workers of a particular firm lose their jobs because a competitor captures a greater share of the market, nor when consumers decide they don’t like their product. In the end, these are private problems that can and should be solved privately. However, the state certainly should take a role in improving the function of markets such that unemployed resources are absorbed more readily:

“Getting rid of, or at least reducing, occupational licensing will certainly help laid-off workers transition to new jobs. Ditto for reducing taxes, regulations, and zoning restrictions – many of which discourage entrepreneurs from starting new firms and from expanding existing ones. While much ‘worker transitioning’ involves workers moving to where jobs are, much of it also involves – and could involve even more – businesses and jobs moving to where available workers are.“

Boudreaux also notes that workers should never be treated as passive victims. They are quite capable of acting on their own behalf. They often save as a precaution against job loss, invest in retraining, and seek out new opportunities. There is no question, however, that many workers will need new skills in an economy shaped by increasing automation and AI. This article discusses some private initiatives that can help close the so-called "skills gap".

Crucially, government should not accelerate the process of automation beyond its natural pace. That means markets and prices must be allowed to play their natural role in directing resources to their highest-valued uses. Unfortunately, government often interferes with that process by imposing employment regulations and wage controls — i.e., the minimum wage. Increasingly, we are seeing that many jobs performed by low-skilled workers can be automated, and the expense of automation becomes more worthwhile as the cost of labor is inflated to artificial levels by government mandate. That point was emphasized in a 2015 post on Sacred Cow Chips entitled “Automate No Job Before Its Time“.

Another past post on Sacred Cow Chips called "Robots and Tradeoffs" covered several ways in which we will adjust to a more automated economy, none of which will require the intrusive hand of government. One certainty is that humans will always value human service, even when a robot is more efficient, so there will always be opportunities for work. There will also be ways in which humans can compete with machines (or collaborate more effectively) via human augmentation. Moreover, we should not discount the potential for the ownership of machines to become more widely dispersed over time, mitigating the feared impact of automation on the distribution of income. Specific technologies diffuse more widely as their costs decline. That phenomenon has unfolded rapidly with wireless technology, particularly the hardware and software necessary to make productive use of the wireless internet. The same is likely to occur with 3-D printing and other advances. For example, robots are increasingly entering consumer markets, and there is no reason to believe that the same downward cost pressures won't allow them to be used in home production or small-scale business applications. The ability to leverage technology will require learning, but web-enabled instruction is becoming increasingly accessible as well.

Can the ownership of productive technologies become sufficiently widespread to assure a broad distribution of rewards? It’s possible that cost reductions will allow that to happen, but broadening the ownership of capital might require new saving constructs as well. That might involve cooperative ownership of capital by associations of private parties engaged in diverse lines of business. Stable family structures can also play a role in promoting saving.

It is often said that automation and AI will mean an end to scarcity. If that were the case, the implications for labor would be beside the point. Why would anyone care about jobs in a world without want? Of course, work might be done purely for pleasure, but that would make "labor" economically indistinguishable from leisure. Reaching that point would mean a prolonged process of falling prices, lifting real wages at a pace matching increases in productivity. But in a world without scarcity, prices must be zero, and that will never happen. Human wants are unlimited and resources are finite. We'll use resources more productively, but we will always find new wants. And if prices are positive, including the cost of capital, it is certain that demands for labor will remain.

The Insidious Guaranteed Income

26 Sunday Feb 2017

Posted by Nuetzel in Welfare State

≈ 3 Comments

Tags

Artificial Intelligence, Automation, Bryan Caplan, Cash vs. In-Kind Aid, Don Boudreaux, Earned Income Tax Credit, Forced Charity, Guaranteed Income, Incentive Effects, Mises Wire, Nathan Keeble, Permanent Income Hypothesis, Subsidies, Tax Cliff, UBI, Universal Basic Income


Praise for the concept of a “universal basic income” (UBI) is increasingly common among people who should know better. The UBI’s appeal is based on: 1) improvement in work incentives for those currently on public aid; 2) the permanent and universal cushion it promises against loss of livelihood; 3) the presumed benefits to those whose work requires a lengthy period of development to attain economic viability; and 4) the fact that everyone gets a prize, so it is “fair”. There are advocates who believe #2 is the primary reason a UBI is needed because they fear a mass loss of employment in the age of artificial intelligence and automation. I’ll offer some skepticism regarding that prospect in a forthcoming post.

And what are the drawbacks of a UBI? As an economic matter, it is outrageously expensive in both budgetary terms and, more subtly but no less importantly, in terms of its perverse effects on the allocation of resources. However, there are more fundamental reasons to oppose the UBI on libertarian grounds.

Advocates of a UBI often use $10,000 per adult per year as a working baseline. That yields a cost of a guaranteed income for every adult in the U.S. on the order of $2.1 trillion. We now spend about $0.7 trillion a year on public aid programs, excluding administrative costs (the cost is $1.1 trillion all-in). The incremental cost of a UBI as a wholesale replacement for all other aid programs would therefore be about $1.4 trillion. That’s roughly a 40% increase in federal outlays…. Good luck funding that! And there’s a strong chance that some of the existing aid programs would be retained. The impact could be blunted by excluding individuals above certain income thresholds, or via taxes applied to the UBI in higher tax brackets. However, a significant dent in the cost would require denying the full benefit to a large segment of the middle class, making the program into something other than a UBI.
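
The arithmetic above is easy to check. The only number not taken from the paragraph is total federal outlays, which I approximate at $3.5 trillion to match the "roughly 40%" figure:

```python
# Back-of-envelope check of the figures cited above.
ubi_per_adult = 10_000
adults = 210e6                # implied by the $2.1 trillion gross cost
gross_cost = ubi_per_adult * adults
current_aid = 0.7e12          # existing aid, excluding administrative costs
incremental = gross_cost - current_aid
federal_outlays = 3.5e12      # approximate outlays (my assumption)

print(f"gross cost:  ${gross_cost / 1e12:.1f} trillion")            # 2.1
print(f"incremental: ${incremental / 1e12:.1f} trillion")           # 1.4
print(f"increase in outlays: {incremental / federal_outlays:.0%}")  # 40%
```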

Nathan Keeble at Mises Wire discusses some of the implications of a UBI for incentives and resource allocation. A traditional criticism of means-tested welfare programs is that benefits decline as market income increases, so market income is effectively taxed at a high marginal rate. (This is not a feature of the Earned Income Tax Credit (EITC).) Thus, low-income individuals face negative incentives to earn market income. This is the so-called “welfare cliff”. A UBI doesn’t have this shortcoming, but it would create serious incentive problems in other ways. A $1.4 trillion hit on taxpayers will distort work, saving and investment incentives in ways that would make the welfare cliff look minor by comparison. The incidence of these taxes would fall heavily on the most productive segments of society. It would also have very negative implications for the employment prospects of individuals in the lowest economic strata.
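
A stylized sketch of the contrast, with invented rates: under a means-tested benefit that phases out at 70 cents per dollar earned, an extra $1,000 of wages raises net income by only $300; under a UBI, the full $1,000 comes through.

```python
# Stylized comparison of marginal work incentives; all rates hypothetical.
def net_means_tested(earnings, guarantee=12_000, phaseout=0.7):
    benefit = max(0.0, guarantee - phaseout * earnings)
    return earnings + benefit

def net_ubi(earnings, ubi=10_000):
    return earnings + ubi  # no phase-out: the benefit ignores earnings

for base in (5_000, 15_000):
    gain_mt = net_means_tested(base + 1_000) - net_means_tested(base)
    gain_ubi = net_ubi(base + 1_000) - net_ubi(base)
    print(f"earning $1,000 more at ${base:,}: "
          f"means-tested +${gain_mt:,.0f}, UBI +${gain_ubi:,.0f}")
```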

Keeble describes another way in which a UBI is destructive. It is a subsidy granted irrespective of the value created by work effort. Should an individual have a strong preference for leisure as opposed to work, a UBI subsidy exerts a strong income effect in accommodating that choice. Or, should an individual have a strong preference for varieties of work for which they are not well-suited, and which have relatively low market value, the income effect of a UBI subsidy will tend to accommodate that choice as well. In other words, a UBI will subsidize non-economic activity:

“The struggling entrepreneurs and artists mentioned earlier are struggling for a reason. For whatever reason, the market has deemed the goods they are providing to be insufficiently valuable. Their work simply isn’t productive according to those who would potentially consume the goods or services in question. In a functioning marketplace, producers of goods the consumers don’t want would quickly have to abandon such endeavors and focus their efforts into productive areas of the economy. The universal basic income, however, allows them to continue their less-valued endeavors with the money of those who have actually produced value, which gets to the ultimate problem of all government welfare programs.“

I concede, however, that unconditional cash transfers can be beneficial as a way of delivering aid to impoverished communities. This application, however, involves a subsidy that is less than universal, as it targets cash at the poor, or poor segments of society. The UBI experiments described in this article involve private charity in delivering aid to poor communities in underdeveloped countries, not government sponsored foreign aid or redistribution. Yes, cash is more effective than in-kind aid such as food or subsidized housing, a proposition that economists have always tended to support as a rule. The cash certainly provides relief, and it may well be used as seed money for productive enterprises, especially if the aid is viewed as temporary rather than permanent. But that is not in the spirit of a true UBI.

More fundamentally, a UBI is objectionable from a libertarian perspective because it involves a confiscation of resources. In “Why Libertarians Should Oppose the Universal Basic Income“, Bryan Caplan makes the point succinctly:

“Forced charity is unjust. Individuals have a moral right to decide if and when they want to help others….

Forcing people to help others who can’t help themselves… is at least defensible. Forcing people to help everyone is not. And for all its faults, at least the status quo makes some effort to target people who can’t help themselves. The whole idea of the Universal Basic Income, in contrast, is to give money to everyone whether they need it or not.”

Later, Caplan says:

“…libertarianism isn’t about the freedom to be coercively supported by strangers. It’s about the freedom to be left alone by strangers.”

Both Keeble and Caplan would argue that the status quo, with its hodge-podge of welfare programs offering tempting but rotten incentives to recipients, is preferable to the massive distortions that would be created by a UBI. The mechanics of such an intrusion are costly enough, but as Don Boudreaux has warned, the UBI would put government in a fairly dominant position as a provider:

“… such an income-guarantee by government will further fuel the argument that government is a uniquely important and foundational source of our rights and our prosperity – and, therefore, government is uniquely entitled to regulate our behavior.”
