Sacred Cow Chips

Category Archives: Automation

Behold Our Algorithmic Overlords

18 Thursday Jul 2019

Posted by Nuetzel in Automation, Censorship, Discrimination, Marketplace of Ideas


Tags

Algorithmic Governance, American Affairs, Antitrust, Behavioral Economics, Bryan Caplan, Claremont Institute, David French, Deplatforming, Facebook, Gleichschaltung, Google, Jonah Goldberg, Joseph Goebbels, Mark Zuckerberg, Matthew D. Crawford, nudge, Peter Thiel, Political Legitimacy, Populism, Private Governance, Twitter, Viewpoint Diversity

A willingness to question authority is healthy, both in private matters and in the public sphere, but having the freedom to do so is even healthier. It facilitates free inquiry, the application of the scientific method, and it lies at the heart of our constitutional system. Voluntary acceptance of authority, and trust in its legitimacy, hinges on our ability to identify its source, the rationale for its actions, and its accountability. Unaccountable authority, on the other hand, cannot be tolerated. It’s the stuff of which tyranny is made.

That’s one linchpin of a great essay by Matthew D. Crawford in American Affairs entitled “Algorithmic Governance and Political Legitimacy“. It’s a lengthy piece that covers lots of ground, and very much worth reading. Or you can read my slightly shorter take on it!

Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these “use-cases” are already happening to one extent or another.

Blurring Private and Public Governance

Much of what Crawford describes has to do with the way we conduct private transactions and/or private governance. Most governance in free societies, of the kind that touches us day-to-day, is private or self-government, as Crawford calls it. With the advent of giant on-line platforms, algorithms are increasingly an aspect of that governance. Crawford notes the rising concentration of private governmental power within these organizations. While the platforms lack complete monopoly power, they are performing functions that we’d ordinarily be reluctant to grant any public form of government: they curate the information we see, conduct surveillance, exercise control over speech, and even indulge in the “deplatforming” of individuals and organizations when it suits them. Crawford quotes Facebook CEO Mark Zuckerberg:

“In a lot of ways Facebook is more like a government than a traditional company. . . . We have this large community of people, and more than other technology companies we’re really setting policies.”

At the same time, the public sector is increasingly dominated by a large administrative apparatus that is outside of the normal reach of legislative, judicial and even executive checks. Crawford worries about “… the affinities between administrative governance and algorithmic governance”. He emphasizes that neither algorithmic governance on technology platforms nor an algorithmic administrative state is what one could call representative democracy. But whether these powers have been seized or we’ve granted them voluntarily, there are already challenges to their legitimacy. And no wonder! As Crawford says, algorithms are faceless pathways of neural connections that are usually difficult to explain, and their decisions often strike those affected as arbitrary or even nonsensical.

Ministry of Wokeness

Political correctness plays a central part in this story. There is no question that the platforms are setting policies that discriminate against certain viewpoints. But Crawford goes further, asserting that algorithms hold a certain appeal to the bureaucratic logic of elites desiring “cutting edge enforcement of social norms”, i.e., political correctness, or “wokeness”, the term of current fashion.

“First, in the spirit of Václav Havel we might entertain the idea that the institutional workings of political correctness need to be shrouded in peremptory and opaque administrative mechanisms be­cause its power lies precisely in the gap between what people actu­ally think and what one is expected to say. It is in this gap that one has the experience of humiliation, of staying silent, and that is how power is exercised.

But if we put it this way, what we are really saying is not that PC needs administrative enforcement but rather the reverse: the expand­ing empire of bureaucrats needs PC. The conflicts created by identi­ty politics become occasions to extend administrative authority into previously autonomous domains of activity. …

The incentive to technologize the whole drama enters thus: managers are answerable (sometimes legally) for the conflict that they also feed on. In a corporate setting, especially, some kind of ass‑covering becomes necessary. Judgments made by an algorithm (ideally one supplied by a third-party vendor) are ones that nobody has to take responsibility for. The more contentious the social and political landscape, the bigger the institutional taste for automated decision-making is likely to be.

Political correctness is a regime of institutionalized insecurity, both moral and material. Seemingly solid careers are subject to sud­den reversal, along with one’s status as a decent person.”

The Tyranny of Deliberative Democracy

Crawford takes aim at several other trends in intellectual fashion that seem to complement algorithmic governance. One is “deliberative democracy”, an ironically-named theory which holds that with the proper framing conditions, people will ultimately support the “correct” set of policies. Joseph Goebbels couldn’t have put it better. As Crawford explains, the idea is to formalize those conditions so that action can be taken if people do not support the “correct” policies. And if that doesn’t sound like Gleichschaltung (enforcement of conformity), nothing does! This sort of enterprise would require:

 “… a cadre of subtle dia­lecticians working at a meta-level on the formal conditions of thought, nudging the populace through a cognitive framing operation to be conducted beneath the threshold of explicit argument. 

… the theory has proved immensely successful. By that I mean the basic assumptions and aspira­tions it expressed have been institutionalized in elite culture, perhaps nowhere more than at Google, in its capacity as directorate of information. The firm sees itself as ‘definer and defender of the public interest’ …“

Don’t Nudge Me

Another of Crawford’s targets is the growing field of work related to the irrationality of human behavior. This work resulted from the revolutionary development of experimental or behavioral economics, in which various hypotheses are tested regarding choice, risk aversion, and related issues. Crawford offers the following interpretation, which rings true:

“… the more psychologically informed school of behavioral economics … teaches that we need all the help we can get in the form of external ‘nudges’ and cognitive scaffolding if we are to do the rational thing. But the glee and sheer repetition with which this (needed) revision to our under­standing of the human person has been trumpeted by journalists and popularizers indicates that it has some moral appeal, quite apart from its intellectual merits. Perhaps it is the old Enlightenment thrill at disabusing human beings of their pretensions to specialness, whether as made in the image of God or as ‘the rational animal.’ The effect of this anti-humanism is to make us more receptive to the work of the nudgers.”

While changes in the framing of certain decisions, such as opt-in versus opt-out rules, can often benefit individuals, most of us would rather not have nudgers cum central planners interfere with too many of our decisions, no matter how poorly they think those decisions approximate rationality. Nudge engineers cannot replicate your personal objectives or know your preference map. Indeed, externally applied nudges might well be intended to serve interests other than your own. If the political equilibrium involves widespread nudging, it is not even clear that the result will be desirable for society: the history of central planning is one of unintended consequences and abject failure. But it’s plausible that this is where the elitist technocrats in Silicon Valley and within the administrative state would like to go with algorithmic governance.

Crawford’s larger thesis is summarized fairly well by the following statements about Google’s plans for the future:

“The ideal being articulated in Mountain View is that we will inte­grate Google’s services into our lives so effortlessly, and the guiding presence of this beneficent entity in our lives will be so pervasive and unobtrusive, that the boundary between self and Google will blur. The firm will provide a kind of mental scaffold for us, guiding our intentions by shaping our informational context. This is to take the idea of trusteeship and install it in the infrastructure of thought.

Populism is the rejection of this.”

He closes with reflections on the attitudes of the technocratic elite toward those who reject their vision as untrustworthy. The dominance of algorithmic governance is unlikely to help them gain that trust.

What’s to be done?

Crawford seems resigned to the idea that the only way forward is an ongoing struggle for political dominance “to be won and held onto by whatever means necessary“. Like Bryan Caplan, I have always argued that we should eschew anti-trust action against the big tech platforms, largely because we still have a modicum of choice in all of the services they provide. Caplan rejects the populist arguments against the tech “monopolies” and insists that the data collection so widely feared represents a benign phenomenon. And after all, consumers continue to receive a huge surplus from the many free services offered on-line.

But the reality elucidated by Crawford is that the tech firms are much more than private companies. They are political and quasi-governmental entities. Their tentacles reach deeply into our lives and into our institutions, public and private. They are capable of great social influence, and by putting their tools in the hands of government (which holds a monopoly on force), they are capable of exerting social control. They span international boundaries, bringing their technical skills to bear in service to foreign governments. This week Peter Thiel stated that Google’s work with the Chinese military was “treasonous”. It was only a matter of time before someone prominent made that charge.

There are no real safeguards against abusive governance by the tech behemoths short of breaking them up or subjecting them to tight regulation, and neither of those is likely to turn out well for users. I would, however, support safeguards on the privacy of customer data from scrutiny by government security agencies for which the platforms might work. Firewalls between their consumer and commercial businesses and government military and intelligence interests would be perfectly fine by me.

The best safeguard of viewpoint diversity and against manipulation is competition. Of course, the seriousness of threats these companies actually face from competitors is open to question. One paradox among many is that the effectiveness of the algorithms used by these companies in delivering services might enhance their appeal to some, even as those algorithms can undermine public trust.

There is an ostensible conflict in the perspective Crawford offers with respect to the social media giants: despite the increasing sophistication of their algorithms, the complaint is really about the motives of human beings who wish to control political debate through those algorithms, or end it once and for all. Jonah Goldberg puts it thusly:

“The recent effort by Google to deny the Claremont Institute the ability to advertise its gala was ridiculous. Facebook’s blocking of Prager University videos was absurd. And I’m glad Facebook apologized.

But the fact that they apologized — while many of these platforms clearly have biases, often encoded in bad algorithms — points to the possibility that these behemoths aren’t actually conspiring to ‘silence’ all conservatives. They’re just making boneheaded mistakes based in groupthink, bias, and ignorance.”

David French notes that the best antidote for hypocrisy in the management of user content on social media is to expose it loud and clear, which sets the stage for a “market correction”. And after all, the best competition for any social media platform is real life. Indeed, many users are dropping out of various forms of on-line interaction. Social media companies might be able to retain users and appeal to a broader population if they could demonstrate complete impartiality. French proposes that these companies adopt free speech policies fashioned after the First Amendment itself:

“…rules and regulations restricting speech must be viewpoint-neutral. Harassment, incitement, invasion of privacy, and intentional infliction of emotional distress are speech limitations with viewpoint-neutral definitions…”

In other words, the companies must demonstrate that both moderators and algorithms governing user content and interaction are neutral. That is one way for them to regain broad trust. The other crucial ingredient is a government that is steadfast in defending free speech rights and the rights of the platforms to be neutral. Among other things, that means the platforms must retain protection under Section 230 of the Communications Decency Act, which assures their immunity against lawsuits for user content. However, the platforms have had that immunity since quite early in internet history, yet they have developed an aggressive preference for promoting certain viewpoints and suppressing others. The platforms should be content to ensure that their policies and algorithms provide useful tools for users without compromising the free exchange of ideas. Good governance, political legitimacy, and ultimately freedom demand it.

The Comparative Human Advantage

10 Thursday Aug 2017

Posted by Nuetzel in Automation, Technology, Tradeoffs


Tags

Absolute Advantage, Automation, Comparative advantage, Elon Musk, Kardashev Scale, Minimum Wage, Opportunity cost, Scarcity, Specialization, Superabundance, Trade

There are so many talented individuals in this world, people who can do many things well. In fact, they can probably do everything better than most other people in an absolute sense. In other words, they can produce more of everything at a given cost than most others. Yet amazingly, they still find it advantageous to trade with others. How can that be?

It is due to the law of comparative advantage, one of the most important lessons in economics. It’s why we specialize and trade with others for almost all of our needs and wants, even if we are capable of doing all things better than they can. Here’s a simple numerical example… don’t bail out on me (!):

  • Let’s say that you can produce either 1,000 bushels of barley or 500 bushels of hops in a year, or any combination of the two in those proportions. Each extra bushel of hops you produce involves the sacrifice of two bushels of barley.
  • Suppose that I can produce only 500 bushels of barley or 400 bushels of hops in a year, or any combination in those proportions. It costs me only 1.25 bushels of barley to produce an extra bushel of hops.
  • You can produce more hops than I can, but hops are costlier for you at the margin: 2 bushels of barley to get an extra bushel of hops, more than the 1.25 bushels it costs me.
  • That means you can probably obtain a better combination (for you) of barley and hops by specializing in barley and trading some of it to me for hops. You don’t have to do everything yourself. It’s just not in your self-interest even if you have an absolute advantage over me in everything!
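
To make the arithmetic concrete, here is a quick sketch in Python using the bushel figures above. The trade price of 1.5 bushels of barley per bushel of hops is an assumption chosen for illustration; any price between the two opportunity costs (1.25 and 2) would leave both of us better off.

```python
# Annual production capacities from the example above (either crop, or a
# proportional mix). The barley/hops ratio is each party's marginal cost of hops.
YOU = {"barley": 1000, "hops": 500}
ME = {"barley": 500, "hops": 400}

def hops_cost_in_barley(capacity):
    """Bushels of barley forgone per extra bushel of hops."""
    return capacity["barley"] / capacity["hops"]

print(hops_cost_in_barley(YOU))  # 2.0  -> your opportunity cost of hops
print(hops_cost_in_barley(ME))   # 1.25 -> my opportunity cost of hops

# Hypothetical trade: you specialize in barley, I specialize in hops, and you
# buy 200 bushels of hops from me at 1.5 bushels of barley per bushel of hops.
price, hops_traded = 1.5, 200

you_with_trade = (1000 - price * hops_traded, hops_traded)      # (700 barley, 200 hops)
you_self_sufficient = (1000 - 2.0 * hops_traded, hops_traded)   # (600 barley, 200 hops)

me_with_trade = (price * hops_traded, 400 - hops_traded)        # (300 barley, 200 hops)
me_self_sufficient = (300, 400 - 300 / 1.25)                    # (300 barley, 160 hops)

print(you_with_trade, you_self_sufficient)  # you end up with 100 more bushels of barley
print(me_with_trade, me_self_sufficient)    # I end up with 40 more bushels of hops
```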

This is not a coincidental outcome. Exploiting opportunities for trade with those who face lower marginal costs effectively increases our real income. In production, we tend to specialize — to do what we do — because we have a comparative advantage. We specialize because our costs are lower at the margin in those activities. And that’s also what motivates trade with others. That’s why nations should trade with others. And, as I mentioned about one week ago here, that’s why we have less to fear from automation than many assume.

Certain tasks will be automated as increasingly productive “robots” (or their equivalents) justify the costs of the resources required to produce and deploy them. This process will be accelerated to the extent that government makes it appear as if robots have a comparative advantage over humans via minimum wage laws and other labor market regulations. As a general rule, employment will be less vulnerable to automation if wages are flexible. 
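
A simple cost-per-unit comparison illustrates how a wage floor can tip that decision. All of the numbers below are illustrative assumptions, not estimates for any particular industry:

```python
def automate(robot_cost_per_hour, robot_output_per_hour, wage, human_output_per_hour):
    """Automate when the robot's cost per unit of output falls below the human's."""
    return (robot_cost_per_hour / robot_output_per_hour) < (wage / human_output_per_hour)

# Hypothetical task: a robot costing $24/hour produces twice a human's hourly output,
# so its cost is $12 per unit of output.
print(automate(24, 2.0, 10, 1.0))  # False -> at a market wage of $10, the human is cheaper per unit
print(automate(24, 2.0, 15, 1.0))  # True  -> a mandated $15 wage tips the decision toward the robot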

What if one day, as Elon Musk has asserted, robots can do everything better than us? Will humans have anywhere to work? Yes, if human labor is less costly at the margin. Once deployed, a robot in any application has other potential uses, and even a robot has just 24 hours in a day. Diverting a robot into another line of production involves the sacrifice of its original purpose. There will always be uses in which human labor is less costly at the margin, even with lower absolute productivity, than repurposing a robot or the resources needed to produce a new robot. That’s comparative advantage! That will be true for many of the familiar roles we have today, to say nothing of the unimagined new roles for humans that more advanced technology will bring.

Some have convinced themselves that a fully-automated economy will bring an end to scarcity itself. Were that to occur, there would be no tradeoffs except one kind: how you use your time (barring immortality). Superabundance would cause the prices of goods and services to fall to zero; real incomes would approach infinity. In fact, income as a concept would become meaningless. Of course, you will still be free to perform whatever “work” you enjoy, physical or mental, as long as you assign it a greater value than leisure at the margin.

Do I believe that superabundance is realistic? Not at all. To appreciate the contradictions inherent in the last paragraph, think only of the scarcity of talented human performers and their creativity. Perhaps people will actually enjoy watching other humans “perform” work. They always have! If the worker’s time has any other value (and it is scarce to them), what can they collect in return for their “performance”? Adulation and pure enjoyment of their “work”? Some other form of payment? Not everything can be free, even in an age of superabundance.

Scarcity will always exist to one extent or another as long as our wants are insatiable and our time is limited. As technology solves essential problems, we turn our attention to higher-order needs and desires, including various forms of risk reduction. These pursuits are likely to be increasingly resource intensive. For example, interplanetary or interstellar travel will be massively expensive, but both are viewed as desirable pursuits precisely because resources are, and will be, scarce. Discussions of the transition of civilizations across the Kardashev scale, from “Type 0” (today’s Earth) up to “Type III” civilizations, capable of harnessing the energy equivalent of the luminosity of their home galaxies, are fundamentally based on presumed efforts to overcome scarcity. Type III is a long way off, at best. The upshot of ongoing scarcity is that opportunity costs of lines of employment will remain positive for both robots and humans, and humans will often have a comparative advantage.
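
For readers unfamiliar with the scale, Carl Sagan proposed a continuous version of it: K = (log10 P − 6) / 10, where P is a civilization’s power use in watts. By construction it assigns K = 1, 2, and 3 at roughly planetary, stellar, and galactic energy scales (10^16, 10^26, and 10^36 W). The figure for present-day Earth below is a rough, commonly cited approximation, not a number from this post:

```python
from math import log10

def kardashev(power_watts):
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10, with P in watts."""
    return (log10(power_watts) - 6) / 10

print(round(kardashev(2e13), 2))  # ~0.73 -- rough present-day human power use ("Type 0")
print(round(kardashev(1e16), 2))  # 1.0   -- Type I (planetary scale)
print(round(kardashev(1e26), 2))  # 2.0   -- Type II (stellar scale)
print(round(kardashev(1e36), 2))  # 3.0   -- Type III (galactic scale)
```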

Mr. Musk Often Goes To Washington

31 Monday Jul 2017

Posted by Nuetzel in Automation, Labor Markets, Technology


Tags

Absolute Advantage, Comparative advantage, DeepMind, Elon Musk, Eric Schmidt, Facebook, Gigafactory, Google, Mark Zuckerberg, OpenAI, rent seeking, Ronald Bailey, SpaceX, Tesla

Elon Musk says we should be very scared of artificial intelligence (AI). He believes it poses an “existential risk” to humanity and calls for “proactive regulation” of AI to limit its destructive potential. His argument encompasses “killer robots”: “A.I. & The Art of Machine War” is a good read and is consistent with Musk’s message. Military applications already involve autonomous machine decisions to terminate human life, but the Pentagon is weighing whether decisions to kill should be made only by humans. Musk also focuses on more subtle threats from machine intelligence: It could be used to disrupt power and communication systems, to manipulate human opinion in dangerous ways, and even to sow panic via cascades of “fake robot news”, leading to a breakdown in civil order. Musk has also expressed a fear that AI could have disastrous consequences in commercial applications with runaway competition for resources. He sounds like a businessman who really dislikes competition! After all, market competition is self-regulating and self-limiting. The most “destructive” effects occur only when competitors come crying to the state for relief!

Several prominent tech leaders and AI experts have disputed Musk’s pessimistic view of AI, including Mark Zuckerberg of Facebook and Eric Schmidt, chairman of Google’s parent company, Alphabet, Inc. Schmidt says:

“My question to you is: don’t you think the humans would notice this, and start turning off the computers? We’d have a race between humans turning off computers, and the AI relocating itself to other computers, in this mad race to the last computer, and we can’t turn it off, and that’s a movie. It’s a movie. The state of the earth currently does not support any of these scenarios.“

Along those lines, Google’s AI lab known as “DeepMind” has developed an AI off-switch, otherwise known as the “big red button”. Obviously, this is based on human supervision of AI processes and on ensuring that those processes can be interrupted.
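
As a toy illustration only (DeepMind’s actual work on safe interruptibility concerns training agents that do not learn to resist being interrupted, a much harder problem), human supervision plus interruptibility might look like an agent loop that checks a human-controlled stop flag before every action. The `policy` and `env` objects here are hypothetical stand-ins:

```python
import threading

big_red_button = threading.Event()  # set by a human supervisor to halt the agent

def run_agent(policy, env, max_steps=10_000):
    """Step a (hypothetical) agent, deferring to the stop flag at every step."""
    state = env.reset()
    for _ in range(max_steps):
        if big_red_button.is_set():  # the supervisor pressed the button
            env.safe_shutdown()
            return "interrupted"
        state = env.step(policy(state))
    return "finished"
```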

Another obvious point is that AI, ideally, would operate under one or more explicit objective functions. This is the machine’s “reward system”, as it were. Could that reward system always be linked to human intent? To a highly likely non-negative human assessment of outcomes? To improved well-being? That’s not straightforward in a world of uncertainty, but it is at least clear that a relatively high probability of harm to humans should impose a large negative effect on any intelligent machine’s objective function.
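
A minimal sketch of that idea: a composite objective that subtracts a heavy penalty scaled by an estimated probability of harm. The weight and the harm estimate are hypothetical placeholders, not anything specified by Musk or the researchers mentioned here; the point is only that a large enough penalty dominates the task reward even at low probabilities of harm.

```python
HARM_PENALTY = 1_000.0  # assumed weight: how heavily potential harm counts against the machine

def objective(task_reward, harm_probability):
    """Composite objective: task value minus a large penalty on the estimated chance of harm."""
    return task_reward - HARM_PENALTY * harm_probability

print(objective(task_reward=10.0, harm_probability=0.0))   # 10.0  -> proceed
print(objective(task_reward=10.0, harm_probability=0.05))  # -40.0 -> a 5% chance of harm swamps the gain
```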

Those kinds of steps can be regarded as regulatory recommendations, which is what Musk has advocated. Musk has outlined a role for regulators as gatekeepers who would review and ensure the safety of any new AI application. Ronald Bailey reveals the big problem with this approach:

“This may sound reasonable. But Musk is, perhaps unknowingly, recommending that AI researchers be saddled with the precautionary principle. According to one definition, that’s ‘the precept that an action should not be taken if the consequences are uncertain and potentially dangerous.’ Or as I have summarized it: ‘Never do anything for the first time.’“

Regulation is the enemy of innovation, and there are many ways in which current and future AI applications can improve human welfare. Musk knows this. He is the consummate innovator and big thinker, but he is also skilled at leveraging the power of government to bring his ideas to fruition. All of his major initiatives, from Tesla to SpaceX, to Hyperloop, battery technology and solar roofing material, have gained viability via subsidies.

But another hallmark of crony capitalists is a willingness to use regulation to their advantage. Could proposed regulation be part of a hidden agenda for Musk? For example, what does Musk mean when he says, “There’s only one AI company that worries me” in the context of dangerous AI? His own company(ies)? Or another? One he does not own?

Musk’s startup OpenAI is a non-profit engaged in developing open-source AI technology. Musk and his partners in this venture argue that widespread, free availability of AI code and applications would prevent malicious use of AI. Musk knows that his companies can use AI to good effect as well as anyone. And he also knows that open-source AI can neutralize potential advantages for competitors like Google and Facebook. Perhaps he hopes that his first-mover advantage in many new industries will lead to entrenched market positions just in time for the AI regulatory agenda to stifle competitive innovation within his business space, providing him with ongoing rents. Well played, cronyman!

Any threat that AI will have catastrophic consequences for humanity is way down the road, if ever. In the meantime, there are multiple efforts underway within the machine learning community (which is not large) to prevent or at least mitigate potential dangers from AI. This is taking place independent of any government action, and so it should remain. That will help to maximize the potential for beneficial innovation.

Musk also asserts that robots will someday be able to do “everything better than us”, thus threatening the ability of the private sector to provide income to individuals across a broad range of society. This is not at all realistic. There are many detailed and nuanced tasks to which robots will not be able to attend without human collaboration. Creativity and the “human touch” will always have value and will always compete in input markets. Even if robots can do everything better than humans someday, an absolute advantage is not determinative. Those who use robot-intensive production processes will still find it advantageous to use labor, or to trade with those utilizing more labor-intensive production processes. Such are the proven outcomes of the law of comparative advantage.

The Tyranny of the Job Saviors

17 Monday Jul 2017

Posted by Nuetzel in Automation, Free markets, Technology


Tags

Artificial Intelligence, Automation, Capital-Labor Substitution, Creative Destruction, Deirdre McCloskey, Don Boudreaux, Frederic Bastiat, James Pethokoukas, Opportunity Costs, Robert Samuelson, Robot Tax, Seen and Unseen, Technological Displacement, Universal Basic Income

Many jobs have been lost to technology over the last few centuries, yet more people are employed today than ever before. Despite this favorable experience, politicians can’t resist the temptation to cast aspersions on certain production technologies, constantly advocating intervention in markets to “save jobs”. Today, some serious anti-tech policy proposals and legislative efforts are underway: regional bans on autonomous vehicles, “robot taxes” (advocated by Bill Gates!!), and even continuing legal resistance to technology-enabled services such as ride sharing and home sharing. At the link above, James Pethokoukas expresses trepidation about one legislative proposal taking shape, sponsored by Senator Maria Cantwell (D-WA), to create a federal review board with the potential to throttle innovation and the deployment of technology, particularly artificial intelligence.

Last week I mentioned the popular anxiety regarding automation and artificial intelligence in my post on the Universal Basic Income. This anxiety is based on an incomplete accounting of the “seen” and “unseen” effects of technological advance, to borrow the words of Frederic Bastiat, and of course it is unsupported by historical precedent. Deirdre McCloskey reviews the history of technological innovation and its positive impact on dynamic labor markets:

“In 1910, one out of 20 of the American workforce was on the railways. In the late 1940s, 350,000 manual telephone operators worked for AT&T alone. In the 1950s, elevator operators by the hundreds of thousands lost their jobs to passengers pushing buttons. Typists have vanished from offices. But if blacksmiths unemployed by cars or TV repairmen unemployed by printed circuits never got another job, unemployment would not be 5 percent, or 10 percent in a bad year. It would be 50 percent and climbing.

Each month in the United States—a place with about 160 million civilian jobs—1.7 million of them vanish. Every 30 days, in a perfectly normal manifestation of creative destruction, over 1 percent of the jobs go the way of the parlor maids of 1910. Not because people quit. The positions are no longer available. The companies go out of business, or get merged or downsized, or just decide the extra salesperson on the floor of the big-box store isn’t worth the costs of employment.“

Robert Samuelson discusses a recent study that found that technological advance consistently improves opportunities for labor income. This is caused by cost reductions in the innovating industries, which are subsequently passed through to consumers, business profits, and higher pay to retained workers whose productivity is enhanced by the improved technology inputs. These gains consistently outweigh losses to those who are displaced by the new capital. Ultimately, the gains diffuse throughout society, manifesting in an improved standard of living.

In a brief, favorable review of Samuelson’s piece, Don Boudreaux adds some interesting thoughts on the dynamics of technological advance and capital-labor substitution:

“… innovations release real resources, including labor, to be used in other productive activities – activities that become profitable only because of this increased availability of resources.  Entrepreneurs, ever intent on seizing profitable opportunities, hire and buy these newly available resources to expand existing businesses and to create new ones.  Think of all the new industries made possible when motorized tractors, chemical fertilizers and insecticides, improved food-packaging, and other labor-saving innovations released all but a tiny fraction of the workforce from agriculture.

Labor-saving techniques promote economic growth not so much because they increase monetary profits that are then spent but, instead, because they release real resources that are then used to create and expand productive activities that would otherwise be too costly.”

Those released resources, having lower opportunity costs than in their former, now obsolete uses, can find new and profitable uses provided they are priced competitively. Some displaced resources might only justify use after undergoing dramatic transformations, such as recycling of raw components or, for workers, education in new fields or vocations. Indeed, some of those transformations are unforeseeable prior to the innovations, and might well add more value than was lost via displacement. But that is how the process of creative destruction often unfolds.

A government that seeks to intervene in this process can do only harm to the long-run interests of its citizens. “Saving a job” from technological displacement surely appeals to the mental and emotive mindset of the populist, and it has obvious value as a progressive virtue-signalling tool. These reactions, however, demonstrate a perspective limited to first-order, “seen” changes. What is less obvious to these observers is the impact of politically-induced tech inertia on consumers’ standard of living. This is accompanied by a stultifying impact on market competition, long-run penalization of the most productive workers, and an erosion of freedom as restraints are imposed on private decision-makers. As each “visible” advance is impeded, the damage compounds through the loss of future, unseen, path-dependent advances that can never occur.

Sell the Interstates and Poof — Get a Universal Basic Income

11 Tuesday Jul 2017

Posted by Nuetzel in Automation, Universal Basic Income


Tags

Artificial Intelligence, Basic Income, James P. Murphy, Jesse Walker, Minimum Wage, Opportunity cost, Private Infrastructure, Private Roads, Public Lands, Rainy Day Funds, Universal Basic Income, Vernon Smith, work incentives

Proposals for a universal basic income (UBI) seem to come up again and again. Many observers uncritically accept the notion that robots and automation will eliminate labor as a factor of production in the not-too-distant future. As a result, they cannot imagine how traditional wage earners, and even many salary earners, will get along in life without the helping hand of government. Those who own capital assets — machines, buildings and land — will have to be taxed to support UBI payments, according to this logic.

Even with artificial intelligence added to the mix, I view robot anxiety as overblown, but it makes for great headlines. The threat is likely no greater than the substitution of capital for labor that’s been ongoing since the start of the industrial revolution, and which ultimately led to the creation of more jobs in occupations that were never before imagined. See below for more on my skepticism about robot dystopia. For now, I’ll stipulate that human obsolescence will happen someday, or that a great many workers will be displaced by automation over an extended period. How will society manage with minimal rewards for labor? The distribution of goods and services would then depend more exclusively on the ownership of capital, or else on charity and/or government redistribution.

The UBI, as typically framed, is an example of the latter. However, a UBI needn’t require government to tax and redistribute income on an ongoing basis. Nobel Prize winner Vernon Smith points out that the government owns salable assets sufficient to fund a permanent UBI. He suggests privatizing the interstate highway system and selling off federal lands in the West. The proceeds could then be invested in a variety of assets to generate growth and income. Every American would receive a dividend check each year, under this plan.

Why a UBI?

Given the stipulation that human labor will become obsolete, the UBI is predicated on the presumption that the ownership of earning capital cannot diffuse through society to the working class in time to provide for them adequately. Working people who save are quite capable of accumulating assets, though government does them no favors via tax policy and manipulation of interest rates. But accumulating assets takes time, and it is fair to say that today’s distribution of capital would not support the current distribution of living standards without opportunities to earn labor income.

Still, a UBI might not be a good reason to auction public assets. That question depends more critically on the implicit return earned by those assets via government ownership relative to the gains from privatization, including the returns to alternative uses of the proceeds from a sale.

Objections to the UBI often center on the generally poor performance of government in managing programs, the danger of entrusting resources to the political process, and the corrosive effect of individual dependency. However, if government can do anything well at all, one might think it could at least cut checks. But even if we lay aside the simple issue of mismanagement, politics is a different matter. Over time, there is every chance that a UBI program will be modified as the political winds shift, that exceptions will be carved out, and that complex rules will be established. And that brings us back to the possibility of mismanagement. Even worse, it creates opportunities for rent seekers to skim funds or benefit indirectly from the program. In the end, these considerations might mean that the UBI will yield a poor return for society on the funds placed into the program, much as returns on major entitlements like Social Security are lousy.

Another area of concern is that policy should not discourage work effort while jobs still exist for humans. After all, working and saving is traditionally the most effective route to accumulating capital. Recipients of a UBI would not face the negative marginal work incentives associated with means-tested transfer payments because the UBI would not (should not) be dependent on income. It would go to the rich and poor alike. A UBI could still have a negative impact on labor supply via an income effect, however, depending on how individuals value incremental leisure versus consumption at a higher level of money income. On the whole, the UBI does not impart terrible incentive effects, but that is hardly a rationale for a UBI, let alone a reason to sell public assets.
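
To see the difference in marginal incentives, here is a small sketch. The benefit level and phase-out rate are illustrative assumptions, not features of any actual proposal, and the income effect mentioned above is a separate matter that a calculation like this does not capture:

```python
def means_tested_benefit(earnings, base=10_000, phaseout=0.5):
    """Illustrative benefit that phases out 50 cents per dollar earned."""
    return max(0.0, base - phaseout * earnings)

UBI = 10_000.0  # a flat payment, independent of earnings

# Take-home change from earning an extra $1,000 (from $10,000 to $11,000),
# ignoring ordinary taxes:
means_tested = (11_000 + means_tested_benefit(11_000)) - (10_000 + means_tested_benefit(10_000))
with_ubi = (11_000 + UBI) - (10_000 + UBI)

print(means_tested)  # 500.0  -> the phase-out acts like a 50% marginal tax on work
print(with_ubi)      # 1000.0 -> no additional marginal penalty from the UBI itself
```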

Funding the UBI

We usually think of funding a UBI via taxes, and it’s well known that taxes harm productive incentives. If the trend toward automation is a natural response to a high return on capital, taxes on capital will retard the transition and might well inhibit the diffusion of capital ownership into lower economic strata. If your rationale for a UBI is truly related to automation and the obsolescence of labor, then funding a UBI should somehow take advantage of the returns to private capital short of taxing those returns away. This makes Smith’s idea more appealing as a funding mechanism.

Will there be a private investment appetite for highways and western land? Selling these assets would take time, of course, and it is difficult to know what bids they could attract. There is no question that toll roads can be profitable. Robert P. Murphy provides an informative discussion of private roads and takes issue with arguments against privatization, such as the presumptions of monopoly pricing and increased risk to drivers. Actually, privatization holds promise as a way of improving the efficiency of infrastructure use and upkeep. In fact, government mispricing of roads is a primary cause of congestion, and private operators have incentives to maintain and improve road safety and quality. Public land sales in the West are complex to the extent that existing mineral and grazing rights could be subject to dispute, and those sales might be unpopular with other landowners.

Once the assets are sold to investors, who will manage the UBI fund? Whether managed publicly or privately, the best arrangement would be no active trading management. Nevertheless, the appropriate mix of investments would be the subject of endless political debate. Every market downturn would bring new calls for conservatism. The level of distributions would also be a politically contentious issue. Dividend yields and price appreciation are not constant, and so it is necessary to determine a sustainable payout rate as well as if and when adjustments are needed. Furthermore, there must be some allowance to assure fund growth over time so that population growth, whatever the source, will not diminish the per capita payout.
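
A back-of-the-envelope sketch shows why the payout rate has to be set with fund growth and population growth in mind. All of the numbers below are illustrative assumptions (fund size, real return, payout rate, population growth), not estimates of what the interstates or western lands would actually fetch:

```python
def simulate_dividend(fund, population, real_return, payout_rate, pop_growth, years):
    """Per-capita dividend from a fund that pays out a fixed share of assets each year."""
    for year in range(1, years + 1):
        payout = fund * payout_rate
        dividend = payout / population
        fund = (fund - payout) * (1 + real_return)
        population *= (1 + pop_growth)
        print(f"year {year:2d}: dividend = ${dividend:,.0f} per person")

# Illustrative assumptions only: a $2 trillion fund, 330 million people,
# a 5% real return, a 3% payout rate, and 0.7% annual population growth.
simulate_dividend(fund=2e12, population=330e6, real_return=0.05,
                  payout_rate=0.03, pop_growth=0.007, years=5)
```

Under these assumptions the fund grows about 1.8% per year net of payouts, slightly ahead of population growth, so the per-capita dividend rises modestly; a higher payout rate or faster population growth would reverse that.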

Jesse Walker has a good retrospective on the history of “basic income” proposals and programs over time. He demonstrates that economic windfalls have frequently been the impetus for establishment of “rainy day” programs. Alaska, enabled by oil revenue, is unique in establishing a fund paying dividends to residents:

“From time to time a state will find itself awash in riches from natural resources. Some voices will suggest that the government not spend the new money at once but put some away for a rainy day. Some fraction of those voices will suggest it create a sovereign wealth fund to invest the windfall. And some fraction of that fraction will want the fund to pay dividends.

Now, there are all sorts of potential problems with government-run investment portfolios, as anyone who has followed California’s pension troubles can tell you. If you’re wary about mismanagement, you’ll be wary about states playing the market; they won’t all invest as conservatively as Alaska has.

Still, several states have such funds already—the most recent additions to the list are North Dakota and West Virginia—and the number may well grow. None has followed Juneau’s example and started paying dividends, but it is hardly unimaginable that someone else will eventually adopt an Alaska-style system.”

Human-Machine Collaboration

A world without human labor is unlikely to evolve. Automation, for the foreseeable future, can improve existing processes such as line tasks in manufacturing, order taking in fast food outlets, and even burger flipping. Declines in retail employment can also be viewed in this context, as internet sales have grown as a share of consumer spending. However, innovation itself cannot be automated. In today’s applications, the deployment and ongoing use of robots often requires human collaboration. Like earlier increases in capital intensity, automation today spurs the creation of new kinds of jobs. Operational technology now exists alongside information technology as an employment category.

I have addressed concerns about human obsolescence several times in the past (most recently here, and also here). Government must avoid policies that hasten automation, like drastic hikes in the minimum wage (see here and here). U.S. employment is at historic highs even though the process of automation has been underway in industry for a very long time. Today there are almost 6.4 million job vacancies in the U.S., so plenty of work is available. Again, new technologies certainly destroy some jobs, but they tend to create new jobs that were never before imagined and that often pay more than the jobs lost. Human augmentation will also provide an important means through which workers can add to their value in the future. And beyond the new technical opportunities, there will always be roles available in personal service. The human touch is often desired by consumers, and it might even be desirable on a social-psychological level.

Opportunity Costs

Finally, is a UBI the best use of the proceeds of public asset sales? That’s doubtful unless you truly believe that human labor will be obsolete. It might be far more beneficial to pay down the public debt. Doing so would reduce interest costs and allow taxpayer funds to flow to other programs (or allow tax reductions), and it would give the government greater borrowing capacity going forward. Another attractive alternative is to spend the proceeds of asset sales on educational opportunities, especially vocational instruction that would enhance worker value in the new world of operational technology. Then again, the public assets in question have been funded by taxpayers over many years. Some would therefore argue that the proceeds of any asset sale should be returned to taxpayers immediately and, to the extent possible, in proportion to past taxes paid. The UBI just might rank last.

Embracing the Robots

03 Friday Mar 2017

Posted by Nuetzel in Automation, Labor Markets, Technology


Tags

3-D Printing, Artificial Intelligence, Automation, David Henderson, Don Boudreaux, Great Stagnation, Herbert Simon, Human Augmentation, Industrial Revolution, Marginal Revolution, Mass Unemployment, Matt Ridley, Russ Roberts, Scarcity, Skills Gap, Transition Costs, Tyler Cowen, Wireless Internet


Machines have always been regarded with suspicion as a potential threat to the livelihood of workers. That is still the case, despite the demonstrated power of machines to make life easier and goods cheaper. Today, the automation of jobs in manufacturing and even service jobs has raised new alarm about the future of human labor, and the prospect of a broad deployment of artificial intelligence (AI) has made the situation seem much scarier. Even the technologists of Silicon Valley have taken a keen interest in promoting policies like the Universal Basic Income (UBI) to cushion the loss of jobs they expect their inventions to precipitate. The UBI is an idea discussed in last Sunday’s post on Sacred Cow Chips. In addition to the reasons for rejecting that policy cited in that post, however, we should question the premise that automation and AI are unambiguously job-killing.

The same stories of future joblessness have been told for over two centuries, and they have been wrong every time. The vulnerability in our popular psyche with respect to automation is four-fold: 1) the belief that we compete with machines, rather than collaborate with them; 2) our perpetual inability to anticipate the new and unforeseeable opportunities that arise as technology is deployed; 3) our tendency to undervalue new technologies for the freedoms they create for higher-order pursuits; and 4) the heavy discount we apply to the ability of workers and markets to anticipate and adjust to changes in market conditions.

Despite the technological upheavals of the past, employment has not only risen over time, but real wages have as well. Matt Ridley writes of just how wrong the dire predictions of machine-for-human substitution have been. He also disputes the notion that “this time it’s different”:

“The argument that artificial intelligence will cause mass unemployment is as unpersuasive as the argument that threshing machines, machine tools, dishwashers or computers would cause mass unemployment. These technologies simply free people to do other things and fulfill other needs. And they make people more productive, which increases their ability to buy other forms of labour. ‘The bogeyman of automation consumes worrying capacity that should be saved for real problems,’ scoffed the economist Herbert Simon in the 1960s.“

As Ridley notes, the process of substituting capital for labor has been more or less continuous over the past 250 years, and there are now more jobs, and at far higher wages, than ever. Automation has generally involved replacement of strictly manual labor, but it has always required collaboration with human labor to one degree or another.

The tools and machines we use in performing all kinds of manual tasks become ever-more sophisticated, and while they change the human role in performing those tasks, the tasks themselves largely remain or are replaced by new, higher-order tasks. Will the combination of automation and AI change that? Will it make human labor obsolete? Call me an AI skeptic, but I do not believe it will have broad enough applicability to obviate a human role in the production of goods and services. We will perform tasks much better and faster, and AI will create new and more rewarding forms of human-machine collaboration.

Tyler Cowen believes that AI and  automation will bring powerful benefits in the long run, but he raises the specter of a transition to widespread automation involving a lengthy period of high unemployment and depressed wages. Cowen points to a 70-year period for England, beginning in 1760, covering the start of the industrial revolution. He reports one estimate that real wages rose just 22% during this transition, and that gains in real wages were not sustained until the 1830s. Evidently, Cowen views more recent automation of factories as another stage of the “great stagnation” phenomenon he has emphasized. Some commenters on Cowen’s blog, Marginal Revolution, insist that estimates of real wages from the early stages of the industrial revolution are basically junk. Others note that the population of England doubled during that period, which likely depressed wages.
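
For a sense of how meager that is, a 22% cumulative gain spread over 70 years compounds to roughly 0.3% per year (a quick check using the figures Cowen cites):

```python
cumulative_gain = 0.22  # real wage growth over the ~70-year transition Cowen describes
years = 70
annual_rate = (1 + cumulative_gain) ** (1 / years) - 1
print(f"{annual_rate:.2%} per year")  # about 0.28% per year
```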

David Henderson does not buy into Cowen’s pessimism about transition costs. For one thing, a longer perspective on the industrial revolution would undoubtedly show that average growth in the income of workers was dismal or nonexistent prior to 1760. Henderson also notes that Cowen hedges his description of the evidence of wage stagnation during that era. It should also be mentioned that the share of the U.S. work force engaged in agricultural production was 40% in 1900, but is only 2% today, and the rapid transition away from farm jobs in the first half of the 20th century did not itself lead to mass unemployment or declining wages (HT: Russ Roberts). Cowen cites more recent data on stagnant median income, but Henderson warns that even recent inflation adjustments are fraught with difficulties, that average household size has changed, and that immigration, by adding households and bringing labor market competition, has had at least some depressing effect on the U.S. median wage.

Even positive long-run effects and a smooth transition in the aggregate won’t matter much to any individual whose job is easily automated. There is no doubt that some individuals will fall on hard times, and finding new work might require a lengthy search, accepting lower pay, or retraining. Can something be done to ease the transition? This point is addressed by Don Boudreaux in another context in “Transition Problems and Costs“. Specifically, Boudreaux’s post is about transitions made necessary by changing patterns of international trade, but his points are relevant to this discussion. Most fundamentally, we should not assume that the state must have a role in easing those transitions. We don’t reflexively call for aid when workers of a particular firm lose their jobs because a competitor captures a greater share of the market, nor when consumers decide they don’t like their product. In the end, these are private problems that can and should be solved privately. However, the state certainly should take a role in improving the function of markets such that unemployed resources are absorbed more readily:

“Getting rid of, or at least reducing, occupational licensing will certainly help laid-off workers transition to new jobs. Ditto for reducing taxes, regulations, and zoning restrictions – many of which discourage entrepreneurs from starting new firms and from expanding existing ones. While much ‘worker transitioning’ involves workers moving to where jobs are, much of it also involves – and could involve even more – businesses and jobs moving to where available workers are.“

Boudreaux also notes that workers should never be treated as passive victims. They are quite capable of acting on their own behalf. They often save as a precaution against the advent of a job loss, invest in retraining, and seek out new opportunities. There is no question, however, that many workers will need new skills in an economy shaped by increasing automation and AI. This article discusses some private initiatives that can help close the so-called “skills gap”.

Crucially, government should not accelerate the process of automation beyond its natural pace. That means markets and prices must be allowed to play their natural role in directing resources to their highest-valued uses. Unfortunately, government often interferes with that process by imposing employment regulations and wage controls — i.e., the minimum wage. Increasingly, we are seeing that many jobs performed by low-skilled workers can be automated, and the expense of automation becomes more worthwhile as the cost of labor is inflated to artificial levels by government mandate. That point was emphasized in a 2015 post on Sacred Cow Chips entitled “Automate No Job Before Its Time“.

Another past post on Sacred Cow Chips called “Robots and Tradeoffs” covered several ways in which we will adjust to a more automated economy, none of which will require the intrusive hand of government. One certainty is that humans will always value human service, even when a robot is more efficient, so there will always be opportunities for work. There will also be ways in which humans can compete with machines (or collaborate more effectively) via human augmentation. Moreover, we should not discount the potential for the ownership of machines to become more widely dispersed over time, mitigating the feared impact of automation on the distribution of income. Specific technologies diffuse more widely as their costs decline. That phenomenon has unfolded rapidly with wireless technology, particularly the hardware and software necessary to make productive use of the wireless internet. The same is likely to occur with 3-D printing and other advances. For example, robots are increasingly entering consumer markets, and there is no reason to believe that the same downward cost pressures won’t allow them to be used in home production or small-scale business applications. The ability to leverage technology will require learning, but web-enabled instruction is becoming increasingly accessible as well.

Can the ownership of productive technologies become sufficiently widespread to assure a broad distribution of rewards? It’s possible that cost reductions will allow that to happen, but broadening the ownership of capital might require new saving constructs as well. That might involve cooperative ownership of capital by associations of private parties engaged in diverse lines of business. Stable family structures can also play a role in promoting saving.

It is often said that automation and AI will mean an end to scarcity. If that were the case, the implications for labor would be beside the point. Why would anyone care about jobs in a world without want? Of course, work might be done purely for pleasure, but that would make “labor” economically indistinguishable from leisure. Reaching that point would mean a prolonged process of falling prices, lifting real wages at a pace matching increases in productivity. But in a world without scarcity, prices must be zero, and that will never happen. Human wants are unlimited and resources are finite. We’ll use resources more productively, but we will always find new wants. And if prices are positive, including the cost of capital, it is certain that demands for labor will remain.
