Every now and then I grind my axe against the proposition that AI will put humans out of work. It’s a very fashionable view, along with the presumed need for government to impose “robot taxes” and provide everyone with a universal basic income for life. The thing is, I sense that my explanations for rejecting this kind of narrative have been a little abstruse, so I’m taking another crack at it now.
Will Human Workers Be Obsolete?
The popular account envisions a world in which AI replaces not just white-collar technocrats but, paired with advanced robotics, workers in the trades and manual laborers as well. We’ll have machines that cure, litigate, calculate, forecast, design, build, fight wars, make art, fix your plumbing, prune your roses, and replicate. They’ll be highly dexterous, strong, and smart, capable of solving problems both practical and abstract. In short, AI capital will be able to do everything better and faster than humans! The obvious fear is that we’ll all be out of work.
I’m here to tell you it will not happen that way. There will be disruptions to the labor market, extended periods of joblessness for some individuals, and ultimately different patterns of employment. However, the chief problem with the popular narrative is that AI capital will require massive quantities of resources to produce, train, and operate.
Even without robotics, today’s AIs require vast flows of energy and other resources, and that includes a tremendous amount of expensive compute. The needed resources are scarce and highly valued in a variety of other uses. We’ll face tradeoffs as a society and as individuals in allocating resources both to AI and across various AI applications. Those applications will have to compete broadly and amongst themselves for priority.
AI Use Cases
There are many high-value opportunities for AI and robotics, such as industrial automation, customer service, data processing, and supply chain optimization, to name a few. These are already underway to a significant extent. To that, however, we can add medical research, materials research, development of better power technologies and energy storage, and broad deployment in delivering services to consumers and businesses.
In the future, with advanced robotics, AI capital could be deployed in domains that carry high risks for human labor, such as the construction of high-rise buildings and underwater structures, or rescue operations. This might include such things as construction of solar platforms and large transports in space, or the preparation of space habitats for humans on other worlds.
Scarcity
There is no end to the list of potential applications of AI, but neither is there an end to the list of potential wants and aspirations of humanity. Human wants are insatiable, which sometimes provokes ham-fisted efforts by many governments to curtail growth. We have a long way to go before everyone on the planet lives comfortably. But even then, people’s needs and desires will evolve once previous needs are satisfied, or as technology changes lifestyles and practices. New approaches and styles drive fashions and aesthetics generally. There are always individuals who will compete for resources to experiment and to try new things. And the insatiability of human wants extends beyond the strictly private level. Everyone has an opinion about unsatisfied needs in the public sphere, such as infrastructure, maintenance, the environment, defense, space travel, and other dimensions of public activity.
Futurists have predicted that the human race will seek to become a so-called Type I civilization, capable of harnessing all of the energy on our planet. Then there will be the quest to harness all the energy within our solar system (a Type II civilization). Ultimately, we’ll seek to go beyond that by attempting to exploit all the energy in the Milky Way galaxy. Such an expansion of our energy demands would demonstrate how our wants always exceed the resources we have the ability to exploit.
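For those who like to see the arithmetic, the scale behind these labels (usually attributed to Kardashev, with Sagan’s interpolating formula) can be written down explicitly. The figures below are the commonly cited approximations, not anything taken from the futurists I mentioned:

```latex
% Sagan's continuous version of the Kardashev scale, with P = usable power in watts:
K = \frac{\log_{10} P - 6}{10}
% Type I   (planetary): P \approx 10^{16}\ \mathrm{W}
% Type II  (stellar):   P \approx 10^{26}\ \mathrm{W}
% Type III (galactic):  P \approx 10^{36}\ \mathrm{W}
```

Humanity currently consumes something on the order of 20 terawatts (about 2 × 10^13 W), which works out to roughly K ≈ 0.73 on this scale, still a small fraction of even the planetary benchmark. The gap gives a sense of how far wants can outrun the resources we are presently able to exploit.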
In other words, scarcity will always be with us. The necessity of facing tradeoffs won’t ever be obviated, and prices will always remain positive. The question of dedicating resources to any particular application of AI will bring tradeoffs into sharper relief. The opportunity cost of many “lesser” AI and robotics applications will be quite high relative to their value to investors. Simply put, many of those applications will be rejected because there will be better uses for the requisite energy and other resources.
Tradeoffs
Again, it will be impossible for humans to accomplish many of the tasks that AIs will perform, or to match the sheer productivity of AIs in doing so. Therefore, AI will have an absolute advantage over humans in all of those tasks.
However, there are many potential applications of AI that are of comparatively low value. These include a variety of low-skill tasks, but also tasks that require some dexterity or continuous judgement and adjustment. Operationalizing AI and robots to perform all these tasks, and diverting the necessary capital and energy away from other uses, would have a tremendously high opportunity cost. Human opportunity costs will not be so high. Thus, people will have a comparative advantage in performing the bulk if not all of these tasks.
Sure, there will be novelty efforts and test cases to train robots to do plumbing or install burglar alarm systems, and at some point buyers might wish to have robots prune their roses. Some people are already amenable to having humanoid robots perform sex work. Nevertheless, humans will remain competitive at these tasks due to the comparatively high opportunity costs faced by AI capital.
There will be many other domains in which humans will remain competitive. Once more, that’s because the opportunity costs for AI capital and other resources will be high. This includes many of the skilled trades, caregiving roles, and a great many management functions, especially at small companies. Productivity in these jobs will be enhanced by AI tools, but the jobs themselves will not be decimated.
The key here is understanding that 1) capital and resources generally are scarce; 2) high value opportunities for AI are plentiful; and 3) the opportunity cost of funding AI in many applications will be very high. Humans will still have a comparative advantage in many areas.
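To make the comparative advantage point concrete, here is a minimal numeric sketch. The output rates are hypothetical round numbers of my own choosing, not estimates from anything above; the only point is the textbook arithmetic: even when AI capital is absolutely more productive at everything, the tasks it should take on are determined by opportunity cost.

```python
# Hypothetical productivities per unit of scarce input (compute and energy for
# the AI, an hour of labor for the human). The numbers are illustrative only.
ai_output = {"drug_discovery": 100.0, "rose_pruning": 50.0}
human_output = {"drug_discovery": 1.0, "rose_pruning": 5.0}

def opportunity_cost(output, task, alternative):
    """Units of the alternative task given up to produce one unit of `task`."""
    return output[alternative] / output[task]

for agent, output in [("AI", ai_output), ("human", human_output)]:
    cost = opportunity_cost(output, "rose_pruning", "drug_discovery")
    print(f"{agent}: pruning one rose bush costs {cost:.2f} units of drug discovery")

# AI: pruning one rose bush costs 2.00 units of drug discovery
# human: pruning one rose bush costs 0.20 units of drug discovery
#
# The AI is absolutely better at both tasks, but its opportunity cost of pruning
# is ten times the human's, so the low-value task is left to human labor.
```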
Who’s the Boss?
There are still other ways in which human labor will always be required. One in particular involves the often complementary nature of AI and human inputs. People will have roles in instructing and supervising AIs, especially in tasks requiring customization and feedback. A key to assuring AI alignment with the objectives of almost any pursuit is human review. These kinds of roles are likely to be compensated in line with the complexity of the task. This extends to the necessity of human leadership of any organization.
That brings me to the subject of agentic and fully autonomous AI. No matter how sophisticated they get, AIs will always be the product of machines. They’ll be a kind of capital for which ownership should be confined to humans or organizations representing humans. We must be their masters. Disclaiming ownership and control of AIs, and granting agentic AIs the same rights and freedoms as people (as many have imagined), would be unnecessary and possibly dangerous. AIs will do much productive work, but that work should be on behalf of human owners, and human labor will be deployed to direct and assess that work.
AIs (and People) Needing People
The collaboration between AIs and humans described above will manifest more broadly than anything task-specific, or anything we can imagine today. This is typical of technological advance. First-order effects often include job losses as new innovations enhance productivity or replace workers outright, but typically new jobs are created as innovations generate new opportunities for complementary products and services, both upstream in production and downstream among ultimate users. In the case of AI, while much of this work might be performed by other AIs, at a minimum these changes will require guidance and supervision by humans.
In addition, consumers tend to have an aesthetic preference for goods and services produced by humans: craftsmen, artists, and entertainers. For example, if you’ve ever shopped for an oriental rug, you know that hand-knotted rugs are more expensive than machine-woven rugs. Durability is a factor as well as uniqueness, the latter being a hallmark of human craftspeople. AI might narrow these differences over time, but the “human touch” will always have value relative to “comparable” AI output, even at a significant disadvantage in speed and consistency of performance. The same is true of many other forms of performance and expression, such as sports, dance, music, and the visual arts. People prefer to be entertained by talented people, rather than highly-engineered machines. The “human touch” also has advantages in customer-facing transactions, including most forms of service and high-level sales/financial negotiations.
Owning the Machines
Finally, another word about AI ownership. An extension of the fashionable narrative that AIs will wholly replace human workers is that government will be called upon to tax AI and provide individuals with a universal basic income (UBI). Even if human labor were to be replaced by AIs, I believe that a “classic” UBI would be the wrong approach. Instead, all humans should have an ownership stake in the capital stock. This is wealth that yields compound growth over time and produces returns that make humans less reliant on streams of labor income.
Savings incentives (and negative consumption incentives) are a big step in encouraging more widespread ownership of capital. However, if direct intervention is necessary, early endowments of capital would be far preferable to a UBI because they will largely be saved, fostering economic growth, and they would create better incentives than a UBI. Along those lines, President Trump’s Big Beautiful Bill, which is now law, has established “Baby Bonds” for all American children born in 2025 – 2028, initially funded by the federal government with $1,000. Of course, this is another unfunded federal obligation on top of the existing burden of a huge public debt and ongoing deficits. Given my doubts about the persistence of AI-induced job losses, I reject government establishment of both a UBI and universal endowments of capital.
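For a rough sense of what even a small early endowment can become, here is a quick compounding sketch. The return assumptions and the 60-year horizon are mine, purely for illustration, and are not drawn from the bill or anything above:

```python
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Value of a lump-sum endowment compounded annually with returns reinvested."""
    return principal * (1.0 + annual_return) ** years

endowment = 1_000.0  # initial federal contribution per child
for rate in (0.05, 0.07):  # assumed real annual returns (illustrative only)
    value = future_value(endowment, rate, years=60)
    print(f"$1,000 at {rate:.0%} real for 60 years -> ${value:,.0f}")

# $1,000 at 5% real for 60 years -> $18,679
# $1,000 at 7% real for 60 years -> $57,946
```

Small as the initial sums are, the point is that an endowment held and reinvested becomes a stake in the capital stock, whereas a UBI check is simply consumed.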
Summary
Capital and energy are scarce, so the tremendous resource requirements of AI and robotics mean that the real-world opportunity costs of many AI applications will remain impractically high. The tradeoffs will be so steep that they’ll leave humans with comparative advantages in many traditional areas of employment. Partly, these will come down to a difference in perceived quality owing to a preference for human interaction and human performance in a variety of economic interactions, including patronage of the art and athleticism of human beings. In addition, AIs will open up new occupations never before contemplated. We won’t be out of work. Nevertheless, it’s always a good idea to accumulate ownership in productive assets, including AI capital, and public policy should do a better job of supporting the private initiative to do so.
Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. As with many new technologies, however, many find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.
Leaps and Bounds
The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.
Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!
As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.
Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.
Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model having artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, which is often described as the “foom” event.
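For readers who want a picture of what “recursively optimizing toward a goal” means mechanically, here is a deliberately toy sketch of the loop such systems run: propose candidates, score them, keep the best, repeat. Everything in it (the scoring function, the perturbation step, the stopping rule) is a stand-in I’ve invented for illustration; real agentic systems wrap a language model in this kind of loop with far richer actions and feedback.

```python
import random

def score(plan: list[float], goal: float) -> float:
    """Toy objective: how close the plan's total comes to the specified goal."""
    return -abs(sum(plan) - goal)

def propose_variants(plan: list[float], n: int = 8) -> list[list[float]]:
    """Generate candidate revisions by perturbing the current plan."""
    return [[x + random.uniform(-1, 1) for x in plan] for _ in range(n)]

def agentic_loop(goal: float, steps: int = 50) -> list[float]:
    plan = [0.0, 0.0, 0.0]  # initial "plan"
    for _ in range(steps):
        candidates = propose_variants(plan) + [plan]
        plan = max(candidates, key=lambda p: score(p, goal))  # keep the best
        if score(plan, goal) > -0.01:  # close enough to the goal: stop
            break
    return plan

best = agentic_loop(goal=10.0)
print(sum(best))  # something close to 10.0
```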
Team Uh-Oh
Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.
As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.
The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.
Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries who are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. Attempts to regulate the evolution of AI will likely fail. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if regulation “succeeds”, it will leave us with a technology that will fall short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.
Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of the alignment problem.
Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unaligned AI (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models easily run off track into indeterminacy when attempting to optimize toward an objective.
But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.
White Collar Wipeout
Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:
“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”
A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!
Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.
Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or by blue collar workers who are vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). My treatment of Hanson’s idea here will be inadequate, but he suggests a kind of insurance or contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. It sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
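To make the mechanics a bit more tangible, here is a minimal pricing sketch. It treats the contract as a simple binary claim on the underlying assets, which is my simplification of Hanson’s idea, and the probability, asset value, and discount rate are invented for illustration:

```python
def leg_prices(asset_value: float, p_jobloss: float, discount_rate: float, years: float):
    """Split one unit of the underlying asset into two claims: the worker leg pays
    out if the defined aggregate job-loss event occurs, the investor leg pays out
    otherwise. Prices are discounted expected payouts."""
    discount = 1.0 / (1.0 + discount_rate) ** years
    worker_leg = p_jobloss * asset_value * discount
    investor_leg = (1.0 - p_jobloss) * asset_value * discount
    return worker_leg, investor_leg

# Illustrative numbers only: $100,000 of AI-insensitive assets, a 10% market-implied
# chance of the job-loss trigger within 10 years, and a 3% annual discount rate.
worker, investor = leg_prices(100_000, p_jobloss=0.10, discount_rate=0.03, years=10)
print(f"worker contract: ${worker:,.0f}, investor contract: ${investor:,.0f}")
# prints roughly: worker contract: $7,441, investor contract: $66,968
```

Consistent with the description above, the worker leg is cheap precisely when the market judges the job-loss event to be unlikely.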
The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.
Human Capital
Current incarnations of AI are not just a threat to employment. One might add the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Allowing machines to do essentially all of the thinking, research, and planning won’t do much for the cognitive strength of the human race, especially over several generations. Already, people suffer from an inability to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not-too-distant past. In other words, AI could exacerbate a process of “dumbing down” the populace, a rather undesirable prospect.
Fraud and Privacy
AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.
AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting the particular portions of the online terrain on which it is trained, as well as model weights skewed toward information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.
The Sky-Already-Fell Crowd
Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists to supporters of the “Pause Letter”:
“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”
So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps against the sovereign power of states to allow a laissez faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation among AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and competition could bring us better solutions as other dilemmas posed by AI reveal themselves.
Imagining AI Catastrophes
What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so either by tricking us into doing something stupid, hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!
The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.
Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Humanity could also be an aid to solving an AGI’s optimization problem in ways that are dangerous for us. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?
Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.
An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.
Yudkowsky offers at least one fairly concrete example of existential AGI risk:
“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled before triggering meltdowns. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.
Back To Earth
Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, I’m inclined to view the AI doomsters as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.
Ben Hayum is very concerned about the dangers of AI, but, writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.
James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.
In another piece, Robin Hanson expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:
“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”
As for the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:
“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”
As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.
The kill switch is a favorite response of those who think AGI fears are exaggerated. Just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in the AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control are incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.
One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.
Letting It Roll
There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well-being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with pauses, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.
Nevertheless, AI issues are complex for all private and public institutions. Without doubt, AI will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out the issues at a high level.