Sacred Cow Chips

Tag Archives: ChatGPT

A, But Not-So-I: Altman’s Plan To Tax Wealth and Redistribute Capital

09 Tuesday Jul 2024

Posted by Nuetzel in Artificial Intelligence, Wealth Distribution, Wealth Taxes

≈ 2 Comments

Tags

Absolute Advantage, AGI, Alignment, American Equity Fund, Antitrust, ChatGPT, Chris Edwards, Comparative advantage, consumption tax, David Schizer, Defense Production Act, Direct Taxes, Inequality, Maxwell Tabarrok, Michael Munger, Michael Strain, Moore v. United States, Moore’s Law, Open AI, Patrick Hedger, Sam Altman, Scarcity, Scott Sumner, Sixteenth Amendment, Steven Calabresi, Tax Incidence, ULTRA Tax, Wealth Tax

In this case, the “A” stands for Altman. Now Sam Altman is no slouch, but he’s taken a few ill-considered positions on public policy. Altman, the CEO of OpenAI, wrote a blog post back in 2021 entitled “Moore’s Law For Everything” in which he predicted that AI will feed an explosion of economic growth. He also said AI will put a great many people out of work and drive down the price of certain kinds of labor. Furthermore, he fears that the accessibility of AI will be heavily skewed against the lowest socioeconomic classes. In later interviews (see here and here), Altman is somewhat more circumspect about those predictions, but the general outline is the same: despite exceptional growth of GDP and wealth, he envisions job losses, an underclass of AI-illiterates, and a greater degree of income and wealth inequality.

Not Quite Like That

We’ve yet to see an explosion of growth, but it’s still very early in the AI revolution. The next several years will be telling. AI holds the potential to vastly increase our production possibilities over the course of the next few decades. For that and other reasons, I don’t buy the more dismal aspects of Altman’s scenario, as my last two posts make clear (here and here).

There will be plenty of jobs for people because humans will have comparative advantages in various areas of production. AI agents might have absolute advantages across most or even all jobs, but a rational deployment would have AI agents specialize only where they have a comparative advantage.

Scarcity will not be the sort of anachronism envisioned by some AI futurists, Altman included, and scarcity of AI agents (and their inputs) will necessitate their specialization in certain tasks. The demand for AI agents will be quite high, and their energy and “compute” requirements will be massive. AI agents will face extremely high opportunity costs in other tasks, leaving many occupations open for human labor, to say nothing of abundant opportunities for human-AI collaboration.

However, I don’t dismiss the likelihood of disruptions in markets for certain kinds of labor if the AI revolution proceeds as rapidly as Altman thinks it will. Many workers would be displaced, and it would take time, training, and a willingness to adapt for them to find new opportunities. But new kinds of jobs for people will emerge with time as AI is embedded throughout the economy.

Altman’s Rx

Altman’s somewhat pessimistic outlook for human employment and inequality leads him to make a couple of recommendations:

1) Ownership of capital must be more broadly distributed.

2) Capital and land must be taxed, potentially replacing income taxes, but primarily to fund equity investments for all Americans.

Here I agree with the spirit of #1. Broad ownership of capital is desirable. It allows greater participation in the capitalist system, which fosters political and economic stability. And wider access to capital, whether owned or not, allows a greater release of entrepreneurial energy. It also diversifies incomes and reduces economic dependency.

Altman proposes the creation of an American Equity Fund (AEF) to hold the proceeds of taxes on land and corporate assets for the benefit of all Americans. I’ll get to the taxes in a moment, but in discussing the importance of educating the public on the benefits of compounding, Altman seems to imply that assets in AEF would be held in individual accounts, as opposed to a single “public” account controlled by the federal government. Individual accounts would be far preferable, but it’s not clear how much control Altman would grant individuals in managing their accounts.

To Kill a Golden Goose

Taxes on capital are problematic. Capital can only be accumulated over time by saving out of income. Thus, as Michael Munger points out, under an income tax all capital has, as a general proposition, already been taxed once. And we tax the income from capital at both the corporate and individual levels, so corporate income is double taxed: corporate profits are taxed, as are the dividends paid out to shareholders.

Altman proposed in his 2021 blog post to levy a tax of 2.5% on the market value of publicly-traded corporations each year. The tax would be payable in cash or in corporate shares to be placed into the AEF. The latter would establish a kind of UnLiquidated Tax Reserve Account (ULTRA), which Munger discusses in the article linked above (my bracketed x% in the quote here):

“Instead of taking [x%] of the liquidated value of the wealth, the state would simply take ownership of the wealth, in place. An ULTRA is a ‘notional equity interest.’ The government literally takes a portion of the value of the asset; that value will be paid to the state when the asset is sold. Now, it is only a ‘notional’ stake, in the sense that no shared right of control or voting rights exists. But for those who advocate for ULTRAs, in any situation where tax agencies are authorized to tax an asset today, but cannot because there is no evaluation event, the taxpayer could be made to pay with an ULTRA rather than with cash.”

This solves all sorts of administrative problems associated with wealth taxes, but it is draconian nevertheless. Munger quotes an example of a successful, privately-held business subject to a 2% wealth tax every year in the form of an ULTRA. After 20 years, the government owns more than a third of the company’s value. That represents a substantial penalty for success! However, the incidence of such a tax might fall more on workers and customers and less on business owners. And Altman would tax corporations more heavily than in Munger’s example.
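A quick back-of-the-envelope check shows how the government’s stake accumulates under an annual ULTRA levy. If the state takes a fraction t of the owners’ remaining interest each year, its cumulative stake after n years is given below (using the 2% rate and 20-year horizon from Munger’s example; his exact figure may differ slightly depending on the details of that example):

```latex
\text{government share after } n \text{ years} \;=\; 1 - (1 - t)^{n},
\qquad
1 - (0.98)^{20} \approx 0.33
```

Roughly a third of the firm’s value ends up in government hands, even though no cash changes hands until the asset is sold.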

A tax on wealth essentially penalizes thrift, reduces capital accumulation, and diminishes productivity and real wages. But another fundamental reason that taxes on capital should be low is that the supply of capital is elastic. A tax on capital discourages saving and encourages capital flight. The use of avoidance schemes will proliferate, and there will be intense pressure to carve out special exemptions.

A Regressive Dimension

Another drawback of a wealth tax is its regressivity with respect to returns on capital. To see this, we can convert a tax on wealth to an equivalent income tax on returns. Here is Chris Edwards on that point:

“Suppose a person received a pretax return of 6 percent on corporate equities. An annual wealth tax of 2 percent would effectively reduce that return to 4 percent, which would be like a 33 percent income tax—and that would be on top of the current federal individual income tax, which has a top rate of 37 percent.”

“… The effect is to impose lower effective tax rates on higher-yielding assets, and vice versa. If equities produced returns of 8 percent, a 2 percent wealth tax would be like a 25 percent income tax. But if equities produced returns of 4 percent, the wealth tax would be like a 50 percent income tax. People with the lowest returns would get hit with the highest tax rates, and even people losing money would have to pay the wealth tax.”
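The arithmetic behind Edwards’ comparison is simple: divide the wealth tax rate by the pretax return to get the equivalent income tax rate on that return, which is why lower-yielding assets bear the higher effective rates:

```latex
\text{equivalent income tax rate} \;=\; \frac{t_{\text{wealth}}}{r_{\text{pretax}}},
\qquad
\frac{0.02}{0.06} \approx 33\%, \quad
\frac{0.02}{0.08} = 25\%, \quad
\frac{0.02}{0.04} = 50\%
```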

Edwards notes the extreme inefficiency of wealth taxes demonstrated by the experience of a number of OECD countries. There are better ways to increase revenue and the progressivity of taxes. The best alternative is a tax on consumption, which rewards saving and capital accumulation, promoting higher wages and economic growth. Edwards dedicates a lengthy section of his paper to the superiority of a consumption tax.

Is a Wealth Tax Constitutional?

The constitutionality of a wealth tax is questionable as well. Steven Calabresi and David Schizer (C&S) contend that a federal wealth tax would qualify as a direct tax subject to the rule of apportionment, which would also apply to a federal tax on land. That is, under the U.S. Constitution, these kinds of taxes would have to be the same amount per capita in every state. Thus, higher tax rates would be necessary in less wealthy states.

C&S also note a major distinction between taxes on the value of wealth relative to income, excise, import, and consumption taxes. The latter are all triggered by transactions entered into voluntarily. They are avoidable in that sense, but not wealth taxes. Moreover, C&S believe the founders’ intent was to rely on direct taxes only as a backstop during wartime.

The recent Supreme Court decision in Moore v. United States raised questions about whether the Court had set a precedent in favor of a potential wealth tax. According to earlier precedent, the Constitution forbade the “laying of taxes” on “unrealized” income or changes in wealth. However, in Moore, the Court ruled that undistributed profits from an ownership interest in a foreign business are taxable under the mandatory repatriation tax, signed into law by President Trump in 2017 as part of his tax overhaul package. But Justice Kavanaugh, who wrote the majority opinion, stated that the ruling was based on the foreign company’s status as a pass-through entity. The Wall Street Journal says of the decision:

“Five Justices open the door to taxing unrealized gains in assets. Democrats will walk through it.”

In a brief post, Calabresi laments Justice Ketanji Brown Jackson’s expansive view of the federal government’s taxing authority under the Sixteenth Amendment, which might well be shared by the Biden Administration. But the Wall Street Journal piece also describes Kavanaugh’s admonition regarding any expectation of a broader application of the Moore opinion:

“Justice Kavanaugh does issue a warning that ‘the Due Process Clause proscribes arbitrary attribution’ of undistributed income to shareholders. And he writes that his opinion should not ‘be read to authorize any hypothetical congressional effort to tax both an entity and its shareholders or partners on the same undistributed income realized by the entity.’”

Growth Is the Way, Not Taxes

AI growth will lead to rapid improvements in labor productivity and real wages in many occupations, despite a painful transition for some workers requiring occupational realignment and periods of unemployment and training. However, people will retain comparative advantages over AI agents in a number of existing occupations. Other workers will find that AI allows them to shift their efforts toward higher-value or even new aspects of their jobs. Along the same lines, there will be a huge variety of new occupations made possible by AI of which we’re only now catching the slightest glimpse. Michael Strain has emphasized this aspect of technological diffusion, noting that 60% of the jobs performed in 2018 did not exist in 1940. In fact, few of those “new” jobs could have been imagined in 1940.

AI entrepreneurs and AI investors will certainly capture a disproportionate share of gains from an AI revolution. Of course, they’ll have created a disproportionate share of that wealth. It might well skew the distribution of wealth in their favor, but that does not reflect negatively on the market process driving the outcome, especially because it will also give rise to widespread gains in living standards.

Altman goes wrong in proposing tax-funded redistribution of equity shares. Those taxes would slow AI development and deployment, reduce economic growth, and produce fewer new opportunities for workers. The surest way to effect a broader distribution of equity capital, and of equity in AI assets, is to encourage innovation, economic growth, and saving. Taxing capital more heavily is a very bad way to do that, whether from heavier taxes on income from capital, new taxes on unrealized gains, or (worst of all) from taxes on the value of capital, including ULTRA taxes.

Altman is right, however, to bemoan the narrow ownership of capital. As I mentioned above, he’s also on-target in saying that most people do not fully appreciate the benefits of thrift and the miracle of compounding. That reflects both a failure of education and our calamitously high rate of time preference as a society. Perhaps the former can be fixed! However, thrift is a decision best left in private hands, especially to the extent that AI stimulates rapid income growth.

Killer Regulation

Altman also supports AI regulation, and I’ll cut him some slack by noting that his motives might not be of the usual rent-seeking variety. Maybe. Anyway, he’ll get some form of his wish, as legislators are scrambling to draft a “roadmap” for regulating AI. Some are calling for billions of federal outlays to “support” AI development, with a likely and ill-advised effort to “direct” that development as well. That is hardly necessary given the level of private investment AI is already attracting. Other “roadmap” proposals call for export controls on AI and protections for the film and recording industries.

These proposals are fueled by fears about AI, which run the gamut from widespread unemployment to existential risks to humanity. Considerable attention has been devoted to the alignment of AI agents with human interests and well-being, but this has emerged largely within the AI development community itself. There are many alignment optimists, however, and still others who decry any race between tech giants to bring superhuman generative AI to market.

The Biden Administration stepped in last fall with an executive order on AI under emergency powers established by the Defense Production Act. The order ranges more broadly than national defense might necessitate, and it could have damaging consequences. Much of the order is redundant with respect to practices already followed by AI developers. It requires federal oversight of all so-called “foundation models” (e.g., ChatGPT), including safety tests and other “critical information”. These requirements are to be followed by the establishment of additional federal safety standards. This will almost certainly hamstring investment and development of AI, especially by smaller competitors.

Patrick Hedger discusses the destructive consequences of attempts to level the competitive AI playing field via regulation and antitrust actions. Traditionally, regulation tends to entrench large players who can best afford heavy compliance costs and influence regulatory decisions. Antitrust actions also impose huge costs on firms and can result in diminished value for investors in AI start-ups that might otherwise thrive as takeover targets.

Conclusion

Sam Altman’s vision of funding a redistribution of equity capital via taxes on wealth suffers from serious flaws. For one thing, it seems to view AI as a sort of exogenous boon to productivity, wholly independent of investment incentives. Taxing capital would inhibit investment in new capital (and in AI), diminish growth, and thwart the very goal of broad ownership Altman wishes to promote. Any effort to tax capital at a global level (which Altman supports) is probably doomed to failure, and that’s a good thing. The burden of taxes on capital at the corporate level would largely be shifted to workers and consumers, pushing real wages down and prices up relative to market outcomes.

Low taxes on income and especially on capital, together with light regulation, promote saving, capital investment, economic growth, higher real wages, and lower prices. For AI, like all capital investment, public policy should focus on encouraging “aligned” development and deployment of AI assets. A consumption tax would be far more efficient than wealth or capital taxes in that respect, and more effective in generating revenue. Policies that promote growth are the best prescription for broadening the distribution of capital ownership.

The Scary Progress and Hairy Promise of AI

18 Tuesday Apr 2023

Posted by Nuetzel in Artificial Intelligence, Existential Threats, Growth

≈ Leave a comment

Tags

Agentic Behavior, AI Bias, AI Capital, AI Risks, Alignment, Artificial Intelligence, Ben Hayum, Bill Gates, Bryan Caplan, ChatGPT, Clearview AI, Dumbing Down, Eliezer Yudkowsky, Encryption, Existential Risk, Extinction, Foom, Fraud, Generative Intelligence, Greta Thunberg, Human capital, Identity Theft, James Pethokoukis, Jim Jones, Kill Switch, Labor Participation Insurance, Learning Language Models, Lesswrong, Longtermism, Luddites, Mercatus Center, Metaculus, Nassim Taleb, Open AI, Over-Employment, Paul Ehrlich, Pause Letter, Precautionary Principle, Privacy, Robert Louis Stevenson, Robin Hanson, Seth Herd, Synthetic Media, TechCrunch, TruthGPT, Tyler Cowen, Universal Basic Income

Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. As with many new technologies, however, many find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.

Leaps and Bounds

The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.

Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!

As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.

Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.

Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model having artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, which is often described as the “foom” event.
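To make the idea of recursive, “agentic” optimization concrete, here is a minimal sketch of the propose-evaluate-refine loop described above. Everything in it (the function names, the scoring rule, the numeric “plan”) is a hypothetical stand-in for illustration, not any actual model’s behavior or API:

```python
import random

def propose(plan):
    """Hypothetical stand-in for a model generating a revised plan."""
    return plan + random.uniform(-1.0, 1.0)

def score(plan, goal):
    """Hypothetical stand-in for evaluating progress toward the goal."""
    return -abs(goal - plan)

def agentic_loop(goal, plan=0.0, max_steps=10_000, tolerance=0.01):
    """Toy recursive-optimization loop: propose, evaluate, keep improvements."""
    best, best_score = plan, score(plan, goal)
    for _ in range(max_steps):
        if best_score >= -tolerance:        # "good enough" stopping rule
            break
        candidate = propose(best)
        candidate_score = score(candidate, goal)
        if candidate_score > best_score:    # keep only improvements
            best, best_score = candidate, candidate_score
    return best

print(agentic_loop(goal=42.0))              # converges to roughly 42
```

Real agentic scaffolds substitute a language model for the propose step and external tools or queries for the evaluation step, but the loop structure (generate, score, keep what works, repeat) is roughly the same.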

Team Uh-Oh

Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.

As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.

The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.

Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries that are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. Regulation of the evolution of AI will likely fail. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if it “succeeds”, it will leave us with a technology that will fall short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.

Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of alignment issues.

Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unalignment (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models easily get off track, lapsing into indeterminacy when attempting to optimize toward an objective.

But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.

White Collar Wipeout

Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:

“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”

A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!

Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.

Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or for blue collar workers who are vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). This treatment of Hanson’s idea will be inadequate, but he suggests a kind of insurance or contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. Sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
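To get a feel for why these “worker contracts” would be cheap when the market judges the risk to be low, here is a sketch of an actuarially fair price under a single-period, risk-neutral setup. The numbers and the simple payoff structure are my own assumptions for illustration, not Hanson’s specification:

```python
def fair_contract_price(asset_value, prob_job_loss_event, discount_rate, years):
    """Expected discounted payout to the worker if the defined job-loss event
    occurs (single payoff at maturity, risk-neutral pricing; a simplification
    of Hanson's proposal, not his specification)."""
    discount_factor = (1 + discount_rate) ** -years
    return asset_value * prob_job_loss_event * discount_factor

# Hypothetical example: a $10,000 asset is paid to the worker if AI-driven job
# losses hit the defined threshold within 10 years, priced with a
# market-implied 5% probability of that event and a 3% discount rate.
print(round(fair_contract_price(10_000, 0.05, 0.03, 10), 2))   # ~372.05
```

At a 5% implied probability the worker-side contract costs only a few hundred dollars per $10,000 of coverage; the price rises one-for-one with the market’s assessment of the job-loss risk.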

The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.

Human Capital

Current incarnations of AI are not just a threat to employment. One might add the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Essentially allowing machines to do all the thinking, research, and planning won’t redound to the cognitive strength of the human race, especially over several generations. Already people suffer from an inability to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not-too-distant past. In other words, AI could exacerbate a process of “dumbing down” the populace, a rather undesirable prospect.

Fraud and Privacy

AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.

AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting specific portions of the on-line terrain upon which it is trained, including skewed model weights applied to information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.

The Sky-Already-Fell Crowd

Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists to supporters of the “Pause Letter”:

“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”

So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps for the sovereign power of states to allow a laissez-faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation between AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and competition could bring us better solutions as other dilemmas posed by AI reveal themselves.

Imagining AI Catastrophes

What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so by tricking us into doing something stupid, by hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!

The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.

Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Humanity could also be exploited as an input to an AGI’s optimization problem in ways that are dangerous. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?

Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.

An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.

Yudkowsky offers at least one fairly concrete example of existential AGI risk:

“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled ahead of steps to trigger a meltdown. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.

Back To Earth

Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, I’m inclined to view the AI doomsters as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.

Ben Hayum is very concerned about the dangers of AI, but writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.

James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.

In another piece, Robin Hanson expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:

“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”

As for the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:

“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”

As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.

The kill switch is a favorite response of those who think AGI fears are exaggerated. Just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in an AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control is incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.
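In software terms, a kill switch is just an external monitor that halts the system whenever behavior strays outside preset bounds. A toy sketch (the action names and the whitelist check are hypothetical, purely for illustration) might look like this:

```python
class KillSwitch(Exception):
    """Raised by the monitor to halt the AI process."""

def monitor(action, allowed_actions):
    """Hypothetical external check: halt on any action outside the approved set."""
    if action not in allowed_actions:
        raise KillSwitch(f"aberrant action blocked: {action!r}")

def run_agent(actions, allowed_actions):
    """Run a sequence of (hypothetical) agent actions, each vetted by the monitor."""
    for action in actions:
        monitor(action, allowed_actions)
        print(f"executing: {action}")

try:
    run_agent(
        ["summarize report", "send email", "alter own instructions"],
        allowed_actions={"summarize report", "send email"},
    )
except KillSwitch as stop:
    print(f"shutdown: {stop}")
```

The catch, as noted above, is that the monitor must sit outside the AI’s reach and remain incorruptible.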

One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.

Letting It Roll

There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well-being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with pauses, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.

Nevertheless, AI issues are complex for all private and public institutions. Without doubt, AI will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out the issues at a high level.

Fix TikTok? Or Nix It? The Authoritarian RESTRICT Act

08 Saturday Apr 2023

Posted by Nuetzel in anti-Semitism, Big Government, Liberty, Technology

≈ 1 Comment

Tags

AI, Artificial Intelligence, Attention Span, ByteDance, CATO Institute, Caveat Emptor, ChatGPT, Community Standards, Data Privacy, Elon Musk, First Amendment, Free Speech, Hate Speech, L. Frank Baum, Munger Test, National Security, Open Source, PATRIOT Act, People’s Republic of China, Philip Hamburger, Protectionism, RESTRICT Act, Scott Lincicome, Separation of Powers, The Land of Oz, TikTok, Twitter

There’s justifiable controversy surrounding TikTok, the social media app. I find much to dislike about TikTok but also much to dislike about the solutions some have proposed, such as a complete ban on the app in the United States. Such proposals would grant the federal executive branch powers that most of us wouldn’t grant to our worst enemy (i.e., they fail the “Munger test”).

Congressional Activity

The proposed RESTRICT Act (Restricting the Emergence of Security Threats that Risk Information and Communications Technology) is a bipartisan effort to eliminate the perceived threats to national security posed by technologies like TikTok. That would include a ban on the app. Proponents of a ban go further than national security concerns, arguing that TikTok represents a threat to the health and productivity of users. However, an outright ban on the app would be a drastic abridgment of free speech rights, and it would limit Americans’ access to a popular platform for creativity and entertainment. In addition, the proposed legislation would authorize intrusions into the privacy of Americans and extend new executive authority into the private sphere, such as tampering with trade and commerce in ways that could facilitate protectionist actions. In fact, so intrusive is the RESTRICT Act that it’s been called a “Patriot Act for the digital age.” From Scott Lincicome and several coauthors at CATO:

“… the proposal—at least as currently written—raises troubling and far‐reaching concerns for the First Amendment, international commerce, technology, privacy, and separation of powers.”

Bad Company

TikTok is owned by a Chinese company, ByteDance, and there is understandable concern about the app’s data collection practices and the potential for the Chinese government to access user data for nefarious purposes. The Trump administration cited these concerns when it attempted to ban TikTok in 2020, and while the ban was ultimately blocked by a federal judge, the Biden administration has also expressed concerns about the app’s data security.

TikTok has also been accused of promoting harmful content, including hate speech, misinformation, and sexually explicit material. Critics argue that the app’s algorithm rewards provocative and controversial content, which can lead to the spread of harmful messages and the normalization of inappropriate behavior. Of course, those are largely value judgements, including labels like “provocative”, “inappropriate”, and many interpretations of content as “hate speech”. With narrow exceptions, such content is protected under the First Amendment.

Unlike L. Frank Baum’s Tik-Tok machine in the land of Oz, the TikTok app might not always qualify as a “faithful servant”. There are some well-founded health and performance concerns related to TikTok, however. Some experts have expressed reservations about the effects of the app on attention span. The short-form videos typical of TikTok, and endless scrolling, suggest that the app is designed to be addictive, though I’m not aware of studies that purport to prove its “addictive nature”. Of course, it can easily become a time sink for users, but so can almost all social media platforms. Nevertheless, some experts contend that heavy use of TikTok may lead to a decrease in attention span and an increase in distraction, which can have negative implications for productivity, learning, and mental health.

Bad Government

The RESTRICT Act, or a ban on TikTok, would drastically violate free speech rights and limit Americans’ access to a popular platform for creativity and self-expression. TikTok has become a cultural phenomenon, with millions of users creating and sharing content on the app every day. This is particularly true of more youthful individuals, who are less likely to be persuaded by their elders’ claims that the content available on TikTok is “inappropriate”. And they’re right! At the very least, “appropriateness” depends on an individual’s age, and it is generally not an area over which government should have censorship authority, “community standards” arguments notwithstanding. Furthermore, allowing access for children is a responsibility best left in the hands of parents, not government.

Likewise, businesses should be free to operate without undue interference from government. The RESTRICT Act would violate these principles, as it would limit individual choice and potentially harm innovation within the U.S. tech industry.

A less compelling argument against banning TikTok is that it could harm U.S.-China relations and have broader economic consequences. China has already warned that a TikTok ban could prompt retaliation, and such a move could escalate tensions between the two countries. That’s all true to one degree or another, but China has already demonstrated a willingness and intention to harm U.S.-China relations. As for economic repercussions, do business with China at your own risk. According to this piece, U.S. investment in the PRC’s tech industry has fallen by almost 80% since 2018, so the private sector is already taking strong steps to reduce that risk.

Like it or not, however, many software companies are subject to at least partial Chinese jurisdiction. This means the RESTRICT Act would do far more than simply ban TikTok in the U.S. First, it would subject on-line activity to much greater scrutiny. Second, it would threaten users of a variety of information or communications products and services with severe penalties for speech deemed to be “unsafe”. According to Columbia Law Professor Philip Hamburger:

“Under the proposed statute, the commerce secretary could therefore take ‘any mitigation measure to address any risk’ arising from the use of the relevant communications products or services, if the secretary determines there is an ‘undue or unacceptable risk to the national security of the United States or the safety of United States persons.’

We live in an era in which dissenting speech is said to be violence. In recent years, the Federal Bureau of Investigation has classified concerned parents and conservative Catholics as violent extremists. So when the TikTok bill authorizes the commerce secretary to mitigate communications risks to ‘national security’ or ‘safety,’ that means she can demand censorship.”

A Lighter Touch

The RESTRICT Act is unreasonably broad and intrusive and an outright ban of TikTok is unnecessarily extreme. There are less draconian alternatives, though all may involve some degree of intrusion. For example, TikTok could be compelled to allow users to opt out of certain types of data collection, and to allow independent audits of its data handling practices. TikTok could also be required to store user data within the U.S. or in other countries that have strong data privacy laws. While this option would represent stronger regulation of TikTok, it could also be construed as strengthening the property rights of users.

To address concerns about TikTok’s ownership by a Chinese company, its U.S. operations could be required to partner with a U.S. company. Perhaps this could be satisfied by allowing a U.S. company to acquire a stake in TikTok, or by having TikTok spin off its U.S. operations into a separate company that is majority-owned by a U.S. entity.

Finally, perhaps political or regulatory pressure could persuade TikTok to switch to using open-source software, as Elon Musk has done with Twitter’s recommendation algorithm. Then, independent developers would have the ability to audit code and identify security vulnerabilities or suspicious data handling practices. From there, it’s a matter of caveat emptor.

Restrain the Restrictive Impulse

The TikTok debate raises important questions about the role of government in regulating technology and free speech. Rather than impulsively harsh legislation like the RESTRICT Act or an outright ban on TikTok, an enlightened approach would encourage transparency and competition in the tech industry. That, in turn, could help address concerns about data security and promote innovation. Additionally, individuals should take personal responsibility for their use of technology by being mindful of the content they consume and what they reveal about themselves on social media. That includes parental responsibility and supervision of the use of social media by children. Ultimately, the TikTok debate highlights tensions between national security, technological innovation, and individual liberty, and it’s important to find a balance that protects all three.

Note: The first draft of this post was written by ChatGPT, based on an initial prompt and sequential follow-ups. It was intended as an experiment in preparation for a future post on artificial intelligence (AI). While several vestiges of the first draft remain, what appears above bears little resemblance to what ChatGPT produced. There were many deletions, rewrites, and supplements in arriving at the final draft.

My first impression of the ChatGPT output was favorable. It delineated a few of the major issues surrounding a TikTok ban, but later I was struck by its repetition of bland generalities and its lack of information on more recent developments like the RESTRICT Act. The latter shortfall was probably due to my use of ChatGPT 3.5 rather than 4.0. On the whole, the exercise was fascinating, but I will limit my use of AI tools like ChatGPT to investigation of background on certain questions.
