Sacred Cow Chips

Tag Archives: Paul Ehrlich

The Scary Progress and Hairy Promise of AI

18 Tuesday Apr 2023

Posted by Nuetzel in Artificial Intelligence, Existential Threats, Growth


Tags

Agentic Behavior, AI Bias, AI Capital, AI Risks, Alignment, Artificial Intelligence, Ben Hayum, Bill Gates, Bryan Caplan, ChatGPT, Clearview AI, Dumbing Down, Eliezer Yudkowsky, Encryption, Existential Risk, Extinction, Foom, Fraud, Generative Intelligence, Greta Thunberg, Human capital, Identity Theft, James Pethokoukis, Jim Jones, Kill Switch, Labor Participation Insurance, Learning Language Models, Lesswrong, Longtermism, Luddites, Mercatus Center, Metaculus, Nassim Taleb, Open AI, Over-Employment, Paul Ehrlich, Pause Letter, Precautionary Principle, Privacy, Robert Louis Stevenson, Robin Hanson, Seth Herd, Synthetic Media, TechCrunch, TruthGPT, Tyler Cowen, Universal Basic Income

Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. As with many new technologies, however, many find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.

Leaps and Bounds

The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.

Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!

As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.

Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.

Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model having artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, an event often described as “foom”.

Team Uh-Oh

Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.

As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.

The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.

Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries who are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. Regulation of the evolution of AI will likely fail. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if regulation succeeds, it will leave us with a technology that falls short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.

Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of alignment issues.

Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unalignment (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models get easily off-track into indeterminacy when attempting to optimize toward an objective.

But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.

White Collar Wipeout

Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:

“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”

A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!

Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.

Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or by blue collar workers vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). My treatment of Hanson’s idea here is necessarily brief, but he suggests a kind of insurance contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job-loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. It sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
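The pricing logic behind such contracts can be sketched in a few lines of Python. This is a hypothetical, risk-neutral illustration of the mechanism as described above, not Hanson’s actual proposal; the function name and dollar figures are my own:

```python
def contract_prices(asset_value: float, p_event: float) -> tuple[float, float]:
    """Split claims on a safe asset between 'worker' and 'investor' sides.

    The worker-side claim pays asset_value if the defined aggregate
    job-loss event occurs; the investor-side claim pays otherwise.
    Under risk-neutral pricing, each claim is worth its payout probability
    times the asset's value, so the two prices sum to asset_value.
    """
    worker_price = p_event * asset_value
    investor_price = (1.0 - p_event) * asset_value
    return worker_price, investor_price

# If the market judges the job-loss event to have only a 10% probability,
# the worker-side claim on a $100 asset is cheap: roughly $10, versus $90
# for the investor side.
worker, investor = contract_prices(100.0, 0.10)
```

This illustrates why the worker contracts would be cheap when the market deems mass displacement unlikely: the worker’s premium scales directly with the assessed probability of the event.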

The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.

Human Capital

Current incarnations of AI are not just a threat to employment. There is also the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Allowing machines to do essentially all of our thinking, research, and planning will do nothing for the cognitive strength of the human race, especially over several generations. Already people suffer from an inability to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not too distant past. In other words, AI could accelerate a process of “dumbing down” the populace, a rather undesirable prospect.

Fraud and Privacy

AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.

AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting specific portions of the on-line terrain upon which it is trained, including skewed model weights applied to information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.

The Sky-Already-Fell Crowd

Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists to supporters of the “Pause Letter”:

“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”

So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps against the sovereign power of states to allow a laissez faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation between AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and it could bring us better solutions as other dilemmas posed by AI reveal themselves.

Imagining AI Catastrophes

What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so either by tricking us into doing something stupid, hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!

The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.

Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Humanity could also be an aid to solving an AGI’s optimization problem in ways that are dangerous. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?

Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.

An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.

Yudkowsky offers at least one fairly concrete example of existential AGI risk:

“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled before steps are taken to trigger meltdowns. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.

Back To Earth

Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, the AI doomsters strike me as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.

Ben Hayum is very concerned about the dangers of AI, but writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.

James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.

In another piece, Robin Hanson expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:

“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”

In the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:

“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”

As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.

The kill switch is a favorite response of those who think AGI fears are exaggerated: just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in the AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control are incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.

One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.

Letting It Roll

There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with pauses, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.

Nevertheless, AI issues are complex for all private and public institutions. Without doubt, it will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out issues at a high-level.

Grow Or Collapse: Stasis Is Not a Long-Term Option

18 Wednesday Jan 2023

Posted by Nuetzel in Climate, Environment, Growth


Tags

Asymptotic Burnout, Benjamin Friedman, Climate Change, Dead Weight Loss, Degrowth, Fermi Paradox, Lewis M. Andrews, Limits to Growth, NIMBYism, Paul Ehrlich, Population Bomb, Poverty, regulation, Robert Colvile, Stakeholder Capitalism, State Capacity, Stubborn Attachments, Subsidies, Tax Distortions, Thomas Malthus, Tyler Cowan, Veronique de Rugy, Zero Growth

Growth is a human imperative and a good thing in every sense. We’ve long heard from naysayers, however, that growth will exhaust our finite resources, ending in starvation and the collapse of human civilization. They say, furthermore, that the end is nigh! It’s an old refrain. Thomas Malthus lent it credibility over 200 years ago (perhaps unintentionally), and we can pick on poor Paul Ehrlich’s “Population Bomb” thesis as a more modern starting point for this kind of hysteria. Lewis M. Andrews puts Ehrlich’s predictions in context:

“A year after the book’s publication, Ehrlich went on to say that this ‘utter breakdown’ in Earth’s capacity to support its bulging population was just fifteen years away. … For those of us still alive today, it is clear that nothing even approaching what Ehrlich predicted ever happened. Indeed, in the fifty-four years since his dire prophesy, those suffering from starvation have gone from one in four people on the planet to just one in ten, even as the world’s population has doubled.”

False Limits

The “limits” argument comes from the environmental Left, but it creates for them an uncomfortable tradeoff between limiting growth and the redistribution of a fixed (they hope) or shrinking (more likely) pie. That’s treacherous ground on which to build popular support. It’s also foolish to stake a long-term political agenda on baldly exaggerated claims (and see here) about the climate and resource constraints. Ultimately, people will recognize those ominous forecasts as manipulative propaganda.

Last year, an academic paper argued that growing civilizations must eventually reach a point of “asymptotic burnout” due to resource constraints, and must undergo a “homeostatic awakening”: no growth. The authors rely on a “superlinear scaling” argument based on cross-sectional data on cities, and they offer their “burnout” hypothesis as an explanation for the Fermi Paradox: the puzzling quiet we observe in a universe we might otherwise expect to be teeming with life. Civilizations reach their “awakenings” before finding ways to communicate with, or even detect, their distant neighbors. I addressed this point and its weaknesses last year, but here I mention it only to demonstrate that the “limits to growth” argument lives on in new incarnations.

Growth-limiting arguments are tenuous on at least three fundamental grounds: 1) failure to consider the ability of markets to respond to scarcity; 2) underestimating the potential of human ingenuity not only to adapt to challenges, but to invent new solutions, exploit new resources, and use existing resources more efficiently; and 3) the impossibility of homeostasis, because zero growth cannot be achieved without destructive coercion, suspension of cooperative market mechanisms, and losses from non-market (i.e., political) competition for fixed levels of societal wealth and production.

The zero-growth world is one that lacks opportunities and rewards for honest creation of value, whether through invention or simple, hard work. That value is determined through the interaction of buyers and sellers in markets, the most effective form of voluntary cooperation and social organization ever devised by mankind. Those preferring to take spoils through the political sphere, or who otherwise compete on the basis of force, either have little value to offer or simply lack the mindset to create value to exchange with others at arms length.

Zero-Growth Mentality

As Robert Colvile writes in a post called “The Morality of Growth”:

“A society without growth is not just politically far more fragile. It is hugely damaging to people’s lives – and in particular to the young, who will never get to benefit from the kind of compounding, increasing prosperity their parents enjoyed.”

Expanding on this theme is commenter Slocum at the Marginal Revolution site, where Colvile’s essay was linked:

“Humans behave poorly when they perceive that the pie is fixed or shrinking, and one of the main drivers for behaving poorly is feelings of envy coming to the forefront. The way we encourage people not to feel envy (and to act badly) is not to try to change human nature, or ‘nudge’ them, but rather to maintain a state of steady improvement so that they (naturally) don’t feel envious, jealous, tribal, xenophobic etc. Don’t create zero-sum economies and you won’t bring out the zero-sum thinking and all the ills that go with it.”

And again, this dynamic leads not to zero growth (if that’s desired), but to decay. Given the political instability to which negative growth can lead, collapse is a realistic possibility.

I liked Colvile’s essay, but it probably should have been titled “The Immorality of Non-Growth”. It covers several contemporary obstacles to growth, including the rise of “stakeholder capitalism”, the growth of government at the expense of the private sector, strangling regulation, tax disincentives, NIMBYism, and the ease with which politicians engage in populist demagoguery in establishing policy. All those points have merit. But if his ultimate purpose was to shed light on the virtues of growth, it seems almost as if he lost his focus in examining only the flip side of the coin. I came away feeling that he didn’t expend as much effort on the moral virtues of growth as he’d intended, though I found this nugget well said:

“It is striking that the fastest-growing societies also tend to be by far the most optimistic about their futures – because they can visibly see their lives getting better.”

Compound Growth

A far better discourse on growth’s virtues is offered by Veronique de Rugy in “The Greatness of Growth”. It should be obvious that growth is a potent tonic, but its range as a curative receives strangely little emphasis in popular discussion. First, de Rugy provides a simple illustration of the power of long-term growth, compound growth, in raising average living standards:

This is just a mechanical exercise, but it conveys the power of growth. At 2% real growth, real GDP per capita would double in 35 years and quadruple in 70 years. At 4% growth, real GDP would double in 18 years… less than a generation! It would quadruple in 35 years. If you’re just now starting a career, imagine nearing retirement at a standard of living four times as lavish as that of today’s senior employees (who make a lot more than you do now). We’ll talk a little more about how such growth rates might be achieved, but first, a little more on what growth can achieve.
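The doubling and quadrupling figures above follow directly from compound growth arithmetic. A minimal sketch (the growth rates are the illustrative ones from the text, not forecasts):

```python
import math

def years_to_multiply(rate: float, factor: float) -> float:
    """Years for output to grow by `factor` at a constant annual growth `rate`.

    Solves (1 + rate) ** t = factor for t.
    """
    return math.log(factor) / math.log(1.0 + rate)

for rate in (0.02, 0.04):
    doubling = years_to_multiply(rate, 2.0)
    quadrupling = years_to_multiply(rate, 4.0)
    print(f"{rate:.0%} growth: doubles in ~{doubling:.0f} years, "
          f"quadruples in ~{quadrupling:.0f} years")
```

Running this reproduces the numbers in the paragraph: about 35 and 70 years at 2%, about 18 and 35 years at 4%.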

The Rewards of Growth

Want to relieve poverty? There is no better or more permanent solution than economic growth.

Want to rein in the federal budget deficit? Growth reduces the burden of the existing debt and shrinks fiscal deficits, though it might interfere with what little discipline spendthrift politicians currently face. We’ll have to find other fixes for that problem, but at least growth can insulate us from their profligacy.

And who can argue with the following?

“All the stuff an advocate anywhere on the political spectrum claims to value—good health, clean environment, safety, families and quality of life—depends on higher growth. …

There are other well-documented material consequences of modern economic growth, such as lower homicide rates, better health outcomes (babies born in the U.S. today are expected to live into their upper 70s, not their upper 30s as in 1860), increased leisure, more and better clothing and shelter, less food insecurity and so on.”

De Rugy argues convincingly that growth might well entail a greater boost in living standards for lower ranges of the socioeconomic spectrum than for the well-to-do. That would benefit not just those impoverished due to a lack of skills, but also those early in their careers as well as seniors attempting to earn extra income. For those with a legitimate need of a permanent safety net, growth allows society to be much more generous.

What de Rugy doesn’t mention is how growth can facilitate greater saving. In a truly virtuous cycle, saving is transformed into productivity-enhancing additions to the stock of capital. And not just physical capital, but human capital through investment in education as well. In addition, growth makes possible additional research and development, facilitating the kind of technical innovation that can sustain growth.

Getting Out of the Way of Growth

Later in de Rugy’s piece, she evaluates various ways to stimulate growth, including deregulation, wage and price flexibility, eliminating subsidies, less emphasis on redistribution, and simplifying the tax code. The policies these reforms would unwind are stultifying and impose dead-weight losses on society. That’s not to deny the benefits of adequate state capacity for providing true public goods and a legal and judicial system to protect individual rights. Inadequate state capacity is a major impediment to growth in the less developed world, whereas countries in the developed world tend to have an excess of state “capacity”, which often runs amok!

In the U.S., our regulatory state imposes huge compliance costs on the private sector and effectively prohibits or destroys incentives for a great deal of productive (and harmless) activity. Interference with market pricing stunts growth by diverting resources from their most valued uses. Instead, it directs them toward uses that are favored by political elites and cronies. Subsidies do the same by distorting tradeoffs at a direct cost to taxpayers. Our system of income taxes is rife with behavioral distortions and compliance costs, bleeding otherwise productive gains into the coffers of accountants, tax attorneys, and bureaucrats. Finally, redistribution often entails the creation of disincentives, fostering a waste of human potential and a pathology of dependence.

Growth and Morality

Given the unequivocally positive consequences of growth to humanity, could the moral case for growth be any clearer? De Rugy quotes Benjamin Friedman’s “The Moral Consequences of Economic Growth”:

“Growth is valuable not only for our material improvement but for how it affects our social attitudes and our political institutions—in other words, our society’s moral character, in the term favored by the Enlightenment thinkers from whom so many of our views on openness, tolerance and democracy have sprung.”

De Rugy also paraphrases Tyler Cowen’s position on growth from his book “Stubborn Attachments”:

“… economic growth, properly understood, should be an essential element of any ethical system that purports to care about universal human well-being. In other words, the benefits are so varied and important that nearly everyone should have a pro-growth program at or near the top of their agenda.”

Conclusion

Agitation for “degrowth” is often made in good faith by truly frightened people. Better education would help them, but our educational establishment has been corrupted by the same ignorant narrative. When it comes to rulers, the fearful are no less tyrannical than power-hungry authoritarians. In fact, fear can be instrumental in enabling that kind of transformation in the personalities of activists. A basic failing is their inability to recognize the many ways in which growth improves well-being, including the societal wealth to enable adaptation to changing conditions and the investment necessary to enhance our range of technological solutions for mitigating existential risks. Not least, however, is the failure of the zero-growth movement to understand the cruelty their position condones in exchange for their highly speculative assurances that we’ll all be better off if we just do as they say. A terrible downside will be unavoidable if and when growth is outlawed.

Cassandras Feel An Urgent Need To Crush Your Lifestyle

12 Thursday Jan 2023

Posted by Nuetzel in Climate science, Environmental Fascism

≈ 1 Comment

Tags

Atmospheric Aerosols, Capacity Factors, Carbon Emissions, Carbon-Free Buildings, Chicken Little, Climate Alarmism, Coercion, Electric Vehicles, Elon Musk, Extreme Weather Events, Fossil fuels, Gas Stoves, Judith Curry, Land Use, Model Bias, Nuclear power, Paul Ehrlich, Renewable energy, rent seeking, Sea Levels, Settled Science, Solar Irradience, Solar Panels, Subsidies, Temperature Manipulation, Toyota Motors, Urban Heat Islands, Volcanic activity, Wind Turbines

Appeals to reason and logic are worthless in dealing with fanatics, so it’s too bad that matters of public policy are so often subject to fanaticism. Nothing is more vulnerable on this scale than climate policy. Why else would anyone continue to listen to prognosticators of such distinguished failure as Paul Ehrlich? Perhaps most infamously, his 1970s forecasts of catastrophe due to population growth were spectacularly off-base. He’s a man without any real understanding of human behavior and how markets deal efficiently and sustainably with scarcity. Here’s a little more detail on his many misfires. And yet people believe him! That’s blind faith.

The foolish acceptance of chicken-little assertions leads to coercive and dangerous policy prescriptions. These are both unnecessary and very costly in direct and hidden ways. But we hear a frantic chorus that we’d better hurry or… we’re all gonna die! Ironically, the fate of the human race hardly matters to the most radical of the alarmists, who are concerned only that the Earth itself be in exactly the same natural state that prevailed circa 1800. People? They don’t belong here! One just can’t take this special group of fools too seriously, except that they seem to have some influence on an even more dangerous group of idiots called policymakers.

Judith Curry, an esteemed but contrarian climate expert, writes of the “faux urgency” of climate action, and how the rush to implement supposed climate mitigations is a threat to our future:

“Rapid deployment of wind and solar power has invariably increased electricity costs and reduced reliability, particularly with increasing penetration into the grid. Allegations of human rights abuses in China’s Xinjiang region, where global solar voltaic supplies are concentrated, are generating political conflicts that threaten the solar power industry. Global supply chains of materials needed to produce solar and wind energy plus battery storage are spawning new regional conflicts, logistical problems, supply shortages and rising costs. The large amount of land use required for wind and solar farms plus transmission lines is causing local land use conflicts in many regions.”

Curry also addresses the fact that international climate authorities have “moved the goalposts” in response to the realization that the so-called “crisis” is not nearly as severe as we were told not too long ago. And she has little patience for delusions that authorities can reliably force adjustments in human behavior so as to reduce weather disasters:

“Looking back into the past, including paleoclimatic data, there has been more extreme weather [than today] everywhere on the planet. Thinking that we can minimize severe weather through using atmospheric carbon dioxide as a control knob is a fairy tale.”

The lengths to which interventionists are willing to go should make consumer/taxpayers break out their pitchforks. It’s absurd to entertain mandates forcing vehicles powered by internal combustion engines (ICEs) off the road, and automakers know it. Recently, the head of Toyota Motors acknowledged his doubts that electric vehicles (EVs) can meet our transportation demands any time soon:

“People involved in the auto industry are largely a silent majority. That silent majority is wondering whether EVs are really OK to have as a single option. But they think it’s the trend so they can’t speak out loudly. Because the right answer is still unclear, we shouldn’t limit ourselves to just one option.”

In the same article, another Toyota executive says that neither the market nor the infrastructure is ready for a massive transition to EVs, a conclusion only a dimwit could doubt. Someone should call the Big 3 American car companies!

No one is a bigger cheerleader for EVs than Elon Musk. In the article about Toyota, he is quoted thusly:

“At this time, we actually need more oil and gas, not less. Realistically I think we need to use oil and gas in the short term, because otherwise civilization will crumble. One of the biggest challenges the world has ever faced is the transition to sustainable energy and to a sustainable economy. That will take some decades to complete.”

Of course, for the foreseeable future, EVs will be powered primarily by electricity generated from burning fossil fuels. So why the fuss? But as one wag said, that’s only until the government decides to shut down those power plants. After that, good luck with your EV!

Gas stoves are a new target of our energy overlords, but this can’t be about fuel efficiency, and it’s certainly not about the quality of food preparation. The claim by an environmental think tank called “Carbon-Free Buildings” is that gas stoves are responsible for dangerous indoor pollutants. Of course, the Left was quick to rally around this made-up problem, despite the fact that they all seem to use gas stoves and didn’t know anything about the issue until yesterday! And, they insist, racial minorities are hardest hit! Well, they might consider using exhaust fans, but the racialist rejoinder is that minorities aren’t adequately informed about the dangers and mitigants. Okay, start a safe-use info campaign, but keep government away from an embedded home technology that is arguably superior to the electric alternative in several respects.

Renewable energy mandates are a major area of assault. If we were to fully rely on today’s green energy technologies, we’d not just threaten our future, but our immediate health and welfare. Few people, including politicians, have any awareness of the low rates at which green technologies are actually utilized under real-world conditions.

“Worldwide average solar natural capacity factor (CF) reaches about ~11-13%. Best locations in California, Australia, South Africa, Sahara may have above 25%, but are rare. (see www.globalsolaratlas.info, setting direct normal solar irradiance)

Worldwide average wind natural capacity factors (CF) reach about ~21-24%. Best off-shore locations in Northern Europe may reach above 40%. Most of Asia and Africa have hardly any usable wind and the average CF would be below 15%, except for small areas on parts of the coasts of South Africa and Vietnam. (see www.globalwindatlas.info, setting mean power density)”

Those CFs are natural capacity factors (i.e., the wind doesn’t always blow or blow at “optimal” speeds, and the sun doesn’t always shine or shine at the best angle). The CFs don’t even account for “non-natural” shortfalls in actual utilization and other efficiency losses. It would be impossible for investors to make these technologies profitable without considerable assistance from taxpayers, but they couldn’t care less about whether their profits are driven by markets or government fiat. You see, they really aren’t capitalists. They are rent seekers playing a negative-sum game at the expense of the broader society.
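The practical meaning of those capacity factors is simple arithmetic: average delivered power is nameplate capacity times CF. A quick sketch using the worldwide averages quoted above and a hypothetical 100 MW installation:

```python
def average_output_mw(nameplate_mw: float, capacity_factor: float) -> float:
    """Average delivered power = nameplate capacity x capacity factor."""
    return nameplate_mw * capacity_factor

nameplate = 100.0  # MW; a hypothetical plant, for illustration only
# Capacity factors are the worldwide averages cited in the quote above.
for tech, cf in [("solar (world avg)", 0.12), ("wind (world avg)", 0.22)]:
    avg = average_output_mw(nameplate, cf)
    print(f"{tech}: ~{avg:.0f} MW average output from {nameplate:.0f} MW nameplate")
```

So a 100 MW solar farm at the worldwide average CF delivers roughly 12 MW on average over time, and a 100 MW wind farm roughly 22 MW, before counting curtailment, transmission, and other losses.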

There are severe environmental costs associated with current wind and solar technologies. Awful aesthetics and the huge inefficiencies of land use are bad enough. Then there are deadly consequences for wildlife. Producing inputs to these technologies requires resource-intensive and environmentally degrading mining activities. Finally, the costs of disposing of spent, toxic components of wind turbines and solar panels are conveniently ignored in most public discussions of renewables.

There is still more hypocritical frosting on the cake. Climate alarmists are largely opposed to nuclear power, a zero-carbon and very safe energy source. They also fight to prevent development of fossil fuel energy plants for impoverished peoples around the world, which would greatly aid in economic development efforts and in fostering better and safer living conditions. Apparently, they don’t care. Climate activists can only be counted upon to insist on wasteful and unreliable renewable energy facilities.

Before concluding, it’s good to review just a few facts about the “global climate”:

1) The warming we’ve seen in forecasts and in historical surface temperature data has been distorted by urban heat island effects, and weather instruments are too often situated in local environments rich in concrete and pavement.

2) Satellite temperatures are only available for the past 43 years, and they have to be calibrated to surface measurements, so they are not independent measures. But the trend in satellite temperatures over the past seven years has been flat or negative at a time when global carbon emissions are at all-time highs.

3) There have been a series of dramatic adjustments to historical data that have “cooled the past” relative to more recent temperatures.

4) The climate models producing catastrophic long-term forecasts of temperatures have proven to be biased to the high side, having drastically over-predicted temperature trends over the past two to three decades.

5) Sea levels have been rising for thousands of years, and we’ve seen an additional mini-rebound since the mini-ice age of a few hundred years ago. Furthermore, the rate of increase in sea levels has not accelerated in recent decades, contrary to the claims of climate alarmists.

6) Storms and violent weather have shown no increase in frequency or severity, yet models assure us that they must!

Despite these facts, climate change fanatics will only hear of climate disaster. We should be unwilling to accept the climatological nonsense now passing for “settled science”, itself a notion at odds with the philosophy of science. I’m sad to say that climate researchers are often blinded by the incentives created by publication bias and grant money from power-hungry government bureaucracies and partisan NGOs. They are so blinded, in fact, that research within the climate establishment now almost completely ignores other climatological drivers such as solar irradiance, volcanic activity, and the role and behavior of atmospheric aerosols. Yes, only the global carbon dial seems to matter!

No one is more sympathetic to “the kids” than me, and I’m sad that so much of the “fan base” for climate action is dominated by frightened members of our most youthful generations. It’s hard to blame them, however. Their fanaticism has been inculcated by a distinctly non-scientific community of educators and journalists who are willing to accept outrageous assertions based on “toy models” concocted on weak empirical grounds. That’s not settled science. It’s settled propaganda.
