Big Spending, Explosive Debt, and the Inflation Tax

The chart above makes a convincing case that we have a spending problem at the federal level. Really, we’ve had a spending problem for a long time. But at least tax revenue today remains reasonably well-aligned with its 50-year historical average as a share of GDP. Not so spending. Even larger deficits opened up during the pandemic, and they haven’t returned to pre-pandemic levels.

We’ve seen Joe Biden break spending records. His initiatives, often with questionable merit, have included the $1.9 trillion American Rescue Plan and the nearly $0.8 trillion Infrastructure Investment and Jobs Act, along with several other significant spending initiatives such as the Promise to Address Comprehensive Toxics Act and the subsidy-laden CHIPS Act. Meanwhile, emergency spending has become a regular occurrence on Biden’s watch. More recently, he’s made repeated efforts to forgive massive amounts of student loans despite the Supreme Court’s clear ruling that such gifts exceed his authority.

Indeed, while Biden keeps pretty busy spinning tales of his days driving an 18-wheeler, of cannibals devouring his Uncle Bosie Finnegan, and of his upbringing in black churches, in synagogues, or in the Puerto Rican community, he still finds time to dream up ways for the government to spend money it doesn’t have. Or his kindly puppeteers do.

Biden’s New Budget

Eric Boehm expressed wonderment at Biden’s fiscal 2025 budget not long after its release in March. He was also mystified by the gall it took to produce a “fact sheet” in which the White House congratulated itself on fiscal responsibility. That’s how this Administration characterizes deficits projected at $16 trillion over the next ten years. No joke!

Furthermore, the Administration says the record spending will be “paid for”. Well, yes, with tax increases and lots of borrowing! There are a great many fabulist claims made by the White House about the budget. This link from the Office of Management and Budget includes a handy list of propaganda sheets they’ve managed to produce on the virtues of their proposal.

The Congressional Budget Office (CBO) projects ten-year deficits under current law that are $3 trillion higher than Biden’s proposed budget. That’s the basis of the White House’s boast of fiscal restraint. But the difference is basically paid for with a couple of accounting tricks (see below). More charitably, one could say it’s paid for with higher taxes, aided by the assumption of slightly faster economic growth. Achieving faster growth will be a neat trick while undercutting incentives and wages with a big boost to the corporate tax rate.

The revenue projected by the White House from those taxes does not come anywhere close to eliminating the gap shown in the CBO’s chart above. Federal spending under Biden’s budget grows at about 4% annually, just a bit slower than nominal GDP. Thus, the federal share of GDP remains roughly constant and only slightly higher than the CBO’s current projection for 2034. Nevertheless, spending relative to GDP would continue at an historically high rate. Over the next decade, it would average more than 3 percentage points of GDP above its 50-year average. That would be about $1.3 trillion in 2034!
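
As a rough, back-of-the-envelope check on that last figure (the GDP path below is my own assumption, not the budget’s), 3 percentage points of GDP in 2034 does land near $1.3 trillion:

```python
# Back-of-the-envelope check of the "$1.3 trillion in 2034" figure.
# Assumptions (mine, not from the budget documents): nominal GDP of roughly
# $28.5T in 2024 growing about 4.3% per year, and excess spending of ~3
# percentage points of GDP.
gdp_2024 = 28.5e12          # hypothetical 2024 nominal GDP, dollars
nominal_growth = 0.043      # assumed annual nominal GDP growth
excess_share = 0.03         # spending above the 50-year average, share of GDP

gdp_2034 = gdp_2024 * (1 + nominal_growth) ** 10
excess_spending_2034 = excess_share * gdp_2034
print(f"2034 GDP: ${gdp_2034 / 1e12:.1f}T, excess spending: ${excess_spending_2034 / 1e12:.2f}T")
# With these assumptions, 2034 GDP is ~$43T and 3% of it is ~$1.3T,
# consistent with the figure cited above.
```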

Meanwhile, the ratio of tax revenue to GDP under Biden’s proposal, as they project it, would average slightly higher than its 50-year average, reaching a full percentage point above by 2034 (and higher than the CBO baseline). That’s probably optimistic.

There is little real effort in this budget to reduce federal deficits, with Treasury borrowing rates now near 15-year highs. Interest expense has grown to an alarming share of spending. In fact, it’s expected to exceed spending on defense in 2024! Perhaps not coincidentally, the White House assumes a greater decline in interest rates than CBO over the next 10 years.

Treats or Tricks?

The situation is likely worse than the White House depicts, given that its budget incorporates assumptions that look generous to its claim of fiscal restraint. First, it frontloads nondefense discretionary spending, allowing Biden to make extravagant promises for the near term while pushing off steep declines in budget commitments to the out-years. The sharp reductions in this category of spending pare more than $2 trillion from the 10-year deficit. From the link above:

Biden also proposes to restore the expanded child tax credit — for one year! How handy from a budget perspective: heroically call for an expanded credit (for a year) while avoiding, for the time being, the addition of a couple of trillion to the 10-year deficit.

Code Red

So where does this end? The ratio of federal debt to GDP will resume its ascent after a slight decline from the pandemic high. Here is the CBO’s projection:

The Biden budget shows a relatively stable debt to GDP ratio through 2034 due to the assumptions of slightly faster GDP growth, lower Treasury borrowing rates, and the aforementioned “fiscal restraint”. But don’t count on it!

The government’s growing dominance over real resources will have negative consequences for growth in the long term. Purely as a fiscal matter, however, it must be paid for in one of three ways: revenue from explicit taxes, federal borrowing, or an implicit tax on the public more commonly known as the inflation tax. The last two are intimately related.

Bond investors always face at least a small measure of default risk even when lending to the U.S. Treasury. There is almost no chance the government would ever default outright by failing to pay interest or principal when due. However, investors hold an expectation that the value of their bonds will erode in real terms due to inflation. To compensate, they demand an “inflation premium” in the interest rate they earn on Treasury bonds. But an upside surprise to inflation would constitute a “soft default” on the real value of their bonds. This occurred during and after the pandemic, and it was triggered by a burgeoning federal deficit.
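
To illustrate the “soft default” with purely hypothetical numbers (none of these rates come from the post), suppose investors price in a 2% inflation premium but inflation surprises to the upside:

```python
# A minimal sketch of the "soft default" idea, with hypothetical numbers.
# Lenders price in an inflation premium; a surprise burst of inflation still
# erodes the real return they actually receive.
nominal_yield = 0.04        # assumed yield demanded by investors
expected_inflation = 0.02   # inflation premium built into that yield
realized_inflation = 0.06   # hypothetical upside inflation surprise

expected_real_return = (1 + nominal_yield) / (1 + expected_inflation) - 1
realized_real_return = (1 + nominal_yield) / (1 + realized_inflation) - 1
print(f"expected real return: {expected_real_return:.2%}")   # ~1.96%
print(f"realized real return: {realized_real_return:.2%}")   # ~-1.89%
# The gap between the two is the unexpected "inflation tax" on bondholders.
```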

Brief Mechanics

John Cochrane has explained the mechanism by which acts of fiscal profligacy can be transmitted to the price of goods. The real value of outstanding federal debt cannot exceed the expected real value of future surpluses (a present value summed across positive and negative surpluses). If expected surpluses are reduced via some emergency or shock such that repayment in real terms is less likely, then the real value of government debt must fall. That means either interest rates or the price level must rise, or some combination of the two.

The Federal Reserve can prevent interest rates from rising (by purchasing bonds and increasing the money supply), but that leaves a higher price level as the only way the real value of debt can come into line. In other words, an unexpected increase in the path of federal deficits would be financed by money printing and an inflation tax. The incidence of this unexpected “implicit” tax falls not only to bondholders, but also on the public at large, who suffer an unexpected decline in the purchasing power of their nominal assets and incomes. This in turn tends to free-up real resources for government absorption.
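
Here is a stylized sketch of that logic, using the debt-valuation relationship Cochrane works with. All of the numbers are illustrative assumptions, not estimates:

```python
# A stylized sketch of the fiscal-theory logic described above (hypothetical
# numbers). The real value of nominal debt, B/P, must equal the present value
# of expected real primary surpluses; if that present value falls and nominal
# rates are pinned, the price level P must rise to restore the equality.
B = 30e12                     # outstanding nominal debt, dollars (illustrative)
discount_rate = 0.03          # assumed real discount rate
surpluses = [0.5e12] * 40     # assumed path of real primary surpluses

def present_value(flows, r):
    return sum(s / (1 + r) ** (t + 1) for t, s in enumerate(flows))

P0 = B / present_value(surpluses, discount_rate)   # initial price level (index)

shocked = [0.4e12] * 40                             # fiscal shock: surpluses cut 20%
P1 = B / present_value(shocked, discount_rate)      # new price level, rates pinned
print(f"price level must rise by {P1 / P0 - 1:.0%}")  # ~25% with these assumptions
```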

Government Debt Is Risky

It appears that investors expect the future deficits now projected by the CBO (and the White House) to be paid down someday, to some extent, by future surpluses. That might seem preposterous, but markets apparently aren’t surprised by the projected deficits. After all, fiscal policy decisions can change tremendously over the course of a few years. But it still feels like excessive optimism. Whatever the case, Cochrane cautions that the next fiscal emergency, be it a new pandemic, a war, a recession, or some other crisis, is likely to create another huge expansion in debt and a substantial increase in the price level. Joe Biden doesn’t seem inclined to put us in a position to deal with that risk very effectively. Unfortunately, it’s not clear that Donald Trump will either. And neither seems inclined to seriously address the insolvencies of Social Security and Medicare. If unaddressed, those mandatory obligations will become real crises over the next decade.

Antisemitic Left Tests Limits of Free Speech

The current protests on college campuses across the nation bring into focus differing opinions on the limits of free speech and assembly. Particular questions seem to defy resolution. Nevertheless, there is some misunderstanding regarding the settled breadth of the First Amendment.

The protestors have acted as if they have constitutional carte blanche to gather anywhere to say anything in opposition to Israel and its war against Hamas terrorists; a subset thinks this encompasses “occupation” of any space for any duration; a still smaller subset believes this includes a right to condemn Jews, all Jews.

I strongly doubt, however, that many of the protestors truly believe their constitutional protections extend to intimidation and bullying of Jewish students attempting to go about their business on campus (scroll to a few of the articles here), destruction of property, the use of “fighting words”, or physical attacks on Jews or other “oppressors”.

It’s well known that the Constitution does not protect “fighting words”, including threats. Furthermore, Eugene Volokh explains that there is no constitutional right to “occupy” a college campus, either public or private.

Of course, private schools are not legally bound to respect free speech or assembly rights. They can regulate activity on their private campuses in any way they see fit. Some explicitly abide by the same speech protections as public universities, which seems reasonable for any institution dedicated to the free spirit of inquiry.

Volokh, however, cites Supreme Court precedents in which a majority held that government can prohibit camping in certain parks, for example, and that public colleges and universities can impose restrictions on campus activities:

There is no First Amendment right to camp out in any university, public or private. Indeed, there is no First Amendment right to camp out even in public parks (see Clark v. CCNV (1984)), and the government’s power to limit the use of property used for a public university is even greater than its power as to parks (Widmar v. Vincent (1981)):

“‘A university differs in significant respects from public forums such as streets or parks or even municipal theaters. A university’s mission is education, and decisions of this Court have never denied a university’s authority to impose reasonable regulations compatible with that mission upon the use of its campus and facilities. We have not held, for example, that a campus must make all of its facilities equally available to students and nonstudents alike, or that a university must grant free access to all of its grounds or buildings.’

Likewise, if UC Berkeley had held a law student party in the law school building rather than at Dean Chemerinsky’s house, it could have stopped students from using the party as an occasion to orate to the audience (especially with their own sound amplification devices, which the students brought to Chemerinsky’s house). See Spears v. Arizona Bd. of Regents (D. Ariz. 2019) (upholding public university’s right to stop people from speaking with sound amplification at an on-campus book fair).

Volokh also notes, however, that public universities cannot restrict mere “offensive” expression, which would include certain antisemitic statements or even swastikas (for example), as long as the expression falls short of “fighting words” or explicit threats. Do calls for the “extermination of Jews” qualify as fighting words? That deserves a resounding yes. It’s clearly hate speech, and it’s exactly the sort of expression that might be deemed so offensive to counterprotestors (for example) as to constitute an immediate threat to public order.

Does the meaning of “fighting words” include such chants as “From the river to the sea…”? Some say that depends on the speaker, but that can’t provide a sound basis of distinction. It is clearly associated with calls to eliminate the state of Israel. Some believe it also implies the genocide of Jews in Israel, and Jews can’t be blamed for finding it threatening. Okay, how about “Intifada”? I doubt all of the students involved in the current protests understand the genocidal implications of these words. The agitators understand them well enough.

This is a grey area in our understanding of the First Amendment. The “River to the Sea” chant, and Intifada, seem like fighting words to me, but they might not qualify as direct threats to anyone on campus. By comparison, the swastika is “just” a party emblem, whatever policies it stands for, and apparently the Court did not deem it a direct threat to anyone in Skokie, Illinois. The legal distinctions here feel inadequate. Still, we say the “mere” expression of offensive ideas or symbols is protected speech, provided that it does not directly threaten harm to any party.

Many libertarians, with whom I usually agree, urge tolerance of the protests, including at least cautious tolerance of the encampments. The Foundation for Individual Rights and Expression (FIRE) has strenuously objected to the actions of police in Austin, Texas in dispersing demonstrators at the University of Texas. Alex Tabarrok has reposted a tweet or two apparently critical of the government’s response to protestors in Texas and at Emory University in Atlanta, though it should be noted that the economics professor who was taken down and handcuffed on video had actually hit a police officer. Michael Munger, in a variation of his “worst enemy test” of government power, says that giving campus authorities “the power to crush us, at their discretion” is probably a bad idea. But they have that power if they choose to exercise it, for better or worse. (By “us”, I don’t think Munger intended to take sides.)

I’m highly skeptical of the motives and incentives of some of the “occupiers” of campus spaces, not to mention their status as students. More importantly, there is ample evidence that “fighting words” and threats against Jews have been used by many of the protesters. This violates the codes of conduct at many schools, and should not only be censured, but any student identified as guilty of this sort of hate speech should be expelled, not merely suspended. There should be severe consequences for professors choosing to participate in these protests as well.

This behavior should have long-term consequences, and that is happening at some schools. I saw the following quote from P.J. O’Rourke on Instapundit, which seems appropriate here:

There’s only one basic human right, the right to do as you damn well please. And with it comes the only basic human duty, the duty to take the consequences.

The kids are wearing masks for a reason, and it ain’t Covid! Now, the protestors’ demands include “amnesty” for their participation in the protests. That shouldn’t play well if you’re provably guilty of calling for the extermination of a race of people. But here’s the thing: certain institutions like Columbia University have allowed the aberrant behavior to go on with little challenge, showing that the real limits to free speech and assembly are whatever acquiescent campus administrators are willing to put up with.

Removing these encampments is more than justified on constitutional grounds at any school, public or private. The arrest of some of the more intransigent elements among the protesters may be well justified. Insulting hate speech is one thing, but eliminationist hate speech constitutes fighting words and should not be tolerated. Of course, forcibly removing the encampments is risky in terms of public safety because some of the protestors will physically challenge the police. Comparatively innocent (though naive) students might get caught up in a conflict with law enforcement, but ignorance is no defense. They should not be there. Those risks must be taken to end the “hate encampments”, which are a direct threat to the rights of others wishing only to go about their business.

Wind, Solar, and the Five Circles of Dormant Capital

This is a first for me…. The following is partly excerpted from a post of two weeks ago, but I’ve made a number of edits and additions. The original post was way too long. This is a bit shorter, and I hope it distills a key message.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Failures of industrial policies are nothing new, but the current manipulation of electric power generation by government in favor of renewable energy technologies is egregious. These interventions are a reaction to an overwrought climate crisis narrative, but they have many shortcomings and risks of their own. Chief among them is whether the power grid will be capable of meeting current and future demand for power while relying heavily on variable resources, namely wind and sunshine. The variability implies idle and drastically underutilized hours every day without any ability to call upon the assets to produce when needed.

The variability is vividly illustrated by the chart above showing a representative daily profile of power demand versus wind and solar output. Below, with apologies to Dante, I describe the energy hellscape into which we’re being driven on the horns of irrational capital outlays. These projects would be flatly rejected by any rational investor but for the massive subsidies afforded by government.

The First Circle of Dormancy: Low Utilization

Wind and solar power assets have relatively low rates of utilization due to the intermittency of wind and sunshine. Capacity factors for wind turbines averaged almost 36% in the U.S. in 2022, while solar facilities averaged only about 24%. This compared with nuclear power at almost 93%, natural gas (66%), and coal (48%).

Despite their low rates of utilization, new wind and solar facilities are always touted at their full nameplate capacity. We hear a great deal about “additions to capacity”, which overstate the actual power-generating potential by factors of three to four. More importantly, this also means wind and solar power costs per unit of output are often vastly understated. These assets contribute less economic value to the electric grid than more heavily utilized generating assets.
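
A small illustration of the point, using the capacity factors cited above and an assumed 100 MW nameplate facility of each type:

```python
# Why nameplate capacity overstates expected output for wind and solar.
# Capacity factors below are the 2022 U.S. averages cited in the text;
# the 100 MW nameplate size is a hypothetical example.
capacity_factors = {"nuclear": 0.93, "natural gas": 0.66, "coal": 0.48,
                    "wind": 0.36, "solar": 0.24}
nameplate_mw = 100
hours_per_year = 8760

for source, cf in capacity_factors.items():
    annual_mwh = nameplate_mw * cf * hours_per_year
    print(f"{source:12s}: {annual_mwh:10,.0f} MWh/yr "
          f"({cf:.0%} of the theoretical maximum)")
# At a 24% capacity factor, a solar farm's "capacity addition" overstates its
# expected output by roughly a factor of four; wind by roughly a factor of three.
```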

Sometimes wind and solar facilities are completely idle or dormant. Sometimes they operate at just a fraction of capacity. I will use the terms “idle” and “dormant” loosely in what follows to mean assets that are not only prone to low levels of utilization, but that also fall within the Second Circle of Dormancy.

The Second Circle of Dormancy: Non-Dispatchability

The First Circle of Dormancy might be more like a Purgatory than a Hell. That’s because relatively low average utilization of an asset could be justifiable if demand is subject to large fluctuations. This is often the case, as with assets like roads, bridges, restaurants, amusement parks, and many others. However, capital invested in wind and solar facilities is idle on an uncontrollable basis, which is more truly condemnable. Wind and solar do not provide “dispatchable” power, meaning they are not “on call” in any sense during idle or less productive periods. Not only is their power output uncontrollable, it is not entirely predictable.

Again, variable but controllable utilization allows flexibility and risk mitigation in many applications. But when utilization levels are uncontrollable, the capital in question has greatly diminished value to the power grid and to power customers relative to dispatchable sources having equivalent capacity and utilization. It’s no wonder that low utilization, variability, and non-dispatchability are underemphasized or omitted by promoters of wind and solar energy. This sort of uncontrollable down-time is a drain on real economic returns to capital.

The Third Circle of Dormancy: Transmission Infrastructure

The idleness that besets the real economic returns to wind and solar power generation extends to the transmission facilities necessary for getting power to the grid. Transmission facilities are costly, but that cost is magnified by the broad spatial distribution of wind and solar generating units. Transmission from offshore facilities is particularly complex. When wind turbines and solar panels are dormant, so are the transmission facilities needed to reach them. Thus, low utilization and the non-dispatchability of those units diminish the value of the capital that must be committed for both power generation and its transmission.

The Fourth Circle of Dormancy: Backup Power Assets

The reliability of the grid requires that any commitment to variable wind and solar power must also include a commitment to back-up capacity. As an analogy, consider shipping concerns that are now experimenting with sails on cargo ships. What is the economic value of such a ship without back-up power? Can you imagine these vessels drifting in the equatorial calms for days on end? Even light winds would slow the transport of goods significantly. Idle, nondispatchable capital is unproductive capital.

Likewise, solar-powered signage can underperform or fail over the course of several dark, wintry days, even with battery backup. The signage is more reliable and valuable when it is backed up by another power source. But again, idle, non-dispatchable capital is unproductive capital.

The needed provision of backup power sources represents an imposed cost of wind and solar, which is built into the cost estimates shown in a section below. But here’s another case of dormancy: some part of the capital commitment, either primary energy sources or the needed backups, will be idle regardless of wind and solar conditions… all the time. Of course, back-up power facilities should be dispatchable because they must serve an insurance function. Backup power therefore has value in preserving the stability of the grid even while completely idle. However, at best that value offsets a small part of the social loss inherent in primary reliance on variable and non-dispatchable power sources.

We can’t wholly “replace” dispatchable generating capacity with renewables without serious negative consequences. At the same time, maintaining existing dispatchable power sources as backup carries a considerable cost at the margin for wind and solar. At a minimum, it requires normal maintenance on dispatchable generators, periodic replacement of components, and an inventory of fuel. If renewables are intended to meet growth in power demand, the imposed cost is far greater because backup sources for growth would require investment in new dispatchable capacity.

The Fifth Circle of Dormancy: Outages

The pursuit of net-zero carbon emissions via wind and solar power creates uncontrollably dormant capital, which increasingly lacks adequate backup power. Providing that backup should be a priority, but it’s not.

Perhaps much worse than the cost of providing backup power sources is the risk and imposed cost of grid instability in their absence. That cost would be borne by users in the form of outages. Users are placed at increasing risk of losing power at home, at the office and factories, at stores, in transit, and at hospitals. This can occur at peak hours or under potentially dangerous circumstances like frigid or hot weather.

Outage risks include another kind of idle capital: the potential for economy-wide shutdowns of all electrified physical capital across a particular region. Not only can grid failure lead to economy-wide idle capital, but this risk transforms all capital powered by electricity into non-dispatchable productive capacity.

Reliance on wind and solar power makes backup capacity an imperative. Better still, just scuttle the wind and solar binge and provide for growth with reliable sources of power!

Quantifying Infernal Costs

A “grid report card” from the Mackinac Center for Public Policy gets right to the crux of the imposed-cost problem:

“… the more renewable generation facilities you build, the more it costs the system to make up for their variability, and the less value they provide to electricity markets.”

The report card uses cost estimates for Michigan from the Center of the American Experiment. Here are the report’s average costs per MWh through 2050, including the imposed costs of backup power:

  • Existing coal plant: $33/MWh
  • Existing gas-powered plant: $22/MWh
  • New wind: $180/MWh
  • New solar: $278/MWh
  • New nuclear reactor (light water): $74/MWh
  • New small modular reactor: $185/MWh
  • New coal plant with carbon capture and storage (CCS): $106/MWh
  • New natural gas with CCS: $64/MWh

It should be no surprise that existing coal and gas facilities are the most cost-effective. Preserve them! Of the new installations, natural gas is the least costly, followed by the light water reactor and coal. New wind and solar capacity are particularly costly.
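
For a quick sense of the spread, here are the same figures expressed as multiples of existing gas; this just rearranges the numbers listed above:

```python
# Costs per MWh as listed above (including imposed backup costs), expressed
# as multiples of the cheapest option, existing natural gas.
costs = {"existing coal": 33, "existing gas": 22, "new wind": 180,
         "new solar": 278, "new light water reactor": 74,
         "small modular reactor": 185, "new coal + CCS": 106,
         "new gas + CCS": 64}

baseline = costs["existing gas"]
for source, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{source:25s} ${cost:>3}/MWh  ({cost / baseline:.1f}x existing gas)")
# New solar at $278/MWh is ~12.6x the cost of existing gas; new wind ~8.2x.
```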

Proponents of net zero are loath to recognize the imposed cost of backup power for two reasons. First, it is a real cost that can be avoided by society only at the risk of grid instability, something they’d like to ignore. To them, it represents something of an avoidable external cost. Second, at present, backup dispatchable power would almost certainly entail CO2 emissions, violating the net zero dictum. But in attempting to address a presumed externality (climate warming) by granting generous subsidies to wind and solar investors, the government and NGOs induce an imposed cost on society with far more serious and immediate consequences.

Deadly Sin: Subsidizing Dormant Capital

Wind and solar capital outlays are funded via combinations of private investment and public subsidies, and the former is very much contingent on the latter. That’s because the flood of subsidies is what allows private investors a chance to profit from uncontrollably dormant capital. Wind and solar power are far more heavily subsidized than fossil fuels, as noted by Mitch Rolling and Isaac Orr:

“In 2022, wind and solar generators received three and eighteen times more subsidies per MWh, respectively, than natural gas, coal, and nuclear generators combined. Solar is the clear leader, receiving anywhere from $50 to $80 per MWh over the last five years, whereas wind is a distant second at $8 to $10 per MWh …. Renewable energy sources like wind and solar are largely dependent on these subsidies, which have been ongoing for 30 years with no end in sight.”
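
As a rough consistency check on those multiples (my arithmetic, using midpoints of the quoted ranges; the combined conventional-source baseline is implied rather than stated):

```python
# Self-consistency check on the subsidy figures quoted above. The combined
# gas/coal/nuclear subsidy per MWh is not stated, so it is backed out here
# from the quoted 18x solar multiple; all of this is illustrative arithmetic.
solar_subsidy = (50 + 80) / 2     # $/MWh, midpoint of quoted $50-$80 range
wind_subsidy = (8 + 10) / 2       # $/MWh, midpoint of quoted $8-$10 range

implied_baseline = solar_subsidy / 18
print(f"implied conventional subsidy: ~${implied_baseline:.2f}/MWh")            # ~$3.6
print(f"wind multiple at that baseline: ~{wind_subsidy / implied_baseline:.1f}x")  # ~2.5x
# The quoted 3x and 18x multiples hang together roughly with a combined
# conventional subsidy on the order of a few dollars per MWh.
```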

But even generous subsidies often aren’t enough to ensure financial viability. Rent-enabled malinvestments like these crowd out genuinely productive capital formation. Those lost opportunities span the economy and are not limited to power plants that might otherwise have used fossil fuels.

Despite billions of dollars in “green energy” subsidies, bankruptcy has been all too common among wind and solar firms. That financial instability demonstrates the uneconomic nature of many wind and solar investments. Bankruptcy pleadings represent yet another way investors are insulated against wind and solar losses.

Subsidized Off-Hour (Wasted) Output

This almost deserves a sixth circle, except that it’s not about dormancy. Wind and solar power are sometimes available when they’re not needed, in which case the power goes unused because we lack effective power storage technology. Battery technology has a long way to go before it can overcome this problem.

When wind and solar facilities generate unused and wasted power during off-hours, their operators are nevertheless paid for that power by selling it into the grid where it goes unused. It’s another subsidy to wind and solar power producers, and one that undermines incentives for investment in batteries.

A Path To Redemption

Space-based solar power beamed to earth may become a viable alternative to terrestrial wind and solar production within a decade or so. The key advantages would be constancy and the lack of an atmospheric filter on available solar energy, producing power 13 times as efficiently as earth-bound solar panels. From the last link:

“The intermittent nature of terrestrial renewable power generation is a major concern, as other types of energy generation are needed to ensure that lights stay on during unfavorable weather. Currently, electrical grids rely either on nuclear plants or gas and coal fired power stations as a backup….”

Construction of collection platforms in geostationary orbit will take time, of course, but development of space-based solar should be a higher priority than blanketing vast tracts of land with inefficient solar panels while putting power users at risk of outages.

No Sympathy for Malinvestment

This post identified five ways in which investments in wind and solar power create frequent and often extended periods of damnably dormant physical capital:

  • Low Utilization
  • Nondispatchable Utilization
  • Idle Transmission Infrastructure
  • Idle Backup Generators
  • Outages of All Electrified Capital

Power demand is expected to soar given the coming explosion in AI applications, and especially if the heavily-subsidized and mandated transition to EVs comes to pass. But that growth in demand will not and cannot be met by relying solely on renewable energy sources. Their variability implies substantial idle capacity, higher costs, and service interruptions. Such a massive deployment of dormant capital represents an enormous waste of resources, and the sad fact is it’s been underway for some time.

In the years ahead, the net-zero objective will motivate more bungled industrial planning as a substitute for market-driven forces. Costs will be driven higher by the imposed costs of backup capacity and/or outages. Ratepayers, taxpayers, and innocents will all share these burdens.

Creating idle, non-dispatchable physical capital is malinvestment which diminishes future economic growth. The boom in wind and solar activity began in earnest during the era of negative real interest rates. Today’s higher rates might slow the malinvestment, but they won’t bring it to an end without a substantial shift in the political landscape. Instead, taxpayers will shoulder an even greater burden, as will ratepayers whose power providers are guaranteed returns on their regulated rate bases.

Demand, Disinflation, and Fed Gradualism

The Fed’s “higher for longer” path for short-term interest rates lingers on, and so does inflation in excess of the Fed’s 2% target. No one should be surprised that rate cuts aren’t yet on the table, but the markets freaked out a little with the release of the February CPI numbers last week, which were higher than expected. For now, it only means the Fed will remain patient with the degree of monetary restraint already achieved.

Dashed Hopes

As I’ve said before, there was little reason for the market to have expected the Fed to cut rates aggressively this year. Just a couple of months ago, the market expected as many as six quarter-point cuts in the Fed’s target for the federal funds rate. The only rationale for that expectation would have been faster disinflation or the possibility of an economic “hard landing”. A downturn is not out of the question, especially if the Fed feels compelled to raise its rate target again in an effort to stem a resurgence in inflation. Maybe some traders felt the Fed would act politically, cutting rates aggressively as the presidential election approaches. Not yet anyway, and it seems highly unlikely.

There is no assurance that the Fed can succeed in engineering a “soft landing”, i.e., disinflation to its 2% goal without a recession. No one can claim any certainty on that point — it’s too early to call, though the odds have improved somewhat. As Scott Sumner succinctly puts it, a soft landing basically depends on whether the Fed can disinflate gradually enough.

It’s a Demand-Side Inflation

I’d like to focus a little more on Sumner’s perspective on Fed policy because it has important implications for the outlook. Sumner is a so-called market monetarist and a leading proponent of nominal GDP level targeting by the Fed. He takes issue with those ascribing the worst of the pandemic inflation to supply shocks. There’s no question that disruptions occurred on the supply side, but the Fed did more than accommodate those shocks in attempting to minimize their impact on real output and jobs. In fact, it can fairly be said that a Fed / Treasury collaboration managed to execute the biggest “helicopter drop” of money in the history of the world, by far!

That “helicopter drop” consisted of pandemic relief payments, a fiscal maneuver amounting to a gigantic monetary expansion and stimulus to demand. The profligacy has continued on the fiscal side since then, with annual deficits well in excess of $1 trillion and no end in sight. This reflects government demand against which the Fed can’t easily act to countervail, making the job of achieving a soft landing that much more difficult.

The Treasury, however, is finding a more limited appetite among investors for the flood of bonds it must regularly sell to fund the deficit. Recent increases in long-term Treasury rates reflect these large funding needs as well as the “higher-for-longer” outlook for short-term rates, inflation expectations, and of course better perceived investment alternatives.

The Nominal GDP Proof

There should be no controversy that inflation is a demand-side problem. As Sumner says, supply shocks tend to reverse themselves over time, and that was largely the case as the pandemic wore on in 2021. Furthermore, advances in both real and nominal GDP have continued since then. The difference between the two is inflation, which, again, has remained above the Fed’s target.

So let’s see… output and prices both growing? That combination of gains demonstrates that demand has been the primary driver of inflation for three-plus years. Restrictive monetary policy is the right prescription for taming excessive demand growth and inflation.

Here’s Sumner from early March (emphasis his), where he references flexible average inflation targeting (FAIT), a policy the Fed claims to be following, and nominal GDP level targeting (NGDPLT):

Over the past 4 years, the PCE price index is up 16.7%. Under FAIT it should have risen by 8.2% (i.e., 2%/year). Thus we’ve had roughly 8.5% excess inflation (a bit less due to compounding.)

Aggregate demand (NGDP) is up by 27.6%. Under FAIT targeting (which is similar to NGDPLT) it should have been up by about 17% (i.e., 4%/year). So we’ve had a bit less than 10.6% extra demand growth.  That explains all of the extra inflation.
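
The arithmetic in the quote is easy to reproduce (the growth figures are as quoted; the 2% and 4% paths are the stated targets):

```python
# Reproducing the arithmetic in Sumner's quote (figures as quoted above).
pce_growth = 0.167            # PCE price index growth over 4 years
ngdp_growth = 0.276           # nominal GDP growth over 4 years

fait_price_path = 1.02 ** 4 - 1       # 2% per year target, compounded
ngdp_target_path = 1.04 ** 4 - 1      # ~4% per year nominal GDP path

print(f"target price growth: {fait_price_path:.1%}")                 # ~8.2%
print(f"excess inflation:    {pce_growth - fait_price_path:.1%}")    # ~8.5%
print(f"target NGDP growth:  {ngdp_target_path:.1%}")                # ~17.0%
print(f"excess demand:       {ngdp_growth - ngdp_target_path:.1%}")  # ~10.6%
# Excess demand growth is more than enough to account for the excess
# inflation, which is the point of the quote.
```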

Is Money “Tight”?

The Fed got around to tightening policy in the spring of 2022, but that doesn’t necessarily mean that policy ever advanced to the “tight” stage. Sumner has been vocal in asserting that the Fed’s policy hasn’t looked especially restrictive. Money growth feeds demand and ultimately translates into nominal GDP growth (aggregate demand). The latter is growing too rapidly to bring inflation into line with the 2% target. But wait! Money growth has been moderately negative since the Fed began tightening. How does that square with Sumner’s view?

In fact, the M2 money supply is still approximately 35% greater than at the start of the pandemic. There’s still a lot of M2 sloshing around out there, and the Fed’s portfolio of securities acquired during the pandemic via “quantitative easing” remains quite large ($7.5 trillion). Does this sound like tight money?

Again, Sumner would say that with nominal GDP ripping ahead at 5.7%, the Fed can’t be credibly targeting 2% inflation given an allowance for real GDP growth at trend of around 1.8% (or even somewhat greater than that). It’s an even bigger stretch if M2 velocity (V — turnover) continues to rebound with higher interest rates.
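
A minimal sketch of that point, using the growth-rate form of the equation of exchange; the nominal GDP figure is as cited above and the trend real growth rate is the assumption stated in the text:

```python
# Growth-rate form of MV = PY: inflation is roughly nominal GDP growth less
# real GDP growth (here computed exactly rather than by subtraction).
ngdp_growth = 0.057          # recent nominal GDP growth, as cited above
trend_real_growth = 0.018    # assumed trend real GDP growth

implied_inflation = (1 + ngdp_growth) / (1 + trend_real_growth) - 1
print(f"implied inflation at trend real growth: {implied_inflation:.1%}")  # ~3.8%
# With nominal spending growing at 5.7% and trend real growth near 1.8%,
# inflation settles near 3.8%, well above the Fed's 2% target.
```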

Wage growth also exceeds a level consistent with the Fed’s target. The chart below shows the gap between price inflation and wage inflation that left real wages well below pre-pandemic levels. Since early 2023, wages have made up part of that decline, but stubborn wage inflation can impede progress against price inflation.

Just Tight Enough?

Despite Sumner’s doubts, there are arguments to be made that Fed policy qualifies as restrictive. Even moderate declines in liquidity can come as a shock to markets grown accustomed to torrents from the money supply firehose. And to the extent that inflation expectations have declined, real interest rates may be higher now than they were in early November. In any case, it’s clear the market was disappointed in the higher-than-expected CPI, and traders were not greatly assuaged by the moderate report on the PPI that followed.

However, the Fed pays closest attention to another price index: the core deflator for personal consumption expenditures (PCE). Inflation by this measure is trending much closer to the Fed’s target (see the second chart below). Still, from the viewpoint of traders, many of whom, not long ago, expected six rate cuts this year, the reality of “higher for longer” is a huge disappointment.

Danger Lurks

As I noted, many believe the odds of a soft landing have improved. However, the now-apparent “stickiness” of inflation and the knowledge that the Fed will stand by or possibly hike rates again have rekindled fears that the economy could turn south before the Fed elects to cut its short-term interest rate target. That might surprise Sumner in the absence of more tightening, as his arguments are partly rooted in the continuing strength of aggregate demand and nominal GDP growth.

There’s a fair degree of consensus that the labor market remains strong, which underscores Sumner’s doubts as to the actual tenor of monetary policy. The March employment numbers were deceptive, however. The gain in civilian employment was just shy of 500,000, but that gain was entirely in part-time employment. Full-time employment actually declined slightly. In fact, the same is true over the prior 12 months. And over that period, the number of multiple jobholders increased by more than total employment. Increasing reliance on part-time work and multiple jobs is a sign of stress on household budgets and that firms may be reluctant to commit to full-time hires. From the establishment survey, the gain in nonfarm employment was dominated once again by government and health care. These numbers hardly support the notion that the economy is on solid footing.

There are other signs of stress: credit card delinquencies hit an all-time high in February. High interest rates are taking a toll on households and business borrowers. Retail sales were stronger than expected in March, but excess savings accumulated during the pandemic were nearly depleted as of February, so it’s not clear how long the spending can last. And while the index of leading indicators inched up in February, it was the first gain in two years and the index has shown year-over-year declines over that entire two-year period.

Conclusion

It feels a little hollow for me to list a series of economic red flags, having done so a few times over the past year or so. The risks of a hard landing are there, to be sure. The behavior of the core PCE deflator over the next few months will have much more influence on Fed policy, as will any dramatic changes in the real economy. The “data dependence” of policy is almost a cliché at this point. The Fed will stand pat for now, and I doubt the Fed will raise its rate target without a dramatic upside surprise on the core deflator. Likewise, any downward rate moves won’t be forthcoming without more softening in the core deflator toward 2% or definitive signs of a recession. So rate cuts aren’t likely for some months to come.

Tangled Up In Green Industrial Policy II: Rewarding Idle Capital

A week ago I posted about electrification and particularly EV mandates, one strand of government industrial policy under which non-favored sectors of the economy must labor. This post examines a related industrial policy: manipulation of power generation by government policymakers in favor of renewable energy technologies, while fossil fuels are targeted for oblivion. These interventions are a reaction to an overwrought climate crisis narrative, but they present many obstacles, oversights and risks of their own. Chief among them is whether the power grid will be capable of meeting current and future demand for power while relying heavily on variable resources: wind and sunshine.

Like almost everything I write, this post is too long! Here is a guide to what follows. Scroll down to whatever sections might be of interest:

  • Malinvestment: Idle capital
  • Key Considerations to chew on
  • False Premises: zero CO2? Low cost?
  • Imposed Cost: what and how much?
  • Supporting Growth: with renewables?
  • Resource Constraints: they’re tight!
  • Technological Advance: patience!
  • The Presumed Elephant: CO2 costs
  • Conclusion

Malinvestment

The intermittency of wind and solar power creates a fundamental problem of physically idle capital, which leaves the economy short of its production possibilities. To clarify, capital invested in wind and solar facilities is often idle in two critical ways. First, wind and solar assets have relatively low rates of utilization because of their variability, or intermittency. Second, neither provides “dispatchable” power: it is not “on call” in any sense during those idle periods, which are not entirely predictable. Wind and solar assets therefore contribute less value to the electric grid than dispatchable sources of power having equivalent capacity and utilization.

Is “idle capital” a reasonable characterization? Consider the shipping concerns that are now experimenting with sails on cargo ships. What is the economic value of such a ship without back-up power? Can you imagine them drifting in the equatorial calms for days on end? Even light winds would slow the transport of goods significantly. Idle capital might be bad enough, but a degree of idleness allows flexibility and risk mitigation in many applications. Idle, nondispatchable capital, however, is unproductive capital.

Likewise, solar-powered signage can underperform or fail over the course of several dark, wintry days, even with battery backup. The signage is more reliable and valuable when it is backed-up by another power source. Again, idle, non-dispatchable capital is unproductive capital.

The pursuit of net-zero carbon emissions via wind and solar power creates idle capital, which increasingly lacks adequate backup power. That should be a priority, but it’s not. This misguided effort is funded from both private investment and public subsidies, but the former is very much contingent on the latter. That’s because the flood of subsidies is what allows private investors to profit from idle capital. Rent-enabled investments like these crowd out genuinely productive capital formation, which is not limited to power plants that might otherwise use fossil fuels.

Creating idle or unemployed physical capital is malinvestment, and it diminishes future economic growth. The surge in this activity began in earnest during the era of negative real interest rates. Today, in an era of higher rates, taxpayers can expect an even greater burden, as can ratepayers whose power providers are guaranteed returns on their regulated rate bases.

Key Considerations

The forced transition to net zero will be futile, especially if wind and solar energy are the primary focus. Keep the following in mind:

  • The demand for electricity is expected to soar, and soon! Policymakers have high hopes for EVs, and while adoption rates might fall well short of their goals, they’re doing their clumsy best to force EVs down our throats with mandates. But facilitating EV charging presents difficulties. Lionel Shriver states the obvious: “Going Electric Requires Electricity”. Reliable electricity!
  • Perhaps more impressive than prospects for EVs is the expected growth in power demand from data centers required by the explosion of artificial intelligence applications across many industries. It’s happening now! This will be magnified with the advent of artificial general intelligence (AGI).
  • Dispatchable power sources are needed to back-up unreliable wind and solar power to ensure service continuity. Maintaining backup power carries a huge “imposed cost” at the margin for wind and solar. At present, that would entail CO2 emissions, violating the net zero dictum.
  • Perhaps worse than the cost of backup power would be the cost borne by users under the complete elimination of certain dispatchable power sources. An imposed cost then takes the form of outages. Users are placed at risk of losing power at home, at the office and factories, at stores, in transit, and at hospitals at peak hours or under potentially dangerous circumstances like frigid or hot weather.
  • Historically, dispatchable power has allowed utilities to provide reliable electricity on-demand. Just flip the switch! This may become a thing of the past.
  • Wind and solar power are sometimes available when they’re not needed, in which case the power goes unused because we lack effective power storage technology.
  • Wind and solar power facilities operate at low rates of utilization, yet new facilities are always touted at their full nameplate capacity. Capacity factors for wind turbines averaged almost 36% in the U.S. in 2022, while solar facilities averaged only about 24%. This compared with nuclear power at almost 93%, natural gas (66%), and coal (48%). Obviously, the low capacity factors for wind and solar reflect their variable nature, rather than dispatchable responses to fluctuations in power demand.
  • Low utilization and variability are underemphasized or omitted by those promoting wind and solar plant in the media and often in discussions of public policy, and no wonder! We hear a great deal about “additions to capacity”, which overstate the actual power-generating potential by factors of three to four times. Here is a typical example.
  • Wind and solar power are far more heavily subsidized than fossil fuels. This is true in absolute terms and especially on the basis of actual power output, which reveals their overwhelmingly uneconomic nature. From the link above, here are Mitch Rolling and Isaac Orr on this point:
    • In 2022, wind and solar generators received three and eighteen times more subsidies per MWh, respectively, than natural gas, coal, and nuclear generators combined. Solar is the clear leader, receiving anywhere from $50 to $80 per MWh over the last five years, whereas wind is a distant second at $8 to $10 per MWh …. Renewable energy sources like wind and solar are largely dependent on these subsidies, which have been ongoing for 30 years with no end in sight.
  • The first-order burden of subsidies falls on taxpayers. The second-order burdens manifest in an unstable grid and higher power costs. But just to be clear, subsidies are paid by governments to producers or consumers to reduce the cost of activity favored by policymakers. However, the International Monetary Fund frequently cites “subsidy” figures that include staff estimates of unaddressed externalities. These are based on highly-simplified models and subject to great uncertainty, of course, especially when dollar values are assigned to categories like “climate change”. Despite what alarmists would have us believe, the extent and consequences of climate change are not settled scientific issues, let alone the dollar cost.
  • Wind and solar power are extremely land- and/or sea-intensive. For example, Casey Handmer estimates that a one-Gigawatt data center, if powered by solar panels, would need a footprint of 20,000 acres (a rough arithmetic check appears just after this list).
  • Solar installations are associated with a significant heat island effect: “We found temperatures over a PV plant were regularly 3–4 °C warmer than wildlands at night….”
  • Wind and solar power both represent major hazards to wildlife both during and after construction.
    • In addition to the destruction of habitat both on- and offshore, turbine blades create noise, electromagnetism, and migration barriers. Wind farms have been associated with significant bird and bat fatalities. Collisions with moving blades are one thing, but changes to the winds and air pressure around turbines are also a danger to avian species.
    • There is a strong likelihood that offshore wind development is endangering whales and dolphins.
    • Solar farms present dangers to waterfowl. These creatures are tricked into diving toward what they believe to be bodies of water, only to crash into the panels.
  • The production of wind and solar equipment requires the intensive use of scarce resources, including environmentally-sensitive materials. Extracting these materials often requires the excavation of massive amounts of rock subject to extensive processing. Mining and processing rely heavily on diesel fuel. Net zero? No.
  • Wind and solar facilities often present major threats of toxicity at disposal, or even sooner. A recent hail storm in Texas literally destroyed a solar farm, and the smashed panels have prompted concerns not only about solar “sustainability”, but also that harsh chemicals may be leaking into the local environment.
  • The transmission of power is costly, but that cost is magnified by the broad spatial distribution of wind and solar generating units. Transmission from offshore facilities is particularly complex. And high voltage lines run into tremendous local opposition and regulatory scrutiny.
  • When wind turbines and solar panels are idle, so are the transmission facilities needed to reach them. Thus, low utilization and the variability of those units drives up the capital needed for power and power transmission.
  • There is also an acute shortage of transformers, which presents a major bottleneck to grid development and stability.
  • While zero carbon is the ostensible goal, zero carbon nuclear power has been neglected by our industrial planners. That neglect plays off exaggerated fears about safety. Fortunately, there is a growing realization that nuclear power may be the surest way to carbon reductions while meeting growth in power demand. In fact, new data centers will go off-grid with their own modular reactors.
  • At the Shriver link, he notes the smothering nature of power regulation, which obstructs the objective of providing reliable power and any hope of achieving net zero.
  • The Biden administration has resisted the substitution of low CO2 emitting power sources for high CO2 emitting sources. For example, natural gas is more energy efficient in a variety of applications than other fuel sources. Yet policymakers seem determined to discourage the production and use of natural gas.
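
On the land-intensity point flagged above, here is a rough arithmetic check of the 20,000-acre figure. The capacity factor and acres-per-MW assumptions are mine and only illustrative:

```python
# Rough check of the 20,000-acre figure cited in the list above.
# Assumptions (mine): a ~24% solar capacity factor and a typical utility-scale
# land use of roughly 5 acres per MW of nameplate capacity.
continuous_demand_mw = 1000          # a 1 GW data center running around the clock
capacity_factor = 0.24               # average solar output as a share of nameplate
acres_per_nameplate_mw = 5           # assumed land footprint per MW installed

nameplate_needed_mw = continuous_demand_mw / capacity_factor   # ~4,167 MW
acres_needed = nameplate_needed_mw * acres_per_nameplate_mw
print(f"nameplate needed: {nameplate_needed_mw:,.0f} MW")
print(f"land footprint:   {acres_needed:,.0f} acres")           # ~20,800 acres
# With these assumptions the footprint lands right around 20,000 acres, and
# this ignores the storage needed to cover nights and cloudy stretches.
```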

False Premises

Wind and solar energy are touted by the federal government as zero-carbon and low-cost technologies, but both claims are false. Extracting the needed resources, then fabricating, installing, connecting, and ultimately disposing of these facilities, all involve substantial carbon emissions.

The claim that wind and solar have a cost advantage over traditional power sources is based on misleading comparisons. First, putting claims about the cost of carbon aside, it goes without saying that the cost of replacing already operational coal or natural gas generating capacity with new wind and solar facilities is greater than doing nothing.

The hope among net zero advocates is that existing fossil fuel generating plant can be decommissioned as more renewables come on-line. Again, this thinking ignores the variable nature of renewable power. Dispatchable backup power is required to reliably meet power demand. Otherwise, fluctuating power supplies undermine the economy’s productive capacity, leading to declines in output, income, health, and well being. That is costly, but so is maintaining and adding back-up capacity. Costs of wind and solar should account for this necessity. It implies that wind and solar generating units carry a high cost at the margin.

Imposed Costs

A “grid report card” from the Mackinac Center for Public Policy notes the conceptual flaw in comparing the levelized cost (à la Lazard) of a variable resource with one capable of steady and dispatchable performance. From the report, here is the crux of the imposed-cost problem:

“… the more renewable generation facilities you build, the more it costs the system to make up for their variability, and the less value they provide to electricity markets.”

A commitment to variable wind and solar power along with back-up capacity also implies that some capital will be idle regardless of wind and solar conditions. This is part of the imposed cost of wind and solar built into the accounting below. But while back-up power facilities will have idle periods, they are dispatchable and serve an insurance function, so they have value in preserving the stability of the grid even when idle. For that matter, sole reliance on dispatchable power sources requires excess capacity to serve an insurance function of a similar kind.

The Mackinac report card uses estimates of imposed cost from the Institute for Energy Research to construct the following comparison (expand the view or try clicking the image for a better view):

The figures shown in this table are somewhat dated, but the Mackinac authors use updated costs for Michigan from the Center of the American Experiment. These are shown below in terms of average costs per MWh through 2050, but the labels require some additional explanation.

The two bars on the left show costs for existing coal ($33/MWh) and gas-powered ($22) plants. The third and fourth bars are for new wind ($180) and solar ($278) installations. The fifth and sixth bars are for new nuclear reactors (a light water reactor ($74) and a small modular reactor ($185)). Finally, the last two bars are for a new coal plant ($106) and a natural gas plant ($64), both with carbon capture and storage (CCS). It’s no surprise that existing coal and gas facilities are the most cost effective. Natural gas is by far the least costly of the new installations, followed by the light water reactor and coal.

The Mackinac “report card” is instructive in several ways. It provides a detailed analysis of different types of power generation across five dimensions, including reliability, cost, cleanliness, and market feasibility (the latter because some types of power, such as hydro and geothermal, have geographic limits). Natural gas comes out the clear winner on the report card because it is plentiful, energy dense, dispatchable, clean burning, and low-cost.

Supporting Growth

Growth in the demand for power cannot be met with variable resources without dispatchable backup or intolerable service interruptions. Unreliable power would seriously undermine the case for EVs, which is already tenuous at best. Data centers and other large users will go off-grid before they stand for it. This would represent a flat-out market rejection of renewable investments, ESGs be damned!

Casey Handmer makes some interesting projections of the power requirements of data centers supporting not just AI, but AGI, which he discusses in “How To Feed the AIs”. Here is his darkly humorous closing paragraph, predicated on meeting power demands from AGI via solar:

It seems that AGI will create an irresistibly strong economic forcing function to pave the entire world with solar panels – including the oceans. We should probably think about how we want this to play out. At current rates of progress, we have about 20 years before paving is complete.

Resource Constraints

Efforts to force a transition to wind and solar power will lead to more dramatic cost disadvantages than shown in the Mackinac report. By “forcing” a transition, I mean aggressive policies of mandates and subsidies favoring these renewables. These policies would effectuate a gross misallocation of resources. Many of the commodities needed to fabricate the components of wind and solar installations are already quite scarce, particularly on the domestic U.S. front. Inflating the demand for these commodities will result in shortages and escalating costs, magnifying the disadvantages of wind and solar power in real economic terms.

To put a finer point on the infeasibility of the net zero effort, Simon P. Michaux produced a comparative analysis in 2022 of the existing power mix versus a hypothetical power mix of renewable energy sources performing an equal amount of work, but at net-zero carbon emissions (the link is a PowerPoint summary). In the renewable energy scenario, he calculated the total quantities of various resources needed to achieve the objective over one generation of the “new” grid (to last 20-30 years). He then calculated the numbers of years of mining or extraction needed to produce those quantities based on 2019 rates of production. Take a look at the results in the right-most column:

Those are sobering numbers. Granted, they are based on 2019 wind and solar technology. However, it’s clear that phasing out fossil fuels using today’s wind and solar technology would be out of the question within the lifetime of anyone currently living on the planet. Michaux seems to have a talent for understatement:

“Current thinking has seriously underestimated the scale of the task ahead.”

He also emphasizes the upward price pressure we’re likely to witness in the years ahead across a range of commodities.
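
Michaux’s core calculation is simple enough to sketch. The version below uses placeholder quantities rather than his actual figures (those are in the linked summary); it simply divides the quantity of a material required for one generation of the “new” grid by annual production at 2019 rates:

```python
# Minimal sketch of Michaux's years-of-mining calculation. The quantities
# below are illustrative placeholders, NOT figures from his analysis.
def years_of_mining(required_tonnes: float, annual_production_tonnes: float) -> float:
    """Years needed to extract the required quantity at a fixed annual production rate."""
    return required_tonnes / annual_production_tonnes

# Hypothetical example: a metal requiring 900 million tonnes for one grid
# generation, against 2019-style production of 20 million tonnes per year.
print(f"{years_of_mining(900e6, 20e6):.0f} years of mining at 2019 rates")  # -> 45 years
```

Even with generous placeholder numbers, the arithmetic makes clear why some entries in Michaux’s table run to decades or centuries.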

Technological Breakthroughs

Michaux’s analysis assumes static technology, but there may come a time in the not-too-distant future when advances in wind and solar power and battery storage allow them to compete with hydrocarbons and nuclear power on a true economic basis. The best way to enable real energy breakthroughs is through market-driven economic growth. Energy production and growth are hampered, however, when governments strong-arm taxpayers, electricity buyers, and traditional energy producers while rewarding renewable developers with subsidies.

We know that improvements will come across a range of technologies. We’ve already seen reductions in the costs of solar panels themselves. Battery technology has a long way to go, but it has improved and might some day be capable of substantial smoothing in the delivery of renewable power. Collection of solar power in space is another possibility, as the feasibility of beaming power to earth has been demonstrated. This solution might also have advantages in terms of transmission depending on the locations and dispersion of collection points on earth, and it would certainly be less land intensive than solar power is today. Carbon capture and carbon conversion are advancing technologies, making net zero a more feasible possibility for traditional sources of power. Nuclear power is zero carbon, but like almost everything else, constructing plants is not. Nevertheless, fission reactors have made great strides in terms of safety and efficiency. Nuclear fusion development is still in its infancy, but there have been notable advances of late.

Some or all of these technologies will experience breakthroughs that could lead to a true, zero-carbon energy future. The timeline is highly uncertain, but it’s likely to be faster than anything like the estimates in Michaux’s analysis. Who knows? Perhaps AI will help lead us to the answers.

A Presumed Elephant

This post and my previous post have emphasized two glaring instances of government failure on their own terms: a headlong plunge into unreliable renewable energy, and forced electrification done prematurely and wrong. Some would protest that I left out the veritable “elephant in the room”: the presumed external or spillover costs associated with CO2 emissions from burning fossil fuels. Renewables and electrification are both intended to prevent those costs.

External costs were not ignored, of course. Externalities were discussed explicitly in several different contexts such as the mining of new materials, EV tire wear, the substitution of “cleaner” fuels for others, toxicity at disposal, and the exaggerated reductions in CO2 from EVs when the “long tailpipe” problem is ignored. However, I noted explicitly that estimates of unaddressed externalities are often highly speculative and uncertain, especially the costs of CO2 emissions, and that such estimates should not be folded into comparisons of subsidies.

Therefore, the costs of various power generating technologies shown above do not account for estimates of externalities. If you’re inclined, other SCC posts on the CO2 “elephant” can be found here.

Conclusion

Power demand is expected to soar given the coming explosion in AI applications, and especially if the heavily-subsidized and mandated transition to EVs comes to pass. But that growth in demand will not and cannot be met by relying on renewable energy sources. Their variability implies substantial idle capacity, higher costs, and service interruptions. Such a massive deployment of idle capital would represent an enormous waste of resources, but the sad fact is it’s been underway for some time.

In the years ahead, the net-zero objective will prove to be a bumbling exercise in industrial planning. Costs will be driven higher, including the costs inflicted by outages and environmental damage. Ratepayers, taxpayers, and innocents will share these burdens. Travis Fisher is spot on when he says the grid is becoming a “dangerous liability” thanks to wounds inflicted by subsidies, regulations, and mandates.

As Charles Glasser put it on Instapundit:

The National Electrical Grid is teetering on collapse. The shift away from full-time available power (like fossil fuels, LNG, etc.) to so-called ‘green’ sources has deeply impacted reliability.

Also, as more whale-killing off-shore wind farms are planned, the Biden administration forgot to plan for the thousands of miles of transmission lines that will be needed. And in a perfect example of leftist autophagy, there is considerable opposition from enviro-groups who will tie up the construction of wind farms and transmission lines in court for decades.

Meanwhile, better alternatives to wind and solar have been routinely discouraged. The substantial reductions in carbon emissions achieved in the U.S. over the past 15 years were caused primarily by the substitution of natural gas for coal in power generation. Much more of that is possible. The Biden Administration, however, wishes to prevent that substitution in favor of greater reliance on high-cost, unreliable renewables. And the Administration wishes to do so without adequately backing up those variable power sources with dispatchable capacity. Likewise, nuclear power has been shunted aside, despite its safety, low risk, and dispatchability. However, there are signs of progress in attitudes toward bringing more nuclear power on-line.

Industrial policy usually meets with failure, and net zero via wind and solar power will be no exception. Like forced electrification, unreliable power fails on its own terms. Net zero ain’t gonna happen any time soon, and not even by 2050. That is, it won’t happen unless net zero is faked through mechanisms like fraudulent carbon credits (and there might not be adequate faking capacity for that!). Full-scale net-zero investment in wind and solar power, battery capacity, and incremental transmission facilities will drive the cost of power upward, undermining economic growth. Finally, wind and solar are not the environmental panacea so often promised. Quite the contrary: mining of the necessary minerals, component fabrication, installation, and even operation all have negative environmental impacts. Disposal at the end of their useful lives might be even worse. And the presumed environmental gains … reduced atmospheric carbon concentrations and lower temperatures … are more scare story than science.

Postscript: here’s where climate alarmism has left us, and this is from a candidate for the U.S. Senate (she deleted the tweet after an avalanche of well-deserved ridicule):

Tangled Up In Green Industrial Policy: Joe Biden’s Electrification

Tags

, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,

Industrial policy allows government planners to select favored and disfavored industries or sectors. It thereby bypasses and distorts impersonal market signals that would otherwise direct scarce resources to the uses most valued by market participants. Instead, various forms of aid and penalties are imposed on different sectors in order to accomplish the planners’ objectives. This includes interventions in foreign trade and attempts to steer technological development. Industrial policy often comes under the guise of enhanced national security. Of course, it can also be used to reward cronies. And it has a poor record of accomplishing its objectives and avoiding unintended consequences.

The Sausage Factory

The executive and legislative branches of the U.S. government are loaded with economic interventionists, regardless of party affiliation. In an age of (Chevron) judicial deference to “experts” within the administrative state, it is not uncommon for legislative language to give abundant leeway to those who implement policy within the executive branch (though a couple of upcoming Supreme Court decisions might change that balance). Increasingly, bills are stuffed so full of provisions that lawmakers find it all but impossible to read them in full, let alone make an accurate assessment of their virtues, drawbacks, and internal contradictions.

Even worse is the fact that bills are, in great part, written by relatively youthful legislative staffers with little real world experience in industry, and who harbor the naive belief that whatever is wished, government can make it so. But their work also proceeds under guidance from lawmakers, administration officials, consultants, and lobbyists who have their own agendas and axes to grind. This is how industrial policy is promulgated in the U.S., and it is through this ugly prism that we must view environmental policy.

The Left dictates environmental and energy policy in several states, especially California, where energy costs have soared under renewable energy initiatives. California households now pay almost triple the rate per kilowatt-hour paid in Washington, and more than double what’s paid in Oregon. Something similar may happen in New York, which has highly ambitious goals for renewable energy even as the costs of the state’s offshore wind projects are out of control. These and other state-level “laboratories” are demonstrating that a renewable energy agenda can carry very high costs to the populace. The same is true of the painful experience in Germany with its much-heralded Energiewende.

Net Zero

The Left is also pulling the strings within the federal bureaucracy and the Biden Administration. The objective is an industrial policy to achieve “net zero” CO2 emissions, a practical impossibility for at least several decades (unless it’s faked, of course). Nevertheless, that policy calls for phasing out the use of fossil fuels. Under this agenda, mandates and subsidies are bestowed upon the use of renewable electric power sources, while restrictions and penalties are imposed on the production and use of fossil fuels. A subsequent post on the subject of power generation will address this prototypical failure of central planning.

Electrification

Here, I discuss another key objective of our industrial planners: electrify whatever is not electrified in order to advance the net zero agenda. Of course, for some time to come, more than half of electric power will be generated using fossil fuels (currently about 60%, with another 18% nuclear), so the policy is largely a sham on its face, but we’ll return to that point below. The EV tailpipe is very long, as they say.

Electrification means, among other things, the forced adoption of electric vehicles (EVs). President Biden’s EPA has issued rules on auto emissions that are expected to require, by 2032, that 60% or more of cars and light trucks sold will be EVs. The USA Today article at the link offers this rich aside:

“…the original proposal — which was always technology-neutral in theory, meaning automakers could sell any cars and light-duty trucks they wanted as long as they hit the fleetwide reductions….”

Technology neutral? Hahaha! We aren’t forcing you to choose technologies as long as you meet our technological requirements!

EV Doldrums

Anyway, the EPA’s targets are completely impractical, partly because the value for drivers is lacking. Not coincidentally, the market for EVs seems to have chilled of late. Hertz has soured on heavy use of EVs in its fleet, and Ford has announced reductions in EV production. The new UAW agreements will make it difficult for some domestic producers to turn a profit on EVs. Fisker is just about broke. Apple has cancelled development of its EV, and several other automakers have reduced their production plans. Toyota was the first producer to raise the red flag on the breakneck transition to EVs in favor of a measured reliance on hybrids. Of course, there are other prominent voices cautioning against rapid attempts at electrification in general.

To be fair, some EVs are marvelous machines, but they and their supporting infrastructure are not yet well-suited to the mass market.

A Tangled Web

Here are some drawbacks of EVs that have yet to be adequately addressed:

  • They are expensive, even with the rich-man’s subsidy to buyers paid by the government and carbon credit subsidies granted to producers.
  • Costly battery replacement is an eventuality that looms over the wallets of EV owners.
  • EVs have limited range given the state of battery technology, especially when the weather is cold.
  • There presently exist far too few charging stations to make EVs workable for many people. In any case, charging away from home can be extremely time-consuming, and the prices charged vary widely.
  • The purchase and installation of EV chargers at home is a separate matter, and can cost $4,000 or more if an upgrade to the service panel is necessary. Installed costs commonly range from $1,175 to $3,300, depending on the type of charger and the region.
  • EVs are much heavier than vehicles powered by internal combustion engines. As a result, EV tire wear is a surprising source of both cost and pollution.
  • Used EVs are not in demand, given all of the above, so resale value is questionable.
  • Battery fires in EVs are extremely difficult to extinguish, creating a new challenge for emergency responders.
  • Reliance on EVs for local emergency services would be dangerous without duplicative investment by local jurisdictions to offset the down-time required for charging.
  • For decades to come, the power grid will be unable to handle the load required for widespread adoption of EVs. A rapid conversion would be impossible without a great expansion in generating and transmission capacity, including transformer availability. (A rough sketch of the scale involved appears just after this list.)
  • Domestically we lack the natural resources to produce the batteries required by EVs in a quantity that would satisfy the Administration’s goals. This forces dependence on China, our chief foreign adversary.
  • The mining of those resources is destructive to the environment. Much of it is done in China due to the country’s abundance of rare earth minerals, but wherever the mining occurs, it relies heavily on diesel power.
  • Joel Kotkin points out that China now hosts the world’s largest EV producer, BYD. Biden’s mandates might very well allow China to dominate the U.S. auto market, even as its own CO2 emissions are soaring.
  • Producers of EVs earn carbon credits for each vehicle sold, which they can sell to other auto producers who fall short of their required mix of EVs in total production. Tesla, for example, earned revenue of $1.8 billion from carbon credit sales in 2022. But note again that these so-called zero-emission vehicles use electricity generated with an average of 60% fossil fuels. Thus, the scheme is largely a sham.
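
To make the grid-load bullet above concrete, here is a rough back-of-envelope sketch. Every input is a round assumption of mine, not an official projection:

```python
# Back-of-envelope estimate of the incremental grid load from widespread EV
# adoption. All inputs are round, illustrative assumptions.
ev_fleet = 150e6           # assumed EVs on the road (the U.S. light-duty fleet is roughly 280 million)
miles_per_year = 12_000    # assumed annual miles per vehicle
kwh_per_mile = 0.30        # assumed average EV efficiency
charging_losses = 1.10     # assumed ~10% losses in charging

added_twh = ev_fleet * miles_per_year * kwh_per_mile * charging_losses / 1e9
print(f"Added demand: ~{added_twh:,.0f} TWh per year")
# U.S. generation currently runs on the order of 4,000-4,300 TWh per year,
# so this would be a double-digit percentage increase in demand before
# counting any growth from data centers and other electrification.
```

Even under these rough assumptions, the incremental demand comes to roughly 600 TWh per year, well over a tenth of current U.S. generation.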

The push for EVs has been hampered by the botched rollout of (non-Tesla) charging stations under a huge Biden initiative in the Infrastructure Investment and Jobs Act. Progress has been bogged down by sheer complexity and expense, including the cost of bringing adequate power supplies to the chargers as well as the difficulty of meeting contracting requirements and operating standards. This is exemplary of the failures that usually await government efforts to engineer outcomes contrary to market forces.

Electric Everything?

Like EVs, electric stoves have drawbacks that limit their popularity, including price and the nature of the heat needed for quality food preparation. In addition to autos and stoves, wholesale electrification would require the replacement or costly reconfiguration of a huge stock of business and household capital that is now powered by fossil fuels, like gas furnaces, tractors, chain saws, and many other tools and appliances. This set of legacy investment choices was guided by market prices that reflect the scarcity and efficiency of the resources, yet government industrial planners propose to lay much of it to waste.

Central Planning: a False Conceit

John Mozena quotes Adam Smith on the social and economic hazards of rejecting the market mechanism and instead accepting governmental authority over the allocation of resources:

All governments which thwart this natural course, which force things into another channel, or which endeavour to arrest the progress of society at a particular point, are unnatural, and to support themselves are obliged to be oppressive and tyrannical.

And Arnold Kling gives emphasis to the disadvantages faced by even the most benevolent central planner:

“As Ludwig von Mises and Friedrich Hayek pointed out during the socialist calculation debate, central planners lack the information that is produced by markets. By over-riding market prices and substituting their own judgment, regulators incur the same loss of information.”

Advocates of EV industrial policy have failed to appreciate the large gaps between the technology they are determined to dictate and basic consumer requirements. These gaps are along such margins as range, charging time, tire and battery wear, and perhaps most importantly, affordability. The planners have failed to foresee the massive demands on the power grid of a forced replacement of the internal combustion auto stock with EVs. The planners elide the true nature of EV-driven emissions, which are never zero carbon but instead depend on the mix of power sources used to charge EV batteries. Finally, EV mandates show that the industrial planners are oblivious to other environmental burdens inherent in EVs, whatever their true carbon footprint might be.


Carbon Credits Are Still Largely Fake

Tags

, , , , , , , , , , , , , , , , ,

About a year ago I wrote about the sketchy nature of carbon credits (or “offsets”), which are purchased by people or entities whose actions generate CO2 emissions they’d like to offset. Those actions would include Taylor Swift’s private air travel, electric power generation, and many other activities whose participants wish to have “greenwashed”.

One short digression before I get started: see those black clouds of CO2 in the image above? Well, carbon dioxide doesn’t really look like that. In fact, CO2 is transparent. Trees breathe it! Visually, it’s less obvious than the greenhouse gas known as water vapor in those puffy white clouds, but virtually every image you’ll ever see on-line depicting CO2 emissions shows dark, roiling smoke. I just hate to spoil the scary effect, but there it is.

Back to carbon credits, which help fund projects that offset CO2 emissions (at least theoretically), such as planting new forest acreage (which would absorb CO2 … someday) or preventing deforestation. Other types of offset activities include investment in renewable energy projects and carbon capture technology. So, for example, if a utility’s power generation emits CO2, the creation or preservation of some amount of forested acreage can serve as a carbon sink adequate to offset the utility’s emissions. Net zero! Or so the utility might claim.

If only it were that simple! Paul Mueller explains that the incentive structure of these arrangements is perverse. What if credits are sold on the basis of supposed efforts to preserve forests that were never at risk to begin with? In fact, the promise of revenue from the sale of credits may be a powerful incentive to falsely present forested lands as targets for development. For that matter, cutting forestland for lumber makes more sense if it can be replanted immediately in exchange for revenue from the sale of carbon credits. And newly planted acreage won’t lead to absorption of much CO2 for many years, until the trees begin to mature. Then there are the risks of forest fires or disease that could compromise a forest’s ultimate value as a carbon sink.

Whether through fraud, calamity, or mismanagement, the sad truth is that projects serving as a basis for credits have done far less to reduce deforestation than promised. On top of that, another issue plaguing carbon markets for some time has been double counting of offsets, which can occur under several circumstances. Ultimately, CO2 emissions themselves may have done more to promote the growth of forests than purchases of carbon credits, because CO2 gives life to vegetation!

Obviously, the purchase of offsets raises the incremental cost of any project having CO2 emissions. The incidence of this added cost is borne to a large extent by consumers, especially because power demand is fairly inelastic. The craziness of offset logic may even dictate the purchase of offsets when a plant emitting more CO2 (e.g., coal) is replaced by a plant emitting less (natural gas), because the replacement would still emit carbon!
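
To illustrate that last point with round numbers of my own (the emission factors and offset price below are rough assumptions, not figures from any registry or program):

```python
# Rough illustration of the offset logic described above. Emission factors
# are approximate (tonnes of CO2 per MWh) and the offset price is assumed.
coal_t_per_mwh = 1.0        # approximate emissions of an existing coal plant
gas_t_per_mwh = 0.4         # approximate emissions of a combined-cycle gas plant
offset_price = 15.0         # assumed offset price, USD per tonne
annual_mwh = 3_000_000      # hypothetical annual plant output

reduction = (coal_t_per_mwh - gas_t_per_mwh) * annual_mwh
residual = gas_t_per_mwh * annual_mwh
print(f"Emissions avoided by the coal-to-gas switch: {reduction:,.0f} t/yr")
print(f"Offsets still needed to claim 'net zero':    {residual:,.0f} t/yr "
      f"(~${residual * offset_price:,.0f} per year at ${offset_price:.0f}/t)")
```

The switch cuts emissions by roughly 60%, yet a net-zero claim would still require paying to offset every remaining tonne.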

Some carbon offsets help pay for the construction of renewable power facilities like wind and solar farms. These renewable power facilities contribute to the power supply, of course, but wind turbines and solar farms typically operate at a small fraction of nameplate capacity due to the intermittency of wind and sunshine. Thus, these offsets are far less than complete. And from that low rate of renewable utilization we can deduct another fraction: periods of actual utilization often occur when no one wants the power, and while utilities can sell that excess power into the grid, it doesn’t replace other power at those times and it therefore doesn’t contribute to reductions in CO2 emissions.
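
A stylized version of that double discount might look like the sketch below. The capacity factor and “useful timing” fraction are illustrative assumptions of mine, not measured values:

```python
# Stylized discounting of a renewable-energy offset, per the reasoning above.
# Both fractions are illustrative assumptions, not measured values.
nameplate_mwh = 100_000     # offset claimed on the basis of nameplate capacity
capacity_factor = 0.30      # assumed share of nameplate actually generated
useful_fraction = 0.70      # assumed share generated when it actually displaces other power

effective_offset = nameplate_mwh * capacity_factor * useful_fraction
print(f"Effective offset: {effective_offset:,.0f} MWh "
      f"({effective_offset / nameplate_mwh:.0%} of the nameplate claim)")
# -> 21,000 MWh, or 21% of the claimed offset
```

Under these assumptions, barely a fifth of the claimed offset does the work advertised.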

Claims of achieving net zero are very much in vogue in the corporate world, and for a few related reasons. One is that they help keep activists and protesters away from the gates. There are, however, plenty of activists serving on corporate boards, in the executive suite, and among regulators.

The purchase of carbon offsets by “socially responsible corporations” might put stakeholder pressure on competitors who are “insufficiently green”. That would help to compensate for the higher costs imposed by offsets. After all, carbon credits are not cheap. In fact, smaller competitors might struggle to fund additional outlays for the credits.

Finally, claims of carbon neutrality also help with another constituency: “woke” investors. “Achieving” net zero boosts a firm’s so-called ESG score, presumed to reflect soundness in terms of environmental (E) and social (S) responsibility, as well as the quality of internal governance (G). With firms jockeying for ESG improvements, they help keep the offset charade going.

There is no common standard for calculating ESG, and there is considerable variance in ESG scores across rating firms. This should be cause for great skepticism, but too many investors are vulnerable to suggestions that screening on ESGs can enable both social responsibility and better returns. Sadly, they are sometimes paying higher fees for the privilege. The ESG fad among these investors might have helped fulfill hopes of greater returns for a while, but the imagined ESG advantage may have faded.

Carbon credits or offsets are plagued by bad incentives that often lead to wasteful outlays if not outright fraud. At present, they generally fail to reduce atmospheric CO2 as promised and they contribute to higher costs, which are passed on to consumers. They also serve as an unworthy basis for higher ESG scores, which are something of a sham in any case.

There have been efforts underway to improve the quality and legitimacy of carbon offsets. Some of this is voluntary due diligence on the part of purchasers. The effort also includes various NGOs and regulators. Ultimately, the push for quality is likely to drive the price of offsets upward dramatically. Perhaps offsets will become more credible, but they won’t come cheap. The cost of achieving net zero targets will largely come out of consumers’ pockets, and those net zeros will still be nominal at best.

Lords of the Planetary Commons Insist We Banish Sovereignty, Growth

Tags

, , , , , , , , , , , , , , , , , , , , , , , , , , , ,

We all share Planet Earth as our home, so there’s a strong sense in which it qualifies as a “commons”. That’s one sensible premise of a new paper entitled “The planetary commons: A new paradigm for safeguarding Earth-regulating systems in the Anthropocene”. The title is a long way of saying that the authors desire broad-based environmental regulation, and that’s what ultimately comes across.

First, a preliminary issue: many resources qualify as commons in the very broadest sense, yet free societies have learned over time that many resources are used much more productively when property rights are assigned to individuals. For example, modern agriculture owes much to defining exclusive property rights to land so that conflicting interests don’t have to compete (e.g., the farmer and the cowman). Federal land is treated as a commons, however. There is a rich history on the establishment of property rights, but within limits, the legal framework in place can define whether a resource is treated as a commons, a club good, or private property. The point here is that there are substantial economic advantages to preserving strong property rights, rather than treating all resources as communal.

The authors of the planetary commons (PC) paper present a rough sketch for governance over use of the planet’s resources, given their belief that a planetary crisis is unfolding before our eyes. The paper has two main thrusts as I see it. One is to broadly redefine virtually all physical resources as common pool interests because their use, in the authors’ view, may entail some degree of external cost involving degradation of the biosphere. The second is to propose centralized, “planetary” rule-making over the amounts and ways in which those resources are used.

It’s an Opinion Piece

The PC paper is billed as the work product of a “collaborative team of 22 leading international researchers”. This group includes four attorneys (one of whom was a lead author) and one philosopher. Climate impact researchers are also represented; they undoubtedly helped shape the assumptions about climate change and its causes that drive the PC’s theses. (More on those assumptions in a section below.) There are a few social scientists of various stripes among the credited authors, one meteorologist, and a few “sustainability”, “resilience”, and health researchers. It’s quite a collection of signees, er… “research collaborators”.

Grabby Interventionists

The reasoning underlying a “planetary commons” (PC) is that the planet’s biosphere qualifies as a commons. The biosphere must include virtually any public good like air and sunshine, any common good like waterways, or any private good or club good. After all, any object can play host to tiny microbes regardless of ownership status. So the PC authors’ characterization of the planet’s biosphere as a commons is quite broad in terms of conventional notions of resource attributes.

We usually think of spillover or external costs as arising from some use of a private resource that imposes costs on others, such as air or water pollution. However, mere survival requires that mankind exploit both public and non-public resources, acts that can always be said to impact the biosphere in some way. Efforts to secure shelter, food, and water all impinge on the earth’s resources. To some extent, mankind must use and shape the biosphere to succeed, and it’s our natural prerogative to do so, just like any other creature in the food chain.

Even if we are to accept the PC paper’s premise that the entire biosphere should be treated as a commons, most spillovers are de minimis. From a public policy perspective, it makes little sense to attempt to govern over such minor externalities. Monitoring behavior would be costly, if not impossible, at such an atomistic level. Instead, free and civil societies rely on a high degree of self-governance and informal enforcement of ethical standards to keep small harms to a minimum.

Unfortunately, the identification and quantification of meaningful spillover costs is not always clear-cut. This has led to an increasingly complex regulatory environment, an increasingly litigious business environment, and efforts by policymakers to manage the detailed inputs and outputs of the industrial economy.

All of that is costly in its own right, especially because the activities giving rise to those spillovers often enable large welfare enhancements. Regulators and planners face great difficulties in estimating the costs and benefits of various “correctives”. The very undertaking creates risk that often exceeds the cost of the original spillover. Nevertheless, the PC paper expands on the murkiest aspects of spillover governance by including “… all critical biophysical Earth-regulating systems and their functions, irrespective of where they are located…” as part of a commons requiring “… additional governance arrangements….”

Adoption of the PC framework would authorize global interventions (and ultimately local interventions, including surveillance) on a massive scale based on guesswork by bureaucrats regarding the evolution of the biosphere.

Ostrom Upside Down

Not only would the PC framework represent an expansion of the grounds for intervention by public authorities, it seeks to establish international authority for intervention into public and private affairs within sovereign states. The authors attempt to rationalize such far-reaching intrusions in a rather curious way:

“Drawing on the legacy of Elinor Ostrom’s foundational research, which validated the need for and effectiveness of polycentric approaches to commons governance (e.g., ref. 35, p. 528, ref. 36, p. 1910), we propose that a nested Earth system governance approach be followed, which will entail the creation of additional governance arrangements for those planetary commons that are not yet adequately governed.”

Anyone having a passing familiarity with Elinor Ostrom’s work knows that she focused on the identification of collaborative solutions to common goods problems. She studied voluntary and often strictly private efforts among groups or communities to conserve common pool resources, as opposed to state-imposed solutions. Ostrom accepted assigned rights and pricing solutions to managing common resources, but she counseled against sole reliance on market-based tools.

Surely the PC authors know they aren’t exactly channeling Ostrom:

“An earth system governance approach will require an overarching global institution that is responsible for the entire Earth system, built around high-level principles and broad oversight and reporting provisions. This institution would serve as a universal point of aggregation for the governance of individual planetary commons, where oversight and monitoring of all commons come together, including annual reporting on the state of the planetary commons.”

Polycentricity was used by Ostrom to describe the involvement of different, overlapping “centers of authority”, such as individual consumers and producers, cooperatives formed among consumers and producers, other community organizations, local jurisdictions, and even state or federal regulators. Some of these centers of authority supersede others in various ways. For example, solutions developed by cooperatives or lower centers of authority must align with the legal framework within various government jurisdictions. However, as David Henderson has noted, Ostrom observed that management of pooled resources at lower levels of authority was generally superior to centralized control. Henderson quotes Ostrom and a co-author on this point:

When users are genuinely engaged in decisions regarding rules affecting their use, the likelihood of them following the rules and monitoring others is much greater than when an authority simply imposes rules.

The authors of the PC have something else in mind, and they bastardize the spirit of Ostrom’s legacy in the process. For example, the next sentence is critical for understanding the authors’ intent:

If excessive emissions and harmful activities in some countries affect planetary commons in other areas—for example, the melting of polar ice—strong political and legal restrictions for such localized activities would be needed.

Of course, there are obvious difficulties in measuring impacts of various actions on polar ice, assigning responsibility, and determining the appropriate “restrictions”. But in essence, the PC paper advocates for a top-down model of governance. Polycentrism is thus reduced to “you do as we say”, which is not in the spirit of Ostrom’s research.

Planetary Governance

Transcending national sovereignty on questions of the biosphere is key to the authors’ ambitions. At a bare minimum, the authors desire legally-binding commitments to international agreements on environmental governance, unlike the unenforceable promises made for the Paris Climate Accords:

“At present, the United Nations General Assembly, or a more specialized body mandated by the Assembly, could be the starting point for such an overarching body, even though the General Assembly, with its state-based approach that grants equal voting rights to both large countries and micronations, represents outdated traditions of an old European political order.”

But the votes of various “micronations” count for zilch when it comes to real “claims” on the resources of other sovereign nations! Otherwise, there is nothing “voluntary” about the regime proposed in the PC paper.

“A challenge for such regimes is to duly adapt and adjust notions of state sovereignty and self-determination, and to define obligations and reciprocal support and compensation schemes to ensure protection of the Earth system, while including comprehensive stewardship obligations and mandates aimed at protecting Earth-regulating systems in a just and inclusive way.”

So there! The way forward is to adopt the broadest possible definition of market failure and global regulation of any and all private activity touching on nature in any way. And note here a similarity to the Paris Accords: achieving commitments would fall to national governments whose elites often demonstrate a preference for top-down solutions.

Ah Yes, Redistribution

It should be apparent by now that the PC paper follows a now well-established tradition in multi-national climate “negotiations” to serve as subterfuge for redistribution (which, incidentally, includes the achievement of interspecies justice):

“For instance, a more equal sharing of the burdens of climate stabilization would require significant multilateral financial and technology transfers in order not to harm the poorest globally (116).”

The authors insist that participation in this governance would be “voluntary”, but the following sentence seems inconsistent with that assurance:

… considering that any move to strengthen planetary commons governance would likely be voluntarily entered into, the burdens of conservation must be shared fairly (115).

Wait, what? “Voluntary” at what level? Who defines “fairness”? The authors approvingly offer this paraphrase of the words of Brazilian President Lula da Silva,

… who affirmed the Amazon rainforest as a collective responsibility which Brazil is committed to protect on behalf of all citizens around the world, and that deserves and justifies compensation from other nations (117).

Let Them Eat Cake

Furthermore, PC would require de-growth and so-called “sufficiency” for thee (i.e., be happy with less), if not for those who’ll design and administer the regime.

“… new principles that align with novel Anthropocene dynamics and that could reverse the path-dependent course of current governance. These new principles are captured under a new legal paradigm designed for the Anthropocene called earth system law and include, among others, the principles of differentiated degrowth and sufficiency, the principle of interconnectivity, and a new planetary ethic (e.g., principle of ecological sustainability) (134).”

If we’re to take the PC super-regulators at their word, the regulatory regime would impinge on fertility decisions as well. Just who might we trust to govern humanity thusly? If we’re wise enough to apply the Munger Test, we wouldn’t grant that kind of power to our worst enemy!

Global Warmism

The underlying premise of the PC proposal is that a global crisis is now unfolding before our eyes: anthropogenic global warming (AGW). The authors maintain that emissions of carbon dioxide are the cause of rising temperatures, rapidly rising sea levels, more violent weather, and other imminent disasters.

“It is now well established that human actions have pushed the Earth outside of the window of favorable environmental conditions experienced during the Holocene…

Earth system science now shows that there are biophysical limits to what existing organized human political, economic, and other social systems can appropriate from the planet.”

For a variety of reasons, both of these claims are more dubious than one might suppose based on popular narratives. As for the second of these, mankind’s limitless capacity for innovation is a more powerful force for sustainability than the authors would seem to allow. On the first claim, it’s important to note that the PC paper’s forebodings are primarily based on modeled, prospective outcomes, not historical data. The models are drastically oversimplified representations of the earth’s climate dynamics driven by exogenous carbon forcing assumptions. Their outputs have proven to be highly unreliable, overestimating warming trends almost without exception. These models exaggerate climate sensitivity to carbon forcings, and they largely ignore powerful natural forcings such as variations in solar irradiance, geological heating, and even geological carbon forcings. The models are also notorious for their inadequate treatment of feedback effects from cloud cover. Their predictions of key variables like water vapor are wildly in error.

The measurement of the so-called “global temperature” is itself subject to tremendous uncertainty. Weather stations come and go. They are distributed very unevenly across land masses, and measurement at sea is even sketchier. Averaging all these temperatures would be problematic even if there were no other issues… but there are. Individual stations are often sited poorly, including distortions from heat island effects. Aging of equipment creates a systematic upward bias, but correcting for that bias (via so-called homogenization) causes a “cooling the past” bias. It’s also instructive to note that the increase in global temperature from pre-industrial times actually began about 80 years prior to the onset of more intense carbon emissions in the 20th century.

Climate alarmists often speak in terms of temperature anomalies, rather than temperature levels. In other words, to what extent do temperatures differ from long-term averages? The magnitude of these anomalies, using the past several decades as a base, tends to be anywhere from zero degrees to well above one degree Celsius, depending on the year. Relative to temperature levels, the anomalies are a small fraction. Given the uncertainty in temperature levels, the anomalies themselves are dwarfed by the noise in the original series!

Pick Your Own Tipping Point

It seems that “tipping point” scares are heavily in vogue at the moment, and the PC proposal asks us to quaff deeply of these narratives. Everything is said to be at a tipping point into irrecoverable disaster that can be forestalled only by reforms to mankind’s unsustainable ways. To speak of the possibility of other causal forces would be a sacrilege. There are supposed tipping points for the global climate itself as well as tipping points for the polar ice sheets, the world’s forests, sea levels and coastal environments, severe weather, and wildlife populations. But none of this is based on objective science.

For example, the 1.5 degree limit on global warming is a wholly arbitrary figure invented by the IPCC for the Paris Climate Accords, yet the authors of the PC proposal would have us believe that it was some sort of scientific determination. And it does not represent a tipping point. Cliff Mass explains that climate models do not behave as if irreversible tipping points exist.

Consider also that there has been absolutely no increase in the frequency or intensity of severe weather.

Likewise, the rise of sea levels has not accelerated from prior trends, so it has nothing to do with carbon forcing.

One thing carbon forcings have accomplished is a significant greening of the planet, which if anything bodes well for the biosphere.

What about the disappearance of the polar ice sheets? On this point, Cliff Mass quotes Chapter 3 of the IPCC’s Special Report on the implications of 1.5C or more warming:

there is little evidence for a tipping point in the transition from perennial to seasonal ice cover. No evidence has been found for irreversibility or tipping points, suggesting that year-round sea ice will return given a suitable climate.

The PC paper also attempts to connect global warming to increases in forest fires, but that’s incorrect: there has been no increasing trend in forest fires or annual burned acreage. If anything, trends in measures of forest fire activity have been negative over the past 80 years.

Concluding Thoughts

The alarmist propaganda contained in the PC proposal is intended to convince opinion leaders and the public that they’d better get on board with draconian and coercive steps to curtail economic activity. They appeal to the sense of virtue that must always accompany consent to authoritarian action, and that means vouching for sacrifice in the interests of environmental and climate equity. All the while, the authors hide behind a misleading version of Elinor Ostrom’s insights into the voluntary and cooperative husbandry of common pool resources.

One day we’ll be able to produce enough carbon-free energy to accommodate high standards of living worldwide and growth beyond that point. In fact, we already possess the technological know-how to substantially reduce our reliance on fossil fuels, but we lack the political will to avail ourselves of nuclear energy. With any luck, that resistance will soften with installations of modular nuclear units.

Ultimately, we’ll see advances in fusion technology, beamed non-intermittent solar power from orbital collection platforms, advances in geothermal power, and effective carbon capture. Developing these technologies and implementing them at global scales will require massive investments that can be made possible only through economic growth, even if that means additional carbon emissions in the interim. We must unleash the private sector to conduct research and development without the meddling and clumsy efforts at top-down planning that typify governmental efforts (including an end to mandates, subsidies, and taxes). We must also reject ill-advised attempts at geoengineered cooling that are seemingly flying under the regulatory radar. Meanwhile, let’s save ourselves a lot of trouble by dismissing the interventionists in the planetary commons crowd.

They Pave Paradise Because Users Pay No Price

Tags

, , , , , , , , , , , , , , , , , , , , , , , , , , ,

The interchange above is just a few miles from my new home. It’s the world’s largest “diverging diamond” design and it usually works quite well, so I was interested to see this video discussing both its benefits and the conditions under which it hasn’t performed well.

Unfortunately, the video maintains a dubious focus on car dependence in most urban areas. The tale it tells is daunting… and if the reaction on Reddit is any indication, it seems to excite the populist mind. The narrator blames car dependence and sprawl on poor urban planning. I agree in a sense, and I’ll even stipulate that our car dependence is often excessive, but not because anyone could have “planned” better. Top-down planning is notoriously failure-prone. Rather, the corrective is something the creators of the video never contemplate: effective pricing for the use of roads.

There is deserved emphasis near the end of the video on the cost of building and maintaining roads and interchanges. For example, the cost of the interchange above was $74.5 million when it was built about 15 years ago. That sounds exorbitant, and it’s natural for people (and especially urban planners) to question the necessity of building an interchange of that magnitude in what many feel “should be” an outlying district. Did sprawl make it necessary? Can that be avoided in a growing region? What can or should be done?

Good Interchange Design

The interchange in question is at I-75 and University Parkway in Sarasota, FL. It’s used by many drivers to access a large shopping mall, other commercial centers, and nearby residential areas. The video stresses the diverging diamond’s effectiveness and safety in handling high flows of traffic. The design reduces the number of conflict points relative to conventional diamond interchanges, especially for crossing traffic.

Both diverging diamonds and conventional diamond interchanges have advantages over cloverleaf designs. While the latter have no crossover conflict points, they require more land. They also create additional complexities for grading and drainage, and they are often constrained in the length of space available for left-turn merges. Furthermore, a cloverleaf places more severe limits on traffic flow. Flyover ramps are another alternative that can save space but entail greater expense.

The interchange in question serves an area of rapid growth. Residents increasingly complain about traffic, especially when “snow birds” are in town during the winter months. The video shows that even the diverging diamond has problems once traffic reaches a certain volume. But new residential communities and commercial areas continue to come on-line, adding to traffic flows and requiring additional roads and infrastructure. Again, the narrator believes the resulting traffic and sprawl could have been avoided, and he’s partly correct as far as that goes.

Sprawl Reflects Preferences

The video fails to consider important qualifications to the “car dependence” critique of suburban sprawl. For example, many people like to use their cars and enjoy the freedom of mobility their cars confer. More importantly, most people prefer to live in low-density residential environments rather than dense urban neighborhoods, or even the kinds of communities depicted as ideal in the video. I’m one of those people. More space, more privacy, and more greenery (though I grant that sprawling mall parking lots are not my favorite aesthetic).

Joel Kotkin presents data along those lines, quoting research by Jessica Trounstine, who says, “preferences for single-family development are ubiquitous.” And low-density communities have broad appeal across demographics, as noted by Kotkin:

Even in blue states, the majority of ethnic minorities live in suburbs, who have accounted for virtually all the suburban growth over the past decade. William Frey of the Brookings Institution notes that in 1990 roughly 20 percent of suburbanites were non-white. That rose to 30 percent in 2000 and 45 percent in 2020.

Urban Planning Myopia

As to the video’s emphasis on car dependence, its most serious omission is a failure to recognize the economics of pricing. Road use comes with various costs, but the key here is the zero price at the margin for using specific routes, interchanges, bridges, and suburban parking lots. There are many exceptions to be sure, but the video makes no mention of road pricing as a development tool. Nor does it consider “socialized roads” as the chief cause of ever-expanding demands for roads, parking, and the all-too-typical failure of these ersatz “commons”.

The federal government is complicit in this. After all, the interstate highway system was a federal initiative, and interchanges (along with concomitant commercial development) are integral to its success. Interstate highways often supplemented regional efforts to facilitate commuting to cities from distant suburbs. More recently, Joe Biden’s Infrastructure Investment and Jobs Act of 2021 added $110 billion from the government’s general fund to subsidize highways and bridges. It should be no surprise that federal gas taxes don’t fund these subsidies. (Gas taxes are user fees only in a vague sense, as they don’t price specific routes at the margin.)

More Roads, Trains, Buses?

There are two knee-jerk reactions to congested roads. The first is a tendency to double-down on invested plant, building more, bigger, and wider roads in the hope that they can handle the growing traffic load. Presumably this must be funded by taxpayers, as in the past, and seldom if ever by charging per marginal use of these facilities. This “solution” basically calls for more socialized roads.

The second knee-jerk reaction to congestion, and it is also a reaction to the real or presumed shortcomings of a “paved paradise”, is to call for more buses, streetcars, or light rail. But mass transit systems seldom pay for their operating costs let alone their capital costs. One of the reasons, of course, is that they must compete with free roads!

What else might the urban planners have us do? We can’t just tear down the sprawling developments and road infrastructure and start over. However, we can accomplish a few other things like: 1) raise revenue from users to make the upkeep of road infrastructure self-funding; 2) minimize congestion, emissions, and time-use while improving safety; and 3) stem growth in demand that eventually would require more lanes, more parking, and other measures to maximize traffic flow. Pricing the actual use of roads would do all these things in greater or lesser degree, and it would more effectively balance development preferences with costs. In turn, positive road-use prices would incentivize other development models such as the “human-centric” communities the video’s narrator finds so attractive.

Those Who Benefit Shall Pay

Tolls for the use of roads and bridges (and paid parking) are hardly new ideas. Tolls on bridges were a natural continuation of fees charged by operators of ferry boats. Tolling was instituted by large landholders to extract rents from anyone wishing to traverse their property, and only later was used as a mechanism for funding road construction and maintenance. But like any price, tolls serve to ration the availability of a resource.

Today, tolling in the U.S. is an increasingly important source of funding for highways and bridges. This importance is growing due to a less sanguine outlook for gas tax collections. In any case, tolls are often more advantageous politically than taxes. Technological advance has allowed tolling to become more cost effective as well. In Florida, for example, the SunPass system allows drivers to cruise through toll collection points at moderate speeds. It’s also used for parking at certain facilities like airports. SunPass holders are required to set up automatic “recharge” of their available balance for toll payments. Similar systems are in place in other states.

Technology has enabled dynamic congestion pricing to be implemented by commercial interests like Uber and Lyft. This means that price responds to demand and supply conditions in real time. In coming years, congestion pricing is likely to be instituted by jurisdictions experiencing heavy traffic volumes. New York City’s congestion pricing plan has stalled, but it would charge a toll on vehicles using Manhattan streets below Central Park.

Law of Demand

Tolls at interchanges like the one at I-75 and University Parkway would help to allocate resources more efficiently. First, the mechanics could be simple enough in concept, but toll booths are probably out of the question, and toll authorities would have to sort through various administrative issues.

Let’s suppose SunPass was put to use here, with the revenue distributed to several jurisdictions or agencies responsible for maintaining the interchange and a defined set of connecting streets. When a driver exits I-75 to University, enters I-75 from University, or uses the through lanes on University, the SunPass transponder in their vehicle would communicate with the toll system to record their passage, and their account would be charged the appropriate toll. The charge might differ for through lanes versus I-75 entry or exit. Over the course of a month, tolls on various roads and interchanges would accumulate and be summarized by road or interchange on a statement for the driver.
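
A minimal sketch of that ledger logic appears below. The class, record format, and toll amounts are hypothetical illustrations of mine; this is not SunPass’s actual system or rate schedule:

```python
# Sketch of per-vehicle toll accumulation and a monthly statement, as
# described above. Record format and toll amounts are hypothetical; this is
# not SunPass's actual system or rate schedule.
from collections import defaultdict
from datetime import datetime

class TollLedger:
    def __init__(self):
        # vehicle id -> list of (timestamp, location, toll) records
        self.charges = defaultdict(list)

    def record_passage(self, vehicle_id: str, location: str, when: datetime, toll: float):
        self.charges[vehicle_id].append((when, location, toll))

    def monthly_statement(self, vehicle_id: str, year: int, month: int) -> dict:
        """Summarize one vehicle's tolls by road or interchange for one month."""
        totals = defaultdict(float)
        for when, location, toll in self.charges[vehicle_id]:
            if when.year == year and when.month == month:
                totals[location] += toll
        return dict(totals)

ledger = TollLedger()
ledger.record_passage("ABC123", "I-75 @ University (exit)", datetime(2024, 3, 4, 8, 15), 0.75)
ledger.record_passage("ABC123", "University through lanes", datetime(2024, 3, 4, 17, 40), 0.50)
print(ledger.monthly_statement("ABC123", 2024, 3))
```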

Vehicles without SunPass (or another toll system partnering with SunPass) would have to be charged via photo identification of tags with billing by mail once a month. This is already a feature of toll roads in Florida (and other states) when vehicles without a SunPass use the SunPass lanes. The volume of mail billing would increase substantially, but that is not an obstacle in principle.

One other wrinkle would allow existing residents of neighborhoods with street entrances within one or two miles of the interchange to receive discounted tolls. That seems fair, but the danger is that discounts of this kind, if extended too far, would blunt incentives that otherwise discourage overuse and underpriced road sprawl. It would also add another layer of complexity to the tolling system.

The behavior of drivers will change in response to tolls. They derive benefits from using particular interchanges which depend upon the importance of errands or appointments in each vicinity, the distance and convenience of other shopping areas, the time of day, and the time saved by using any one route instead of alternates. The toll paid for using an interchange might depend on the size of vehicle, the time of day, or some measure of average congestion at that time of day. A higher toll prompts drivers to consider other routes, other shopping areas (including on-line shopping), or different times of day for those errands. Thus, tolls will redistribute traffic across space and time and are likely to reduce overall traffic at the most congested interchanges, at least at peak hours when tolls are highest.

Smart Pricing

The advent and installation of more sophisticated tolling infrastructure will enable “smart roads”, time-of-day pricing, or even dynamic congestion pricing on some routes. Integrating dynamic pricing with information systems guiding driver decisions about route choice and timing would be another major step. Implementing sophisticated route pricing systems like this will take time, but ultimately the technology will allow tolls to be applied broadly and efficiently… if we allow it to happen.
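
As a sketch of what time-of-day or congestion-responsive tolling might look like in code, assuming a hypothetical base toll, peak windows, and congestion threshold of my own choosing:

```python
# Sketch of a time-of-day toll with a congestion surcharge, per the discussion
# above. The base toll, peak windows, and congestion multiplier are all
# illustrative assumptions, not any toll authority's actual schedule.
def toll(hour: int, occupancy: float, base: float = 0.50) -> float:
    """Toll for one passage, given the hour of day and observed lane occupancy (0-1)."""
    peak = hour in range(7, 10) or hour in range(16, 19)   # assumed AM/PM peak windows
    price = base * (2.0 if peak else 1.0)                  # double the base toll at peak
    if occupancy > 0.8:                                    # assumed congestion threshold
        price *= 1.5                                       # dynamic congestion surcharge
    return round(price, 2)

print(toll(hour=8, occupancy=0.9))    # peak and congested -> 1.5
print(toll(hour=13, occupancy=0.4))   # off-peak, light traffic -> 0.5
```

A real system would tie the surcharge to measured traffic flow in real time, but even this crude schedule shows how prices can shift trips across routes and times of day.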

Private Vs. Public

The private sector is likely to play a greater role in a world of more widespread tolling. To some extent this will take the form of more privately-owned roads. Short of that, many toll roads and smart roads will be privately administered and operated. Private concerns will also play a major role in provisioning infrastructure and systems for more widespread and sophisticated toll roads.

There is a long history of private roads in the U.S. Robert P. Murphy offers a brief summary:

“… many analysts simply assume, because currently the government virtually monopolizes the production and administration of roads, that it must always have done so. And yet, from the 1790s through the 1830s, the private sector was responsible for the creation and operation of many turnpikes. According to economist Daniel Klein, ‘The turnpike companies were legally organized like corporate businesses of the day. The first, connecting Philadelphia and Lancaster, was chartered in 1792, opened in 1794, and proved significant in the competition for trade.’3 ‘By 1800,’ Klein reports, ‘sixty-nine companies had been chartered’ in New England and the Middle Atlantic states. Merchants would often underwrite the expense of building a turnpike, knowing that it would bring in extra traffic to their businesses.”

In Norway and Sweden, most roads are owned and operated privately, though most of the private roads are local. The funding is generally provided by property owners along those routes. Private roads are increasingly common in the U.S., but they are mostly confined to private communities funded by residents. Broader private ownership of roads, and tolling, is likely to occur in the U.S. as governments at all levels struggle with issues of funding, maintenance, traffic control, and growth.

Pricing For Scarcity

There will be political obstacles to widespread tolling and road congestion pricing. Questions of equity and privacy will be raised, but pricing may hold the key to more equitable outcomes. Greater reliance on tolls would avoid regressive tax increases, and selective tolls might well have a progressive incidence, to the extent that congestion tends to be heaviest in prosperous commercial districts. Tolling would also make alternatives like mass transit more competitive and viable. Furthermore, price signals would cause geographic patterns of commerce and development to shift, potentially encouraging the kinds of high-density, pedestrian-friendly communities long favored by urban planners.

Urban sprawl and auto dependence are old targets of the urban planning community, not to mention the populist left. But those critics rely on a stylized characterization of geographic and social arrangements that happen to be preferred by masses of individuals. As an economist, I sympathize with the critics because those preferences are revealed under incentives that do not reflect the scarcity and real costs of roads and driving. However, in the absence of adequate price incentives, solutions offered by critics of sprawl and autos are at worst brutally intrusive and at best ineffectual. More efficient pricing of roads can be achieved with the installation of tolling solutions that are now technologically feasible. Optimizing tolls over specific roads, bridges, blocks, intersections, and interchanges will require more sophisticated systems, but for now, let’s at least get road-use prices going in the right direction!

Stubborn Inflation and the Fed’s Approach Trajectory

When Federal Reserve Chairman Jerome Powell said “higher for longer” last year, it wasn’t about the Grateful Dead concerts he’s attended over the years. No, he meant the Fed might need to raise its short-term interest rate target and/or keep it elevated for an extended period to squeeze inflation out of the economy. As late as December, Powell said that additional rate hikes remained on the table. But short of that, the Fed might keep its current target rate steady until inflation is solidly in line with its 2% objective. The obvious risk is that tight monetary policy might tip the economy into recession. The market, for its part, is pricing in several rate cuts this year.

Thus far, the release of key economic data for December 2023 has not settled the debate as to whether disinflation has truly paused short of the Fed’s goal. There were inauspicious signs from the labor market in December as well. These data releases don’t rule out a “soft landing”, but they indicate that recession risks are still with us in 2024. The Fed will face a dilemma if the economy weakens but inflation fails to abate, whether due to residual stickiness or new supply shocks. The latter are unfolding even now with the shutdown of Red Sea shipping.

Bad Employment Report

On the surface, the employment report from the Bureau of Labor Statistics (BLS) was strong relative to expectations, and the media reported it on that superficial level: nonfarm payrolls increased by 216,000 jobs, about 45,000 more than expected; unemployment was unchanged from November at 3.7%.

Unfortunately, the report contained several ominous signs:

1) Employment as measured by the BLS Household Survey declined by 683,000 in December and has been essentially flat since July. The discrepancy with the headline payroll gain should be rather unsettling to anyone waving off the possibility of a recession.

2) The number of full-time workers decreased by 1.53 million in December, and the number of part-time workers increased by 762,000 as the holidays approached. Retail employment was not particularly strong, however, and the big loss of full-time work stands in contrast to the “strong-report” narrative.

3) The number of multiple jobholders hit a record and increased by 556,000 over the past year. This might indicate trouble for some workers making ends meet.

4) The civilian labor force declined by 676,000. What accounts for the change in status among these former workers or job seekers?

5) From the BLS Establishment Survey, government hiring accounted for 24% of the nonfarm jobs filled in December. Social services accounted for 10% of the new hiring and health care for 18%, both of which are heavily dependent on government.

6) Nonfarm payrolls were revised downward by a total of 71,000 for October and November. We’ve seen downward revisions for 10 of the past 11 months.

7) In total, after available revisions, initial monthly job reports in 2023 overstated the full-year gain in nonfarm employment by 439,000.

Those are big qualifiers on the “stronger than expected” jobs report. Furthermore, I tend to discount new government jobs as a real engine of production possibilities, so the report didn’t offer much assurance about the economy’s momentum. In addition, there are estimates suggesting that part of the payroll gain was due to weather that was milder than the seasonal adjustment factors assume.

Fictional Payroll Gains?

Still other issues cast doubt on the BLS payroll numbers. First, they are based on a survey of employers that is not complete by the time of each month’s initial report. Second, the survey is heavily skewed toward employees of government and large corporations; the sample of small employers is light by comparison. Third, seasonal adjustments often swamp the unadjusted changes in payrolls.

Finally, the BLS uses a statistical model of business births/deaths to adjust the figures. This is intended to correct for a lag in survey coverage as new businesses are formed and others close. The net effect on the payroll estimate can be positive or negative. Unfortunately, it’s difficult for even the BLS to tell how much the birth/death model affects the headline nonfarm jobs figure in any particular month. Therefore, it’s tough to put much faith in the monthly reports, but we watch them anyway.

Stubborn Inflation

The Consumer Price Index (CPI) for December increased 0.3% over November and 3.4% year-over-year, slightly more than expectations of 0.2% and 3.2%, respectively. The “core” CPI (excluding food and energy prices) rose 3.9% year-over-year, more than the 3.8% expected. Even so, the core rate declined on both a one-month and a year-over-year basis, as did the median item in the CPI.
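For readers who want the mechanics, the month-over-month and year-over-year figures are simple ratios of CPI index levels. The index values below are made up for illustration; only the formulas matter:

```python
def pct_change(current: float, prior: float) -> float:
    """Percentage change between two index levels."""
    return (current / prior - 1.0) * 100.0

# Hypothetical index levels -- illustrative only, not actual BLS data.
cpi_dec_2022 = 296.8
cpi_nov_2023 = 306.0
cpi_dec_2023 = 306.9

mom = pct_change(cpi_dec_2023, cpi_nov_2023)   # roughly 0.3%
yoy = pct_change(cpi_dec_2023, cpi_dec_2022)   # roughly 3.4%
print(f"MoM: {mom:.1f}%, YoY: {yoy:.1f}%")
```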

All CPI measures in the chart declined during 2023, though the core and median measures lagged the headline CPI (green line), which “flattened” somewhat during the last half of the year. So there appears to be some stickiness hindering disinflation in the CPI at this point, but that apparent “stickiness” has been largely confined to the lagging decline in measured housing costs (also see here).

The Producer Price Index (PPI) reported a day later was widely viewed as benign. Even so, as with the CPI, disinflation in the core PPI has tapered.

In this context, it should be noted that declines in the Fed’s preferred inflation gauge, the PCE deflator, have also undergone something of a pause, and the PCE weights housing costs much less heavily than the CPI.

The CPI and PPI reports don’t offer any reason for the Fed to reduce its target federal funds rate over the next couple of Federal Open Market Committee (FOMC) meetings. There are two more sets of monthly inflation reports before the meeting in late March, so things could change. But again, the Fed has given ample guidance that it might have to leave its target rate at the current level for an extended period.

The Market View

Markets had priced in six cuts in the Fed funds rate target in 2024 prior to the CPI report, but traders began to discount that possibility in its immediate aftermath. Members of the FOMC, however, expected an average of three cuts in 2024, with more to come in 2025, whether or not that’s consistent with “higher for longer”. Inflation is hovering somewhat above the Fed’s goal, but getting the rest of the job done might be tough, and indeed, it might imply “longer” if not “higher”.
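As a rough sketch of how “priced-in” cuts are counted: fed funds futures settle at 100 minus the average effective funds rate for the contract month, so the rate implied by a December contract can be compared with the current target to count quarter-point moves. The prices and target midpoint below are hypothetical, chosen only to illustrate the arithmetic:

```python
def implied_cuts(current_target_mid: float,
                 dec_futures_price: float,
                 step_bps: float = 25.0) -> float:
    """Number of 25 bp cuts implied by a December fed funds futures price.

    Futures settle at 100 minus the average effective funds rate for the
    contract month (in percent), so 100 - price approximates the expected
    year-end rate.
    """
    implied_year_end_rate = 100.0 - dec_futures_price
    return (current_target_mid - implied_year_end_rate) / (step_bps / 100.0)

# Hypothetical: a 5.375% target midpoint and a December contract at 96.125
# imply a year-end rate near 3.875%, i.e., about six quarter-point cuts.
print(implied_cuts(5.375, 96.125))   # -> 6.0
```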

But why did the market ever hold the expectation of six cuts this year? Traders must have anticipated an economic contraction, which would kick the Fed into rapid response mode. The employment report offered no assurance that such a “hard landing” will be avoided. A few more negative signals on the real economy without further progress on prices would provide quite a test of the Fed’s inflation-fighting resolve.