The Dubious 1917 Redemption of Karl Marx


Karl Marx has long been celebrated by the Left as a great intellectual, but the truth is that his legacy was destined to be of little significance until his writings were lauded, decades later, by the Bolsheviks during their savage October 1917 revolution in Russia. Vladimir Lenin and his murderous cadre promoted Marx and brought his ideas into prominence as political theory. That’s the conclusion of a fascinating article by Phil Magness and Michael Makovi (M&M) appearing in the Journal of Political Economy. The title: “The Mainstreaming of Marx: Measuring the Effect of the Russian Revolution on Karl Marx’s Influence“.

The idea that the early Soviet state and other brutal regimes in its mold were the main progenitors of Marxism’s influence is horrifying to its adherents today. That’s the embarrassing historical reality, however. It’s not really clear that Marx himself would have endorsed those regimes, though I hesitate to cut him too much slack.

A lengthy summary of the M&M paper is given by the authors in “Das Karl Marx Problem”. The “problem”, as M&M describe it, is in reconciling 1) the nearly complete and well-justified rejection of Marx’s economic theories during his life and in the 34 years after his death, with 2) the esteem in which he’s held today by so many so-called intellectuals. A key piece of the puzzle, noted by the authors, is that praise for Marx comes mainly from outside the economics profession. The vast majority of economists today recognize that Marx’s labor theory of value is incoherent as an explanation of the value created in production and exchange.

The theoretical rigor might be lost on many outside the profession, but a moment’s reflection should be adequate for almost anyone to realize that value is contributed by both labor and non-labor inputs to production. Of course, it might have dawned on communists over the years that mass graves can be dug more “efficiently” by combining labor with physical capital. On the other hand, you can bet they never paid market prices for any of the inputs to that grisly enterprise.
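
Setting the gallows humor aside, the point about non-labor inputs is easy to make concrete. Here’s a minimal sketch of my own (a textbook Cobb-Douglas example with made-up numbers, nothing from M&M) in which, at the margin, the value of output is attributable to both labor and capital, and paying each input its marginal contribution exactly exhausts revenue:

```python
# Illustrative sketch only: a Cobb-Douglas production function in which the
# value of output is attributable to BOTH labor and capital at the margin.
# All numbers are hypothetical.

def output(labor, capital, tfp=10.0, alpha=0.7):
    """Cobb-Douglas production: Q = A * L^alpha * K^(1 - alpha)."""
    return tfp * labor**alpha * capital**(1 - alpha)

def marginal_product(labor, capital, which, eps=1e-6):
    """Numerically approximate the marginal product of one input."""
    if which == "labor":
        return (output(labor + eps, capital) - output(labor, capital)) / eps
    return (output(labor, capital + eps) - output(labor, capital)) / eps

L, K, price = 100.0, 50.0, 2.0                 # hypothetical inputs and output price
mpl = marginal_product(L, K, "labor")
mpk = marginal_product(L, K, "capital")

# Under competitive pricing, each input earns the value of its marginal product.
print(f"Value of marginal product of labor:   {price * mpl:.2f}")
print(f"Value of marginal product of capital: {price * mpk:.2f}")

# With constant returns to scale, those payments exhaust the value of output
# (Euler's theorem), leaving no surplus "owed" exclusively to labor.
print(f"Total factor payments: {price * (mpl * L + mpk * K):.2f}")
print(f"Total revenue:         {price * output(L, K):.2f}")
```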

Marx never thought in terms of decisions made at the margin, the hallmark of the rational economic actor. That was one shortcoming in his framework that led to mistaken conclusions. A second, and again this should be obvious: prices of goods must incorporate (and reward) the value contributed by all inputs to production. That value ultimately depends on the judgement of buyers, but Marx’s theory left him unable to square the circle on all this. And not for lack of trying! It was a failed exercise, and M&M provide several pieces of testimony to that effect. Here’s one example:

By the time Lenin came along in 1917, Marx’s economic theories were already considered outdated and impractical. No less a source than John Maynard Keynes would deem Marx’s Capital ‘an obsolete economic textbook . . . without interest or application for the modern world’ in a 1925 essay.

Marxism, with its notion of a “workers’ paradise”, gets credit from intellectuals as a highly utopian form of socialism. In reality, its implementation usually takes the form of communism. The claim that Marxism is “scientific” socialism (despite the faulty science underlying Marx’s theories) is even more dangerous, because it offers a further rationale for authoritarian rule. A realistic transition to any sort of Marxist state necessarily involves massive expropriations of property and liberty. Violent resistance should be expected, but watch the carnage when the revolutionaries gain the upper hand.

What M&M demonstrate empirically is how lightly Marx was cited or mentioned in printed material up until 1917, both in English and German. Using Google’s Ngram tool, they follow a group of thinkers whose Ngram patterns were similar to Marx’s up to 1917. They use those records to construct an expected trajectory for Marx from 1917 forward and find an aberrant jump for Marx at that time, again both in English and in German material. But Ngram excludes newspaper mentions, so they also construct a database from Newspapers.com, and their findings are the same: newspaper references to Marx spiked after 1917. There was nothing much different when the sample was confined to socialist writers, though M&M acknowledge that there were a couple of times prior to 1917 during which short-lived jumps in Marx citations occurred among socialists.
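
The logic of that exercise can be sketched in a few lines of code. The sketch below uses made-up frequencies rather than the authors’ data, and a simple least-squares fit rather than their actual specification, but it captures the idea: fit Marx’s pre-1917 relationship to a comparison group, project it forward, and measure the post-1917 gap:

```python
# Schematic illustration of M&M's counterfactual exercise using made-up
# numbers (NOT their data): project Marx's expected mention frequency after
# 1917 from his pre-1917 relationship to a comparison group of thinkers.
import numpy as np

years = np.arange(1900, 1931)
pre_1917 = years < 1917

# Hypothetical Ngram-style frequencies (mentions per million words).
comparison_group = 1.0 + 0.02 * (years - 1900)     # placeholder comparator average
marx_actual = 0.8 + 0.02 * (years - 1900)          # tracks the group before 1917...
marx_actual[~pre_1917] += 1.5                       # ...then jumps after 1917

# Fit Marx ~ a + b * comparison over the pre-1917 period only.
b, a = np.polyfit(comparison_group[pre_1917], marx_actual[pre_1917], deg=1)

# Counterfactual: what Marx's frequency "should" have been, absent 1917.
marx_expected = a + b * comparison_group
gap = marx_actual - marx_expected

print("Mean post-1917 gap (actual minus expected):", round(gap[~pre_1917].mean(), 2))
```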

To be clear, however, Marx wasn’t unknown to economists during the 3+ decades following his death. His name was mentioned here and there in the writings of prominent economists of the day — just not in especially glowing terms.

“… absent the events of 1917, Marx would have continued to be an object of niche scholarly inquiry and radical labor activism. He likely would have continued to compete for attention in those same radical circles as the main thinker of one of its many factions. After the Soviet boost to Marx, he effectively crowded the other claimants out of [the] socialist-world.”

Magness has acknowledged that he and Makovi aren’t the first to have noticed the boost given to Marx by the Bolsheviks. Here, Magness quotes Eric Hobsbawm’s take on the subject:

“This situation changed after the October Revolution – at all events, in the Communist Parties. … Following Lenin, all leaders were now supposed to be important theorists, since all political decisions were justified on grounds of Marxist analysis – or, more probably, by reference to textual authority of the ‘classics’: Marx, Engels, Lenin, and, in due course, Stalin. The publication and popular distribution of Marx’s and Engels’s texts therefore become far more central to the movement than they had been in the days of the Second International [1889 – 1914].”

Much to the chagrin of our latter-day Marxists and socialists, it was the advent of the monstrous Soviet regime that led to Marx’s “mainstream” ascendancy. Other brutal regimes arising later reinforced Marx’s stature. The tyrants listed by M&M include Joseph Stalin, Mao Zedong, Fidel Castro, and Pol Pot, and they might have added several short-lived authoritarian regimes in Africa as well. Today’s Marxists continue to assure us that those cases are not representative of a Marxist state.

Perhaps it’s fair to say that Marx’s name was co-opted by thugs, but I posit something a little more consistent with the facts: it’s difficult to expropriate the “means of production” without a fight. Success requires massive takings of liberty and property. This is facilitated by means of a “class struggle” between social or economic strata, or it might reflect divisions based on other differences. Either way, groups are pitted against one another. As a consequence, we witness an “othering” of opponents on one basis or another. Marxists, no matter how “pure of heart”, find it impossible to take power without demanding ideological purity. Invariably, this requires “reeducation”, cleansing, and ultimately extermination of opponents.

Karl Marx had unsound ideas about how economic value manifests and where it should flow, and he used those ideas to describe what he thought was a more just form of social organization. The shortcomings of his theory were recognized within the economics profession of the day, and his writings might have lived on in relative obscurity were it not for the Bolsheviks’ intellectual pretensions. Surely obscurity would have been better than a legacy shaped by butchers.

It’s a Big Government Mess


I’m really grateful to have the midterm elections behind us. Well, except for the runoff Senate race in Georgia, the cockeyed ranked-choice Senate race in Alaska, and a few stray House races that remain unsettled after almost two weeks. I’m tired of campaign ads, including the junk mail and pestering “unknown” callers — undoubtedly campaign reps or polling organizations.

It’s astonishing how much money is donated and spent by political campaigns. This year’s elections saw total campaign spending (all levels) hit $16.7 billion, a record for a mid-term. The recent growth in campaign spending for federal offices has been dramatic, as the chart below shows:

Do you think spending of a few hundred million dollars on a Senate campaign is crazy? Me too, though I don’t advocate for legal limits on campaign spending because, for better or worse, that issue is entangled with free speech rights. Campaigns are zero-sum events, but presumably a big donor thinks a success carries some asymmetric reward…. A success rate of better than 50% across several campaigns probably buys much more…. And donors can throw money at sure political bets that are probably worth a great deal…. Many donors spread their largess across both parties, perhaps as a form of “protection”. But it all seems so distasteful, and it’s surely a source of waste in the aggregate.

My reservations about profligate campaign spending include the fact that it is a symptom of big government. Donors obviously believe they are buying something that government, in one way or another, makes possible for them. The greater the scope of government activity, the more numerous are opportunities for rent seeking — private gains through manipulation of public actors. This is the playground of fascists!

There are people who believe that placing things in the hands of government is an obvious solution to the excesses of “greed”. However, politicians and government employees are every bit as self-interested and “greedy” as actors in the private sector. And they can do much more damage: government actors legally exercise coercive power, they are not subject in any way to external market discipline, and they often lack any form of accountability. They are not compelled to respect consumer sovereignty, and they make correspondingly little contribution to the nation’s productivity and welfare.

Actors in the private sector, on the other hand, face strong incentives to engage in optimizing behavior: they must please customers and strive to improve performance to stay ahead of their competition. That is, unless they are seduced by what power they might have to seek rents through public sector activism.

A people who grant a wide scope of government will always suffer consequences they should expect, but they often proceed in abject ignorance. So here is my rant, a brief rundown on some of the things naive statists should expect to get for their votes. Of course, this is a short list — it could be much longer:

  • Opportunities for graft as bureaucrats administer the spending of others’ money and manipulate economic activity via central planning.
  • A ballooning and increasingly complex tax code seemingly designed to benefit attorneys, the accounting profession, and certainly some taxpayers, but at the expense of most taxpayers.
  • Subsidies granted to producers and technologies that are often either unnecessary or uneconomic (and see here), leading to malinvestment of capital. This is often a consequence of the rent seeking and cronyism that goes hand-in-hand with government dominance and ham-handed central planning.
  • Redistribution of existing wealth, a zero- or even negative-sum activity from an economic perspective, is prioritized over growth.
  • Redistribution beyond a reasonable safety net for those unable to work and without resources is a prescription for unnecessary dependency, and it very often constitutes a surreptitious political buy-off.
  • Budgetary language under which “budget cuts” mean reductions in the growth of spending.
  • Large categories of spending, known in the U.S. as non-discretionary entitlements, that are essentially off limits to lawmakers within the normal budget appropriations process.
  • “Fiscal illusion” is exploited by politicians and statists to hide the cost of government expansion.
  • The strained refrain that too many private activities impose external costs is stretched to the point at which government authorities externalize internalities via coercive taxes, regulation, or legal actions.
  • Massive growth in regulation (see chart at top) extending to puddles classified as wetlands (EPA), the ”disparate impacts” of private hiring practices (EEOC), carbon footprints of your company and its suppliers (EPA, Fed, SEC), outrageous energy efficiency standards (DOE), and a multiplicity of other intrusions.
  • Growth in the costs of regulatory compliance.
  • A nearly complete lack of responsiveness to market prices, leading to misallocation of resources — waste.
  • Lack of value metrics for government activities to gauge the public’s “willingness to pay”.
  • Monopoly encouraged by regulatory capture and legal / compliance cost barriers to competition. Again, cronyism.
  • Monopoly granted by other mechanisms such as import restrictions and licensure requirements. Again, cronyism.
  • Ruination of key industries as government control takes its grip.
  • Shortages induced by price controls.
  • Inflation and diminished buying power stoked by monetized deficits, a long tradition in financing excessive government.
  • Malinvestment of private capital created by monetary excess and surplus liquidity.
  • That malinvestment of private capital creates macroeconomic instability. The poorly deployed capital must be written off and/or reallocated to productive uses at great cost.
  • Funding for bizarre activities folded into larger budget appropriations, like holograms of dead comedians, hamster fighting experiments, and an IHOP for a DC neighborhood.
  • A gigantic public sector workforce in whose interest is a large and growing government sector, and who believe that government shutdowns are the end of the world.
  • Attempts to achieve central control of information available to the public, and the quashing of dissent, even in a world with advanced private information technology. See the story of Hunter Biden’s laptop. This extends to control of scientific narratives to ensure support for certain government programs.
  • Central funding brings central pursestrings and control. This phenomenon is evident today in local governance, education, and science. This is another way in which big government fosters dependency.
  • Mission creep as increasing areas of economic activity are redefined as “public” in nature.
  • Law and tax enforcement, security, and investigative agencies pressed into service to defend established government interests and to compromise opposition.

I’ve barely scratched the surface! Many of the items above occur under big government precisely because various factions of the public demand responses to perceived problems or “injustices”, despite the broader harms interventions may bring. The press is partly responsible for this tendency, being largely ignorant and lacking the patience for private solutions and market processes. And obviously, those kinds of demands are a reason government gets big to begin with. In the past, I’ve referred to these knee-jerk demands as “do somethingism”, and politicians are usually too eager to play along. The squeaky wheel gets the oil.

I mentioned cronyism several times in the list. The very existence of broad public administration and spending invites the clamoring of obsequious cronies. They come forward to offer their services, do large and small “favors”, make policy suggestions, contribute to lawmakers, and offer handsomely remunerative post-government employment opportunities. Of course, certain private parties also recognize the potential opportunities for market dominance when regulators come calling. We have here a perversion of the healthy economic incentives normally faced by private actors, and these are dynamics that give rise to a fascist state.

It’s true, of course, that there are areas in which government action is justified, if not necessary. These include pure public goods such as national defense, as well as public safety, law enforcement, and a legal system for prosecuting crimes and adjudicating disputes. So a certain level of state capacity is a good thing. Nevertheless, as the list suggests, even these traditional roles for government are ripe for unhealthy mission creep and ultimately abuse by cronies.

The overriding issue motivating my voting patterns is the belief in limited government. Both major political parties in the U.S. violate this criterion, or at least carve out exceptions when it suits them. I usually identify the Democrat Party with statism, and there is no question that Democrats rely far too heavily on government solutions and intervention in private markets. The GOP, on the other hand, often fails to recognize the statism inherent in its own public boondoggles, cronyism, and legislated morality. In the end, the best guide for voting would be a political candidate’s adherence to the constitutional principles of limited government and individual liberty, and whether they seem to understand those principles. Unfortunately, that is often too difficult to discern.

Sweden’s Pandemic Policy: Arguably Best Practice


When Covid-19 began its awful worldwide spread in early 2020, the Swedes made an early decision that ultimately proved to be as protective of human life as anything chosen from the policy menu elsewhere. Sweden decided to focus on approaches for which there was evidence of efficacy in containing respiratory pandemics, not mere assertions by public health authorities (or anyone else) that stringent non-pharmaceutical interventions (NPIs) were necessary or superior.

The Swedish Rationale

The following appeared in an article in Stuff in late April 2020:

Professor Johan Giesecke, who first recruited [Sweden’s State epidemiologist Anders] Tegnell during his own time as state epidemiologist, used a rare interview last week to argue that the Swedish people would respond better to more sensible measures. He blasted the sort of lockdowns imposed in Britain and Australia and warned a second wave would be inevitable once the measures are eased. ‘… when you start looking around at the measures being taken by different countries, you find very few of them have a shred of evidence-base,’ he said.

Giesecke, who has served as the first Chief Scientist of the European Centre for Disease Control and has been advising the Swedish Government during the pandemic, told the UnHerd website there was “almost no science” behind border closures and school closures and social distancing and said he looked forward to reviewing the course of the disease in a year’s time.

Giesecke was of the opinion that there would ultimately be little difference in Covid mortality across countries with different pandemic policies. Therefore, the least disruptive approach was to be preferred. That meant allowing people to go about their business, disseminating information to the public regarding symptoms and hygiene, and attempting to protect the most vulnerable segments of the population. Giesecke said:

I don’t think you can stop it. It’s spreading. It will roll over Europe no matter what you do.

He was right. Sweden had a large number of early Covid deaths primarily due to its large elderly population as well as its difficulty in crafting effective health messages for foreign-speaking immigrants residing in crowded enclaves. Nevertheless, two years later, Sweden has posted extremely good results in terms of excess deaths during the pandemic.

Excess Deaths

Excess deaths, or deaths relative to projections based on historical averages, are a better metric than Covid deaths (per million) for cross-country or jurisdictional comparisons. Among other reasons, the latter are subject to significant variations in methods of determining cause of death. Moreover, there was a huge disparity between excess deaths and Covid deaths during the pandemic, and the gap is still growing:

Excess deaths varied widely across countries, as illustrated by the left-hand side of the following chart:

Interestingly, most of the lowest excess death percentages were in Nordic countries, especially Sweden and Norway. That might be surprising given the high Nordic latitudes, which presumably carry something of a disadvantage in terms of sun exposure and potentially low vitamin D levels. Norway enacted more stringent public policies during the pandemic than Sweden. Globally, however, lockdown measures showed no systematic advantage in terms of excess deaths. Notably, the U.S. did quite poorly, with excess deaths at 8X the Swedish rate.
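
For readers unfamiliar with the metric, here’s a bare-bones sketch of how excess deaths are typically computed. The numbers are placeholders, not actual data for Sweden or anywhere else:

```python
# Minimal sketch of an excess-death calculation. Placeholder numbers only,
# not actual data for Sweden or any other country.
baseline_deaths = {2015: 90_900, 2016: 91_000, 2017: 91_900, 2018: 92_100, 2019: 88_800}
actual_2020 = 98_100                      # hypothetical pandemic-year deaths

expected_2020 = sum(baseline_deaths.values()) / len(baseline_deaths)
excess = actual_2020 - expected_2020
excess_pct = 100 * excess / expected_2020

print(f"Expected deaths (5-year average): {expected_2020:,.0f}")
print(f"Excess deaths: {excess:,.0f} ({excess_pct:.1f}% above baseline)")
```

In practice, analysts usually project a trend or adjust for the age structure of the population rather than take a simple five-year average, but the idea is the same.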

Covid Deaths

The right-hand side of the chart above shows that Sweden experienced a significant number of Covid deaths per million residents. The figure still compares reasonably well internationally, despite the country’s fairly advanced age demographics. As in other countries, the bulk of Sweden’s Covid deaths occurred among the elderly, especially in care settings. Note that U.S. Covid deaths per million were more than 50% higher than in Sweden.

NPIs Are Often Deadly

Perhaps a more important reason to emphasize excess deaths over Covid deaths is that public policy itself had disastrous consequences in many countries. In particular, strict NPIs like lockdowns, including school and business closures, can undermine public health in significant ways. That includes the inevitably poor consequences of deferred health care, the more rapid spread of Covid within home environments, the physical and psychological stress from loss of livelihood, and the toll of isolation, including increased use of alcohol and drugs, less exercise, and binge eating. Isolation is particularly hard on the elderly and led to an increase in “deaths of despair” during the pandemic. These were the kinds of maladjustments caused by lockdowns that led to greater excess deaths. Sweden avoided much of that by eschewing stringent NPIs, and Iceland is sometimes cited as a similar case.

Oxford Stringency Index

I should note here, and this is a digression, that the most commonly used summary measure of policy “stringency” is not especially trustworthy. That measure is an index produced by Oxford University that is available on the Our World In Data web site. Joakim Book documented troubling issues with this index in late 2020, after changes in the index’s weightings dramatically altered its levels for Nordic countries. As Book said at that time:

Until sometime recently, Sweden, which most media coverage couldn’t get enough of reporting, was the least stringent of all the Nordics. Life was freer, pandemic restrictions were less invasive, and policy responses less strong; this aligned with Nordic people’s experience on the ground.

Again, Sweden relied on voluntary action to limit the spread of the virus, including encouragement of hygiene, social distancing, and avoiding public transportation when possible. Book was careful to note that “Sweden did not ‘do nothing’”, but its policies were less stringent than those of its Nordic neighbors in several ways. While Sweden had the same restrictions on arrivals from outside the European Economic Area as the rest of the EU, it did not impose quarantines, testing requirements, or other restrictions on travelers or on internal movements. Sweden’s school closures were short-lived, and its masking policies were liberal. The late-2020 changes in the Oxford Stringency Index, Book said, simply did not “pass the most rudimentary sniff test”.

Economic Stability

Sweden’s economy performed relatively well during the pandemic. The growth path of real GDP was smoother than most countries that succumbed to the excessive precautions of lockdowns. However, Norway’s economy appears to have been the most stable of those shown on the chart, at least in terms of real output, though it did suffer a spike in unemployment.

The Bottom Line

The big lesson is that Sweden’s “light touch” during the pandemic proved to be at least as effective, if not more so, than comparatively stringent policies imposed elsewhere. Covid deaths were sure to occur, but widespread non-Covid excess deaths were unanticipated by many countries practicing stringent intervention. That lack of foresight is best understood as a consequence of blind panic among public health “experts” and other policymakers, who too often are rewarded for misguided demonstrations that they have “done something”. Those actions failed to stop the spread in any systematic sense, but they managed to do great damage to other aspects of public health. Furthermore, they undermined economic well being and the cause of freedom. Johan Giesecke was right to be skeptical of those claiming they could contain the virus through NPIs, though he never anticipated the full extent to which aggressive interventions would prove deadly.

Biden’s Rx Price Controls: Cheap Politics Over Cures


You can expect dysfunction when government intervenes in markets, and health care markets are no exception. The result is typically over-regulation, increased industry concentration, lower-quality care, longer waits, and higher costs to patients and taxpayers. The pharmaceutical industry is one of several tempting punching bags for ambitious politicians eager to “do something” in the health care arena. These firms, however, have produced many wonderful advances over the years, incurring huge research, development, and regulatory costs in the process. Reasonable attempts to recoup those costs often mean conspicuously high prices, which put a target on their backs for the likes of those willing to characterize return of capital and profit as ill-gotten.

Biden Flunks Econ … Again

Lately, under political pressure brought on by escalating inflation, Joe Biden has been talking up efforts to control the prices of prescription drugs for Medicare beneficiaries. Anyone with a modicum of knowledge about markets should understand that price controls are a fool’s errand. Price controls don’t make good policy unless the goal is to create shortages.

The preposterously-named Inflation Reduction Act is an example of this sad political dynamic. Reducing inflation is something the Act won’t do! Here is Wikipedia’s summary of the prescription drug provisions, which is probably adequate for now:

“Prescription drug price reform to lower prices, including Medicare negotiation of drug prices for certain drugs (starting at 10 by 2026, more than 20 by 2029) and rebates from drug makers who price gouge…”

The law contains provisions that cap insulin costs at $35/month and will cap out-of-pocket drug costs at $2,000 for people on Medicare, among other provisions.

Unpacking the Blather

“Price gouging”, of course, is a well-worn term of art among anti-market propagandists. In this case, its meaning appears to cover any form of non-compliance, including cases for which fees and rebates are anticipated.

The insulin provision is responsive to a long-standing and misleading allegation that insulin is unavailable at reasonable prices. In fact, insulin is already available at zero cost as durable medical equipment under Medicare Part B for diabetics who use insulin pumps. Some types and brands of insulin are available at zero cost for uninsured individuals. A simple internet search on insulin under Medicare yields several sources of cheap insulin. GoodRx also offers brands at certain pharmacies at reasonable costs.

As for the cap on out-of-pocket spending under Part D, limiting the patient’s payment responsibility is a bad way to bring price discipline to the market. Excessive third-party shares of medical payments have long been implicated in escalating health care costs. That reality has eluded advocates of government health care, or perhaps they simply prefer escalating costs in the form of health care tax burdens.

Negotiated Theft

The Act’s adoption of the term “negotiation” is a huge abuse of that word’s meaning. David R. Henderson and Charles Hooper offer the following clarification about what will really happen when the government sits down with the pharmaceutical companies to discuss prices:

Where CMS is concerned, ‘negotiations’ is a ‘Godfather’-esque euphemism. If a drug company doesn’t accept the CMS price, it will be taxed up to 95% on its Medicare sales revenue for that drug. This penalty is so severe, Eli Lilly CEO David Ricks reports that his company treats the prospect of negotiations as a potential loss of patent protection for some products.

The first list of drugs for which prices will be “negotiated” by CMS won’t take effect until 2026. However, in the meantime, drug companies will be prohibited from increasing the price of any drug sold to Medicare beneficiaries by more than the rate of inflation. Price control is the correct name for these policies.

Death and Cost Control

Henderson and Hooper chose a title for their article that is difficult for the White House and legislators to comprehend: “Expensive Prescription Drugs Are a Bargain“. The authors first note that 9 out of 10 prescription drugs sold in the U.S. are generics. But then it’s easy to condemn high price tags for a few newer drugs that are invaluable to those whose lives they extend, and those numbers aren’t trivial.

Despite the protestations of certain advocates of price controls and the CBO’s guesswork on the matter, the price controls will stifle the development of new drugs and ultimately cause unnecessary suffering and lost life-years for patients. This reality is made all too clear by Joe Grogan in the Wall Street Journal in “The Inflation Reduction Act Is Already Killing Potential Cures” (probably gated). Grogan cites the cancellation of drugs under development or testing by three different companies: one for an eye disease, another for certain blood cancers, and one for gastric cancer. These cancellations won’t be the last.

Big Pharma Critiques

The pharmaceutical industry certainly has other grounds for criticism. Some of it has to do with government extensions of patent protection, which may prolong guaranteed monopolies beyond what’s necessary to compensate for the high risk inherent in original investments in R&D. It can also be argued, however, that the FDA approval process increases drug development costs unreasonably, and it sometimes prevents or delays good drugs from coming to market. See here for some findings on the FDA’s excessive conservatism, limiting choice in dire cases for which patients are more than willing to risk complications. Pricing transparency has been another area of criticism. The refusal to release detailed data on the testing of Covid vaccines represents a serious breach of transparency, given what many consider to have been inadequate testing. Big pharma has also been condemned for the opioid crisis, but restrictions on opioid prescriptions were never a logical response to opioid abuse. (Also see here, including some good news from the Supreme Court on a narrower definition of “over-prescribing”.)

Bad policy is often borne of short-term political objectives and a neglect of foreseeable long-term consequences. It’s also frequently driven by a failure to understand the fundamental role of profit incentives in driving innovation and productivity. This is a manifestation of the short-term focus afflicting many politicians and members of the public, which is magnified by the desire to demonize a sector of the economy that has brought undeniable benefits to the public over many years. The price controls in Biden’s Inflation Reduction Act are a sure way to short-circuit those benefits. Those interventions effectively destroy other incentives for innovation created by legislation over several decades, as Joe Grogan describes in his piece. If you dislike pharma pricing, look to reform of patenting and the FDA approval process. Those are far better approaches.


Wind and Solar Power: Brittle, Inefficient, and Destructive


Just how renewable is “renewable” energy, or more specifically solar and wind power? Intermittent though they are, the wind will always blow and the sun will shine (well, half a day with no clouds). So the possibility of harvesting energy from these sources is truly inexhaustible. Obviously, it also takes man-made hardware to extract electric power from sunshine and wind — physical capital — and it is quite costly in several respects, though taxpayer subsidies might make it appear cheaper to investors and (ultimately) users. Man-made hardware is damaged, wears out, malfunctions, or simply fails for all sorts of reasons, and it must be replaced from time to time. Furthermore, man-made hardware such as solar panels, wind turbines, and the expansions to the electric grid needed to bring the power to users requires vast resources and not a little in the way of fossil fuels. The word “renewable” is therefore something of a misnomer when it comes to solar and wind facilities.

Solar Plant

B. F. Randall (@Mining_Atoms) has a Twitter thread on this topic, or actually several threads (see below). The first thing he notes is that solar panels require polysilicon, which is not recyclable. Disposal presents severe hazards of its own, and to replace old solar panels, polysilicon must be produced. For that, Randall says you need high-purity silica from quartzite rock, high-purity coking coal, diesel fuel, and large flows of dispatchable (not intermittent) electric power. To get quartzite, you need carbide drilling tools, which are not renewable. You also need to blast rock using ammonium nitrate fuel oil derived from fossil fuels. Then the rock must be crushed and often milled into fine sand, which requires continuous power. The high temperatures required to create silicon are achieved with coking coal, which is also used in iron and steel making, but coking coal is non-renewable. The whole process requires massive amounts of electricity generated with fossil fuels. Randall calls polysilicon production “an electricity beast”.

Greenwashing

The resulting carbon emissions are, in reality, unlikely to be offset by any quantity of carbon credits these firms might purchase, which allow them to claim a “zero footprint”. Blake Lovewall describes the sham in play here:

The biggest and most common Carbon offset schemes are simply forests. Most of the offerings in Carbon marketplaces are forests, particularly in East Asian, African and South American nations. …

The only value being packaged and sold on these marketplaces is not cutting down the trees. Therefore, by not cutting down a forest, the company is maintaining a ‘Carbon sink’ …. One is paying the landowner for doing nothing. This logic has an acronym, and it is slapped all over these heralded offset projects: REDD. That is a UN scheme called ‘Reduce Emissions from Deforestation and Forest Degradation’. I would re-name it to, ‘Sell off indigenous forests to global investors’.

Lovewall goes on to explain that these carbon offset investments do not ensure that forests remain pristine by any stretch of the imagination. For one thing, the requirements for managing these “preserves” are often subject to manipulation by investors working with government; as such, the credits are often a vehicle for graft. In Indonesia, for example, carbon-credited forests have been converted to palm oil plantations without any loss of value to the credits! Lovewall also cites a story about carbon offset investments in Brazil, where the credits provided capital for a massive dam in the middle of the rainforest. This had severe environmental and social consequences for indigenous peoples. It’s also worth noting that planting trees, wherever that might occur under carbon credits, takes many years to become a real carbon sink.

While I can’t endorse all of Lovewall’s points of view, he makes a strong case that carbon credits are a huge fraud. They do little to offset carbon generated by entities that purchase them as offsets. Again, the credits are very popular with the manufacturers and miners who participate in the fabrication of physical capital for renewable energy installations and who wish to “greenwash” their activities.

Wind Plant

Randall discusses the non-renewability of wind turbines in a separate thread. Turbine blades, he writes, are made from epoxy resins, balsa wood, and thermoplastics. They wear out, along with gears and other internal parts, and must be replaced. Land disposal is safe and cheap, but recycling is costly and requires even greater energy input than the use of virgin feedstocks. Randall’s thread on turbines raised some hackles among wind energy defenders and even a few detractors, and Randall might have overstated his case in one instance, but the main thrust of his argument is irrefutable: it’s very costly to recycle these components into other usable products. Entrepreneurs are still trying to work out processes for doing so. It’s not clear that recycling the blades into other products is more efficient than sending them to landfills, as the recycling processes are resource intensive.

But even then, the turbines must be replaced. Recycling the old blades into crates and flooring and what have you, and producing new wind turbines, requires lots of power. And as Randall says, replacement turbines require huge ongoing quantities of zinc, copper, cement, and fossil fuel feedstocks.

The Non-Renewability of Plant

It shouldn’t be too surprising that renewable power machinery is not “renewable” in any sense, despite the best efforts of advocates to convince us of their ecological neutrality. Furthermore, the idea that the production of this machinery will be “zero carbon” any time in the foreseeable future is absurd. In that respect, this is about like the ridiculous claim that electric vehicles (EVs) are “zero emission”, or the fallacy that we can achieve a zero carbon world based on renewable power.

It’s time the public came to grips with the reality that our heavy investments in renewables are not “renewable” in the ecological sense. Those investments, and reinvestments, merely buy us what Randall calls “garbage energy”, by which he means that it cannot be relied upon. Burning garbage to create steam is actually a more reliable power source.

Highly Variable With Low Utilization

Randall links to information provided by Martian Data (@MartianManiac1) on Europe’s wind energy generation as of September 22, 2022 (see the tweet for Martian Data’s sources):

Hourly wind generation in Europe for past 6 months:
Max: 122GW
Min: 10.2GW
Mean: 41.0GW
Installed capacity: ~236GW
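
As a quick check on the arithmetic, here’s a short calculation using only the figures quoted above:

```python
# Utilization (capacity factor) implied by the wind figures quoted above.
mean_output_gw = 41.0
installed_capacity_gw = 236.0

utilization = mean_output_gw / installed_capacity_gw
print(f"Utilization factor: {utilization:.1%}")      # roughly 17.4%

# Capital requirements scale inversely with utilization: a fleet delivering
# ~17% of nameplate needs nearly 6x the nameplate capacity per unit of
# average output.
print(f"Nameplate capacity per unit of average output: {1 / utilization:.1f}x")
```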

That’s a whopping 17.4% utilization factor! That’s pathetic, and it means the effective cost per unit of energy delivered is nearly six times what nameplate capacity would imply. Take a look at this chart comparing the levels and variations in European power demand, nuclear generation, and wind generation over the six months ending September 22nd (if you have trouble zooming in here, try going to the thread):

The various colors represent different countries. Here’s a larger view of the wind component:

A stable power grid cannot be built upon this kind of intermittency. Here is another comparison that includes solar power. This chart shows daily data covering 2021 through about May 26, 2022.

As for solar capacity utilization, it too is unimpressive. Here is Martian Data’s note on this point, followed by a chart of solar generation over the course of a few days in June:

so ~15% solar capacity is whole year average. ~5% winter ~20% summer. And solar is brief in summer too…, it misses both morning and evening peaks in demand.

Like wind, the intermittency of solar power makes it an impractical substitute for traditional power sources. Check out Martian Data’s Twitter feed for updates and charts from other parts of the world.

Nuclear Efficiency

Nuclear power generation is an excellent source of baseload power. It is dispatchable and zero carbon except at plant construction. It also has an excellent safety record, and newer, modular reactor technologies are safer yet. It is cheaper in terms of generating capacity and it is more flexible than renewables. In fact, in terms of the resource costs of nuclear power vs. renewables over plant cycles, it’s not even close. Here’s a chart recently posted by Randall showing input quantities per megawatt hour produced over the expected life of each kind of power facility (different power sources are labeled at bottom, where PV = photovoltaic (solar)):

In fairness, I’m not completely satisfied with these comparisons. They should be stated in terms of current dollar costs, which would neutralize differences in input densities and reflect relative scarcities. Nevertheless, the differences in the chart are stark. Nuclear produces cheap, reliable power.

The Real Dirt

Solar and wind power are low utilization power sources and they are intermittent. Heavy reliance on these sources creates an extremely brittle power grid. Also, we should be mindful of the vast environmental degradation caused by the mining of minerals needed to produce solar panels and wind turbines, including their inevitable replacements, not to mention the massive land use requirements of wind and solar power. Also disturbing is the hazardous dumping of old solar panels from the “first world” now taking place in less developed countries. These so-called clean-energy sources are anything but clean or efficient.

Stealth Hiring Quotas Via AI


Hiring quotas are of questionable legal status, but for several years, some large companies have been adopting quota-like “targets” under the banner of Diversity, Equity and Inclusion (DEI) initiatives. Many of these so-called targets apply to the placement of minority candidates into “leadership positions”, and some targets may apply more broadly. Explicit quotas have long been viewed negatively by the public. Quotas have also been proscribed under most circumstances by the Supreme Court, and the EEOC’s Compliance Manual still includes rigid limits on when the setting of minority hiring “goals” is permissible.

Yet large employers seem to prefer the legal risks posed by aggressive DEI policies to the risk of lawsuits by minority interests, unrest among minority employees and “woke” activists, and “disparate impact” inquiries by the EEOC. Now, as Stewart Baker writes in a post over at the Volokh Conspiracy, employers have a new way of improving — or even eliminating — the tradeoff they face between these risks: “stealth quotas” delivered via artificial intelligence (AI) decisioning tools.

Skynet Smiles

A few years ago I discussed the extensive use of algorithms to guide a range of decisions in “Behold Our Algorithmic Overlords“. There, I wrote:

Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these ‘use-cases’ are already happening to one extent or another.

That post dealt primarily with the use of algorithms by large tech companies to suppress information and censor certain viewpoints, a danger still of great concern. However, the use of AI to impose de facto quotas in hiring is a phenomenon that will unequivocally reduce the efficiency of the labor market. But exactly how does this mechanism work to the satisfaction of employers?

Machine Learning

As Baker explains, AI algorithms are “trained” to find optimal solutions to problems via machine learning techniques, such as neural networks, applied to large data sets. These techniques are not as straightforward as more traditional modeling approaches such as linear regression, which more readily lend themselves to intuitive interpretation of model results. Baker uses the example of lung x-rays showing varying degrees of abnormalities, which range from the appearance of obvious masses in the lungs to apparently clear lungs. Machine learning algorithms sometimes accurately predict the development of lung cancer in individuals based on clues that are completely non-obvious to expert evaluators. This, I believe, is a great application of the technology. It’s too bad that the intuition behind many such algorithmic decisions is often impossible to discern. And the application of AI decisioning to social problems is troubling, not least because it necessarily reduces the richness of individual qualities to a set of data points, and in many cases, defines individuals based on group membership.

When it comes to hiring decisions, an AI algorithm can be trained to select the “best” candidate for a position based on all encodable information available to the employer, but the selection might not align with a hiring manager’s expectations, and it might be impossible to explain the reasons for the choice to the manager. Still, giving the AI algorithm the benefit of the doubt, it would tend to make optimal candidate selections across reasonably large sets of similar, open positions.

Algorithmic Bias

A major issue with respect to these algorithms has been called “algorithmic bias”. Here, I limit the discussion to hiring decisions. Ironically, “bias” in this context is a rather slanted description, but what’s meant is that the algorithms tend to select fewer candidates from “protected classes” than their proportionate shares of the general population. This is more along the lines of so-called “disparate impact”, as opposed to “bias” in the statistical sense. Baker discusses the attacks this has provoked against algorithmic decision techniques. In fact, a privacy bill is pending before Congress containing provisions to address “AI bias” called the American Data Privacy and Protection Act (ADPPA). Baker is highly skeptical of claims regarding AI bias both because he believes they have little substance and because “bias” probably means that AIs sometimes make decisions that don’t please DEI activists. Baker elaborates on these developments:

“The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House energy and commerce committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.”

What the hell are the Republicans thinking? Whether or not it becomes a matter of law, misplaced concern about AI bias can be addressed in a practical sense by introducing the “right” constraints to the algorithm, such as a set of aggregate targets for hiring across pools of minority and non-minority job candidates. Then, the algorithm still optimizes, but the constraints impinge on the selections. The results are still “optimal”, but in a more restricted sense.
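
Here’s a stylized sketch of what such a constraint does to candidate selection. It’s purely illustrative: random scores, a simple proportional-representation constraint, and no claim that any vendor’s algorithm actually works this way:

```python
# Stylized illustration of an aggregate hiring constraint layered onto an
# otherwise score-maximizing selection. Scores and group labels are random
# inventions; no actual hiring tool is being reproduced here.
import random

random.seed(1)
candidates = [{"id": i,
               "group": "A" if random.random() < 0.7 else "B",   # hypothetical applicant mix
               "score": random.gauss(50, 10)}                    # the AI's "suitability" score
              for i in range(500)]

def top_k(pool, k):
    """Pick the k highest-scoring candidates from a pool."""
    return sorted(pool, key=lambda c: c["score"], reverse=True)[:k]

openings = 50

# Unconstrained: simply take the 50 highest scores.
unconstrained = top_k(candidates, openings)

# Constrained ("stealth quota"): force the selected pool to mirror the
# applicant pool's group shares, then fill each slice by score.
share_b = sum(c["group"] == "B" for c in candidates) / len(candidates)
slots_b = round(openings * share_b)
constrained = (top_k([c for c in candidates if c["group"] == "B"], slots_b) +
               top_k([c for c in candidates if c["group"] == "A"], openings - slots_b))

def summarize(label, pool):
    avg = sum(c["score"] for c in pool) / len(pool)
    n_b = sum(c["group"] == "B" for c in pool)
    print(f"{label}: average score {avg:.1f}, group B hires {n_b}/{len(pool)}")

summarize("Unconstrained", unconstrained)
summarize("Constrained  ", constrained)
# Whenever the constraint binds, the constrained pool's average score can only
# be (weakly) lower than the unconstrained one -- "optimal" in a more
# restricted sense.
```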

Stealth Quotas

As Baker says, these constraints on algorithmic tools would constitute a way of imposing quotas on hiring that employers won’t really have to explain to anyone. That’s because: 1) the decisioning rationale is so opaque that it can’t readily be explained; and 2) the decisions are perceived as “fair” in the aggregate due to the absence of disparate impacts. As to #1, however, the vendors who create hiring algorithms, and specific details regarding algorithm development, might well be subject to regulatory scrutiny. In the end, the chief concern of these regulators is the absence of disparate impacts, which is cinched by #2.

About a month ago I posted about the EEOC’s outrageous and illegal enforcement of disparate impact liability. Should I welcome AI interventions because they’ll probably limit the number of enforcement actions against employers by the EEOC? After all, there is great benefit in avoiding as much of the rigamarole of regulatory challenges as possible. Nonetheless, as a constraint on hiring, quotas necessarily reduce productivity. By adopting quotas, either explicitly or via AI, the employer foregoes the opportunity to select the best candidate from the full population for a certain share of open positions, and instead limits the pool to narrow demographics.

Demographics are dynamic, and therefore stealth quotas must be dynamic to continue to meet the demands of zero disparate impact. But what happens as an increasing share of the population is of mixed race? Do all mixed race individuals receive protected status indefinitely, gaining preferences via algorithm? Does one’s protected status depend solely upon self-identification of racial, ethnic, or gender identity?

For that matter, do Asians receive hiring preferences? Sometimes they are excluded from so-called protected status because, as a minority, they have been “too successful”. Then, for example, there are issues such as the classification of Hispanics of European origin, who are likely to help fill quotas that are really intended for Hispanics of non-European descent.

Because self-identity has become so critical, quotas present massive opportunities for fraud. Furthermore, quotas often put minority candidates into positions at which they are less likely to be successful, with damaging long-term consequences to both the employer and the minority candidate. And of course there should remain deep concern about the way quotas violate the constitutional guarantee of equal protection to many job applicants.

The acceptance of AI hiring algorithms in the business community is likely to depend on the nature of the positions to be filled, especially when they require highly technical skills and/or the pool of candidates is limited. Of course, there can be tensions between hiring managers and human resources staff over issues like screening job candidates, but HR organizations are typically charged with spearheading DEI initiatives. They will be only too eager to adopt algorithmic selection and stealth quotas for many positions and will probably succeed, whether hiring departments like it or not.

The Death of Merit

Unfortunately, quotas are socially counter-productive, and they are not a good way around the dilemma posed by the EEOC’s aggressive enforcement of disparate impact liability. The latter can be solved only when Congress acts to more precisely define the bounds of illegal discrimination in hiring. Meanwhile, stealth quotas cede control over important business decisions to external vendors selling algorithms that are often unfathomable. Quotas discard judgements as to relevant skills in favor of awarding jobs based on essentially superficial characteristics. This creates an unnecessary burden on producers, even if it goes unrecognized by those very firms and is self-inflicted. Even worse, once these algorithms and stealth quotas are in place, they are likely to become heavily regulated and manipulated in order to achieve political goals.

Baker sums up a most fundamental objection to quotas thusly:

“Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.”

Quotas, and stealth quotas, substitute overt discrimination against individuals in non-protected classes, and sometimes against individuals in protected classes as well, for the imagined sin of a disparate impact that might occur when the best candidate is hired for a job. AI algorithms with protection against “algorithmic bias” don’t satisfy this objection. In fact, the lack of accountability inherent in this kind of hiring solution makes it far worse than the status quo.

Hurricane—Warming Link Is All Model, No Data


There was deep disappointment among political opponents of Florida Governor Ron DeSantis at their inability to pin blame on him for Hurricane Ian’s destruction. It was a terrible hurricane, but they so wanted it to be “Hurricane Hitler”, as Glenn Reynolds noted with tongue in cheek. That just didn’t work out for them, given DeSantis’ competent performance in marshaling resources for aid and cleanup from the storm. Their last ditch refuge was to condemn DeSantis for dismissing the connection they presume to exist between climate change and hurricane frequency and intensity. That criticism didn’t seem to stick, however, and it shouldn’t.

There is no linkage to climate change in actual data on tropical cyclones. It is a myth. Yes, models of hurricane activity have been constructed that embed assumptions leading to predictions of more hurricanes, and more intense hurricanes, as temperatures rise. But these are models constructed as simplified representations of hurricane development. The following quote from the climate modelers at the Geophysical Fluid Dynamics Laboratory (GFDL) (a division of the National Oceanic and Atmospheric Administration (NOAA)) is straightforward on this point (emphases are mine):

Through research, GFDL scientists have concluded that it is premature to attribute past changes in hurricane activity to greenhouse warming, although simulated hurricanes tend to be more intense in a warmer climate. Other climate changes related to greenhouse warming, such as increases in vertical wind shear over the Caribbean, lead to fewer yet more intense hurricanes in the GFDL model projections for the late 21st century.

Models typically are said to be “calibrated” to historical data, but no one should take much comfort in that. As a long-time econometric modeler myself, I can say without reservation that such assurances are flimsy, especially with respect to “toy models” containing parameters that aren’t directly observable in the available data. In such a context, a modeler can take advantage of tremendous latitude in choosing parameters to include, sensitivities to assume for unknowns or unmeasured relationships, and historical samples for use in “calibration”. Sad to say, modelers can make these models do just about anything they want. The cautious approach to claims about model implications is a credit to GFDL.

Before I get to the evidence on hurricanes, it’s worth remembering that the entire edifice of climate alarmism relies not just on the temperature record, but on models based on other assumptions about the sensitivity of temperatures to CO2 concentration. The models relied upon to generate catastrophic warming assume very high sensitivity, and those models have a very poor track record of prediction. Estimates of sensitivity are highly uncertain, and this article cites research indicating that the IPCC’s assumptions about sensitivity are about 50% too high. And this article reviews recent findings that carbon sensitivity is even lower, about one-third of what many climate models assume. In addition, this research finds that sensitivities are nearly impossible to estimate from historical data with any precision because the record is plagued by different sources and types of atmospheric forcings, accompanying aerosol effects on climate, and differing half-lives of various greenhouse gases. If sensitivities are as low as discussed at the links above, it means that predictions of warming have been grossly exaggerated.
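
To see why sensitivity is so hard to pin down from the historical record, consider a deliberately crude toy model of my own (not any modeling group’s actual code, and every number in it is invented): if aerosol cooling roughly tracked greenhouse forcing over the historical period, a high-sensitivity/strong-aerosol parameterization and a low-sensitivity/weak-aerosol parameterization can fit the very same temperature history while diverging sharply in their projections:

```python
# Deliberately crude toy model illustrating the sensitivity/aerosol
# degeneracy. NOT a real climate model; every number here is invented.
import numpy as np

years = np.arange(1900, 2021)
f_ghg = 0.03 * (years - 1900)            # toy greenhouse forcing ramp (W/m^2)
f_aer = -0.5 * f_ghg                     # toy aerosol forcing, tracking emissions

def temp(sensitivity, aerosol_scale):
    """Equilibrium-style toy response: dT = sensitivity * net forcing."""
    return sensitivity * (f_ghg + aerosol_scale * f_aer)

# Two very different parameterizations...
low_fit  = temp(sensitivity=0.5, aerosol_scale=0.4)   # low sensitivity, weak aerosol cooling
high_fit = temp(sensitivity=1.0, aerosol_scale=1.2)   # high sensitivity, strong aerosol cooling

# ...match the "historical" record identically, because
# 0.5 * (1 - 0.5*0.4) == 1.0 * (1 - 0.5*1.2) == 0.4.
print("Max historical disagreement (deg C):", round(np.abs(low_fit - high_fit).max(), 3))

# Now project a future in which greenhouse forcing keeps rising while aerosols
# are cleaned up (go to zero): the two "equally good" fits diverge by 2x.
f_ghg_2100 = f_ghg[-1] + 0.03 * 80
print("2100 warming, low-sensitivity fit :", round(0.5 * f_ghg_2100, 2))
print("2100 warming, high-sensitivity fit:", round(1.0 * f_ghg_2100, 2))
```

The historical record alone cannot distinguish the two parameterizations; breaking the tie requires independent evidence on the forcings themselves, which is exactly the difficulty described in the research linked above.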

The evidence that hurricanes have become more frequent or severe, or that they now intensify more rapidly, is basically nonexistent. Meteorologist Ryan Maue and the University of Colorado’s Roger Pielke Jr. have both researched hurricanes extensively for many years. They described their compilation of data on land-falling hurricanes in this Forbes piece in 2020. They point out that hurricane activity in older data is much more likely to be missing and undercounted, especially storms that never make landfall. That’s one of the reasons for the focus on landfalling hurricanes to begin with. With the advent of satellite data, storms are highly unlikely to be missed, but even landfalls have sometimes gone unreported historically. The farther back one goes, the less is known about the extent of hurricane activity, but Pielke and Maue feel that post-1970 data is fairly comprehensive.

The chart at the top of this post is a summary of the data that Pielke and Maue have compiled. There are no obvious trends in the number of storms or their strength. The 1970s were quiet while the 90s were more turbulent. The absence of trends also characterizes NOAA's data on U.S. landfalling hurricanes since 1851, as noted by Paul Driessen. Here is Driessen on Florida hurricane history:

Using pressure, Ian was not the fourth-strongest hurricane in Florida history but the tenth. The strongest hurricane in U.S. history moved through the Florida Keys in 1935. Among other Florida hurricanes stronger than Ian was another Florida Keys storm in 1919. This was followed by the hurricanes in 1926 in Miami, the Palm Beach/Lake Okeechobee storm in 1928, the Keys in 1948, and Donna in 1960. We do not know how strong the hurricane in 1873 was, but it destroyed Punta Rassa with a 14-foot storm surge. Punta Rassa is located at the mouth of the river leading up to Ft. Myers, where Ian made landfall.

Neil L. Frank, veteran meteorologist and former head of the National Hurricane Center, bemoans the changed conventions for assigning names to storms in the satellite era. A typical clash of warm and cold air often produces thunderstorms and wind, but few such systems were assigned names under older conventions. These are not the kinds of systems that usually develop into tropical cyclones, although they occasionally do. Many of those storms are named today. Right or wrong, that gives the false impression of a trend in the number of named storms. Not only is it easier to identify storms today, given the advent of satellite data, but storms are assigned names more readily, even if they don't strictly meet the definition of a tropical cyclone. It's a wonder that certain policy advocates get away with calling the outcome of all this a legitimate trend!

As Frank insists, there is no evidence of a trend toward more frequent and powerful hurricanes during the last several decades, and there is no evidence of rapid intensification. More importantly, there is no evidence that climate change is leading to more hurricane activity. It’s also worth noting that today we suffer far fewer casualties from hurricanes owing to much earlier warnings, better precautions, and better construction.

Hiring Discrimination In the U.S., Canada, and Western Europe

Tags

, , , , , , , , ,

Some people have the impression that the U.S. is uniquely bad in terms of racial, ethnic, gender, and other forms of discrimination. This impression is almost as mistaken as the belief, held in some circles, that the history of slavery is uniquely American, when in fact the practice has been so common historically, and throughout the world, as to be the rule rather than the exception.

This week, Alex Tabarrok shared some research I'd never seen on one kind of discriminatory behavior. In his post, "The US has Relatively Low Rates of Hiring Discrimination", he cites the findings of a 2019 meta-study of "… 97 Field Experiments of Racial Discrimination in Hiring". The research covered several Western European countries, Canada, and the U.S. The experiments involved the use of "faux applicants" for actual job openings. Some studies submitted applications only, randomizing racial or ethnic cues across otherwise similar applicants. Other studies paired similar individuals of different racial or ethnic backgrounds for separate in-person interviews.
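For readers unfamiliar with how such correspondence experiments are usually summarized, here is a minimal sketch: a callback rate for each group and a ratio between them, with a rough confidence interval. All counts below are invented for illustration; they are not figures from the meta-study.

```python
# Sketch of how a correspondence ("faux applicant") study is typically summarized:
# callback rates by group, a discrimination ratio, and a normal-approximation CI
# for the log ratio. The counts are hypothetical, not from the study discussed here.
import math

def callback_rate(callbacks, applications):
    return callbacks / applications

majority = callback_rate(callbacks=180, applications=1000)  # hypothetical majority-group rate
minority = callback_rate(callbacks=120, applications=1000)  # hypothetical minority-group rate
ratio = majority / minority  # > 1 implies discrimination against the minority group

# Standard error of the log risk ratio (Katz approximation)
se_log = math.sqrt(1/180 - 1/1000 + 1/120 - 1/1000)
lo, hi = ratio * math.exp(-1.96 * se_log), ratio * math.exp(1.96 * se_log)
print(f"callback ratio = {ratio:.2f}, approx 95% CI [{lo:.2f}, {hi:.2f}]")
```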

The authors found that hiring discrimination against non-white groups is widespread among employers in these countries. They were careful to note that the study did not address levels of hiring discrimination in countries outside the area of study, and they disclaimed any implication about other forms of discrimination within the covered countries, such as bias in lending or housing.

The study’s point estimates indicated “ubiquitous hiring discrimination”, though not all the estimates were statistically significant. My apologies if the chart below is difficult to read. If so, try zooming in, clicking on it, or following the link to the study above.

Some of the largest point estimates were highly imprecise due to limited coverage by individual studies. The impacted groups and the severity of discrimination varied across countries. Blacks suffered significant discrimination in the U.S., Canada, France, and Great Britain. For Hispanics, the only coverage was in the U.S. and, sparsely, in Canada; the point estimates showed discrimination in both countries, but it was (barely) significant only in the U.S. For Middle Eastern and North African (MENA) applicants, discrimination was severe in France, the Netherlands, Belgium, and Sweden. Asian applicants faced discrimination in France, Norway, Canada, and Great Britain.

Across all countries, the group suffering the least hiring discrimination was white immigrants, followed by Latin Americans / Hispanics (but only two countries were covered). Asians seemed to suffer the most discrimination, though not significantly more than Blacks (and less in the U.S. than in France, Norway, Canada, and Great Britain). Blacks and MENA applicants suffered a bit less than Asians from hiring discrimination, but again, not significantly less.

Comparing countries, the authors used U.S. hiring discrimination as a baseline, assigning it a value of one. France had the most severe hiring discrimination, and the difference was highly significant. Sweden was next highest, though not significantly higher than the U.S. Belgium, Canada, the Netherlands, and Great Britain had higher point estimates of overall discrimination than the U.S., though none of those differences were significant. Employers in Norway were about as discriminatory as those in the U.S., and German employers were less discriminatory, though not significantly so.

The upshot is that, as a group, U.S. employers are at the low end of the spectrum in terms of discriminatory hiring. Again, the intent of this research was not to single out the selected countries; rather, these countries were chosen because relevant studies were available. In fact, Tabarrok makes the following comment, which the authors probably wouldn't endorse and which is admittedly speculative, but I suspect it's right:

I would bet that discrimination rates would be much higher in Japan, China and Korea not to mention Indonesia, Iraq, Nigeria or the Congo. Understanding why discrimination is lower in Western capitalist democracies would reorient the literature in a very useful way.

So the U.S. is not on the high side of this set of Western countries in terms of discriminatory hiring practices. While discrimination against Blacks and Hispanics in the U.S. appears to be a continuing phenomenon, overall hiring discrimination in the U.S. is, at worst, comparable to that in many European countries.

To anticipate one kind of response to this emphasis, the U.S. is not alone in its institutional efforts to reduce discrimination. In fact, the study’s authors say:

A fairly similar set of antidiscrimination laws were adopted in North America and many Western European countries from the 1960s to the 1990s. In 2000, the European Union passed a series of race directives that mandated a range of antidiscrimination measures to be adopted by all member states, putting their legislative frameworks on racial discrimination on highly similar footing.

Despite these similarities, there are a few institutional details that might have some bearing on the results. For example, France bans the recording and "formal discussion" of race and ethnicity during the hiring process. (However, photos are often included in job applications in European countries.) Do reporting mandates, or prohibitions on certain questions, influence hiring discrimination? The comparison is suggestive, but the evidence is not as clear-cut as the authors seem to believe, and they cite one piece of conflicting literature on that point. Moreover, it does not explain why Great Britain had a larger (and highly significant) point estimate of discrimination against Asians, or why Canada and Norway were roughly equivalent to France on that score. Nor does it explain why Sweden and Belgium did not differ significantly from France in discrimination against MENA applicants, or why Canada was not significantly different from France in hiring discrimination against Blacks. Overall, discrimination in Sweden was not significantly less than in France. Still, at least based on the three applicant groups covered by studies of France, that country had the highest overall level of discrimination, and the most significant departure from the U.S., where recording the race and ethnicity of job applicants is institutionalized.

Germany had the lowest overall point estimates of hiring discrimination in the study. According to the authors, employers in German-speaking countries tend to collect a fairly thorough set of background information on job applications, and that detail can actually work against discrimination in hiring. Tabarrok notes that so-called "ban the box" policies, or laws that prohibit employers from asking about an applicant's criminal record, are known to result in greater racial disparities in hiring. The same is true of policies that threaten sanctions against the use of objective job qualifications that might have disparate impacts on "protected" groups. When employers are denied individual information, hiring managers often fall back, consciously or subconsciously, on generalized proxies based on race.

Discrimination in hiring based on race and ethnicity might actually be reasonable when a job entails sensitive interactions requiring a high level of trust with members of a minority community. That acknowledges we do not live in a perfect world in which racial and ethnic differences are irrelevant. Still, aside from exceptions of that kind, overt hiring discrimination based on race or ethnicity is a negative social outcome. The conundrum is whether it is more or less negative than efforts to coerce nondiscrimination across a broad range of behaviors, most of which are nondiscriminatory to begin with, especially when those interventions often have perverse discriminatory effects of their own. Policymakers and observers in the U.S. should maintain perspective. Discriminatory behavior persists in the U.S., especially against Blacks, but some of it is likely caused by prohibitions on objective tests of relevant job skills. And as the research discussed above shows, employers here appear to be a bit less discriminatory than those in most other Western democracies.

“Hard Landing” Is Often Cost of Fixing Inflationary Policy Mistakes

Tags

, , , , , , , , , , , , , , , , , , , , , , ,

The debate over the Federal Reserve's policy stance has undergone an interesting but understandable shift, though I disagree with the "new" sentiment. For the better part of this year, the consensus was that the Fed waited too long and was too dovish about tightening monetary policy, and I agree. Inflation ran at rates far in excess of the Fed's target, but the correction was delayed and weak at the start. That violated the symmetry required of a legitimate inflation-targeting regime, under which the Fed claims to operate, and it fostered demand-side pressure on prices while risking embedded expectations of higher prices. The Fed was said to be "behind the curve".

Punch Bowl Resentment

The past few weeks have seen equity markets tank amid rising interest rates and growing fears of recession. This brought forth a chorus of panicked analysts. Bloomberg has a pretty good take on the shift. Hopes from some economists for a “soft landing” notwithstanding, no one should have imagined that tighter monetary policy would be without risk of an economic downturn. At least the Fed has committed to a more aggressive policy with respect to price stability, which is one of its key mandates. To be clear, however, it would be better if we could always avoid “hard landings”, but the best way to do that is to minimize over-stimulation by following stable policy rules.

Price Trends

Some of the new criticism of the Fed’s tightening is related to a perceived change in inflation signals, and there is obvious logic to that point of view. But have prices really peaked or started to reverse? Economist Jeremy Siegel thinks signs point to lower inflation and believes the Fed is being too aggressive. He cites a series of recent inflation indicators that have been lower in the past month. Certainly a number of commodity prices are generally lower than in the spring, but commodity indices remain well above their year-ago levels and there are new worries about the direction of oil prices, given OPEC’s decision this week to cut production.

Measures of the central tendency of consumer price growth show a threat of inflation that may be fairly resistant to economic weakness and Fed actions, as the following chart demonstrates:

Overall CPI growth stopped accelerating after June, and it wasn't just moderation in oil prices that held it back (and that moderation might soon reverse). Growth of the Core CPI, which excludes food and energy prices, stopped accelerating a bit earlier, but the CPI and the Core CPI are still growing at more than 8% and 6%, respectively. More worrisome is the continued upward trend in more central measures of CPI growth. Growth in the median component of the CPI continues to accelerate, as does the so-called "Trimmed CPI", which excludes the most extreme sets of high- and low-growth components. The response of those central measures lagged behind the overall CPI, but it means there is still inflationary momentum in the economy. There is a substantial risk that more permanent inflation is becoming embedded in expectations, and therefore in price and wage setting, including long-term contracts.
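For readers unfamiliar with these "central" measures, here is a rough sketch of the general idea: take the growth rates of the individual spending components, weight them by expenditure shares, and report the weighted median or a mean that trims the extreme tails. This mirrors the spirit of median and trimmed-mean CPI measures, not any agency's exact methodology, and the components and weights below are hypothetical.

```python
# Sketch of median and trimmed-mean inflation built from component growth rates.
# Hypothetical components and weights; illustrates the idea, not an official method.
import numpy as np

growth = np.array([12.0, 9.5, 8.0, 6.5, 6.0, 5.5, 4.0, 1.0, -3.0])           # % price growth by component
weight = np.array([0.05, 0.10, 0.10, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05])    # expenditure weights, sum to 1

order = np.argsort(growth)
g, w = growth[order], weight[order]
cum = np.cumsum(w)

weighted_median = g[np.searchsorted(cum, 0.5)]                # component at the 50th weight percentile

# Trimmed mean: drop components in the bottom and top 10% of the weight distribution
keep = (cum > 0.10) & (cum - w < 0.90)
trimmed_mean = np.average(g[keep], weights=w[keep])

headline = np.average(growth, weights=weight)
print(f"headline {headline:.2f}%, weighted median {weighted_median:.2f}%, trimmed mean {trimmed_mean:.2f}%")
```

In this made-up example the median runs above the headline rate, which is the kind of pattern that signals broad-based, persistent price pressure rather than a few outlier categories.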

The Fed pays more attention to a measure of prices called the Personal Consumption Expenditures (PCE) deflator. Unlike the CPI, the PCE deflator accounts for changes in the composition of a typical "basket" of goods and services. In particular, the Fed focuses most closely on the Core PCE deflator, which excludes food and energy prices. Inflation in the PCE deflator runs lower than in the CPI, in large part because consumers actively substitute away from products with larger price increases. However, the recent story is similar for these two indices:

Both overall PCE inflation and Core PCE inflation stopped accelerating a few months ago, but growth in the median PCE component has continued to increase. This central measure of inflation still has upward momentum. Again, this raises the prospect that inflationary forces remain strong, and that higher and more widespread expected inflation might make the trend more difficult for the Fed to rein in.
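The substitution point above is easy to see with a two-good example. A fixed-basket index keeps buying the old basket at new prices, while a chained index (the PCE deflator uses a chain-weighted Fisher formula) lets the basket shift toward the goods whose prices rose less. The prices and quantities below are invented purely for illustration.

```python
# Two-good illustration of why a chained index registers less inflation than a
# fixed-basket index when consumers substitute. Hypothetical prices and quantities.
p0 = {"beef": 5.0, "chicken": 3.0}
q0 = {"beef": 10, "chicken": 10}
p1 = {"beef": 7.0, "chicken": 3.1}   # beef price jumps
q1 = {"beef": 6, "chicken": 14}      # consumers substitute toward chicken

def cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

laspeyres = cost(p1, q0) / cost(p0, q0)   # fixed (base-period) basket, CPI-like
paasche = cost(p1, q1) / cost(p0, q1)     # current-period basket
fisher = (laspeyres * paasche) ** 0.5     # geometric mean, the form used in chained indexes

print(f"fixed-basket inflation: {100*(laspeyres-1):.1f}%")
print(f"chained (Fisher) inflation: {100*(fisher-1):.1f}%")
```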

That leaves the Fed little choice if it hopes to bring inflation back down to its target level. It's really only a choice of whether to do it faster or slower. One big qualification is that the Fed can't do much about supply shortfalls, which have been a source of price pressure since the start of the rebound from the pandemic. However, demand pressures have been present since the acceleration in price growth began in earnest in early 2021, and at this point they appear to be driving the larger part of inflation.

The following chart shows share decompositions of growth in both the "headline" PCE deflator and the Core PCE deflator. Actual inflation rates are NOT shown in these charts. Focus only on the bolder colored bars (the lighter bars represent estimates having less precision). Red represents "supply-side" factors contributing to changes in the PCE deflator, while blue summarizes "demand-side" factors. This division is based on a number of assumptions (methodological source at the link), but there is no question that demand has contributed strongly to price pressures. At least that gives a sense of how much of the inflation can be addressed by actions the Fed might take.
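To give a flavor of how such decompositions typically work (this is only the general idea behind this class of methods, not the specific methodology at the link), each spending category is classified by whether its unexpected price and quantity moves went in the same direction (suggesting a demand shock), in opposite directions (suggesting a supply shock), or were too small to call (ambiguous). The category names, weights, and "surprises" below are invented.

```python
# Sketch of a sign-based supply/demand decomposition of price surprises.
# Illustrative only; categories, weights, and surprises are hypothetical, and the
# threshold is an arbitrary assumption.
categories = [
    # (name, expenditure weight, surprise in price growth, surprise in quantity growth)
    ("vehicles",    0.10, +2.0, -1.5),
    ("restaurants", 0.15, +1.0, +0.8),
    ("housing",     0.30, +1.5, +0.4),
    ("energy",      0.10, +4.0, -0.1),
    ("apparel",     0.05, +0.2, +0.1),
]

THRESHOLD = 0.25  # moves smaller than this are treated as ambiguous
shares = {"demand": 0.0, "supply": 0.0, "ambiguous": 0.0}
for name, weight, dp, dq in categories:
    if abs(dp) < THRESHOLD or abs(dq) < THRESHOLD:
        bucket = "ambiguous"
    elif dp * dq > 0:
        bucket = "demand"   # price and quantity moved together
    else:
        bucket = "supply"   # price rose while quantity fell (or vice versa)
    shares[bucket] += weight * dp

total = sum(shares.values())
for bucket, contribution in shares.items():
    print(f"{bucket}: {100*contribution/total:.0f}% of the weighted price surprise")
```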

I mentioned the role of expectations in laying the groundwork for more permanent inflation. Expected inflation not only becomes embedded in pricing decisions; it also leads to accelerated buying. So expectations of inflation become a self-fulfilling prophecy that manifests on both the supply side and the demand side. Firms are planning to raise prices in 2023 because input prices are expected to continue rising. In terms of the charts above, however, I suspect this phenomenon is likely to land in the "ambiguous" category, as it's not clear that the classification method can discern the impact of expectations.

What’s a Central Bank To Do?

Has the Fed become too hawkish as inflation has accelerated this year while proving more persistent than expected? One way to look at that question is to ask whether real interest rates are still conducive to excessive rate-sensitive demand. With PCE inflation running at 6 – 7% and Treasury yields below 4%, real returns are still negative. That hardly seems like a prescription for taming inflation, or "hawkish". Rate increases, however, are not the most reliable guide to the tenor of monetary policy. As both John Cochrane and Scott Sumner point out, interest rate increases are NOT always accompanied by slower money growth or slowing inflation!
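The real-return arithmetic is worth spelling out. Using illustrative numbers consistent with the ranges above (a yield just under 4% and inflation in the 6 – 7% range):

```python
# Quick arithmetic on the real yield: nominal yields below 4% against PCE
# inflation of 6-7% imply a negative real return. Illustrative numbers only.
nominal_yield = 0.038   # a Treasury yield just under 4%
inflation = 0.065       # PCE inflation somewhere in the 6-7% range

approx_real = nominal_yield - inflation                     # common approximation
exact_real = (1 + nominal_yield) / (1 + inflation) - 1      # exact Fisher relation
print(f"approximate real yield: {100*approx_real:.1f}%")    # about -2.7%
print(f"exact real yield: {100*exact_real:.1f}%")           # about -2.5%
```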

However, Cochrane has demonstrated elsewhere that it’s possible the Fed was on the right track with its earlier dovish response, and that price pressures might abate without aggressive action. I’m skeptical to say the least, and continuing fiscal profligacy won’t help in that regard.

The Policy Instrument That Matters

Ultimately, the best indicator that policy has tightened is the dramatic slowdown (and outright declines) in the growth of the monetary aggregates. The charts below show five years of year-over-year growth in two monetary measures: the monetary base (bank reserves plus currency in circulation) and M2 (checking, savings, and money market accounts plus currency).

Growth of these aggregates slowed sharply in 2021 after the Fed's aggressive moves to ease liquidity during the first year of the pandemic. Growth of the monetary base and M2 slowed much more in 2022 as the realization took hold that inflation was not transitory, as had been hoped. Changes in the growth of the money stock take time to influence economic activity and inflation, but the effects have perhaps already begun, and they will probably be felt in earnest during the first half of 2023.

The Protuberant Balance Sheet

Since June, the Fed has also taken steps to reduce the size of its bloated balance sheet. In other words, it is allowing its large holdings of U.S. Treasuries and Agency Mortgage-Backed Securities to shrink. These securities were acquired during rounds of so-called quantitative easing (QE), which were a major contributor to the money growth in 2020 that left us where we are today. The securities holdings were about $8.5 trillion in May and now stand at roughly $8.2 trillion. Allowing the portfolio to run off reduces bank reserves and liquidity. The process was accelerated in September, and there is growing concern among analysts that this quantitative tightening will cause disruptions in financial markets and ultimately the real economy. There is no question that reducing the size of the balance sheet is contractionary, but that is another necessary step toward reducing the rate of inflation.

The Federal Spigot

The federal government is not making the Fed’s job any easier. The energy shortages now afflicting markets are largely the fault of misguided federal policy restricting supplies, with an assist from Russian aggression. Importantly, however, heavy borrowing by the U.S. Treasury continues with no end in sight. This puts even more pressure on financial markets, especially when such ongoing profligacy leaves little question that the debt won’t ever be repaid out of future budget surpluses. The only way the government’s long-term budget constraint can be preserved is if the real value of that debt is bid downward. That’s where the so-called inflation tax comes in, and however implicit, it is indeed a tax on the public.
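A simple, purely illustrative calculation shows how that implicit tax works: a burst of unexpected inflation shrinks the real value of outstanding nominal debt, and the loss falls on the public holding that debt and other nominal claims. The debt figure and inflation rate below are round numbers chosen for illustration, not official estimates.

```python
# Illustration of the "inflation tax" on holders of nominal government debt.
# Round, hypothetical numbers.
debt_nominal = 31.0e12        # outstanding federal debt, dollars (illustrative)
price_level_growth = 0.08     # one year of 8% inflation

real_debt_after = debt_nominal / (1 + price_level_growth)
implicit_tax = debt_nominal - real_debt_after
print(f"real value of debt after one year of 8% inflation: ${real_debt_after/1e12:.1f}T")
print(f"implicit one-year inflation tax on debt holders: ${implicit_tax/1e12:.1f}T (in today's dollars)")
```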

Don’t Dismiss the Real Costs of Inflation

Inflation is a costly process, especially when it erodes real wages, and it takes its greatest toll on the poor. It penalizes holders of nominal assets, like cash, savings accounts, and non-indexed debt. It creates a high degree of uncertainty in interpreting price signals, which ordinarily carry information to which resource flows respond. That means it confounds the efficient allocation of resources, costing all of us in our roles as consumers and producers. The longer it continues, the more it erodes our economy's ability to enhance well-being, not to mention the instability it creates in the political environment.

Imminent Recession?

So far there are only limited signs of a recession. Granted, real GDP declined in both the first and second quarters of this year, but many reject that standard as overly broad for calling a recession. Moreover, consumer spending held up fairly well. Employment statistics have remained solid, though we'll get an update on those this Friday. Payroll gains have held up, and the unemployment rate edged up to a still-low 3.7% in August.

Those are backward-looking signs, however. The financial markets have been signaling recession via the inverted yield curve, which is a pretty reliable guide. The weak stock market has taken a bite out of wealth, which is likely to mean weaker demand for goods. In addition to energy-supply shocks, the strong dollar makes many internationally-traded commodities very costly overseas, which places the global economy at risk. Moreover, consumers have run-down their savings to some extent, corporate earnings estimates have been trimmed, and the housing market has weakened considerably with higher mortgage rates. Another recent sign of weakness was a soft report on manufacturing growth in September.

Deliver the Medicine

The Fed must remain on course. At least it has pretensions of regaining credibility for its inflation-targeting regime, and ultimately it must act symmetrically when inflation overshoots its target, as it has. It's not clear how far the Fed will have to go to squeeze demand-side inflation down to a modest level. And as long as supply-side pressures remain, it might be impossible for the Fed to engineer a reduction of inflation all the way to its 2% target, so it must bear supply factors in mind to avoid over-contraction.

As to raising the short-term interest rates the Fed controls, we can hope we're well beyond the halfway point. Reductions in the Fed's balance sheet will continue in an effort to tighten liquidity and to provide more long-term flexibility in conducting operations, at least until bank reserves threaten to fall below the Fed's so-called "ample reserves" criterion, which is intended to give banks the wherewithal to absorb small shocks. Signs that inflationary pressures are abating are a minimum requirement for laying off the brakes. Clear signs of recession would also lead to more gradual moves or possibly a reversal. But again, demand-side inflation is not likely to ease very much without at least a mild recession.

The Beatles in ‘69: By the Book, Wary of Live Performance

Tags

, , , , , , , , , , , ,

I finally got around to watching Peter Jackson's "Get Back!", a distillation of the many hours of video from the Beatles' recording sessions covering 21 days in early 1969. The culmination of the film was a brief rooftop "concert" in London, the band's first public performance in years, and it proved to be their last ever. Get Back! is lengthy but very enjoyable, and it offers an incredible glimpse into the various personalities in the group.

The film projects a strong impression of the Beatles' anxiety, at that time, about playing a live gig. During all but the last few days captured on film, it was unclear to everyone involved whether the band would actually do a live performance, and the band members' enthusiasm for the idea was decidedly mixed. They were also skeptical that the cameras at their sessions could capture enough interesting material for a film.

The Beatles had an early reputation as a great live band, but they had last played live in 1966. Kieran McGovern says the band quit touring for three reasons: poor sound quality, exhaustion, and security concerns. The last two are probably self-explanatory, though McGovern thinks the "bigger than Jesus" controversy was worrisome to the band. As to sound quality, the Beatles were the first band to play massive stadium concerts, but the sound equipment of the day was simply not up to those demands. Even worse, the band was unable to hear itself on stage over the throngs of screaming fans. So they just stopped. By then, they were so wildly successful as recording artists that touring was no longer necessary to promote themselves.

During the Get Back! sessions, Paul McCartney mused about the pros and cons of doing a live concert, but the band seemed a little paralyzed by the notion. It was as if they were clinging to the idea that studio albums should remain their sole focus. And as they worked out arrangements for new songs, various “takes” were preserved by the engineers so that, if nothing else, they would have material for a new album. They did take after take, often stopping after just a few bars.

I’m sure studio sessions with new material can be challenging. In fact, a few of the songs were composed right there in the studio, going from rough idea to fruition over the course of days. It was interesting to witness the band’s humanity in the face of self-imposed pressure to “get it right”, over and over. I know the feeling in my own small way. When I learn new material on the guitar, I sometimes record myself, but an odd thing happens as soon as I hit “record” … it’s hard to get through a song without some perceived mishap. And one attempt is followed by another. And another. Sometimes these “mishaps” stop me almost right at the start. In some ways it was reassuring, and frustrating, to see the same thing happening to the iconic Beatles. I’m also sure this reinforced their hesitation to “go live”. But when you play live, you just have to play through the mishaps, and I’m sure they’d done it many times before!

Years earlier, as the band rose to fame, they performed live all the time, but oddly, the highly creative years away from the stage seemed to corrode their confidence as a working band. There were so many incredible groups performing live in those days, but not for such immense crowds until perhaps Monterey, Woodstock, and maybe a few other big festivals in the late 60s. Much larger sound systems were a requirement that went unfulfilled at the Beatles’ earlier stadium shows, and the poor sound quality was a great frustration to the band. In the later, post-Beatle years, individual members of the band played huge concerts, and the surviving members still do.

While *nobody* is quite like the Beatles, all live bands make mistakes and play through them. Practice might get you close to perfect, but even well-drilled classical musicians have their bad days. The Beatles, however, seemed intimidated by the possibility of screwing up in front of an audience, and anxious about playing exactly the right notes. So the film gave me the impression that the Beatles were at heart, or had at least become, what one might call "book musicians". Play it the same way every time! And they were so eccentrically "book" oriented that they fought a certain paralysis over the demands of live performance.

There was an astonishing admission from George Harrison fairly early in the film: I’m paraphrasing, but he found it incredible to hear Eric Clapton launch into lengthy guitar improvisations and then somehow end up “in the right place”. And Harrison said, “I just can’t do that.” I love George Harrison’s guitar work, and he wrote some wonderful songs, but the first statement sounds like something one might have heard from a newbie at a Grateful Dead concert. His lack of improvisational confidence puts emphasis on the idea that he was, in fact, a “book musician”.

For the Beatles, in 1969 at least, improvisation, or just playing around, was fine for a bit of fun in the studio, or to loosen up. They tended toward old rock 'n' roll material or messed around with their own, older stuff, often to comic effect. And John Lennon was very funny, by the way. But musical improvisation wasn't the emphasis, and the idea of doing it on stage, playing off the cuff before a live audience, was out of the question.

Meanwhile, improvisation had been an active pursuit among jazz musicians almost from the beginning. It was inherently a looser form than what the Beatles wanted to do. The jam band genre was an extension of the jazz aesthetic into adjacent musical forms like blues, rock, and even country. The Grateful Dead pioneered the jam band “form”, if that word can be used, but in any case, improvisation, or a loose approach to live performance with spontaneous creativity, was widespread in the late 1960s. That’s definitely not where the Beatles were at.

The Beatles were a wonderful band, brilliant songwriters, poets, and musicians. They also were driven by perfectionism, at least at the late stages of their time together. Improvisation was not their “cup of tea”, as it were. They had strong reasons for their reluctance to play live after their 1966 tour. By 1969, they hesitated to do even one concert before a smaller audience. The tentative “show date” on their calendar seemed like an approaching freight train, and they dithered over the kind of show it would be and where it would be staged. Finally, the rooftop of Apple Studios was selected with just a couple of days to go. It was an interesting promotional stunt, but it seemed like a cop-out. Not many people could really see them up there, and the sound quality on the street was probably a very mixed bag. Still, Get Back! was a lot of fun to watch. And I do love the Beatles, even if I love the music and often careening style of the original jam band much more.