Sweden’s Pandemic Policy: Arguably Best Practice

When Covid-19 began its awful worldwide spread in early 2020, the Swedes made an early decision that ultimately proved to be as protective of human life as anything chosen from the policy menu elsewhere. Sweden decided to focus on approaches for which there was evidence of efficacy in containing respiratory pandemics, not mere assertions by public health authorities (or anyone else) that stringent non-pharmaceutical interventions (NPIs) were necessary or superior.

The Swedish Rationale

The following appeared in an article in Stuff in late April 2020:

Professor Johan Giesecke, who first recruited [Sweden’s State epidemiologist Anders] Tegnell during his own time as state epidemiologist, used a rare interview last week to argue that the Swedish people would respond better to more sensible measures. He blasted the sort of lockdowns imposed in Britain and Australia and warned a second wave would be inevitable once the measures are eased. ‘… when you start looking around at the measures being taken by different countries, you find very few of them have a shred of evidence-base,’ he said.

Giesecke, who has served as the first Chief Scientist of the European Centre for Disease Control and has been advising the Swedish Government during the pandemic, told the UnHerd website there was “almost no science” behind border closures and school closures and social distancing and said he looked forward to reviewing the course of the disease in a year’s time.

Giesecke was of the opinion that there would ultimately be little difference in Covid mortality across countries with different pandemic policies. Therefore, the least disruptive approach was to be preferred. That meant allowing people to go about their business, disseminating information to the public regarding symptoms and hygiene, and attempting to protect the most vulnerable segments of the population. Giesecke said:

I don’t think you can stop it. It’s spreading. It will roll over Europe no matter what you do.

He was right. Sweden had a large number of early Covid deaths, primarily due to its large elderly population and its difficulty in crafting effective health messaging for immigrant communities in crowded enclaves, many of whom did not speak Swedish. Nevertheless, two years later, Sweden has posted extremely good results in terms of excess deaths during the pandemic.

Excess Deaths

Excess deaths, or deaths relative to projections based on historical averages, are a better metric than Covid deaths (per million) for cross-country or jurisdictional comparisons. Among other reasons, the latter are subject to significant variations in methods of determining cause of death. Moreover, there was a huge disparity between excess deaths and Covid deaths during the pandemic, and the gap is still growing:

Excess deaths varied widely across countries, as illustrated by the left-hand side of the following chart:

Interestingly, most of the lowest excess death percentages were in Nordic countries, especially Sweden and Norway. That might be surprising given the high Nordic latitudes, which limit sun exposure and may contribute to low vitamin D levels. Norway enacted more stringent public policies during the pandemic than Sweden. Globally, however, lockdown measures showed no systematic advantage in terms of excess deaths. Notably, the U.S. did quite poorly in terms of excess deaths, at 8X the Swedish rate.
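For reference, the metric underlying these comparisons can be written simply (a minimal formalization; actual data sources differ in how they construct the baseline):

```latex
\text{Excess deaths}_t = D_t - \widehat{D}_t
```

where D_t is observed deaths from all causes in period t and D̂_t is a baseline projection based on historical averages (e.g., the preceding five years).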

Covid Deaths

The right-hand side of the chart above shows that Sweden experienced a significant number of Covid deaths per million residents. The figure still compares reasonably well internationally, despite the country’s fairly advanced age demographics. As elsewhere, the bulk of Sweden’s Covid deaths occurred among the elderly, especially in care settings. Note that U.S. Covid deaths per million were more than 50% higher than in Sweden.

NPIs Are Often Deadly

Perhaps a more important reason to emphasize excess deaths over Covid deaths is that public policy itself had disastrous consequences in many countries. In particular, strict NPIs like lockdowns, including school and business closures, can undermine public health in significant ways. That includes the inevitably poor consequences of deferred health care, the more rapid spread of Covid within home environments, the physical and psychological stress from loss of livelihood, and the toll of isolation, including increased use of alcohol and drugs, less exercise, and binge eating. Isolation is particularly hard on the elderly and led to an increase in “deaths of despair” during the pandemic. These were the kinds of maladjustments caused by lockdowns that led to greater excess deaths. Sweden avoided much of that by eschewing stringent NPIs, and Iceland is sometimes cited as a similar case.

Oxford Stringency Index

I should note here, and this is a digression, that the most commonly used summary measure of policy “stringency” is not especially trustworthy. That measure is an index produced by Oxford University that is available on the Our World In Data web site. Joakim Book documented troubling issues with this index in late 2020, after changes in the index’s weightings dramatically altered its levels for Nordic countries. As Book said at that time:

Until sometime recently, Sweden, which most media coverage couldn’t get enough of reporting, was the least stringent of all the Nordics. Life was freer, pandemic restrictions were less invasive, and policy responses less strong; this aligned with Nordic people’s experience on the ground.

Again, Sweden relied on voluntary action to limit the spread of the virus, including encouragement of hygiene, social distancing, and avoiding public transportation when possible. Book was careful to note that “Sweden did not ‘do nothing’”, but its policies were less stringent than those of its Nordic neighbors in several ways. While Sweden had the same restrictions on arrivals from outside the European Economic Area as the rest of the EU, it did not impose quarantines, testing requirements, or other restrictions on travelers or on internal movements. Sweden’s school closures were short-lived, and its masking policies were liberal. The late-2020 changes in the Oxford Stringency Index, Book said, simply did not “pass the most rudimentary sniff test”.

Economic Stability

Sweden’s economy performed relatively well during the pandemic. The growth path of real GDP was smoother than in most countries that succumbed to the excessive precautions of lockdowns. However, Norway’s economy appears to have been the most stable of those shown on the chart, at least in terms of real output, though it did suffer a spike in unemployment.

The Bottom Line

The big lesson is that Sweden’s “light touch” during the pandemic proved to be at least as effective, if not more so, than comparatively stringent policies imposed elsewhere. Covid deaths were sure to occur, but widespread non-Covid excess deaths were unanticipated by many countries practicing stringent intervention. That lack of foresight is best understood as a consequence of blind panic among public health “experts” and other policymakers, who too often are rewarded for misguided demonstrations that they have “done something”. Those actions failed to stop the spread in any systematic sense, but they managed to do great damage to other aspects of public health. Furthermore, they undermined economic well-being and the cause of freedom. Johan Giesecke was right to be skeptical of those claiming they could contain the virus through NPIs, though he never anticipated the full extent to which aggressive interventions would prove deadly.

Biden’s Rx Price Controls: Cheap Politics Over Cures

You can expect dysfunction when government intervenes in markets, and health care markets are no exception. The result is typically over-regulation, increased industry concentration, lower-quality care, longer waits, and higher costs to patients and taxpayers. The pharmaceutical industry is one of several tempting punching bags for ambitious politicians eager to “do something” in the health care arena. These firms, however, have produced many wonderful advances over the years, incurring huge research, development, and regulatory costs in the process. Reasonable attempts to recoup those costs often mean conspicuously high prices, which put a target on their backs for the likes of those willing to characterize return of capital and profit as ill-gotten.

Biden Flunks Econ … Again

Lately, under political pressure brought on by escalating inflation, Joe Biden has been talking up efforts to control the prices of prescription drugs for Medicare beneficiaries. Anyone with a modicum of knowledge about markets should understand that price controls are a fool’s errand. Price controls don’t make good policy unless the goal is to create shortages.
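A textbook illustration makes the shortage point concrete. The linear demand and supply curves here are assumed purely for the example:

```latex
Q_d = 100 - P, \qquad Q_s = P \quad\Rightarrow\quad P^* = 50 .
```

A price ceiling at P = 30 yields quantity demanded of 70 but quantity supplied of only 30: a shortage of 40 units that must be resolved by queues, rationing, or black markets.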

The preposterously-named Inflation Reduction Act is an example of this sad political dynamic. Reducing inflation is something the Act won’t do! Here is Wikipedia’s summary of the prescription drug provisions, which is probably adequate for now:

Prescription drug price reform to lower prices, including Medicare negotiation of drug prices for certain drugs (starting at 10 by 2026, more than 20 by 2029) and rebates from drug makers who price gouge… .

The law also caps insulin costs at $35/month and will cap out-of-pocket drug costs at $2,000 per year for people on Medicare, among other provisions.

Unpacking the Blather

“Price gouging”, of course, is a well-worn term of art among anti-market propagandists. In this case its meaning appears to be any form of non-compliance, including those for which fees and rebates are anticipated.

The insulin provision is responsive to a long-standing and misleading allegation that insulin is unavailable at reasonable prices. In fact, insulin is already available at zero cost as durable medical equipment under Medicare Part B for diabetics who use insulin pumps. Some types and brands of insulin are available at zero cost for uninsured individuals. A simple internet search on insulin under Medicare yields several sources of cheap insulin. GoodRx also offers brands at certain pharmacies at reasonable costs.

As for the cap on out-of-pocket spending under Part D, limiting the patient’s payment responsibility is a bad way to bring price discipline to the market. Excessive third-party shares of medical payments have long been implicated in escalating health care costs. That reality has eluded advocates of government health care, or perhaps they simply prefer escalating costs in the form of health care tax burdens.

Negotiated Theft

The Act’s adoption of the term “negotiation” is a huge abuse of that word’s meaning. David R. Henderson and Charles Hooper offer the following clarification about what will really happen when the government sits down with the pharmaceutical companies to discuss prices:

Where CMS is concerned, ‘negotiations’ is a ‘Godfather’-esque euphemism. If a drug company doesn’t accept the CMS price, it will be taxed up to 95% on its Medicare sales revenue for that drug. This penalty is so severe, Eli Lilly CEO David Ricks reports that his company treats the prospect of negotiations as a potential loss of patent protection for some products.

The first list of drugs for which prices will be “negotiated” by CMS won’t take effect until 2026. However, in the meantime, drug companies will be prohibited from increasing the price of any drug sold to Medicare beneficiaries by more than the rate of inflation. Price control is the correct name for these policies.

Death and Cost Control

Henderson and Hooper chose a title for their article that is difficult for the White House and legislators to comprehend: “Expensive Prescription Drugs Are a Bargain“. The authors first note that 9 out of 10 prescription drugs sold in the U.S. are generics. That makes it easy to condemn the high price tags on a few newer drugs, but those drugs are invaluable to the patients whose lives they extend, and their numbers aren’t trivial.

Despite the protestations of certain advocates of price controls and the CBO’s guesswork on the matter, the price controls will stifle the development of new drugs and ultimately cause unnecessary suffering and lost life-years for patients. This reality is made all too clear by Joe Grogan in the Wall Street Journal in “The Inflation Reduction Act Is Already Killing Potential Cures” (probably gated). Grogan cites the cancellation of drugs under development or testing by three different companies: one for an eye disease, another for certain blood cancers, and one for gastric cancer. These cancellations won’t be the last.

Big Pharma Critiques

The pharmaceutical industry certainly has other grounds for criticism. Some of it has to do with government extensions of patent protection, which prolong guaranteed monopolies beyond points that may exceed what’s necessary to compensate for the high risk inherent in original investments in R&D. It can also be argued, however, that the FDA approval process increases drug development costs unreasonably, and it sometimes prevents or delays good drugs from coming to market. See here for some findings on the FDA’s excessive conservatism, limiting choice in dire cases for which patients are more than willing to risk complications. Pricing transparency has been another area of criticism. The refusal to release detailed data on the testing of Covid vaccines represents a serious breach of transparency, given what many consider to have been inadequate testing. Big pharma has also been condemned for the opioid crisis, but restrictions on opioid prescriptions were never a logical response to opioid abuse. (Also see here, including some good news from the Supreme Court on a more narrow definition of “over-prescribing”.)

Bad policy is often borne of short-term political objectives and a neglect of foreseeable long-term consequences. It’s also frequently driven by a failure to understand the fundamental role of profit incentives in driving innovation and productivity. This is a manifestation of the short-term focus afflicting many politicians and members of the public, which is magnified by the desire to demonize a sector of the economy that has brought undeniable benefits to the public over many years. The price controls in Biden’s Inflation Reduction Act are a sure way to short-circuit those benefits. Those interventions effectively destroy other incentives for innovation created by legislation over several decades, as Joe Grogan describes in his piece. If you dislike pharma pricing, look to reform of patenting and the FDA approval process. Those are far better approaches.

Wind and Solar Power: Brittle, Inefficient, and Destructive

Just how renewable is “renewable” energy, or more specifically solar and wind power? Intermittent though they are, the wind will always blow and the sun will shine (well, half a day with no clouds). So the possibility of harvesting energy from these sources is truly inexhaustible. Obviously, it also takes man-made hardware to extract electric power from sunshine and wind — physical capital — and it is quite costly in several respects, though taxpayer subsidies might make it appear cheaper to investors and (ultimately) users. Man-made hardware is damaged, wears out, malfunctions, or simply fails for all sorts of reasons, and it must be replaced from time to time. Furthermore, man-made hardware such as solar panels, wind turbines, and the expansions to the electric grid needed to bring the power to users requires vast resources and not a little in the way of fossil fuels. The word “renewable” is therefore something of a misnomer when it comes to solar and wind facilities.

Solar Plant

B. F. Randall (@Mining_Atoms) has a Twitter thread on this topic, or actually several threads (see below). The first thing he notes is that solar panels require polysilicon, which is not recyclable. Disposal presents severe hazards of its own, and to replace old solar panels, polysilicon must be produced. For that, Randall says you need high-purity silica from quartzite rock, high-purity coking coal, diesel fuel, and large flows of dispatchable (not intermittent) electric power. To get quartzite, you need carbide drilling tools, which are not renewable. You also need to blast rock using ammonium nitrate fuel oil derived from fossil fuels. Then the rock must be crushed and often milled into fine sand, which requires continuous power. The high temperatures required to create silicon are achieved with coking coal, which is also used in iron and steel making, but coking coal is non-renewable. The whole process requires massive amounts of electricity generated with fossil fuels. Randall calls polysilicon production “an electricity beast”.

Greenwashing

The resulting carbon emissions are, in reality, unlikely to be offset by any quantity of carbon credits these firms might purchase, which allow them to claim a “zero footprint”. Blake Lovewall describes the sham in play here:

The biggest and most common Carbon offset schemes are simply forests. Most of the offerings in Carbon marketplaces are forests, particularly in East Asian, African and South American nations. …

The only value being packaged and sold on these marketplaces is not cutting down the trees. Therefore, by not cutting down a forest, the company is maintaining a ‘Carbon sink’ …. One is paying the landowner for doing nothing. This logic has an acronym, and it is slapped all over these heralded offset projects: REDD. That is a UN scheme called ‘Reduce Emissions from Deforestation and Forest Degradation’. I would re-name it to, ‘Sell off indigenous forests to global investors’.

Lovewall goes on to explain that these carbon offset investments do not ensure that forests remain pristine by any stretch of the imagination. For one thing, the requirements for managing these “preserves” are often subject to manipulation by investors working with government; as such, the credits are often a vehicle for graft. In Indonesia, for example, carbon-credited forests have been converted to palm oil plantations without any loss of value to the credits! Lovewall also cites a story about carbon offset investments in Brazil, where the credits provided capital for a massive dam in the middle of the rainforest. This had severe environmental and social consequences for indigenous peoples. It’s also worth noting that planting trees, wherever that might occur under carbon credits, takes many years to become a real carbon sink.

While I can’t endorse all of Lovewall’s points of view, he makes a strong case that carbon credits are a huge fraud. They do little to offset carbon generated by the entities that purchase them as offsets. Again, the credits are very popular with the manufacturers and miners who participate in the fabrication of physical capital for renewable energy installations and who wish to “greenwash” their activities.

Wind Plant

Randall discusses the non-renewability of wind turbines in a separate thread. Turbine blades, he writes, are made from epoxy resins, balsa wood, and thermoplastics. They wear out, along with gears and other internal parts, and must be replaced. Land disposal is safe and cheap, but recycling is costly and requires even greater energy input than the use of virgin feedstocks. Randall’s thread on turbines raised some hackles among wind energy defenders and even a few detractors, and Randall might have overstated his case in one instance, but the main thrust of his argument is irrefutable: it’s very costly to recycle these components into other usable products. Entrepreneurs are still trying to work out processes for doing so. It’s not clear that recycling the blades into other products is more efficient than sending them to landfills, as the recycling processes are resource intensive.

But even then, the turbines must be replaced. Recycling the old blades into crates and flooring and what have you, and producing new wind turbines, requires lots of power. And as Randall says, replacement turbines require huge ongoing quantities of zinc, copper, cement, and fossil fuel feedstocks.

The Non-Renewability of Plant

It shouldn’t be too surprising that renewable power machinery is not “renewable” in any sense, despite the best efforts of advocates to convince us of their ecological neutrality. Furthermore, the idea that the production of this machinery will be “zero carbon” any time in the foreseeable future is absurd. In that respect, this is about like the ridiculous claim that electric vehicles (EVs) are “zero emission”, or the fallacy that we can achieve a zero carbon world based on renewable power.

It’s time the public came to grips with the reality that our heavy investments in renewables are not “renewable” in the ecological sense. Those investments, and reinvestments, merely buy us what Randall calls “garbage energy”, by which he means that it cannot be relied upon. Burning garbage to create steam is actually a more reliable power source.

Highly Variable With Low Utilization

Randall links to information provided by Martian Data (@MartianManiac1) on Europe’s wind energy generation as of September 22, 2022 (see the tweet for Martian Data’s sources):

Hourly wind generation in Europe for past 6 months:
Max: 122GW
Min: 10.2GW
Mean: 41.0GW
Installed capacity: ~236GW
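Those figures imply the utilization rate and cost multiple cited below. As a quick arithmetic check:

```latex
\frac{41.0\ \text{GW}}{236\ \text{GW}} \approx 17.4\%, \qquad \frac{1}{0.174} \approx 5.7 .
```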

That’s a whopping 17.4% utilization factor! That’s pathetic, and it means the effective cost is quintuple the value at nameplate capacity. Take a look at this chart comparing the levels and variations in European power demand, nuclear generation, and wind generation over the six months ending September 22nd (if you have trouble zooming in here, try going to the thread):

The various colors represent different countries. Here’s a larger view of the wind component:

A stable power grid cannot be built upon this kind of intermittency. Here is another comparison that includes solar power. This chart is daily, covering 2021 through about May 26, 2022.

As for solar capacity utilization, it too is unimpressive. Here is Martian Data’s note on this point, followed by a chart of solar generation over the course of a few days in June:

so ~15% solar capacity is whole year average. ~5% winter ~20% summer. And solar is brief in summer too…, it misses both morning and evening peaks in demand.

Like wind, the intermittency of solar power makes it an impractical substitute for traditional power sources. Check out Martian Data’s Twitter feed for updates and charts from other parts of the world.

Nuclear Efficiency

Nuclear power generation is an excellent source of baseload power. It is dispatchable and zero carbon except at plant construction. It also has an excellent safety record, and newer, modular reactor technologies are safer yet. It is cheaper in terms of generating capacity and it is more flexible than renewables. In fact, in terms of the resource costs of nuclear power vs. renewables over plant cycles, it’s not even close. Here’s a chart recently posted by Randall showing input quantities per megawatt hour produced over the expected life of each kind of power facility (different power sources are labeled at bottom, where PV = photovoltaic (solar)):

In fairness, I’m not completely satisfied with these comparisons. They should be stated in terms of current dollar costs, which would neutralize differences in input densities and reflect relative scarcities. Nevertheless, the differences in the chart are stark. Nuclear produces cheap, reliable power.

The Real Dirt

Solar and wind power are low utilization power sources and they are intermittent. Heavy reliance on these sources creates an extremely brittle power grid. Also, we should be mindful of the vast environmental degradation caused by the mining of minerals needed to produce solar panels and wind turbines, including their inevitable replacements, not to mention the massive land use requirements of wind and solar power. Also disturbing is the hazardous dumping of old solar panels from the “first world” now taking place in less developed countries. These so-called clean-energy sources are anything but clean or efficient.

Stealth Hiring Quotas Via AI

Hiring quotas are of questionable legal status, but for several years, some large companies have been adopting quota-like “targets” under the banner of Diversity, Equity and Inclusion (DEI) initiatives. Many of these so-called targets apply to the placement of minority candidates into “leadership positions”, and some targets may apply more broadly. Explicit quotas have long been viewed negatively by the public. Quotas have also been proscribed under most circumstances by the Supreme Court, and the EEOC’s Compliance Manual still includes rigid limits on when the setting of minority hiring “goals” is permissible.

Yet large employers seem to prefer the legal risks posed by aggressive DEI policies to the risk of lawsuits by minority interests, unrest among minority employees and “woke” activists, and “disparate impact” inquiries by the EEOC. Now, as Stewart Baker writes in a post over at the Volokh Conspiracy, employers have a new way of improving — or even eliminating — the tradeoff they face between these risks: “stealth quotas” delivered via artificial intelligence (AI) decisioning tools.

Skynet Smiles

A few years ago I discussed the extensive use of algorithms to guide a range of decisions in “Behold Our Algorithmic Overlords“. There, I wrote:

Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these ‘use-cases’ are already happening to one extent or another.

That post dealt primarily with the use of algorithms by large tech companies to suppress information and censor certain viewpoints, a danger still of great concern. However, the use of AI to impose de facto quotas in hiring is a phenomenon that will unequivocally reduce the efficiency of the labor market. But exactly how does this mechanism work to the satisfaction of employers?

Machine Learning

As Baker explains, AI algorithms are “trained” to find optimal solutions to problems via machine learning techniques, such as neural networks, applied to large data sets. These techniques are not as straightforward as more traditional modeling approaches such as linear regression, which more readily lend themselves to intuitive interpretation of model results. Baker uses the example of lung x-rays showing varying degrees of abnormalities, which range from the appearance of obvious masses in the lungs to apparently clear lungs. Machine learning algorithms sometimes accurately predict the development of lung cancer in individuals based on clues that are completely non-obvious to expert evaluators. This, I believe, is a great application of the technology. It’s too bad that the intuition behind many such algorithmic decisions is often impossible to discern. And the application of AI decisioning to social problems is troubling, not least because it necessarily reduces the richness of individual qualities to a set of data points, and in many cases, defines individuals based on group membership.
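To make the interpretability contrast concrete, here is a minimal sketch on synthetic data (the feature setup is hypothetical, and real hiring models are far more elaborate) comparing a linear regression, whose fitted weights can be read directly, with a small neural network, whose weights cannot:

```python
# Synthetic illustration: interpretable linear model vs. opaque neural net.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # three hypothetical applicant features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

lin = LinearRegression().fit(X, y)
print(lin.coef_)   # ~[2.0, -1.0, 0.0]: each coefficient has a direct reading

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
print(net.coefs_[0].shape)  # (3, 16): hundreds of weights across the layers,
                            # none with any direct reading
```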

When it comes to hiring decisions, an AI algorithm can be trained to select the “best” candidate for a position based on all encodable information available to the employer, but the selection might not align with a hiring manager’s expectations, and it might be impossible to explain the reasons for the choice to the manager. Still, giving the AI algorithm the benefit of the doubt, it would tend to make optimal candidate selections across reasonably large sets of similar, open positions.

Algorithmic Bias

A major issue with respect to these algorithms has been called “algorithmic bias”. Here, I limit the discussion to hiring decisions. Ironically, “bias” in this context is a rather slanted description, but what’s meant is that the algorithms tend to select fewer candidates from “protected classes” than their proportionate shares of the general population. This is more along the lines of so-called “disparate impact”, as opposed to “bias” in the statistical sense. Baker discusses the attacks this has provoked against algorithmic decision techniques. In fact, a privacy bill containing provisions to address “AI bias”, the American Data Privacy and Protection Act (ADPPA), is pending before Congress. Baker is highly skeptical of claims regarding AI bias, both because he believes they have little substance and because “bias” probably means that AIs sometimes make decisions that don’t please DEI activists. Baker elaborates on these developments:

The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House energy and commerce committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.

What the hell are the Republicans thinking? Whether or not it becomes a matter of law, misplaced concern about AI bias can be addressed in a practical sense by introducing the “right” constraints to the algorithm, such as a set of aggregate targets for hiring across pools of minority and non-minority job candidates. Then, the algorithm still optimizes, but the constraints impinge on the selections. The results are still “optimal”, but in a more restricted sense.
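As a minimal sketch of how such a constraint impinges on the selections (all names, scores, and the selection rule here are hypothetical; production hiring tools are far more complex and opaque):

```python
# How an aggregate "target" constraint changes an otherwise score-optimal
# selection. Data and logic are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float      # the algorithm's fitted "quality" score
    protected: bool   # membership in a protected class

def select(candidates, k, min_protected=0):
    """Pick k candidates by score, subject to a minimum count of
    protected-class selections (the stealth quota)."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    picks = [c for c in ranked if c.protected][:min_protected]
    remaining = [c for c in ranked if c not in picks]
    picks += remaining[: k - len(picks)]
    return picks

pool = [Candidate("A", 0.91, False), Candidate("B", 0.88, False),
        Candidate("C", 0.74, True),  Candidate("D", 0.69, True)]

print([c.name for c in select(pool, 2)])                   # ['A', 'B']
print([c.name for c in select(pool, 2, min_protected=1)])  # ['C', 'A']
```

The constrained call returns a different slate than the pure score ranking, which is exactly the quota’s effect, yet nothing in the output flags the displaced candidate.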

Stealth Quotas

As Baker says, these constraints on algorithmic tools would constitute a way of imposing quotas on hiring that employers won’t really have to explain to anyone. That’s because: 1) the decisioning rationale is so opaque that it can’t readily be explained; and 2) the decisions are perceived as “fair” in the aggregate due to the absence of disparate impacts. As to #1, however, the vendors who create hiring algorithms, and specific details regarding algorithm development, might well be subject to regulatory scrutiny. In the end, the chief concern of these regulators is the absence of disparate impacts, which is cinched by #2.

About a month ago I posted about the EEOC’s outrageous and illegal enforcement of disparate impact liability. Should I welcome AI interventions because they’ll probably limit the number of enforcement actions against employers by the EEOC? After all, there is great benefit in avoiding as much of the rigamarole of regulatory challenges as possible. Nonetheless, as a constraint on hiring, quotas necessarily reduce productivity. By adopting quotas, either explicitly or via AI, the employer foregoes the opportunity to select the best candidate from the full population for a certain share of open positions, and instead limits the pool to narrow demographics.

Demographics are dynamic, and therefore stealth quotas must be dynamic to continue to meet the demands of zero disparate impact. But what happens as an increasing share of the population is of mixed race? Do all mixed race individuals receive protected status indefinitely, gaining preferences via algorithm? Does one’s protected status depend solely upon self-identification of racial, ethnic, or gender identity?

For that matter, do Asians receive hiring preferences? Sometimes they are excluded from so-called protected status because, as a minority, they have been “too successful”. Then, for example, there are issues such as the classification of Hispanics of European origin, who are likely to help fill quotas that are really intended for Hispanics of non-European descent.

Because self-identity has become so critical, quotas present massive opportunities for fraud. Furthermore, quotas often put minority candidates into positions at which they are less likely to be successful, with damaging long-term consequences to both the employer and the minority candidate. And of course there should remain deep concern about the way quotas violate the constitutional guarantee of equal protection to many job applicants.

The acceptance of AI hiring algorithms in the business community is likely to depend on the nature of the positions to be filled, especially when they require highly technical skills and/or the pool of candidates is limited. Of course, there can be tensions between hiring managers and human resources staff over issues like screening job candidates, but HR organizations are typically charged with spearheading DEI initiatives. They will be only too eager to adopt algorithmic selection and stealth quotas for many positions and will probably succeed, whether hiring departments like it or not.

The Death of Merit

Unfortunately, quotas are socially counter-productive, and they are not a good way around the dilemma posed by the EEOC’s aggressive enforcement of disparate impact liability. The latter can be solved only when Congress acts to more precisely define the bounds of illegal discrimination in hiring. Meanwhile, stealth quotas cede control over important business decisions to external vendors selling algorithms that are often unfathomable. Quotas discard judgements as to relevant skills in favor of awarding jobs based on essentially superficial characteristics. This creates an unnecessary burden on producers, even if it goes unrecognized by those very firms and is self-inflicted. Even worse, once these algorithms and stealth quotas are in place, they are likely to become heavily regulated and manipulated in order to achieve political goals.

Baker sums up a most fundamental objection to quotas thusly:

Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.

Quotas, and stealth quotas, substitute overt discrimination against individuals in non-protected classes, and sometimes against individuals in protected classes as well, for the imagined sin of a disparate impact that might occur when the best candidate is hired for a job. AI algorithms with protection against “algorithmic bias” don’t satisfy this objection. In fact, the lack of accountability inherent in this kind of hiring solution makes it far worse than the status quo.

Hurricane—Warming Link Is All Model, No Data

There was deep disappointment among political opponents of Florida Governor Ron DeSantis at their inability to pin blame on him for Hurricane Ian’s destruction. It was a terrible hurricane, but they so wanted it to be “Hurricane Hitler”, as Glenn Reynolds noted with tongue in cheek. That just didn’t work out for them, given DeSantis’ competent performance in marshaling resources for aid and cleanup from the storm. Their last-ditch refuge was to condemn DeSantis for dismissing the connection they presume to exist between climate change and hurricane frequency and intensity. That criticism didn’t seem to stick, however, and it shouldn’t.

There is no linkage to climate change in actual data on tropical cyclones. It is a myth. Yes, models of hurricane activity have been constructed that embed assumptions leading to predictions of more hurricanes, and more intense hurricanes, as temperatures rise. But these are models constructed as simplified representations of hurricane development. The following quote from the climate modelers at the Geophysical Fluid Dynamics Laboratory (GFDL) (a division of the National Oceanic and Atmospheric Administration (NOAA)) is straightforward on this point (emphases are mine):

Through research, GFDL scientists have concluded that it is premature to attribute past changes in hurricane activity to greenhouse warming, although simulated hurricanes tend to be more intense in a warmer climate. Other climate changes related to greenhouse warming, such as increases in vertical wind shear over the Caribbean, lead to fewer yet more intense hurricanes in the GFDL model projections for the late 21st century.

Models typically are said to be “calibrated” to historical data, but no one should take much comfort in that. As a long-time econometric modeler myself, I can say without reservation that such assurances are flimsy, especially with respect to “toy models” containing parameters that aren’t directly observable in the available data. In such a context, a modeler can take advantage of tremendous latitude in choosing parameters to include, sensitivities to assume for unknowns or unmeasured relationships, and historical samples for use in “calibration”. Sad to say, modelers can make these models do just about anything they want. The cautious approach to claims about model implications is a credit to GFDL.

Before I get to the evidence on hurricanes, it’s worth remembering that the entire edifice of climate alarmism relies not just on the temperature record, but on models based on other assumptions about the sensitivity of temperatures to CO2 concentration. The models relied upon to generate catastrophic warming assume very high sensitivity, and those models have a very poor track record of prediction. Estimates of sensitivity are highly uncertain, and this article cites research indicating that the IPCC’s assumptions about sensitivity are about 50% too high. And this article reviews recent findings that carbon sensitivity is even lower, about one-third of what many climate models assume. In addition, this research finds that sensitivities are nearly impossible to estimate from historical data with any precision because the record is plagued by different sources and types of atmospheric forcings, accompanying aerosol effects on climate, and differing half-lives of various greenhouse gases. If sensitivities are as low as discussed at the links above, it means that predictions of warming have been grossly exaggerated.
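To see what’s at stake in those sensitivity estimates, note the standard simplification (used here purely for illustration; the models themselves are far more complex) that equilibrium warming scales with the logarithm of CO2 concentration:

```latex
\Delta T \approx S \cdot \frac{\ln(C/C_0)}{\ln 2} ,
```

so a doubling of CO2 produces warming equal to the sensitivity S itself. If a model assumes S of roughly 3°C per doubling while the true value is a third of that, as the research above suggests, the model overstates the warming from a doubling by about 2°C.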

The evidence that hurricanes have become more frequent or severe, or that they now intensify more rapidly, is basically nonexistent. Ryan Maue and Roger Pielke Jr. (the latter of the University of Colorado) have both researched hurricanes extensively for many years. They described their compilation of data on land-falling hurricanes in this Forbes piece in 2020. They point out that hurricane activity in older data is much more likely to have been missed or undercounted, especially storms that never made landfall. That’s one of the reasons for the focus on landfalling hurricanes to begin with. With the advent of satellite data, storms are highly unlikely to be missed, but even landfalls have sometimes gone unreported historically. The farther back one goes, the less is known about the extent of hurricane activity, but Pielke and Maue feel that post-1970 data is fairly comprehensive.

The chart at the top of this post is a summary of the data that Pielke and Maue have compiled. There are no obvious trends in terms of the number of storms or their strength. The 1970s were quiet while the 90s were more turbulent. The absence of trends also characterizes NOAA’s data on U.S. landfalling hurricanes since 1851, as noted by Paul Driessen. Here is Driessen on Florida hurricane history:

Using pressure, Ian was not the fourth-strongest hurricane in Florida history but the tenth. The strongest hurricane in U.S. history moved through the Florida Keys in 1935. Among other Florida hurricanes stronger than Ian was another Florida Keys storm in 1919. This was followed by the hurricanes in 1926 in Miami, the Palm Beach/Lake Okeechobee storm in 1928, the Keys in 1948, and Donna in 1960. We do not know how strong the hurricane in 1873 was, but it destroyed Punta Rassa with a 14-foot storm surge. Punta Rassa is located at the mouth of the river leading up to Ft. Myers, where Ian made landfall.

Neil L. Frank, veteran meteorologist and former head of the National Hurricane Center, bemoans the changed conventions for assigning names to storms in the satellite era. A typical clash of warm and cold air will often produce thunderstorms and wind, but few such systems were assigned names under older conventions, since they only occasionally develop into tropical cyclones. Many of those kinds of storms are named today. Right or wrong, that gives the false impression of a trend in the number of named storms. Not only is it easier to identify storms today, given the advent of satellite data, but storms are assigned names more readily, even if they don’t strictly meet the definition of a tropical cyclone. It’s a wonder that certain policy advocates get away with calling the outcome of all this a legitimate trend!

As Frank insists, there is no evidence of a trend toward more frequent and powerful hurricanes during the last several decades, and there is no evidence of rapid intensification. More importantly, there is no evidence that climate change is leading to more hurricane activity. It’s also worth noting that today we suffer far fewer casualties from hurricanes owing to much earlier warnings, better precautions, and better construction.

Hiring Discrimination In the U.S., Canada, and Western Europe

Some people have the impression that the U.S. is uniquely bad in terms of racial, ethnic, gender, and other forms of discrimination. This misapprehension is almost as grossly in error as the belief held in some circles that the history of slavery is uniquely American, when in fact the practice has been so common historically, and throughout the world, as to be the rule rather than the exception.

This week, Alex Tabarrok shared some research I’d never seen on one kind of discriminatory behavior. In his post, “The US has Relatively Low Rates of Hiring Discrimination”, he cites the findings of a 2019 meta-study of “… 97 Field Experiments of Racial Discrimination in Hiring”. The research focused on several Western European countries, Canada, and the U.S. The experiments involved the use of “faux applicants” for actual job openings. Some studies used applications only and were randomized across different racial or ethnic cues for otherwise similar applicants. Other studies paired similar individuals of different racial or ethnic background for separate in-person interviews.

The authors found that hiring discrimination is fairly ubiquitous against non-white groups across employers in these countries. The authors were careful to note that the study did not address levels of hiring discrimination in countries outside the area of the study. They also disclaimed any implication about other forms of discrimination within the covered countries, such as bias in lending or housing.

The study’s point estimates indicated “ubiquitous hiring discrimination”, though not all the estimates were statistically significant. My apologies if the chart below is difficult to read. If so, try zooming in, clicking on it, or following the link to the study above.

Some of the largest point estimates were highly imprecise due to less coverage by individual studies. The impacted groups and severity varied across countries. Blacks suffered significant discrimination in the U.S., Canada, France, and Great Britain. For Hispanics, the only coverage was in the U.S. and sparsely in Canada. The point estimates showed discrimination in both countries, but it was (barely) significant only in the U.S. For Middle Eastern and North African (MENA) applicants, discrimination was severe in France, the Netherlands, Belgium, and Sweden. Asian applicants faced discrimination in France, Norway, Canada, and Great Britain.

Across all countries, the group suffering the least hiring discrimination was white immigrants, followed by Latin Americans / Hispanics (but only two countries were covered). Asians seemed to suffer the most discrimination, though not significantly more than Blacks (and less in the U.S. than in France, Norway, Canada, and Great Britain). Blacks and MENA applicants suffered a bit less than Asians from hiring discrimination, but again, not significantly less.

Comparing countries, the authors used U.S. hiring discrimination as a baseline, assigning a value of one. France had the most severe hiring discrimination and at a high level of significance. Sweden was next highest, but it was not significantly higher than in the U.S. Belgium, Canada, the Netherlands and Great Britain had higher point estimates of overall discrimination than the U. S., though none of those differences were significant. Employers in Norway were about as discriminatory as the U.S., and German employers were less discriminatory, though not significantly.

The upshot is that as a group, U.S. employers are generally at the low end of the spectrum in terms of discriminatory hiring. Again, the intent of this research was not to single out the selected countries. Rather, these countries were chosen because relevant studies were available. In fact, Tabarrok makes the following comment, which the authors probably wouldn’t endorse and is admittedly speculative, but I suspect it’s right:

I would bet that discrimination rates would be much higher in Japan, China and Korea not to mention Indonesia, Iraq, Nigeria or the Congo. Understanding why discrimination is lower in Western capitalist democracies would reorient the literature in a very useful way.

So the U.S. is not on the high-side of this set of Western countries in terms of discriminatory hiring practices. While discrimination against blacks and Hispanics in the U.S. appears to be a continuing phenomenon, overall hiring discrimination in the U.S. is, at worst, comparable to many European countries.

To anticipate one kind of response to this emphasis, the U.S. is not alone in its institutional efforts to reduce discrimination. In fact, the study’s authors say:

A fairly similar set of antidiscrimination laws were adopted in North America and many Western European countries from the 1960s to the 1990s. In 2000, the European Union passed a series of race directives that mandated a range of antidiscrimination measures to be adopted by all member states, putting their legislative frameworks on racial discrimination on highly similar footing.

Despite these similarities, there are a few institutional details that might have some bearing on the results. For example, France bans the recording and “formal discussion” of race and ethnicity during the hiring process. (However, photos are often included in job applications in European countries.) Does this indicate that reporting mandates and prohibiting certain questions reduce hiring discrimination? That might be suggestive, but the evidence is not as clear cut as the authors seem to believe. They cite one piece of conflicting literature on that point. Moreover, it does not explain why Great Britain had a greater (and highly significant) point estimate of discrimination against Asians, or why Canada and Norway were roughly equivalent to France on this basis. Nor does it explain why Sweden and Belgium did not differ from France significantly in terms of discrimination against MENA applicants. Or why Canada was not significantly different from France in terms of hiring discrimination against Blacks. Overall, discrimination in Sweden was not significantly less than in France. Still, at least based on the three applicant groups covered by studies of France, that country had the highest overall level of discrimination. France also had the most significant departure from the U.S., where recording the race and ethnicity of job applicants is institutionalized.

Germany had the lowest overall point estimates of hiring discrimination in the study. According to the authors, employers in German-speaking countries tend to collect a fairly thorough set of background information on job applications. This detail can actually work against discrimination in hiring. Tabarrok notes that so-called “ban the box” policies, or laws that prohibit employers from asking about an applicant’s criminal record, are known to result in greater racial disparities in hiring. The same is true of policies that threaten sanctions against the use of objective job qualifications which might have disparate impacts on “protected” groups. That’s because generalized proxies based on race are often adopted by hiring managers, consciously or subconsciously.

Discrimination in hiring based on race and ethnicity might actually be reasonable when a job entails sensitive interactions requiring high levels of trust with members of a minority community. This statement acknowledges that we do not live in a perfect world in which racial and ethnic differences are irrelevant. Still, aside from exceptions of that kind, overt hiring discrimination based on race or ethnicity is a negative social outcome. The conundrum we face is whether it is more or less negative than efforts to coerce nondiscrimination on those bases across a broad range of behaviors, most of which are nondiscriminatory to begin with, and when interventions often have perverse discriminatory effects. Policymakers and observers in the U.S. should maintain perspective. Discriminatory behavior persists in the U.S., especially against Blacks, but some of this discrimination is likely caused by prohibitions on objective tests of relevant job skills. And as the research discussed above shows, employers here appear to be a bit less discriminatory than those in most other Western democracies.

“Hard Landing” Is Often Cost of Fixing Inflationary Policy Mistakes

The debate over the Federal Reserve’s policy stance has undergone an interesting but understandable shift, though I disagree with the “new” sentiment. For the better part of this year, the consensus was that the Fed waited too long and was too dovish about tightening monetary policy, and I agree. Inflation ran at rates far in excess of the Fed’s target, but the necessary correction was delayed and weak at the start. This violated the necessary symmetry of a legitimate inflation-targeting regime under which the Fed claims to operate, and it fostered demand-side pressure on prices while risking embedded expectations of higher prices. The Fed was said to be “behind the curve”.

Punch Bowl Resentment

The past few weeks have seen equity markets tank amid rising interest rates and growing fears of recession. This brought forth a chorus of panicked analysts. Bloomberg has a pretty good take on the shift. Hopes from some economists for a “soft landing” notwithstanding, no one should have imagined that tighter monetary policy would be without risk of an economic downturn. At least the Fed has committed to a more aggressive policy with respect to price stability, which is one of its key mandates. To be clear, however, it would be better if we could always avoid “hard landings”, but the best way to do that is to minimize over-stimulation by following stable policy rules.

Price Trends

Some of the new criticism of the Fed’s tightening is related to a perceived change in inflation signals, and there is obvious logic to that point of view. But have prices really peaked or started to reverse? Economist Jeremy Siegel thinks signs point to lower inflation and believes the Fed is being too aggressive. He cites a series of recent inflation indicators that have been lower in the past month. Certainly a number of commodity prices are generally lower than in the spring, but commodity indices remain well above their year-ago levels and there are new worries about the direction of oil prices, given OPEC’s decision this week to cut production.

Central trends in consumer prices show that there is a threat of inflation that may be fairly resistant to economic weakness and Fed actions, as the following chart demonstrates:

Overall CPI growth stopped accelerating after June, and it wasn’t just moderation in oil prices that held it back (and that moderation might soon reverse). Growth of the Core CPI, which excludes food and energy prices, stopped accelerating a bit earlier, but growth in the CPI and the Core CPI are still running above 8% and 6%, respectively. More worrisome is the continued upward trend in more central measures of CPI growth. Growth in the median component of the CPI continues to accelerate, as has growth in the so-called “Trimmed CPI”, which excludes the most extreme sets of high and low growth components. The response of those central measures lagged behind the overall CPI, but it means there is still inflationary momentum in the economy. There is a substantial risk that more permanent inflation is becoming embedded in expectations, and therefore in price and wage setting, including long-term contracts.
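As a rough illustration of how such central measures are computed, here is a minimal sketch using made-up, equally weighted component growth rates (the published median CPI and trimmed-mean CPI weight components by expenditure shares, and trim a larger share than shown here):

```python
# Central-tendency measures of inflation from component growth rates.
import statistics

def trimmed_mean(growth_rates, trim_share=0.1):
    """Mean growth after dropping the top and bottom tails."""
    r = sorted(growth_rates)
    k = int(len(r) * trim_share)           # components trimmed per tail
    core = r[k: len(r) - k] if k else r
    return sum(core) / len(core)

# Hypothetical year-over-year growth rates (%) for ten CPI components:
components = [12.1, 9.4, 8.0, 7.2, 6.8, 6.5, 5.9, 4.1, 1.3, -2.0]

print(statistics.median(components))   # 6.65: median component growth
print(trimmed_mean(components))        # 6.15: mean of the middle 80%
```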

The Fed pays more attention to a measure of prices called the Personal Consumption Expenditures (PCE) deflator. Unlike the CPI, the PCE deflator accounts for changes in the composition of a typical “basket” of goods and services. In particular, the Fed focuses most closely on the Core PCE deflator, which excludes food and energy prices. Inflation in the PCE deflator is lower than the CPI, in large part because consumers actively substitute away from products with larger price increases. However, the recent story is similar for these two indices:

Both overall PCE inflation and Core PCE inflation stopped accelerating a few months ago, but growth in the median PCE component has continued to increase. This central measure of inflation still has upward momentum. Again, this raises the prospect that inflationary forces remain strong, and that higher and more widespread expected inflation might make the trend more difficult for the Fed to rein in.

That leaves the Fed little choice if it hopes to bring inflation back down to its target level. It’s really only a choice of whether to do it faster or slower. One big qualification is that the Fed can’t do much about supply shortfalls, which have been a source of price pressure since the start of the rebound from the pandemic. However, demand pressures have been present since the acceleration in price growth began in earnest in early 2021. At this point, it appears that they are driving the larger part of inflation.

The following chart shows share decompositions for growth in both the “headline” PCE deflator and the Core PCE deflator. Actual inflation rates are NOT shown in these charts. Focus only on the bolder colored bars. (The lighter bars represent estimates having less precision.) Red represents “supply-side” factors contributing to changes in the PCE deflator, while blue summarizes “demand-side” factors. This division is based on a number of assumptions (methodological source at the link), but there is no question that demand has contributed strongly to price pressures. At least that gives a sense about how much of the inflation can be addressed by actions the Fed might take.

I mentioned the role of expectations in laying the groundwork for more permanent inflation. Expected inflation not only becomes embedded in pricing decisions: it also leads to accelerated buying. So expectations of inflation become a self-fulfilling prophecy that manifests on both the supply side and the demand side. Firms are planning to raise prices in 2023 because input prices are expected to continue rising. In terms of the charts above, however, I suspect this phenomenon is likely to appear in the “ambiguous” category, as it’s not clear that the counting method can discern the impacts of expectations.

What’s a Central Bank To Do?

Has the Fed become too hawkish as inflation accelerated this year while proving to be more persistent than expected? One way to look at that question is to ask whether real interest rates are still conducive to excessive rate-sensitive demand. With PCE inflation running at 6 – 7% and Treasury yields below 4%, real returns are still negative. That hardly seems like a prescription for taming inflation, or “hawkish”. Rate increases, however, are not the most reliable guide to the tenor of monetary policy. As both John Cochrane and Scott Sumner point out, interest rate increases are NOT always accompanied by slower money growth or slowing inflation!
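The real-return arithmetic here is just the Fisher relation; using the midpoint of the cited inflation range and a 4% nominal yield:

```latex
r \approx i - \pi \approx 4\% - 6.5\% = -2.5\% .
```

Negative real rates still subsidize borrowing and rate-sensitive spending.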

However, Cochrane has demonstrated elsewhere that it’s possible the Fed was on the right track with its earlier dovish response, and that price pressures might abate without aggressive action. I’m skeptical, to say the least, and continuing fiscal profligacy won’t help in that regard.

The Policy Instrument That Matters

Ultimately, the best indicator that policy has tightened is the dramatic slowdown (and declines) in the growth of the monetary aggregates. The charts below show five years of year-over-year growth in two monetary measures: the monetary base (bank reserves plus currency in circulation) and M2 (checking, savings, and money market accounts plus currency).

Growth of these aggregates slowed sharply in 2021 after the Fed’s aggressive moves to ease liquidity during the first year of the pandemic. Monetary base and M2 growth slowed much further in 2022 as the realization took hold that inflation was not transitory, as had been hoped. Changes in the growth of the money stock take time to influence economic activity and inflation, but the effects may already be underway, and they will likely be felt in earnest during the first half of 2023.
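
For readers who want to reproduce charts like these, year-over-year growth is simply each month’s level measured against the same month a year earlier. A minimal sketch follows; the levels below are placeholders, not actual data (the real series are available from FRED, e.g., M2SL for M2 or BOGMBASE for the monetary base):

```python
# Minimal sketch of the year-over-year growth series plotted in charts
# like those described here. The monthly levels are made-up placeholders.

def yoy_growth(levels):
    """Percent change of each month versus the same month a year earlier."""
    return [
        100.0 * (levels[t] / levels[t - 12] - 1.0)
        for t in range(12, len(levels))
    ]

m2 = [15.3, 15.4, 15.5, 16.6, 17.9, 18.2, 18.6, 18.8, 18.9,
      19.0, 19.1, 19.2, 19.4, 19.6, 19.9, 20.1, 20.3, 20.5,
      20.6, 20.8, 21.0, 21.2, 21.4, 21.5, 21.6]   # $ trillions, illustrative
print([round(g, 1) for g in yoy_growth(m2)])
```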

The Protuberant Balance Sheet

Since June, the Fed has also taken steps to reduce the size of its bloated balance sheet. In other words, it is allowing its large holdings of U.S. Treasuries and Agency Mortgage-Backed Securities to shrink. These securities were acquired during rounds of so-called quantitative easing (QE), which were a major contributor to the money growth in 2020 that left us where we are today. The securities holdings were about $8.5 trillion in May and now stand at roughly $8.2 trillion. Allowing the portfolio to run off reduces bank reserves and liquidity. The process was accelerated in September, but there is increasing concern among analysts that this quantitative tightening will cause disruptions in financial markets and ultimately the real economy. There is no question that reducing the size of the balance sheet is contractionary, but that is another necessary step toward reducing the rate of inflation.

The Federal Spigot

The federal government is not making the Fed’s job any easier. The energy shortages now afflicting markets are largely the fault of misguided federal policy restricting supplies, with an assist from Russian aggression. Importantly, however, heavy borrowing by the U.S. Treasury continues with no end in sight. This puts even more pressure on financial markets, especially when such ongoing profligacy leaves little question that the debt won’t ever be repaid out of future budget surpluses. The only way the government’s long-term budget constraint can be preserved is if the real value of that debt is bid downward. That’s where the so-called inflation tax comes in, and however implicit, it is indeed a tax on the public.
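
The logic of that long-term budget constraint can be written compactly. In stylized form (a standard statement from the fiscal-theory literature, not something derived in this post), the real value of nominal government debt must equal the expected present value of real primary surpluses:

```latex
\frac{B_t}{P_t} \;=\; \mathbb{E}_t \sum_{j=0}^{\infty} \beta^{\,j}\, s_{t+j}
```

Here B is nominal debt outstanding, P the price level, β a real discount factor, and s the real primary surpluses. If the surplus path on the right cannot credibly rise, the price level on the left must do the adjusting. That is the inflation tax at work.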

Don’t Dismiss the Real Costs of Inflation

Inflation is a costly process, especially when it erodes real wages. It takes its greatest toll on the poor. It penalizes holders of nominal assets, like cash, savings accounts, and non-indexed debt. It creates a high degree of uncertainty in interpreting price signals, which ordinarily carry information to which resource flows respond. That means it confounds the efficient allocation of resources, costing all of us in our roles as consumers and producers. The longer it continues, the more it erodes our economy’s ability to enhance well-being, not to mention the instability it creates in the political environment.

Imminent Recession?

So far there are only limited signs of a recession. Granted, real GDP declined in both the first and second quarters of this year, but many reject that rule of thumb as too crude a basis for calling a recession. Moreover, consumer spending held up fairly well. Employment statistics have remained solid, though we’ll get an update on those this Friday. Payroll gains have held up, and the unemployment rate edged up to a still-low 3.7% in August.

Those are backward-looking signs, however. The financial markets have been signaling recession via the inverted yield curve, which is a pretty reliable guide. The weak stock market has taken a bite out of wealth, which is likely to mean weaker demand for goods. In addition to energy-supply shocks, the strong dollar makes many internationally-traded commodities very costly overseas, which places the global economy at risk. Moreover, consumers have run down their savings to some extent, corporate earnings estimates have been trimmed, and the housing market has weakened considerably with higher mortgage rates. Another recent sign of weakness was a soft report on manufacturing growth in September.

Deliver the Medicine

The Fed must remain on course. At least it aspires to regain credibility for its inflation-targeting regime, and that ultimately requires acting symmetrically when inflation overshoots its target, as it has. It’s not clear how far the Fed will have to go to squeeze demand-side inflation down to a modest level. It should also be noted that as long as supply-side pressures remain, it might be impossible for the Fed to engineer a reduction of inflation to as low as its 2% target. Therefore, it must always bear supply factors in mind to avoid over-contraction.

As to raising the short-term interest rates the Fed controls, we can hope we’re well beyond the halfway point. Reductions in the Fed’s balance sheet will continue in an effort to tighten liquidity and to provide more long-term flexibility in conducting operations, at least until bank reserves threaten to fall below the Fed’s so-called “ample reserves” criterion, which is intended to give banks the wherewithal to absorb small shocks. Signs that inflationary pressures are abating are a minimum requirement for laying off the brakes. Clear signs of recession would also lead to more gradual moves or possibly a reversal. But again, demand-side inflation is not likely to ease very much without at least a mild recession.

The Beatles in ‘69: By the Book, Wary of Live Performance

Tags

, , , , , , , , , , , ,

I finally got around to watching Peter Jackson’s “Get Back!”, a distillation of the many hours of video from the Beatles’ recording sessions covering 21 days in early 1969. The culmination of the film was a brief rooftop “concert” in London. It was the band’s first public performance in years, and it proved to be their last ever. Get Back! is lengthy but very enjoyable and an incredible glimpse into the various personalities of the group.

The film projects a strong impression of the Beatles’ anxiety, at that time, about playing a live gig. During all but the last few days captured on the film, it was unclear to everyone involved whether the band would actually do a live performance. The band members were of decidedly mixed enthusiasm about it. They were also skeptical that the cameras at their sessions could capture enough interesting material for a film.

The Beatles had an early reputation as a great live band, but they had last played live in 1966. Kieran McGovern says the band quit touring for three reasons: poor sound quality, exhaustion, and security concerns. The last two are probably self-explanatory, though McGovern thinks the “bigger than Jesus” controversy was worrisome to the band. As to sound quality, the Beatles were the first band to play massive stadium concerts, but the sound equipment was too puny and not adequately advanced to handle those demands. Even worse, the band was unable to hear itself on stage over the throngs of screaming fans. So they just stopped. By then, they were so wildly successful as recording artists that it was unnecessary to promote themselves by touring.

During the Get Back! sessions, Paul McCartney mused about the pros and cons of doing a live concert, but the band seemed a little paralyzed by the notion. It was as if they were clinging to the idea that studio albums should remain their sole focus. And as they worked out arrangements for new songs, various “takes” were preserved by the engineers so that, if nothing else, they would have material for a new album. They did take after take, often stopping after just a few bars.

I’m sure studio sessions with new material can be challenging. In fact, a few of the songs were composed right there in the studio, going from rough idea to fruition over the course of days. It was interesting to witness the band’s humanity in the face of self-imposed pressure to “get it right”, over and over. I know the feeling in my own small way. When I learn new material on the guitar, I sometimes record myself, but an odd thing happens as soon as I hit “record” … it’s hard to get through a song without some perceived mishap. And one attempt is followed by another. And another. Sometimes these “mishaps” stop me almost right at the start. In some ways it was reassuring, and frustrating, to see the same thing happening to the iconic Beatles. I’m also sure this reinforced their hesitation to “go live”. But when you play live, you just have to play through the mishaps, and I’m sure they’d done it many times before!

Years earlier, as the band rose to fame, they performed live all the time, but oddly, the highly creative years away from the stage seemed to corrode their confidence as a working band. There were so many incredible groups performing live in those days, but not for such immense crowds until perhaps Monterey, Woodstock, and maybe a few other big festivals in the late 60s. Much larger sound systems were a requirement that went unfulfilled at the Beatles’ earlier stadium shows, and the poor sound quality was a great frustration to the band. In the later, post-Beatle years, individual members of the band played huge concerts, and the surviving members still do.

While *nobody* is quite like the Beatles, all live bands make mistakes and play through them. Practice might make close to perfect, but even well-drilled classical musicians have their bad days. The Beatles, however, seemed intimidated by the possibility of screwing up in front of an audience, and anxious about playing exactly the right notes. So the film gave me the impression that the Beatles were at heart, or had at least become, what one might call “book musicians”. Play it the same way every time! And they were so eccentrically “book” oriented that they fought a certain paralysis in the face of the demands of live performance.

There was an astonishing admission from George Harrison fairly early in the film: I’m paraphrasing, but he found it incredible to hear Eric Clapton launch into lengthy guitar improvisations and then somehow end up “in the right place”. And Harrison said, “I just can’t do that.” I love George Harrison’s guitar work, and he wrote some wonderful songs, but the first statement sounds like something one might have heard from a newbie at a Grateful Dead concert. His lack of improvisational confidence puts emphasis on the idea that he was, in fact, a “book musician”.

For the Beatles, in 1969 at least, the idea of improvisation, or just playing around, was fine for a bit of fun in the studio, or to loosen up. They tended toward old rock n’ roll material or messed around with their own, older stuff, often with comic effect. And John Lennon was very funny, by the way. But the emphasis wasn’t on the concept of musical improvisation, and the idea of doing it on stage, or playing off the cuff before a live audience, was out of the question.

Meanwhile, improvisation had been an active pursuit among jazz musicians almost from the beginning. It was inherently a looser form than what the Beatles wanted to do. The jam band genre was an extension of the jazz aesthetic into adjacent musical forms like blues, rock, and even country. The Grateful Dead pioneered the jam band “form”, if that word can be used, but in any case, improvisation, or a loose approach to live performance with spontaneous creativity, was widespread in the late 1960s. That’s definitely not where the Beatles were at.

The Beatles were a wonderful band, brilliant songwriters, poets, and musicians. They also were driven by perfectionism, at least at the late stages of their time together. Improvisation was not their “cup of tea”, as it were. They had strong reasons for their reluctance to play live after their 1966 tour. By 1969, they hesitated to do even one concert before a smaller audience. The tentative “show date” on their calendar seemed like an approaching freight train, and they dithered over the kind of show it would be and where it would be staged. Finally, the rooftop of Apple Studios was selected with just a couple of days to go. It was an interesting promotional stunt, but it seemed like a cop-out. Not many people could really see them up there, and the sound quality on the street was probably a very mixed bag. Still, Get Back! was a lot of fun to watch. And I do love the Beatles, even if I love the music and often careening style of the original jam band much more.

Ubiquitous Guilt: EEOC Disparate Impact Liability

Tags

, , , , , , , , , , , , , , , , , , , , , , , , , , ,

A key part of the Civil Rights Act of 1964 was Title VII, which dealt with employment discrimination. Title VII applied only to intentional discrimination, but it didn’t take long for the Equal Employment Opportunity Commission (EEOC), the agency charged with administering Title VII, to find ways to expand the scope of its enforcement mandate under the law. The EEOC eventually managed to convince virtually all parties, including employers, employees, job applicants, attorneys, and even the courts, that the law prohibited employment practices having disparate impacts on groups protected from actual discrimination under the law. Predictably, this warped reinterpretation created severe distortions to the efficiency and fairness of labor market outcomes.

Another Rogue Agency

On the EEOC’s complete and erroneous reimagining of Title VII, Gail Heriot’s “Title VII Disparate Impact Liability Makes Almost Everything Presumptively Illegal” is a must read. Heriot is a Professor at the University of San Diego School of Law and is a member of the U.S. Commission on Civil Rights. This post attempts to summarize most of the important points in Heriot’s paper, so if you don’t have time for Heriot’s paper, read on. All errors are mine, of course!

Heriot provides an incredible case study on the dangers of regulatory overreach. She first discusses the EEOC’s blatant usurpation of Congressional power:

“It is hardly surprising that EEOC officials would undertake to publish answers to the questions they were hearing repeatedly…. But publishing such ‘guidances’ also had the potential to spin out of control. The temptation would always be to use them to establish what the EEOC staff wanted the law to be rather than what it was. Instead of interpreting Title VII in good faith, guidances would soon become quasi-legislation—disguised as interpretation, but in reality imposing new duties on employers not found in Title VII itself.

None of this should be surprising. It is in the nature of bureaucracy. It naturally seeks to expand its powers, often beginning by occupying niches that are otherwise unoccupied. Over time, a little power often becomes a lot of power. What is surprising is how upfront EEOC officials were about their tactics in accumulating that power.”

Having gone this far, one might be tempted to ask the EEOC what limiting principle it actually applies to determine whether various employment and hiring practices are permissible. Are level of education, industry experience, and tests of physical and cognitive faculties verboten? The answer is that there is no consistent limiting principle. Instead, Heriot says the EEOC “picks its battles” (see below). She also describes the EEOC’s adoption of a so-called “four-fifths rule”, which is about as arbitrary as it gets. It means the EEOC will challenge an employment practice only if it selects members of any protected group at a rate less than 80% of the rate for the most-selected group. That is, the “disparate impact” must be less than 20% to rule out a challenge. This rule appears nowhere in Title VII.
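
Mechanically, the rule is a simple ratio test, as the following minimal sketch shows (the group names, applicant pools, and hire counts are invented for illustration):

```python
# Minimal sketch of the "four-fifths rule" as described above.
# All applicant-pool and hire counts are hypothetical.

def four_fifths_flags(selected, applicants):
    """Compare each group's selection rate to the highest group's rate;
    flag any group below 80% of that rate (the rule's trigger)."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: (r / best, r / best < 0.8) for g, r in rates.items()}

applicants = {"group A": 100, "group B": 80, "group C": 50}
selected   = {"group A": 30,  "group B": 18, "group C": 11}

for group, (ratio, flagged) in four_fifths_flags(selected, applicants).items():
    status = "FLAGGED" if flagged else "ok"
    print(f"{group}: {ratio:.0%} of top rate -> {status}")
```

Note how easily ordinary differences in selection rates trip the 80% threshold, which previews Heriot’s point below.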

Job Qualifications? You’re Guilty!

Unfortunately, as Heriot takes pains to demonstrate, it’s virtually impossible to identify a hiring guideline or method of employee assessment that does not have a disparate impact. The examples she provides on pp. 34 – 37 of her paper, and on p. 40, are convincing. Furthermore, the EEOC’s “four-fifths” rule hardly narrows the potential for challenge at all.

Selection rates of less than four-fifths relative to the group with the highest rate are extremely common. Just as everything or nearly everything has a disparate impact, everything or nearly everything has a selection rate that fails the ‘four fifths rule’ for some race, color, religion, sex, or national origin group.

So the EEOC is allowed to operate with tremendous discretion. Again, Heriot says the agency “picks its battles”, focusing on challenges to screening tools like “written tests, physical strength and endurance tests, criminal background tests [sic], high school diploma requirements, personal credit histories, residency requirements, and a few others.”

This regulatory environment encourages employers to keep job requirements vague, sometimes to the point at which potential applicants might not be sure what the job qualifications really are, or exactly what the job function entails. One upshot is that this makes it harder to detect and prove actual discrimination, and it often leads to more arbitrary decisions by hiring managers, which may, in fact, involve real discrimination, including nepotism and/or cronyism.

Unbiased Intent Doesn’t Matter

Heriot points to a disastrous decision by the Supreme Court that, perhaps unintentionally, helped legitimize the concept of disparate impact as legal doctrine, and as a valid cause of action by plaintiffs against employers. In Griggs v. Duke Power Co. (1971), the Court held that an employer’s innocence, in the sense of having no intent to discriminate, was an inadequate defense of an employment practice that had adverse consequences for a protected group. Heriot quotes the opinion of Chief Justice Warren Burger:

“… good intent or absence of discriminatory intent does not redeem…. Congress directed the thrust of the Act to the consequences of employment practices, not simply the motivation.”

It’s as if the Court convinced itself that adverse consequences prove actual discrimination, even when there is no intent to discriminate. The Court also emphasized that its decision was based on “general deference” to the EEOC! And this was years before the unfortunate Chevron Doctrine (judicial deference to administrative agencies on interpretation of law) was formally established by the Court. Heriot and others assert that the decision in Griggs would have astonished the authors of Title VII.

Heriot also discusses changes in the treatment of “business necessity” as a defense against complaints of disparate impact. It is generally the employer’s burden to show the “necessity” of a challenged hiring practice. “Necessity” was the subject of several Supreme Court decisions in the 1970s and 1980s, but the Court stopped short of requiring an employer to show that a practice was “essential”. In one case, the court shifted some of the burden back onto the plaintiff to show that a practice lacked necessity. In 1990, there was concern in the Bush Administration and Congress that the difficulty of proving business necessity would eventually lead to the adoption of racial quotas by employers in order to prevent EEOC challenges, though the authors of Title VII had staunchly opposed quotas. While the original hope was that the Civil Rights Act of 1991 would resolve questions about “business necessity” and the burden of proof, it did not. Instead, it can be said that it legitimized disparate impact liability, with conditions. The standard for proving necessity, based on Court decisions, evolved to become more strict with time. There are cases in which courts seem to have left the EEOC to define “business necessity”, as if the EEOC would be in a better position to do that than the business itself!

Inviting Discrimination

Heriot devotes part of her paper to the perverse effects of disparate impact. When employers are faced with prohibitions or the threat of action against a certain practice, whether it be tests of aptitude, strength, or screening on criminal or credit records, they may abandon those devices and opt instead for “informal” proxies. The use of proxies, however, often leads to instances of actual discrimination, whether born of conscious or unconscious bias on the part of hiring managers.

Heriot provides a number of examples of the proxy phenomenon, some of which have been confirmed by empirical research. For example, an employer interviewing candidates for a job that requires math proficiency might reasonably use a test of math skill as a key criterion. If such a test is prohibited, the hiring manager might be tempted to hire an Asian candidate, since Asians have a reputation for good math skills. Similarly, an applicant of West European ancestry might be favored for a position requiring excellent grammar skills, absent the ability to explicitly test grammatical skill. Candidates for a job requiring a certain level of physical strength could be evaluated by various tests of strength, but barring that, a hiring manager might be inclined to hire based on gender.

When criminal background checks are prohibited, employers might be tempted to use proxies such as gender and race as a substitute. Likewise, if it’s forbidden to check a candidate’s credit record to gauge reliability, other proxies might lead to discrimination against members of protected classes. Needless to say, these kinds of outcomes are precisely the opposite of what the EEOC hopes to achieve.

As Heriot further notes, the outcomes can be far more systematic and destructive than a bit of one-off discrimination in hiring, promotion, pay raises, or task assignment, inflicting damage that reaches well beyond having the wrong people gain favorable labor market outcomes. For example, an employer might choose to relocate operations to a “safer” or more affluent community, barring an ability to perform criminal background or credit checks. Or businesses might decide to substitute capital for labor, given the interference in their attempts to identify the best job candidates. The difficulty in screening also creates an incentive to automate, just as premature automation is becoming more common with rising wage floors imposed by government.

Killing Jobs and Competition

As with many forms of regulation, large firms in less competitive industries are usually better positioned to survive EEOC scrutiny than smaller firms in competitive markets. Indeed, we often see large market players embrace regulation because it gives them a competitive advantage over smaller rivals. In this case, we see large firms adopting their own diversity, equity, and inclusion (DEI) goals. This is not solely related to the threat of EEOC challenges, however. Private lawsuits alleging discrimination or disparate impact are also a concern, as is pleasing activists inside and outside the company. Nevertheless, as Christopher Rufo reveals, there is growing push-back against the corporate DEI regime. Let’s hope it continues to gain traction.

Unconstitutional Executive Discretion

Heriot also dedicates part of her paper to constitutional issues related to the EEOC’s broad discretion in the application of disparate impact to employment practices. For one thing, disparate impact is a direct source of discrimination: when members of “protected groups” are awarded opportunities based on the possibility of disparate statistical outcomes, it means majority candidates are denied those opportunities, no matter their qualifications. This is outright discrimination, and its instigation by a federal agency constitutes an explicit denial of equal protection under the law.

It should be no surprise that many consider disparate impact actions against employers to be denials of due process. Furthermore, when a federal agency like the EEOC exercises broad discretion, the so-called non-delegation doctrine should come into play. That is, the EEOC makes judgments on matters that are not necessarily authorized by Congress. Thus, there are legitimate questions as to whether the EEOC’s discretion is a violation of the separation of powers. Granted, the courts have long deferred to administrative agencies in the interpretation of enabling statutes, but the Supreme Court has taken a new tack under Chief Justice Roberts. In some recent decisions, the Court has relied on a new “major questions” doctrine to place certain limits on executive discretion.

Conclusion

Hiring? Creating jobs? Better not get picky about checking your applicants’ skills and backgrounds or you risk liability for contributing to the statistical malaise of one, or of many, protected groups. That’s how it is under “disparate impact” rules imposed by the EEOC. The success of your business be damned!

Gail Heriot’s excellent paper details the way in which the EEOC transformed the meaning of its enabling legislation, expanding its reign over employment practices across the nation. She demonstrates the breadth of disparate impact rules with examples showing that virtually any attempt at systematic screening of job applicants can be held to be illegal. Your intent to hire the most qualified candidate without bias doesn’t matter, under an insane Supreme Court decision that buttressed the EEOC’s authority. As Heriot says, “… everything is presumptively illegal”. She also describes how disparate impact liability leads to employment decisions based on proxy criteria, which often lead to actual (even if unintended) discrimination. Further unintended consequences are the possibility of larger job losses in minority communities and less competition in product and labor markets. Finally, Heriot delineates several constitutional violations inherent in broad EEOC discretion and the enforcement of disparate impact.

One day a court challenge to the EEOC and disparate impact liability might rise to the level of the Supreme Court. Justice Antonin Scalia expected it, but it still hasn’t come before the Court. It should! Another way to do battle against the EEOC’s scourge is to challenge corporations that kowtow to activists and to the EEOC with their own DEI initiatives. This manifestation of stakeholder capitalism is a cancer on the wealth and productivity of the U.S. economy, resting side-by-side with disparate impact liability.

Net Zero: It Ain’t Gonna Happen

Tags

, , , , , , , , , , , , , , , , , , , , , , , , , , ,

A number of countries have targeted net zero carbon dioxide emissions, to be achieved within various “deadlines” over the next few decades. The target dates currently range from 2030 to 2050. Political leaders around the world are speaking in the tongues favored by climate change fundamentalism, as Brad Allenby aptly named the cult some years ago. The costly net zero goal is a chimera, however. The effort to completely substitute renewables — wind and solar — for fossil fuels will fail without question. In fact, net zero carbon emissions is unlikely to be achieved anywhere in this century without massive investments in nuclear power. Wind and solar energy suffer from a fatal flaw: intermittency. They will never be able to provide for all energy needs without a drastic breakthrough in battery technology, which is not on the horizon. Geothermal power might make a contribution, but it won’t make much of a dent in our energy needs any time soon. Likewise, carbon capture technology is still in its infancy, and it cannot be expected to offset much of the carbon released by our unavoidable reliance on fossil fuels.

Exposing Green Risks

The worst of it is that net zero mandates will inflict huge costs on society. Indeed, various efforts to force conversion to “green” energy technologies have already raised costs and exposed humanity to immediate threats to health and well being. These realities are far more palpable than the risks posed by speculative model predictions of climate change decades ahead. As Joseph Sternberg notes at the link above, climate policies:

“… have created an energy system of dangerous rigidity and inefficiency incapable of adapting to a blow such as Russia’s partial exit from the European gas market. It’s almost inevitable that the imminent result will be a recession in Europe. We can only hope that it won’t also trigger a global financial crisis.”

Escalating energy costs are inflicting catastrophic harm on businesses large and small throughout the West, but especially in Europe and the UK. A Finnish economist recently commented on these conditions, as quoted by William Jacobson at the Legal Insurrection blog:

I saw this tweet thread by Finnish economist and professor Tuomas Malinen:

I am telling you people that the situation in #Europe is much worse than many understand. We are essentially on the brink of another banking crisis, a collapse of our industrial base and households, and thus on the brink of the collapse of our economies.

Jacobson also offers the following quote from Murtaza Hussain of The Intercept:

“If you turned the electricity off for a few months in any developed Western society 500 years of supposed philosophical progress about human rights and individualism would quickly evaporate like they never happened.”

Where’s the Proof of Concept?

This is not all about Russian aggression, however. We’ve seen the cost consequences of “green” mandates and forced conversion to wind and solar in places like California, Texas, and Germany even before Russia invaded Ukraine and began starving Europe of natural gas.

Francis Menton at the Manhattan Contrarian blog points to one of the most remarkable aspects of the singular focus on net zero: the complete absence of any successful demonstration project anywhere on the globe! The closest things to such a test are cited by Menton. One is on El Hierro in Spain’s Canary Islands, which has wind turbine capacity of more than double average demand. It also has pumped storage with hydro generators for more than double average demand. In 2020, however, El Hierro took all of its power from the combined wind/storage system only about 15% of the time. 2021 didn’t look much better. Diesel power is used to fill in the frequent “shortfalls”.

Land Use

The land use requirements of a large scale transition to wind and solar are incredible, given projected technological capabilities. Ezra Klein explains:

The center of our decarbonization strategy is an almost unimaginably large buildup of wind and solar power. To put some numbers to that: A plausible path to decarbonization, modeled by researchers at Princeton, sees wind and solar using up to 590,000 square kilometers – which is roughly equal to the land mass of Connecticut, Illinois, Indiana, Kentucky, Massachusetts, Ohio, Rhode Island and Tennessee put together. ‘The footprint is very, very large, and people don’t really understand that,’ Danny Cullenward, co-author of ‘Making Climate Policy Work’, told me.

That’s a major obstacle to accelerating the transition to wind and solar power, but there are many others.

A Slap of Realism

Mark P. Mills elaborates on the daunting complexity and costs of the transition, and, like the land use requirements, they are all potential show stoppers. It’s a great article, except for a brief section that reveals a poor understanding of monetary theory. Putting that aside, it’s first important to reemphasize what should be obvious: shutting down production of fossil fuels makes them scarce and more costly. This immediately reduces our standard of living and hampers our future ability to respond to tumultuous circumstances as are always likely to befall us. Mills makes that abundantly clear:

“… current policies and two decades of mandates and spending on a transition have led to escalating energy prices that help fuel the destructive effects of inflation. The price of oil, which powers nearly 97% of all transportation, is on track to reach or exceed half-century highs, and gasoline prices have climbed. The price of natural gas, accounting for 40% of all industrial energy use and one-fourth of global electricity, has soared past a decadal high. Coal prices are also at a decadal high. Coal fuels 40% of global electricity; it is also used to make 70% of all steel and accounts for half its cost of production.

It bears noting that energy prices started soaring, and oil breached $100 a barrel, well before Russia invaded Ukraine in late February. The fallout from that invasion has hardened, not resolved, the battle lines between those advocating for and those skeptical of government policies directed at accelerating an energy transition.

Civilization still depends on hydrocarbons for 84% of all energy, a mere two percentage points lower than two decades ago. Solar and wind technologies today supply barely 5% of global energy. Electric vehicles still offset less than 0.5% of world oil demand.”

As Mills says, it surprises most people that today’s high-tech products, such as phones, computers, and even drugs, require much more energy relative to product size and weight than traditional manufactured goods. Even the cloud uses vast quantities of energy. Yet U.S. carbon intensity per dollar of GDP has declined over the past 20 years. That’s partly due to the acquisition of key components from abroad, mitigation efforts here at home, and the introduction of renewables. However, the substitution of natural gas for other fossil fuels played a major role. Still, our thirst for energy-intensive technologies will cause worldwide demand for energy to continue to grow, and renewables won’t come close to meeting that demand.

Capacity Costs

Policy makers have been deceived by cost estimates associated with additions of renewable capacity. That’s due to the fiction that renewables can simply replace hydrocarbons, but the intermittency of solar and wind power means that demand cannot be continuously matched by renewables capacity. Additions to renewables capacity require reliable, and sometimes redundant, backup capacity. At the risk of understatement, this necessity raises the marginal cost of renewable additions significantly if the hope is to meet growth in demand.
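
A back-of-the-envelope sketch makes the point. Every cost figure below is an assumption chosen for illustration, not an estimate from Mills or anyone else; the structure is what matters: each delivered megawatt-hour carries a share of backup generation plus the fixed cost of keeping that backup on standby:

```python
# Back-of-the-envelope sketch (all numbers are assumptions, not data):
# the effective cost of adding renewable capacity when intermittency
# forces dispatchable backup to be kept on standby.

renewable_lcoe  = 40.0   # $/MWh when the wind blows / sun shines (assumed)
backup_lcoe     = 90.0   # $/MWh for gas backup generation (assumed)
capacity_factor = 0.35   # share of hours renewables actually deliver (assumed)
backup_fixed    = 12.0   # $/MWh-equivalent cost of idle backup (assumed)

# Delivered power is renewable `capacity_factor` of the time and backup
# the rest; the idle backup's fixed costs are paid regardless.
effective = (capacity_factor * renewable_lcoe
             + (1 - capacity_factor) * backup_lcoe
             + backup_fixed)
print(f"effective system cost: ${effective:.0f}/MWh "
      f"vs ${renewable_lcoe:.0f}/MWh headline")   # ~$85 vs $40
```

The headline cost of the renewable asset alone understates the system cost of delivering reliable power, which is the deception described above.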

Furthermore, as Mills points out, renewables have not reached cost parity with fossil fuels, contrary to media hype and an endless flow of propaganda from government and the “green” investors seeking rents from government. Subsidies to renewables have created an illusion that costs are lower than they are in reality.

So Many Snags

From Mills, here are a few of the onerous cost factors that will present severe obstacles to even a partial transition to renewables:

  • Even with the best battery technology now available, using lithium, storing power is still extremely expensive. Producing and storing it at scale for periods long enough to serve as a true source of power redundancy is prohibitive.
  • The infrastructure buildout required for a hypothetical transition to zero-carbon is massive. The quantity of raw materials needed would be far in excess of those used in our investments in energy infrastructure over at least the past 60 years.
  • Even the refueling infrastructure required for a large increase in the share of electric vehicles on the road would require a massive investment, including more land and at much greater expense than traditional service stations. That’s especially true considering the grid enhancements needed to deliver the power.
  • The transition would place a huge strain on the world’s ability to mine minerals such as lithium, graphite, nickel, and rare earths. Mills puts the needed increases in supply at 4,200%, 2,500%, 1,900%, and 700%, respectively, by 2040. In fact, the known global reserves of these minerals are inadequate to meet these demands.
  • Mining today is heavily reliant on hydrocarbon power, of course. Moreover, all this mining activity would have devastating effects on the environment, as would disposal of “green” components as they reach the end of their useful lives. The latter is a disaster we’re already seeing played out in the third world, where we are exporting much of our toxic, high-tech waste.
  • The time it would take to make the transition to zero carbon would far exceed the timetable specified in the mandates already in place. It’s realistic to admit that development of new mines, drastic alterations of land use patterns, construction of new generating capacity, and the massive infrastructure buildout will stretch out for many decades.
  • Given U.S. dependence on imports of a large number of minerals now considered “strategic”, decarbonization will require a major reconfiguration of supply chains. In fact, political instability in parts of the world upon which we currently rely for supplies of these minerals makes the entire enterprise quite brittle relative to reliance on fossil fuels.

Conclusion

The demands for raw materials, physical capital and labor required by the imagined transition to net zero carbon dioxide emissions will put tremendous upward pressure on prices. The coerced competition for resources will mean sacrifices in other aspects of our standard of living, and it will have depressing effects on other markets, causing their relative prices to decline.

For all the effort and cost of the mandated transition, what will we get? Without major investments in reliable but redundant backup capacity, we’ll get an extremely fragile electric grid, frequent power failures, a diminished standard of living, and roughly zero impact on climate. In other words, it will be a major but unnecessary and predictably disastrous exercise in central planning. We’ve already seen the futility of this effort in the few, small trials that have been undertaken, but governments, rent-seeking investors, and green activists can’t resist plunging us headlong into the economic abyss. Don’t let them do it!