The Twitter Files and Political Exploitation of Social Media



I’ve been cheering for Elon Musk in his effort to remake Twitter into the kind of “public square” it always held the promise to be. He’s standing up for free expression, against one-party control of speech on social media, and especially against government efforts to control speech. That’s a great and significant thing, yet as Duke economist Michael Munger notes, we hear calls from the Biden Administration and congressional Democrats to “keep an eye on Twitter”, a not-so-veiled threat of future investigative actions or worse.

Your Worst Enemy Test, Public or Private

As a disclaimer, I submit that I’m not an unadulterated fan of Musk’s business ventures. His business models too often leverage wrong-headed government policy for profitability. It reeks of rent-seeking behavior, whatever Musk’s ideals, and the availability of those rents, primarily subsidies, violates the test for good governance I discussed in my last post. That’s the Munger Test (the “Your Worst Enemy” Test), formally:

You can only give the State power that you favor giving to your worst enemy.

On the other hand, Musk’s release of the “Twitter Files” last weekend, with more to come, is certainly a refreshing development. Censorship at the behest of political organizations, foreign governments, or our own government is controversial and possibly illegal. While we’d ordinarily hope to transact privately at arm’s length, with free exchange being strictly an economic proposition, one might even apply the Munger Test from the perspective of a user of a social media platform: would you trust your worst enemy to exercise censorship on that platform on the basis of politics? Like Donald Trump? Or Chuck Schumer? If not, then you probably won’t be happy there! Now, add to that your worst enemy’s immunity from prosecution for any content they deem favorable!

Cloaked Government Censorship?

Censorship runs afoul of the First Amendment if government actors are involved. In an interesting twist in the case of the Twitter Files, the two independent journalists working with the files, Matt Taibbi and Bari Weiss, learned that some of the information had been redacted by one James Baker, Twitter’s Deputy General Counsel. Perhaps not coincidentally, Baker was also formerly General Counsel of the FBI and a key figure in the Trump-Russia investigation. Musk promptly fired Baker from Twitter over the weekend. We might see, very soon, just how coincidental Baker’s redactions were.

Mark Zuckerberg himself recently admitted that Facebook was pressured by the FBI to censor the Hunter Biden laptop story, which is a key part of the controversy underlying the Twitter Files. The Biden Administration had ambitious plans for working alongside social media on content moderation, but the Orwellian-sounding “Disinformation Governance Board” has been shelved, at least for now. Furthermore, activity performed for a political campaign may represent an impermissible in-kind campaign donation, and Twitter falsely denied to the FEC that it had worked with the Biden campaign.


What remedies exist for potential social media abuses of constitutionally-protected rights, or even politically-driven censorship? Elon Musk’s remaking of Twitter is a big win, of course, and market solutions now seem more realistic. Court challenges to social media firms are also possible, but there are statutory obstacles. Court challenges to the federal government are more likely to succeed (if its involvement can be proven).

The big social media firms have all adopted a fairly definitive political stance and have acted on it ruthlessly, contrary to their professed role in the provision of an open “public square”. For that reason, I have in the past supported eliminating social media’s immunity from prosecution for content posted on their networks. A cryptic jest by Musk might just refer to that very prospect:

Anything anyone says will be used against me in a court of law.

Or maybe not … even with the sort of immunity granted to social media platforms, the Twitter Files might implicate his own company in potential violations of law, and he seems to be okay with that.

Immunity was granted to social media platforms under Section 230 of the Communications Decency Act (CDA). It was something many thought “the state should do” in the 1990s in order to foster growth in the internet. And it would seem that a platform’s immunity for content shared broadly should be consistent with promoting free speech. So the issue of revoking immunity is thorny for free speech advocates.

Section 230 And Content Moderation

There have always been legal restrictions on speech related to libel and “fighting words”. In addition, the CDA, which is a part of the Telecommunications Act, restricts “obscene” or “offensive” speech and content in various ways. The problem is that social media firms seem to have used the CDA as a pretext for censoring content more generally. It’s also possible they felt as if immunity from liability made them legally impervious to objections of any sort, including aggressive political censorship and user bans on behalf of government.

The social value of granting immunity depends on the context. There are two different kinds of immunity under Section 230: subsection (c)(1) grants immunity to so-called common carriers (e.g. telephone companies) for the content of private messages or calls on their networks; subsection (c)(2) grants immunity to social media companies for content posted on their platforms as long as those companies engage in content moderation consistent with the provisions of the CDA.

Common carrier immunity is comparatively noncontroversial, but with respect to 230(c)(2), I go back to the question: would I want my worst enemy to have the power to grant this kind of immunity? Not if it meant the power to forgive political manipulation of social media content with the heavy involvement of one political party! The right to ban users is completely unlike the “must serve” legal treatment of “public accommodations” provided by most private businesses. And immunity is inconsistent with other policies. For example, if social media firms act systematically to amplify some viewpoints and suppress others, it suggests that they are behaving more like publishers, who are liable for material they publish, whether produced on their own or by third-party contributors.

Still, social media firms are private companies and their user agreements generally allow them to take down content for any reason. And if content moderation decisions are colored by input from one side of the political aisle, that is within the rights of a private firm (unless its actions are held to be illegal in-kind contributions to a political campaign). Likewise, it is every consumer’s right not to join such a platform, and today there are a number of alternatives to Twitter and Facebook.

Again, political censorship exercised privately is not the worst of it. There are indications that government actors have been complicit in censorship decisions made by social media. That would be a clear violation of the First Amendment for which immunity should be out of the question. I’d probably cut a platform considerable slack, however, if it acted under threat of retaliation by government actors, assuming that could be proven.

Volokh’s Quid Pro Quo

Rather than simply stripping away Section 230 protection for social media firms, another solution has been suggested by Eugene Volokh in “Common Carrier Status as Quid Pro Quo for § 230(c)(1) Immunity”. He proposes the following choice for these companies:

(1) Be common carriers like phone companies, immune from liability but also required to host all viewpoints, or

(2) be distributors like bookstores, free to pick and choose what to host but subject to liability (at least on a notice-and-takedown basis).

Option 2 is the very solution discussed in the last section (revoke immunity). Option 1, however, would impinge on a private company’s right to moderate content in exchange for continued immunity. Said differently, the quid pro quo offers continued rents created by immunity in exchange for status as a public utility of sorts, along with limits on the private right to moderate content. Common carriers often face other regulatory rules that bear on pricing and profits, but since basic service on social media is usually free, this is probably not at issue for the time being.

Does Volokh’s quid pro quo pass the Munger Test? Well, at least it’s a choice! For social media firms to host all viewpoints isn’t nearly as draconian as the universal service obligation imposed on local phone companies and other utilities, because the marginal cost of hosting an extra social media user is negligible.

Would I give my worst enemy the power to impose this choice? The CDA would still obligate social media firms selecting Option 1 to censor obscene or offensive content. Option 2 carries greater legal risks to firms, who might respond by exercising more aggressive content moderation. The coexistence of common carriers and more content-selective hosts might create competitive pressures for restrained content moderation (within the limits of the CDA) and a better balance for users. Therefore, Volokh’s quid pro quo option seems reasonable. The only downside is whether government might interfere with social media common carriers’ future profitability or plans to price user services. Then again, if a firm could reverse its choice at some point, that might address the concern. The CDA itself might not have passed the “Worst Enemy” Munger Test, but at least within the context of established law, I think Volokh’s quid pro quo probably does.

We’ll Know More Soon

More will be revealed as new “episodes” of the Twitter Files are released. We may well hear direct evidence of government involvement in censorship decisions. If so, it will be interesting to see the fallout in terms of legal actions against government censorship, and whether support coalesces around changes in the social media regulatory environment.

Government Action and the “Your Worst Enemy” Test



A couple of weeks back I posted an admittedly partial list of the disadvantages, dysfunctions, and dangers of the Big Government Mess seemingly wished upon us by so many otherwise reasonable people. A wise addition to that line of thinking is the so-called Munger Test articulated by Michael Munger of Duke University. Here, he applies the test to government involvement in social media content regulation:

If someone says “The STATE should do X” (in this case, decide what is true and what can be published in a privately-owned space), they need to make a substitution.

Instead of “The STATE” substitute “Donald Trump,” and see if you still believe it. (Or “Nancy Pelosi”, if you want).

If approached honestly, Munger’s test is sure to make a partisan think twice about having government “do something”, or do anything! In another tweet, Munger elaborates on the case of Twitter, which is highly topical at the moment:

In fact, the reporters and media moguls who are calling for the state to hammer Twitter, and censor all those other ‘liars’, naively believe that they have a 1000 Year Reich.

You don’t. You can only give the State powers that you favor giving to your worst enemy. Deal with it.

The second sentence in that last paragraph is an even more concise statement of the general principle behind the Munger Test, which we might dub the “Worst Enemy Test” with no disrespect to Munger. He proposed the test (immodestly named, he admits) in his 2014 article, “Unicorn Governance”, in which he offered a few other examples of its application. The article is subtitled:

Ever argued public policy with people whose State is in fantasyland?

The answer for me is yes, almost every time I talk to anyone about public policy! And as Munger says, that’s because:

Everybody imagines that ‘The STATE’ is smart people who agree with them. Once MY team controls the state, order will be restored to the Force.

So go ahead! Munger-test all your friends’ favorite policy positions the next time you talk!

But what about the case of “regulating” Twitter or somehow interfering with its approach to content moderation? More on that in my next post.

The Dubious 1917 Redemption of Karl Marx



Karl Marx has long been celebrated by the Left as a great intellectual, but the truth is that his legacy was destined to be of little significance until his writings were lauded, decades later, by the Bolsheviks during their savage October 1917 revolution in Russia. Vladimir Lenin and his murderous cadre promoted Marx and brought his ideas into prominence as political theory. That’s the conclusion of a fascinating article by Phil Magness and Michael Makovi (M&M) appearing in the Journal of Political Economy. The title: “The Mainstreaming of Marx: Measuring the Effect of the Russian Revolution on Karl Marx’s Influence”.

The idea that the early Soviet state and other brutal regimes in its mold were the main progenitors of Marxism is horrifying to its adherents today. That’s the embarrassing historical reality, however. It’s not really clear that Marx himself would have endorsed those regimes, though I hesitate to cut him too much slack.

A lengthy summary of the M&M paper is given by the authors in “Das Karl Marx Problem”. The “problem”, as M&M describe it, is in reconciling 1) the nearly complete and well-justified rejection of Marx’s economic theories during his life and in the 34 years after his death, with 2) the esteem in which he’s held today by so many so-called intellectuals. A key piece of the puzzle, noted by the authors, is that praise for Marx comes mainly from outside the economics profession. The vast majority of economists today recognize that Marx’s labor theory of value is incoherent as an explanation of the value created in production and exchange.

The theoretical rigors might be lost on many outside the profession, but a moment’s reflection should be adequate for almost anyone to realize that value is contributed by both labor and non-labor inputs to production. Of course, it might have dawned on communists over the years that mass graves can be dug more “efficiently” by combining labor with physical capital. On the other hand, you can bet they never paid market prices for any of the inputs to that grisly enterprise.

First, Marx never thought in terms of decisions made at the margin, the hallmark of the rational economic actor. That shortcoming in his framework led to mistaken conclusions. Second, and this should be obvious, prices of goods must incorporate (and reward) the value contributed by all inputs to production. That value ultimately depends on the judgment of buyers, but Marx’s theory left him unable to square the circle on all this. And not for lack of trying! It was a failed exercise, and M&M provide several pieces of testimony to that effect. Here’s one example:

By the time Lenin came along in 1917, Marx’s economic theories were already considered outdated and impractical. No less a source than John Maynard Keynes would deem Marx’s Capital ‘an obsolete economic textbook . . . without interest or application for the modern world’ in a 1925 essay.

Marxism, with its notion of a “workers’ paradise”, gets credit from intellectuals as a highly utopian form of socialism. In reality, its implementation usually takes the form of communism. The claim that Marxism is “scientific” socialism (despite the faulty science underlying Marx’s theories) is even more dangerous, because it offers a further rationale for authoritarian rule. A realistic transition to any sort of Marxist state necessarily involves massive expropriations of property and liberty. Violent resistance should be expected, but watch the carnage when the revolutionaries gain the upper hand.

What M&M demonstrate empirically is how lightly Marx was cited or mentioned in printed material up until 1917, both in English and German. Using Google’s Ngram tool, they follow a group of thinkers whose Ngram patterns were similar to Marx’s up to 1917. They use those records to construct an expected trajectory for Marx from 1917 forward and find an aberrant jump for Marx at that time, again in both English and German material. But Ngram excludes newspaper mentions, so they also construct a separate newspaper database, and their findings are the same: newspaper references to Marx spiked after 1917. There was nothing much different when the sample was confined to socialist writers, though M&M acknowledge that there were a couple of times prior to 1917 during which short-lived jumps in Marx citations occurred among socialists.
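The counterfactual exercise described above can be sketched numerically. The series below are invented for illustration, not the actual Ngram frequencies M&M used; the idea is simply to scale the average path of comparison thinkers to Marx’s pre-1917 level and measure his deviation from that expected path after 1917:

```python
# Sketch of the expected-trajectory comparison, using made-up yearly
# mention frequencies (per million words) -- NOT the actual Ngram data.

def expected_trajectory(controls, marx_pre, cutoff_idx):
    """Scale the average path of comparison thinkers to Marx's
    pre-cutoff level to form his expected post-cutoff path."""
    avg = [sum(vals) / len(vals) for vals in zip(*controls)]
    # Ratio of Marx's pre-1917 mean to the controls' pre-1917 mean
    scale = (sum(marx_pre) / cutoff_idx) / (sum(avg[:cutoff_idx]) / cutoff_idx)
    return [scale * v for v in avg]

# Hypothetical series, 1910-1921 (index 7 = 1917)
controls = [
    [1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 1.2, 1.1, 1.2, 1.1, 1.2],
    [0.8, 0.9, 0.8, 0.9, 1.0, 0.9, 0.9, 1.0, 0.9, 1.0, 0.9, 1.0],
]
marx = [0.9, 1.0, 0.9, 1.0, 1.1, 1.0, 1.0, 2.5, 3.0, 3.4, 3.6, 4.0]

cutoff = 7  # 1917
expected = expected_trajectory(controls, marx[:cutoff], cutoff)
jump = [m - e for m, e in zip(marx[cutoff:], expected[cutoff:])]
print(all(j > 1.0 for j in jump))  # → True: Marx far exceeds his expected path
```

A jump of this kind, absent in the control thinkers, is the aberration M&M attribute to the 1917 revolution.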

To be clear, however, Marx wasn’t unknown to economists during the 3+ decades following his death. His name was mentioned here and there in the writings of prominent economists of the day — just not in especially glowing terms.

“… absent the events of 1917, Marx would have continued to be an object of niche scholarly inquiry and radical labor activism. He likely would have continued to compete for attention in those same radical circles as the main thinker of one of its many factions. After the Soviet boost to Marx, he effectively crowded the other claimants out of [the] socialist-world.”

Magness has acknowledged that he and Makovi aren’t the first to have noticed the boost given to Marx by the Bolsheviks. Here, Magness quotes Eric Hobsbawm’s take on the subject:

This situation changed after the October Revolution – at all events, in the Communist Parties. … Following Lenin, all leaders were now supposed to be important theorists, since all political decisions were justified on grounds of Marxist analysis – or, more probably, by reference to textual authority of the ‘classics’: Marx, Engels, Lenin, and, in due course, Stalin. The publication and popular distribution of Marx’s and Engels’s texts therefore become far more central to the movement than they had been in the days of the Second International [1889 – 1914].

Much to the chagrin of our latter day Marxists and socialists, it was the advent of the monstrous Soviet regime that led to Marx’s “mainstream” ascendency. Other brutal regimes arising later reinforced Marx’s stature. The tyrants listed by M&M include Joseph Stalin, Mao Zedong, Fidel Castro, and Pol Pot, and they might have added several short-lived authoritarian regimes in Africa as well. Today’s Marxists continue to assure us that those cases are not representative of a Marxist state.

Perhaps it’s fair to say that Marx’s name was co-opted by thugs, but I posit something a little more consistent with the facts: it’s difficult to expropriate the “means of production” without a fight. Success requires massive takings of liberty and property. This is facilitated by means of a “class struggle” between social or economic strata, or it might reflect divisions based on other differences. Either way, groups are pitted against one another. As a consequence, we witness an “othering” of opponents on one basis or another. Marxists, no matter how “pure of heart”, find it impossible to take power without demanding ideological purity. Invariably, this requires “reeducation”, cleansing, and ultimately extermination of opponents.

Karl Marx had unsound ideas about how economic value manifests and where it should flow, and he used those ideas to describe what he thought was a more just form of social organization. The shortcomings of his theory were recognized within the economics profession of the day, and his writings might have lived on in relative obscurity were it not for the Bolsheviks’ intellectual pretensions. Surely obscurity would have been better than a legacy shaped by butchers.

It’s a Big Government Mess



I’m really grateful to have the midterm elections behind us. Well, except for the runoff Senate race in Georgia, the cockeyed ranked-choice Senate race in Alaska, and a few stray House races that remain unsettled after almost two weeks. I’m tired of campaign ads, including the junk mail and pestering “unknown” callers — undoubtedly campaign reps or polling organizations.

It’s astonishing how much money is donated and spent by political campaigns. This year’s elections saw total campaign spending (all levels) hit $16.7 billion, a record for a mid-term. The recent growth in campaign spending for federal offices has been dramatic, as the chart below shows:

Do you think spending of a few hundred million dollars on a Senate campaign is crazy? Me too, though I don’t advocate for legal limits on campaign spending because, for better or worse, that issue is entangled with free speech rights. Campaigns are zero-sum events, but presumably a big donor thinks a success carries some asymmetric reward…. A success rate of better than 50% across several campaigns probably buys much more…. And donors can throw money at sure political bets that are probably worth a great deal…. Many donors spread their largess across both parties, perhaps as a form of “protection”. But it all seems so distasteful, and it’s surely a source of waste in the aggregate.

My reservations about profligate campaign spending include the fact that it is a symptom of big government. Donors obviously believe they are buying something that government, in one way or another, makes possible for them. The greater the scope of government activity, the more numerous are opportunities for rent seeking — private gains through manipulation of public actors. This is the playground of fascists!

There are people who believe that placing things in the hands of government is an obvious solution to the excesses of “greed”. However, politicians and government employees are every bit as self-interested and “greedy” as actors in the private sector. And they can do much more damage: government actors legally exercise coercive power, they are not subject in any way to external market discipline, and they often lack any form of accountability. They are not compelled to respect consumer sovereignty, and they make correspondingly little contribution to the nation’s productivity and welfare.

Actors in the private sector, on the other hand, face strong incentives to engage in optimizing behavior: they must please customers and strive to improve performance to stay ahead of their competition. That is, unless they are seduced by what power they might have to seek rents through public sector activism.

A people who grant a wide scope of government will always suffer consequences they should expect, but they often proceed in abject ignorance. So here is my rant, a brief rundown on some of the things naive statists should expect to get for their votes. Of course, this is a short list — it could be much longer:

  • Opportunities for graft as bureaucrats administer the spending of others’ money and manipulate economic activity via central planning.
  • A ballooning and increasingly complex tax code seemingly designed to benefit attorneys, the accounting profession, and certainly some taxpayers, but at the expense of most taxpayers.
  • Subsidies granted to producers and technologies that are often either unnecessary or uneconomic (and see here), leading to malinvestment of capital. This is often a consequence of the rent seeking and cronyism that goes hand-in-hand with government dominance and ham-handed central planning.
  • Redistribution of existing wealth, a zero- or even negative-sum activity from an economic perspective, is prioritized over growth.
  • Redistribution beyond a reasonable safety net for those unable to work and without resources is a prescription for unnecessary dependency, and it very often constitutes a surreptitious political buy-off.
  • Budgetary language under which “budget cuts” mean reductions in the growth of spending.
  • Large categories of spending, known in the U.S. as non-discretionary entitlements, that are essentially off limits to lawmakers within the normal budget appropriations process.
  • “Fiscal illusion” is exploited by politicians and statists to hide the cost of government expansion.
  • The strained refrain that too many private activities impose external costs is stretched to the point at which government authorities externalize internalities via coercive taxes, regulation, or legal actions.
  • Massive growth in regulation (see chart at top) extending to puddles classified as wetlands (EPA), the “disparate impacts” of private hiring practices (EEOC), carbon footprints of your company and its suppliers (EPA, Fed, SEC), outrageous energy efficiency standards (DOE), and a multiplicity of other intrusions.
  • Growth in the costs of regulatory compliance.
  • A nearly complete lack of responsiveness to market prices, leading to misallocation of resources — waste.
  • Lack of value metrics for government activities to gauge the public’s “willingness to pay”.
  • Monopoly encouraged by regulatory capture and legal / compliance cost barriers to competition. Again, cronyism.
  • Monopoly granted by other mechanisms such as import restrictions and licensure requirements. Again, cronyism.
  • Ruination of key industries as government control takes its grip.
  • Shortages induced by price controls.
  • Inflation and diminished buying power stoked by monetized deficits, which is a long tradition in financing excessive government.
  • Malinvestment of private capital created by monetary excess and surplus liquidity.
  • That malinvestment of private capital creates macroeconomic instability. The poorly deployed capital must be written off and/or reallocated to productive uses at great cost.
  • Funding for bizarre activities folded into larger budget appropriations, like holograms of dead comedians, hamster fighting experiments, and an IHOP for a DC neighborhood.
  • A gigantic public sector workforce in whose interest is a large and growing government sector, and who believe that government shutdowns are the end of the world.
  • Attempts to achieve central control of information available to the public, and the quashing of dissent, even in a world with advanced private information technology. See the story of Hunter Biden’s laptop. This extends to control of scientific narratives to ensure support for certain government programs.
  • Central funding brings central pursestrings and control. This phenomenon is evident today in local governance, education, and science. This is another way in which big government fosters dependency.
  • Mission creep as increasing areas of economic activity are redefined as “public” in nature.
  • Law and tax enforcement, security, and investigative agencies pressed into service to defend established government interests and to compromise opposition.

I’ve barely scratched the surface! Many of the items above occur under big government precisely because various factions of the public demand responses to perceived problems or “injustices”, despite the broader harms interventions may bring. The press is partly responsible for this tendency, being largely ignorant and lacking the patience for private solutions and market processes. And obviously, those kinds of demands are a reason government gets big to begin with. In the past, I’ve referred to these knee-jerk demands as “do somethingism”, and politicians are usually too eager to play along. The squeaky wheel gets the oil.

I mentioned cronyism several times in the list. The very existence of broad public administration and spending invites the clamoring of obsequious cronies. They come forward to offer their services, do large and small “favors”, make policy suggestions, contribute to lawmakers, and offer handsomely remunerative post-government employment opportunities. Of course, certain private parties also recognize the potential opportunities for market dominance when regulators come calling. We have here a perversion of the healthy economic incentives normally faced by private actors, and these are dynamics that give rise to a fascist state.

It’s true, of course, that there are areas in which government action is justified, if not necessary. These include pure public goods such as national defense, as well as public safety, law enforcement, and a legal system for prosecuting crimes and adjudicating disputes. So a certain level of state capacity is a good thing. Nevertheless, as the list suggests, even these traditional roles for government are ripe for unhealthy mission creep and ultimately abuse by cronies.

The overriding issue motivating my voting patterns is the belief in limited government. Both major political parties in the U.S. violate this criterion, or at least carve out exceptions when it suits them. I usually identify the Democrat Party with statism, and there is no question that Democrats rely far too heavily on government solutions and intervention in private markets. The GOP, on the other hand, often fails to recognize the statism inherent in its own public boondoggles, cronyism, and legislated morality. In the end, the best guide for voting would be a political candidate’s adherence to the constitutional principles of limited government and individual liberty, and whether they seem to understand those principles. Unfortunately, that is often too difficult to discern.

Sweden’s Pandemic Policy: Arguably Best Practice



When Covid-19 began its awful worldwide spread in early 2020, the Swedes made an early decision that ultimately proved to be as protective of human life as anything chosen from the policy menu elsewhere. Sweden decided to focus on approaches for which there was evidence of efficacy in containing respiratory pandemics, not mere assertions by public health authorities (or anyone else) that stringent non-pharmaceutical interventions (NPIs) were necessary or superior.

The Swedish Rationale

The following appeared in an article in Stuff in late April, 2020:

Professor Johan Giesecke, who first recruited [Sweden’s State epidemiologist Anders] Tegnell during his own time as state epidemiologist, used a rare interview last week to argue that the Swedish people would respond better to more sensible measures. He blasted the sort of lockdowns imposed in Britain and Australia and warned a second wave would be inevitable once the measures are eased. ‘… when you start looking around at the measures being taken by different countries, you find very few of them have a shred of evidence-base,’ he said.

Giesecke, who has served as the first Chief Scientist of the European Centre for Disease Control and has been advising the Swedish Government during the pandemic, told the UnHerd website there was “almost no science” behind border closures and school closures and social distancing and said he looked forward to reviewing the course of the disease in a year’s time.

Giesecke was of the opinion that there would ultimately be little difference in Covid mortality across countries with different pandemic policies. Therefore, the least disruptive approach was to be preferred. That meant allowing people to go about their business, disseminating information to the public regarding symptoms and hygiene, and attempting to protect the most vulnerable segments of the population. Giesecke said:

I don’t think you can stop it. It’s spreading. It will roll over Europe no matter what you do.

He was right. Sweden had a large number of early Covid deaths primarily due to its large elderly population as well as its difficulty in crafting effective health messages for foreign-speaking immigrants residing in crowded enclaves. Nevertheless, two years later, Sweden has posted extremely good results in terms of excess deaths during the pandemic.

Excess Deaths

Excess deaths, or deaths relative to projections based on historical averages, are a better metric than Covid deaths (per million) for cross-country or jurisdictional comparisons. Among other reasons, the latter are subject to significant variations in methods of determining cause of death. Moreover, there was a huge disparity between excess deaths and Covid deaths during the pandemic, and the gap is still growing:

Excess deaths varied widely across countries, as illustrated by the left-hand side of the following chart:

Interestingly, most of the lowest excess death percentages were in Nordic countries, especially Sweden and Norway. That might be surprising given high Nordic latitudes, which limit sun exposure and may contribute to low vitamin D levels. Norway enacted more stringent public policies during the pandemic than Sweden. Globally, however, lockdown measures showed no systematic advantage in terms of excess deaths. Notably, the U.S. did quite poorly in terms of excess deaths, at 8X the Swedish rate.

Covid Deaths

The right-hand side of the chart above shows that Sweden experienced a significant number of Covid deaths per million residents. The figure still compares reasonably well internationally, despite the country’s fairly advanced age demographics. As in other countries, the bulk of Sweden’s Covid deaths occurred among the elderly, especially in care settings. Note that U.S. Covid deaths per million were more than 50% higher than in Sweden.

NPIs Are Often Deadly

Perhaps a more important reason to emphasize excess deaths over Covid deaths is that public policy itself had disastrous consequences in many countries. In particular, strict NPIs like lockdowns, including school and business closures, can undermine public health in significant ways. That includes the inevitably poor consequences of deferred health care, the more rapid spread of Covid within home environments, the physical and psychological stress from loss of livelihood, and the toll of isolation, including increased use of alcohol and drugs, less exercise, and binge eating. Isolation is particularly hard on the elderly and led to an increase in “deaths of despair” during the pandemic. These were the kinds of maladjustments caused by lockdowns that led to greater excess deaths. Sweden avoided much of that by eschewing stringent NPIs, and Iceland is sometimes cited as a similar case.

Oxford Stringency Index

I should note here, and this is a digression, that the most commonly used summary measure of policy “stringency” is not especially trustworthy. That measure is an index produced by Oxford University that is available on the Our World In Data web site. Joakim Book documented troubling issues with this index in late 2020, after changes in the index’s weightings dramatically altered its levels for Nordic countries. As Book said at that time:

Until sometime recently, Sweden, which most media coverage couldn’t get enough of reporting, was the least stringent of all the Nordics. Life was freer, pandemic restrictions were less invasive, and policy responses less strong; this aligned with Nordic people’s experience on the ground.

Again, Sweden relied on voluntary action to limit the spread of the virus, including encouragement of hygiene, social distancing, and avoiding public transportation when possible. Book was careful to note that “Sweden did not ‘do nothing’”, but its policies were less stringent than its Nordic neighbors’ in several ways. While Sweden had the same restrictions on arrivals from outside the European Economic Area as the rest of the EU, it did not impose quarantines, testing requirements, or other restrictions on travelers or on internal movements. Sweden’s school closures were short-lived, and its masking policies were liberal. The late-2020 changes in the Oxford Stringency Index, Book said, simply did not “pass the most rudimentary sniff test”.

Economic Stability

Sweden’s economy performed relatively well during the pandemic. The growth path of real GDP was smoother than most countries that succumbed to the excessive precautions of lockdowns. However, Norway’s economy appears to have been the most stable of those shown on the chart, at least in terms of real output, though it did suffer a spike in unemployment.

The Bottom Line

The big lesson is that Sweden’s “light touch” during the pandemic proved to be at least as effective, if not more so, than comparatively stringent policies imposed elsewhere. Covid deaths were sure to occur, but widespread non-Covid excess deaths were unanticipated by many countries practicing stringent intervention. That lack of foresight is best understood as a consequence of blind panic among public health “experts” and other policymakers, who too often are rewarded for misguided demonstrations that they have “done something”. Those actions failed to stop the spread in any systematic sense, but they managed to do great damage to other aspects of public health. Furthermore, they undermined economic well being and the cause of freedom. Johan Giesecke was right to be skeptical of those claiming they could contain the virus through NPIs, though he never anticipated the full extent to which aggressive interventions would prove deadly.

Biden’s Rx Price Controls: Cheap Politics Over Cures


You can expect dysfunction when government intervenes in markets, and health care markets are no exception. The result is typically over-regulation, increased industry concentration, lower-quality care, longer waits, and higher costs to patients and taxpayers. The pharmaceutical industry is one of several tempting punching bags for ambitious politicians eager to “do something” in the health care arena. These firms, however, have produced many wonderful advances over the years, incurring huge research, development, and regulatory costs in the process. Reasonable attempts to recoup those costs often means conspicuously high prices, which puts a target on their backs for the likes of those willing to characterize return of capital and profit as ill-gotten.

Biden Flunks Econ … Again

Lately, under political pressure brought on by escalating inflation, Joe Biden has been talking up efforts to control the prices of prescription drugs for Medicare beneficiaries. Anyone with a modicum of knowledge about markets should understand that price controls are a fool’s errand. Price controls don’t make good policy unless the goal is to create shortages.

The preposterously-named Inflation Reduction Act is an example of this sad political dynamic. Reducing inflation is something the Act won’t do! Here is Wikipedia’s summary of the prescription drug provisions, which is probably adequate for now:

Prescription drug price reform to lower prices, including Medicare negotiation of drug prices for certain drugs (starting at 10 by 2026, more than 20 by 2029) and rebates from drug makers who price gouge…

The law contains provisions that cap insulin costs at $35/month and will cap out-of-pocket drug costs at $2,000 for people on Medicare, among other provisions.

Unpacking the Blather

“Price gouging”, of course, is a well-worn term of art among anti-market propagandists. In this case its meaning appears to be any form of non-compliance, including those for which fees and rebates are anticipated.

The insulin provision is responsive to a long-standing and misleading allegation that insulin is unavailable at reasonable prices. In fact, insulin is already available at zero cost as durable medical equipment under Medicare Part B for diabetics who use insulin pumps. Some types and brands of insulin are available at zero cost for uninsured individuals. A simple internet search on insulin under Medicare yields several sources of cheap insulin. GoodRx also offers brands at certain pharmacies at reasonable costs.

As for the cap on out-of-pocket spending under Part D, limiting the patient’s payment responsibility is a bad way to bring price discipline to the market. Excessive third-party shares of medical payments have long been implicated in escalating health care costs. That reality has eluded advocates of government health care, or perhaps they simply prefer escalating costs in the form of health care tax burdens.

Negotiated Theft

The Act’s adoption of the term “negotiation” is a huge abuse of that word’s meaning. David R. Henderson and Charles Hooper offer the following clarification about what will really happen when the government sits down with the pharmaceutical companies to discuss prices:

Where CMS is concerned, ‘negotiations’ is a ‘Godfather’-esque euphemism. If a drug company doesn’t accept the CMS price, it will be taxed up to 95% on its Medicare sales revenue for that drug. This penalty is so severe, Eli Lilly CEO David Ricks reports that his company treats the prospect of negotiations as a potential loss of patent protection for some products.

The first list of drugs for which prices will be “negotiated” by CMS won’t take effect until 2026. However, in the meantime, drug companies will be prohibited from increasing the price of any drug sold to Medicare beneficiaries by more than the rate of inflation. Price control is the correct name for these policies.

Death and Cost Control

Henderson and Hooper chose a title for their article that is difficult for the White House and legislators to comprehend: “Expensive Prescription Drugs Are a Bargain”. The authors first note that 9 out of 10 prescription drugs sold in the U.S. are generics. That makes it easy to condemn the high price tags on a few newer drugs, but those drugs are invaluable to the patients whose lives they extend, and their numbers aren’t trivial.

Despite the protestations of certain advocates of price controls and the CBO’s guesswork on the matter, the price controls will stifle the development of new drugs and ultimately cause unnecessary suffering and lost life-years for patients. This reality is made all too clear by Joe Grogan in the Wall Street Journal in “The Inflation Reduction Act Is Already Killing Potential Cures” (probably gated). Grogan cites the cancellation of drugs under development or testing by three different companies: one for an eye disease, another for certain blood cancers, and one for gastric cancer. These cancellations won’t be the last.

Big Pharma Critiques

The pharmaceutical industry certainly has other grounds for criticism. Some of it has to do with government extensions of patent protection, which prolong guaranteed monopolies beyond points that may exceed what’s necessary to compensate for the high risk inherent in original investments in R&D. It can also be argued, however, that the FDA approval process increases drug development costs unreasonably, and it sometimes prevents or delays good drugs from coming to market. See here for some findings on the FDA’s excessive conservatism, limiting choice in dire cases for which patients are more than willing to risk complications. Pricing transparency has been another area of criticism. The refusal to release detailed data on the testing of Covid vaccines represents a serious breach of transparency, given what many consider to have been inadequate testing. Big pharma has also been condemned for the opioid crisis, but restrictions on opioid prescriptions were never a logical response to opioid abuse. (Also see here, including some good news from the Supreme Court on a more narrow definition of “over-prescribing”.)

Bad policy is often borne of short-term political objectives and a neglect of foreseeable long-term consequences. It’s also frequently driven by a failure to understand the fundamental role of profit incentives in driving innovation and productivity. This is a manifestation of the short-term focus afflicting many politicians and members of the public, which is magnified by the desire to demonize a sector of the economy that has brought undeniable benefits to the public over many years. The price controls in Biden’s Inflation Reduction Act are a sure way to short-circuit those benefits. Those interventions effectively destroy other incentives for innovation created by legislation over several decades, as Joe Grogan describes in his piece. If you dislike pharma pricing, look to reform of patenting and the FDA approval process. Those are far better approaches.


Note: The image above was created by “Alexa” for this Washington Times piece from 2019.

Wind and Solar Power: Brittle, Inefficient, and Destructive


Just how renewable is “renewable” energy, or more specifically solar and wind power? Intermittent though they are, the wind will always blow and the sun will shine (well, half a day with no clouds). So the possibility of harvesting energy from these sources is truly inexhaustible. Obviously, it also takes man-made hardware to extract electric power from sunshine and wind — physical capital — and it is quite costly in several respects, though taxpayer subsidies might make it appear cheaper to investors and (ultimately) users. Man-made hardware is damaged, wears out, malfunctions, or simply fails for all sorts of reasons, and it must be replaced from time to time. Furthermore, man-made hardware such as solar panels, wind turbines, and the expansions to the electric grid needed to bring the power to users requires vast resources and not a little in the way of fossil fuels. The word “renewable” is therefore something of a misnomer when it comes to solar and wind facilities.

Solar Plant

B. F. Randall (@Mining_Atoms) has a Twitter thread on this topic, or actually several threads (see below). The first thing he notes is that solar panels require polysilicon, which is not recyclable. Disposal presents severe hazards of its own, and to replace old solar panels, polysilicon must be produced. For that, Randall says you need high-purity silica from quartzite rock, high-purity coking coal, diesel fuel, and large flows of dispatchable (not intermittent) electric power. To get quartzite, you need carbide drilling tools, which are not renewable. You also need to blast rock using ammonium nitrate fuel oil derived from fossil fuels. Then the rock must be crushed and often milled into fine sand, which requires continuous power. The high temperatures required to create silicon are achieved with coking coal, which is also used in iron and steel making, but coking coal is non-renewable. The whole process requires massive amounts of electricity generated with fossil fuels. Randall calls polysilicon production “an electricity beast”.


The resulting carbon emissions are, in reality, unlikely to be offset by any quantity of carbon credits these firms might purchase, which allow them to claim a “zero footprint”. Blake Lovewall describes the sham in play here:

The biggest and most common Carbon offset schemes are simply forests. Most of the offerings in Carbon marketplaces are forests, particularly in East Asian, African and South American nations. …

The only value being packaged and sold on these marketplaces is not cutting down the trees. Therefore, by not cutting down a forest, the company is maintaining a ‘Carbon sink’ …. One is paying the landowner for doing nothing. This logic has an acronym, and it is slapped all over these heralded offset projects: REDD. That is a UN scheme called ‘Reduce Emissions from Deforestation and Forest Degradation’. I would re-name it to, ‘Sell off indigenous forests to global investors’.

Lovewall goes on to explain that these carbon offset investments do not ensure that forests remain pristine by any stretch of the imagination. For one thing, the requirements for managing these “preserves” are often subject to manipulation by investors working with government; as such, the credits are often a vehicle for graft. In Indonesia, for example, carbon-credited forests have been converted to palm oil plantations without any loss of value to the credits! Lovewall also cites a story about carbon offset investments in Brazil, where the credits provided capital for a massive dam in the middle of the rainforest. This had severe environmental and social consequences for indigenous peoples. It’s also worth noting that planting trees, wherever that might occur under carbon credits, takes many years to become a real carbon sink.

While I can’t endorse all of Lovewall’s points of view, he makes a strong case that carbon credits are a huge fraud. They do little to offset carbon generated by entities that purchase them as offsets. Again, the credits are very popular with the manufacturers and miners who participate in the fabrication of physical capital for renewable energy installations who wish to “greenwash” their activities.

Wind Plant

Randall discusses the non-renewability of wind turbines in a separate thread. Turbine blades, he writes, are made from epoxy resins, balsa wood, and thermoplastics. They wear out, along with gears and other internal parts, and must be replaced. Land disposal is safe and cheap, but recycling is costly and requires even greater energy input than the use of virgin feedstocks. Randall’s thread on turbines raised some hackles among wind energy defenders and even a few detractors, and Randall might have overstated his case in one instance, but the main thrust of his argument is irrefutable: it’s very costly to recycle these components into other usable products. Entrepreneurs are still trying to work out processes for doing so. It’s not clear that recycling the blades into other products is more efficient than sending them to landfills, as the recycling processes are resource intensive.

But even then, the turbines must be replaced. Recycling the old blades into crates and flooring and what have you, and producing new wind turbines, requires lots of power. And as Randall says, replacement turbines require huge ongoing quantities of zinc, copper, cement, and fossil fuel feedstocks.

The Non-Renewability of Plant

It shouldn’t be too surprising that renewable power machinery is not “renewable” in any sense, despite the best efforts of advocates to convince us of their ecological neutrality. Furthermore, the idea that the production of this machinery will be “zero carbon” any time in the foreseeable future is absurd. In that respect, this is about like the ridiculous claim that electric vehicles (EVs) are “zero emission”, or the fallacy that we can achieve a zero carbon world based on renewable power.

It’s time the public came to grips with the reality that our heavy investments in renewables are not “renewable” in the ecological sense. Those investments, and reinvestments, merely buy us what Randall calls “garbage energy”, by which he means that it cannot be relied upon. Burning garbage to create steam is actually a more reliable power source.

Highly Variable With Low Utilization

Randall links to information provided by Martian Data (@MartianManiac1) on Europe’s wind energy generation as of September 22, 2022 (see the tweet for Martian Data’s sources):

Hourly wind generation in Europe for past 6 months:
Max: 122GW
Min: 10.2GW
Mean: 41.0GW
Installed capacity: ~236GW

That’s a whopping 17.4% utilization factor! That’s pathetic, and it means the effective cost per unit of energy is nearly six times what nameplate capacity would suggest. Take a look at this chart comparing the levels and variations in European power demand, nuclear generation, and wind generation over the six months ending September 22nd (if you have trouble zooming in here, try going to the thread):
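The capacity factor is just mean generation divided by installed capacity, and its reciprocal gives the effective cost multiple. Using the figures quoted above:

```python
# Capacity factor from the European wind figures quoted above.
mean_output_gw = 41.0   # average hourly generation over six months
installed_gw = 236.0    # approximate installed nameplate capacity

capacity_factor = mean_output_gw / installed_gw
cost_multiple = 1.0 / capacity_factor  # effective cost vs. nameplate

print(f"{capacity_factor:.1%}")  # 17.4%
print(f"{cost_multiple:.1f}x")   # 5.8x
```

The same arithmetic applies to any intermittent source: a solar plant averaging 15% of nameplate carries an effective cost multiple of roughly 6.7x.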

The various colors represent different countries. Here’s a larger view of the wind component:

A stable power grid cannot be built upon this kind of intermittency. Here is another comparison that includes solar power. This chart is daily covering 2021 through about May 26, 2022.

As for solar capacity utilization, it too is unimpressive. Here is Martian Data’s note on this point, followed by a chart of solar generation over the course of a few days in June:

so ~15% solar capacity is whole year average. ~5% winter ~20% summer. And solar is brief in summer too…, it misses both morning and evening peaks in demand.

Like wind, the intermittency of solar power makes it an impractical substitute for traditional power sources. Check out Martian Data’s Twitter feed for updates and charts from other parts of the world.

Nuclear Efficiency

Nuclear power generation is an excellent source of baseload power. It is dispatchable and zero carbon except at plant construction. It also has an excellent safety record, and newer, modular reactor technologies are safer yet. It is cheaper in terms of generating capacity and it is more flexible than renewables. In fact, in terms of the resource costs of nuclear power vs. renewables over plant cycles, it’s not even close. Here’s a chart recently posted by Randall showing input quantities per megawatt hour produced over the expected life of each kind of power facility (different power sources are labeled at bottom, where PV = photovoltaic (solar)):

In fairness, I’m not completely satisfied with these comparisons. They should be stated in terms of current dollar costs, which would neutralize differences in input densities and reflect relative scarcities. Nevertheless, the differences in the chart are stark. Nuclear produces cheap, reliable power.

The Real Dirt

Solar and wind power are low utilization power sources and they are intermittent. Heavy reliance on these sources creates an extremely brittle power grid. We should also be mindful of the vast environmental degradation caused by the mining of minerals needed to produce solar panels and wind turbines, including their inevitable replacements, not to mention the massive land use requirements of wind and solar power. Equally disturbing is the hazardous dumping of old solar panels from the “first world” now taking place in less developed countries. These so-called clean-energy sources are anything but clean or efficient.

Stealth Hiring Quotas Via AI


Hiring quotas are of questionable legal status, but for several years, some large companies have been adopting quota-like “targets” under the banner of Diversity, Equity and Inclusion (DEI) initiatives. Many of these so-called targets apply to the placement of minority candidates into “leadership positions”, and some targets may apply more broadly. Explicit quotas have long been viewed negatively by the public. Quotas have also been proscribed under most circumstances by the Supreme Court, and the EEOC’s Compliance Manual still includes rigid limits on when the setting of minority hiring “goals” is permissible.

Yet large employers seem to prefer the legal risks posed by aggressive DEI policies to the risk of lawsuits by minority interests, unrest among minority employees and “woke” activists, and “disparate impact” inquiries by the EEOC. Now, as Stewart Baker writes in a post over at the Volokh Conspiracy, employers have a new way of improving — or even eliminating — the tradeoff they face between these risks: “stealth quotas” delivered via artificial intelligence (AI) decisioning tools.

Skynet Smiles

A few years ago I discussed the extensive use of algorithms to guide a range of decisions in “Behold Our Algorithmic Overlords“. There, I wrote:

Imagine a world in which all the information you see is selected by algorithm. In addition, your success in the labor market is determined by algorithm. Your college admission and financial aid decisions are determined by algorithm. Credit applications are decisioned by algorithm. The prioritization you are assigned for various health care treatments is determined by algorithm. The list could go on and on, but many of these ‘use-cases’ are already happening to one extent or another.

That post dealt primarily with the use of algorithms by large tech companies to suppress information and censor certain viewpoints, a danger still of great concern. However, the use of AI to impose de facto quotas in hiring is a phenomenon that will unequivocally reduce the efficiency of the labor market. But exactly how does this mechanism work to the satisfaction of employers?

Machine Learning

As Baker explains, AI algorithms are “trained” to find optimal solutions to problems via machine learning techniques, such as neural networks, applied to large data sets. These techniques are not as straightforward as more traditional modeling approaches such as linear regression, which more readily lend themselves to intuitive interpretation of model results. Baker uses the example of lung x-rays showing varying degrees of abnormalities, which range from the appearance of obvious masses in the lungs to apparently clear lungs. Machine learning algorithms sometimes accurately predict the development of lung cancer in individuals based on clues that are completely non-obvious to expert evaluators. This, I believe, is a great application of the technology. It’s too bad that the intuition behind many such algorithmic decisions is often impossible to discern. And the application of AI decisioning to social problems is troubling, not least because it necessarily reduces the richness of individual qualities to a set of data points, and in many cases, defines individuals based on group membership.

When it comes to hiring decisions, an AI algorithm can be trained to select the “best” candidate for a position based on all encodable information available to the employer, but the selection might not align with a hiring manager’s expectations, and it might be impossible to explain the reasons for the choice to the manager. Still, giving the AI algorithm the benefit of the doubt, it would tend to make optimal candidate selections across reasonably large sets of similar, open positions.

Algorithmic Bias

A major issue with respect to these algorithms has been called “algorithmic bias”. Here, I limit the discussion to hiring decisions. Ironically, “bias” in this context is a rather slanted description, but what’s meant is that the algorithms tend to select fewer candidates from “protected classes” than their proportionate shares of the general population. This is more along the lines of so-called “disparate impact”, as opposed to “bias” in the statistical sense. Baker discusses the attacks this has provoked against algorithmic decision techniques. In fact, a privacy bill is pending before Congress containing provisions to address “AI bias” called the American Data Privacy and Protection Act (ADPPA). Baker is highly skeptical of claims regarding AI bias both because he believes they have little substance and because “bias” probably means that AIs sometimes make decisions that don’t please DEI activists. Baker elaborates on these developments:

“The ADPPA was embraced almost unanimously by Republicans as well as Democrats on the House energy and commerce committee; it has stalled a bit, but still stands the best chance of enactment of any privacy bill in a decade (its supporters hope to push it through in a lame-duck session). The second is part of the AI Bill of Rights released last week by the Biden White House.

What the hell are the Republicans thinking? Whether or not it becomes a matter of law, misplaced concern about AI bias can be addressed in a practical sense by introducing the “right” constraints to the algorithm, such as a set of aggregate targets for hiring across pools of minority and non-minority job candidates. Then, the algorithm still optimizes, but the constraints impinge on the selections. The results are still “optimal”, but in a more restricted sense.
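The mechanics of such a constraint can be sketched in a few lines. This is purely illustrative (the scores, group labels, and selection rule are all hypothetical; real hiring AIs are far more opaque): rank candidates by the model’s score, but force the final slate to meet an aggregate group target.

```python
# Hypothetical sketch: unconstrained vs. constrained candidate selection.
# Candidates are (name, group, model_score) tuples; the constraint forces
# at least `min_group_a` selections from group "A", whatever the scores.

def select(candidates, n_openings, min_group_a=0):
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    picked = []
    # First satisfy the aggregate target from the top group-A candidates,
    group_a = [c for c in ranked if c[1] == "A"]
    picked.extend(group_a[:min_group_a])
    # then fill the remaining openings by score from everyone left.
    rest = [c for c in ranked if c not in picked]
    picked.extend(rest[: n_openings - len(picked)])
    return sorted(picked, key=lambda c: c[2], reverse=True)

pool = [("P1", "B", 0.92), ("P2", "B", 0.90), ("P3", "A", 0.85),
        ("P4", "B", 0.88), ("P5", "A", 0.80)]

print(select(pool, 3))                 # pure score: P1, P2, P4
print(select(pool, 3, min_group_a=2))  # target binds: P1, P3, P5
```

The constrained result is still “optimal” in the restricted sense: given that two group-A candidates must be hired, these are the best available. But the displaced candidates P2 and P4 scored higher than those who replaced them, which is exactly the productivity cost discussed below.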

Stealth Quotas

As Baker says, these constraints on algorithmic tools would constitute a way of imposing quotas on hiring that employers won’t really have to explain to anyone. That’s because: 1) the decisioning rationale is so opaque that it can’t readily be explained; and 2) the decisions are perceived as “fair” in the aggregate due to the absence of disparate impacts. As to #1, however, the vendors who create hiring algorithms, and specific details regarding algorithm development, might well be subject to regulatory scrutiny. In the end, the chief concern of these regulators is the absence of disparate impacts, which is cinched by #2.

About a month ago I posted about the EEOC’s outrageous and illegal enforcement of disparate impact liability. Should I welcome AI interventions because they’ll probably limit the number of enforcement actions against employers by the EEOC? After all, there is great benefit in avoiding as much of the rigamarole of regulatory challenges as possible. Nonetheless, as a constraint on hiring, quotas necessarily reduce productivity. By adopting quotas, either explicitly or via AI, the employer foregoes the opportunity to select the best candidate from the full population for a certain share of open positions, and instead limits the pool to narrow demographics.

Demographics are dynamic, and therefore stealth quotas must be dynamic to continue to meet the demands of zero disparate impact. But what happens as an increasing share of the population is of mixed race? Do all mixed race individuals receive protected status indefinitely, gaining preferences via algorithm? Does one’s protected status depend solely upon self-identification of racial, ethnic, or gender identity?

For that matter, do Asians receive hiring preferences? Sometimes they are excluded from so-called protected status because, as a minority, they have been “too successful”. Then, for example, there are issues such as the classification of Hispanics of European origin, who are likely to help fill quotas that are really intended for Hispanics of non-European descent.

Because self-identity has become so critical, quotas present massive opportunities for fraud. Furthermore, quotas often put minority candidates into positions at which they are less likely to be successful, with damaging long-term consequences to both the employer and the minority candidate. And of course there should remain deep concern about the way quotas violate the constitutional guarantee of equal protection to many job applicants.

The acceptance of AI hiring algorithms in the business community is likely to depend on the nature of the positions to be filled, especially when they require highly technical skills and/or the pool of candidates is limited. Of course, there can be tensions between hiring managers and human resources staff over issues like screening job candidates, but HR organizations are typically charged with spearheading DEI initiatives. They will be only too eager to adopt algorithmic selection and stealth quotas for many positions and will probably succeed, whether hiring departments like it or not.

The Death of Merit

Unfortunately, quotas are socially counter-productive, and they are not a good way around the dilemma posed by the EEOC’s aggressive enforcement of disparate impact liability. The latter can be solved only when Congress acts to more precisely define the bounds of illegal discrimination in hiring. Meanwhile, stealth quotas cede control over important business decisions to external vendors selling algorithms that are often unfathomable. Quotas discard judgements as to relevant skills in favor of awarding jobs based on essentially superficial characteristics. This creates an unnecessary burden on producers, even if it goes unrecognized by those very firms and is self-inflicted. Even worse, once these algorithms and stealth quotas are in place, they are likely to become heavily regulated and manipulated in order to achieve political goals.

Baker sums up a most fundamental objection to quotas thusly:

Most Americans recognize that there are large demographic disparities in our society, and they are willing to believe that discrimination has played a role in causing the differences. But addressing disparities with group remedies like quotas runs counter to a deep-seated belief that people are, and should be, judged as individuals. Put another way, given a choice between fairness to individuals and fairness on a group basis, Americans choose individual fairness. They condemn racism precisely for its refusal to treat people as individuals, and they resist remedies grounded in race or gender for the same reason.”

Quotas, and stealth quotas, substitute overt discrimination against individuals in non-protected classes, and sometimes against individuals in protected classes as well, for the imagined sin of a disparate impact that might occur when the best candidate is hired for a job. AI algorithms with protection against “algorithmic bias” don’t satisfy this objection. In fact, the lack of accountability inherent in this kind of hiring solution makes it far worse than the status quo.

Hurricane—Warming Link Is All Model, No Data



There was deep disappointment among political opponents of Florida Governor Ron DeSantis at their inability to pin blame on him for Hurricane Ian’s destruction. It was a terrible hurricane, but they so wanted it to be “Hurricane Hitler”, as Glenn Reynolds noted with tongue in cheek. That just didn’t work out for them, given DeSantis’ competent performance in marshaling resources for aid and cleanup from the storm. Their last-ditch refuge was to condemn DeSantis for dismissing the connection they presume to exist between climate change and hurricane frequency and intensity. That criticism didn’t seem to stick, however, and it shouldn’t.

There is no linkage to climate change in actual data on tropical cyclones. It is a myth. Yes, models of hurricane activity have been constructed that embed assumptions leading to predictions of more hurricanes, and more intense hurricanes, as temperatures rise. But these are models constructed as simplified representations of hurricane development. The following quote from the climate modelers at the Geophysical Fluid Dynamics Laboratory (GFDL), a division of the National Oceanic and Atmospheric Administration (NOAA), is straightforward on this point (emphases are mine):

Through research, GFDL scientists have concluded that it is premature to attribute past changes in hurricane activity to greenhouse warming, although simulated hurricanes tend to be more intense in a warmer climate. Other climate changes related to greenhouse warming, such as increases in vertical wind shear over the Caribbean, lead to fewer yet more intense hurricanes in the GFDL model projections for the late 21st century.

Models typically are said to be “calibrated” to historical data, but no one should take much comfort in that. As a long-time econometric modeler myself, I can say without reservation that such assurances are flimsy, especially with respect to “toy models” containing parameters that aren’t directly observable in the available data. In such a context, a modeler can take advantage of tremendous latitude in choosing parameters to include, sensitivities to assume for unknowns or unmeasured relationships, and historical samples for use in “calibration”. Sad to say, modelers can make these models do just about anything they want. The cautious approach to claims about model implications is a credit to GFDL.

Before I get to the evidence on hurricanes, it’s worth remembering that the entire edifice of climate alarmism relies not just on the temperature record, but on models based on other assumptions about the sensitivity of temperatures to CO2 concentration. The models relied upon to generate catastrophic warming assume very high sensitivity, and those models have a very poor track record of prediction. Estimates of sensitivity are highly uncertain, and this article cites research indicating that the IPCC’s assumptions about sensitivity are about 50% too high. And this article reviews recent findings that carbon sensitivity is even lower, about one-third of what many climate models assume. In addition, this research finds that sensitivities are nearly impossible to estimate from historical data with any precision because the record is plagued by different sources and types of atmospheric forcings, accompanying aerosol effects on climate, and differing half-lives of various greenhouse gases. If sensitivities are as low as discussed at the links above, it means that predictions of warming have been grossly exaggerated.

The evidence that hurricanes have become more frequent or severe, or that they now intensify more rapidly, is basically nonexistent. Ryan Maue and Roger Pielke Jr. of the University of Colorado have both researched hurricanes extensively for many years. They described their compilation of data on land-falling hurricanes in this Forbes piece in 2020. They point out that hurricane activity in older data is much more likely to be missing or undercounted, especially storms that never made landfall. That’s one of the reasons for the focus on landfalling hurricanes to begin with. With the advent of satellite data, storms are highly unlikely to be missed, but even landfalls have sometimes gone unreported historically. The farther back one goes, the less is known about the extent of hurricane activity, but Pielke and Maue feel that post-1970 data is fairly comprehensive.

The chart at the top of this post is a summary of the data that Pielke and Maue have compiled. There are no obvious trends in terms of the number of storms or their strength. The 1970s were quiet while the 90s were more turbulent. The absence of trends also characterizes NOAA’s data on U.S. landfalling hurricanes since 1851, as noted by Paul Driessen. Here is Driessen on Florida hurricane history:

Using pressure, Ian was not the fourth-strongest hurricane in Florida history but the tenth. The strongest hurricane in U.S. history moved through the Florida Keys in 1935. Among other Florida hurricanes stronger than Ian was another Florida Keys storm in 1919. This was followed by the hurricanes in 1926 in Miami, the Palm Beach/Lake Okeechobee storm in 1928, the Keys in 1948, and Donna in 1960. We do not know how strong the hurricane in 1873 was, but it destroyed Punta Rassa with a 14-foot storm surge. Punta Rassa is located at the mouth of the river leading up to Ft. Myers, where Ian made landfall.

Neil L. Frank, veteran meteorologist and former head of the National Hurricane Center, bemoans the changed conventions for assigning names to storms in the satellite era. A typical clash of warm and cold air often produces thunderstorms and wind, but such systems rarely develop into tropical cyclones, and few of them were assigned names under older conventions. Many of those kinds of storms are named today. Not only is it easier to identify storms, given the advent of satellite data, but storms are assigned names more readily, even when they don’t strictly meet the definition of a tropical cyclone. Right or wrong, that gives the false impression of a trend in the number of named storms. It’s a wonder that certain policy advocates get away with claiming the outcome of all this is a legitimate trend!

As Frank insists, there is no evidence of a trend toward more frequent and powerful hurricanes during the last several decades, and there is no evidence of rapid intensification. More importantly, there is no evidence that climate change is leading to more hurricane activity. It’s also worth noting that today we suffer far fewer casualties from hurricanes owing to much earlier warnings, better precautions, and better construction.

Hiring Discrimination In the U.S., Canada, and Western Europe



Some people have the impression that the U.S. is uniquely bad in terms of racial, ethnic, gender, and other forms of discrimination. This misapprehension is almost as grossly in error as the belief held in some circles that the history of slavery is uniquely American, when in fact the practice has been so common historically, and throughout the world, as to be the rule rather than the exception.

This week, Alex Tabarrok shared some research I’d never seen on one kind of discriminatory behavior. In his post, “The US has Relatively Low Rates of Hiring Discrimination”, he cites the findings of a 2019 meta-study of “… 97 Field Experiments of Racial Discrimination in Hiring”. The research focused on several Western European countries, Canada, and the U.S. The experiments involved the use of “faux applicants” for actual job openings. Some studies used applications only and were randomized across different racial or ethnic cues for otherwise similar applicants. Other studies paired similar individuals of different racial or ethnic background for separate in-person interviews.

The authors found that hiring discrimination is fairly ubiquitous against non-white groups across employers in these countries. The authors were careful to note that the study did not address levels of hiring discrimination in countries outside the area of the study. They also disclaimed any implication about other forms of discrimination within the covered countries, such as bias in lending or housing.

The study’s point estimates indicated “ubiquitous hiring discrimination”, though not all the estimates were statistically significant. My apologies if the chart below is difficult to read. If so, try zooming in, clicking on it, or following the link to the study above.

Some of the largest point estimates were highly imprecise due to sparse coverage in individual studies. The impacted groups and the severity of discrimination varied across countries. Blacks suffered significant discrimination in the U.S., Canada, France, and Great Britain. For Hispanics, coverage was limited to the U.S. and, sparsely, Canada; the point estimates showed discrimination in both countries, but it was (barely) significant only in the U.S. For Middle Eastern and North African (MENA) applicants, discrimination was severe in France, the Netherlands, Belgium, and Sweden. Asian applicants faced discrimination in France, Norway, Canada, and Great Britain.

Across all countries, the group suffering the least hiring discrimination was white immigrants, followed by Latin Americans / Hispanics (but only two countries were covered). Asians seemed to suffer the most discrimination, though not significantly more than Blacks (and less in the U.S. than in France, Norway, Canada, and Great Britain). Blacks and MENA applicants suffered a bit less than Asians from hiring discrimination, but again, not significantly less.

Comparing countries, the authors used U.S. hiring discrimination as a baseline, assigning it a value of one. France had the most severe hiring discrimination, and at a high level of significance. Sweden was next highest, though not significantly higher than the U.S. Belgium, Canada, the Netherlands, and Great Britain had higher point estimates of overall discrimination than the U.S., though none of those differences were significant. Employers in Norway were about as discriminatory as those in the U.S., and German employers were less discriminatory, though not significantly so.

The upshot is that, as a group, U.S. employers are generally at the low end of the spectrum in terms of discriminatory hiring. Again, the intent of this research was not to single out the selected countries. Rather, these countries were chosen because relevant studies were available. In fact, Tabarrok makes the following comment, which the authors probably wouldn’t endorse and which is admittedly speculative, but I suspect it’s right:

I would bet that discrimination rates would be much higher in Japan, China and Korea not to mention Indonesia, Iraq, Nigeria or the Congo. Understanding why discrimination is lower in Western capitalist democracies would reorient the literature in a very useful way.

So the U.S. is not on the high side of this set of Western countries in terms of discriminatory hiring practices. While discrimination against Blacks and Hispanics in the U.S. appears to be a continuing phenomenon, overall hiring discrimination in the U.S. is, at worst, comparable to many European countries.

To anticipate one kind of response to this emphasis, the U.S. is not alone in its institutional efforts to reduce discrimination. In fact, the study’s authors say:

A fairly similar set of antidiscrimination laws were adopted in North America and many Western European countries from the 1960s to the 1990s. In 2000, the European Union passed a series of race directives that mandated a range of antidiscrimination measures to be adopted by all member states, putting their legislative frameworks on racial discrimination on highly similar footing.

Despite these similarities, there are a few institutional details that might have some bearing on the results. For example, France bans the recording and “formal discussion” of race and ethnicity during the hiring process. (However, photos are often included in job applications in European countries.) Does this indicate that reporting mandates and prohibiting certain questions reduce hiring discrimination? That might be suggestive, but the evidence is not as clear-cut as the authors seem to believe. They cite one piece of conflicting literature on that point. Moreover, it does not explain why Great Britain had a greater (and highly significant) point estimate of discrimination against Asians, or why Canada and Norway were roughly equivalent to France on this basis. Nor does it explain why Sweden and Belgium did not differ from France significantly in terms of discrimination against MENA applicants, or why Canada was not significantly different from France in terms of hiring discrimination against Blacks. Overall, discrimination in Sweden was not significantly less than in France. Still, at least based on the three applicant groups covered by studies of France, that country had the highest overall level of discrimination. France also had the most significant departure from the U.S., where recording the race and ethnicity of job applicants is institutionalized.

Germany had the lowest overall point estimates of hiring discrimination in the study. According to the authors, employers in German-speaking countries tend to collect a fairly thorough set of background information on job applications. This detail can actually work against discrimination in hiring. Tabarrok notes that so-called “ban the box” policies, or laws that prohibit employers from asking about an applicant’s criminal record, are known to result in greater racial disparities in hiring. The same is true of policies that threaten sanctions against the use of objective job qualifications which might have disparate impacts on “protected” groups. That’s because generalized proxies based on race are often adopted by hiring managers, consciously or subconsciously.

Discrimination in hiring based on race and ethnicity might actually be reasonable when a job entails sensitive interactions requiring high levels of trust with members of a minority community. This statement acknowledges that we do not live in a perfect world in which racial and ethnic differences are irrelevant. Still, aside from exceptions of that kind, overt hiring discrimination based on race or ethnicity is a negative social outcome. The conundrum we face is whether it is more or less negative than efforts to coerce nondiscrimination on those bases across a broad range of behaviors, most of which are nondiscriminatory to begin with, and when interventions often have perverse discriminatory effects. Policymakers and observers in the U.S. should maintain perspective. Discriminatory behavior persists in the U.S., especially against Blacks, but some of this discrimination is likely caused by prohibitions on objective tests of relevant job skills. And as the research discussed above shows, employers here appear to be a bit less discriminatory than those in most other Western democracies.