Sacred Cow Chips

Vax Results, Biden Boosters, Delta, and the Mask Charade

19 Thursday Aug 2021

Posted by Nuetzel in Coronavirus, Public Health, Uncategorized, Vaccinations

Tags

Aerosols, Antibody Response, Biden Administration, Case Counts, City Journal, Covid-19, Delta Variant, Follow the Science, Hope-Simpson, Hospitalizations, Israeli Vaccinations, Jeffrey H. Anderson, Jeffrey Morris, Mask Mandates, Moderna, mRNA Vaccines, Pfizer, Randomized Control Trials, Reproduction Rates, The American Reveille, Transmissibility, Vaccinations, Vaccine Efficacy

If this post has an overarching theme, it might be “just relax”! That goes especially for those inclined to prescribe behavioral rules for others. People can assess risks for themselves, though it helps when empirical information is presented without bias. With that brief diatribe, here are a few follow-ups on COVID vaccines, the Delta wave, and the ongoing “mask charade”.

Israeli Vax Protection

Here is Jeffrey Morris’ very good exposition as to why the Israeli reports of COVID vaccine inefficacy are false. First, he shows the kind of raw data we’ve been hearing about for weeks: almost 60% of the country’s severe cases are in vaccinated individuals. This is the origin of the claim that the vaccines don’t work. 

Next, Morris notes that 80% of the Israeli population 12 years and older are vaccinated (predominantly if not exclusively with the Pfizer vaccine). This causes a distortion that can be controlled by normalizing the case counts relative to the total populations of the vaccinated and unvaccinated subgroups. Doing so shows that the unvaccinated are 3.1 times more likely to have contracted a severe case than the vaccinated. Said a different way, this shows that the vaccines are 67.5% effective in preventing severe disease. But that’s not the full story!
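The population-normalization step Morris performs can be sketched in a few lines. These are illustrative numbers only, not his actual Israeli figures, chosen to roughly mimic the pattern he describes:

```python
# Illustrative numbers only -- not Morris' actual Israeli figures.
pop = 1_000_000
vax_share = 0.80                     # assume 80% of the population vaccinated
pop_vax = pop * vax_share
pop_unvax = pop * (1 - vax_share)

severe_vax, severe_unvax = 290, 210  # hypothetical severe-case counts

# The raw share makes the vaccinated look bad...
share_vax = severe_vax / (severe_vax + severe_unvax)

# ...but normalizing by subgroup population tells the opposite story.
rate_vax = severe_vax / pop_vax
rate_unvax = severe_unvax / pop_unvax
risk_ratio = rate_unvax / rate_vax        # unvaccinated risk multiple
efficacy = 1 - rate_vax / rate_unvax      # efficacy against severe disease

print(f"{share_vax:.0%} of severe cases are vaccinated,")
print(f"yet the unvaccinated are {risk_ratio:.1f}x more likely to be severe,")
print(f"for an implied efficacy of {efficacy:.1%}")
```

With these made-up counts, a majority of severe cases are vaccinated even though the per-capita risk runs heavily against the unvaccinated. That is the whole trick behind the misleading raw numbers.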

Morris goes on to show case rates in different age strata. For those older than 50 (over 90% of whom are vaccinated and who have more co-morbidities), there are 23.6 times more severe cases among the unvaccinated than the vaccinated. That yields an efficacy rate of 85.2%. Vaccine efficacy is even better in the younger age group: 91.8%. 

These statistics pertain to the Delta variant. It’s true they are lower than the 95% efficacy rate achieved in the Pfizer trials. Is Pfizer’s efficacy beginning to fade? That’s possible, but this is just one set of results, and declining efficacy has not been proven. Israel’s vaccination program got off to a fast start, so its vaccinated population has had more time for efficacy to decay than in most countries. And as I discussed in an earlier post, there are reasons to think the vaccines remain highly protective after a minimum of seven months.

Biden Boosters

In the meantime, the Biden Administration has recommended that booster shots be delivered eight months after the original vaccinations. There is empirical evidence that boosters of the same mRNA vaccines (Pfizer and Moderna) might not be a sound approach, both due to side effects and because additional doses might reduce the “breadth” of the antibody response. We’ll soon know whether the first two jabs remain effective after eight months, and my bet is that they will.

Is Delta Cresting?

Meanwhile, the course of this summer’s Delta wave appears to be turning a corner. The surge in cases has a seasonal component, mimicking the summer 2020 wave as well as the typical Hope-Simpson pattern, in which large viral waves peak in mid-winter but more muted waves occur in low- to mid-latitudes during the summer months.

Therefore, we might expect to see a late-summer decline in new cases. There are now 21 states with estimated COVID reproduction rates of less than one (this might change by the time you see the charts at the link). In other words, each newly infected person transmits the virus to an average of less than one other person, which suggests that case growth is near or past its peak. Another 16 states have reproduction rates approaching or very close to one. This is promising.
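The arithmetic behind that claim is simple: a reproduction rate below one implies geometric decline in new cases. A toy sketch, with illustrative values rather than estimates for any actual state:

```python
# Toy projection: each "generation" of cases is R times the previous one,
# so a reproduction rate R < 1 implies geometric decline in new cases.
def project(cases_now: float, R: float, generations: int) -> float:
    """Expected new cases after some number of serial-interval generations."""
    return cases_now * R ** generations

# Illustrative values, not estimates for any actual state:
print(project(10_000, R=0.9, generations=5))  # shrinking wave
print(project(10_000, R=1.1, generations=5))  # still-growing wave
```

Even a reproduction rate just under one compounds into a substantial decline over several generations, which is why crossing that threshold matters.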

Maskholes

Finally, I’m frustrated as a resident of a county where certain government officials are bound and determined to impose a mask mandate, though they have been slowed by a court challenge. The “science” does NOT support such a measure: masks have not been shown to mitigate the spread of the virus, and they cannot stop penetration of aerosols in either direction. This recent article in City Journal by Jeffrey H. Anderson is perhaps the most thorough treatment I’ve seen on the effectiveness of masks. Anderson makes this remark about the scientific case made by mask proponents:

“Mask supporters often claim that we have no choice but to rely on observational studies instead of RCTs [randomized control trials], because RCTs cannot tell us whether masks work or not. But what they really mean is that they don’t like what the RCTs show.”

Oh, how well I remember the “follow-the-science” crowd insisting last year that only RCTs could be trusted when it came to evaluating certain COVID treatments. In any case, the observational studies on masks are quite mixed and by no means offer unequivocal support for masking. 

A further consideration is that masks can act to convert droplets to aerosols, which are highly efficient vehicles of transmission. The mask debate is even more absurd when it comes to school children, who are at almost zero risk of severe COVID infection (also see here), and for whom masks are highly prone to cause developmental complications.

Closing Thoughts

The vaccines are still effective. Data purporting to show otherwise fails to account for the most obvious of confounding influences: vaccination rates and age effects. In fact, the Biden Administration has made a rather arbitrary decision about the durability of vaccine effects by recommending booster shots after eight months. The highly transmissible Delta variant has struck quickly but the wave now shows signs of cresting, though that is no guarantee for the fall and winter season. However, Delta cases have been much less severe on average than earlier variants. Masks did nothing to protect us from those waves, and they won’t protect us now. I, for one, won’t wear one if I can avoid it.

Herd Immunity To Public Health Bullshitters and To COVID

16 Monday Aug 2021

Posted by Nuetzel in Coronavirus, Herd Immunity, Uncategorized

Tags

Acquired Immunity, Aerosols, AstraZeneca, Border Control, Breakthrough Infections, Case Counts, Covid-19, Delta Variant, Endemicity, Herd Immunity, Hospitalizations, Immunity, Lockdowns, Mask Mandates, Oxford University, Paul Hunter, PCR Tests, School Closings, ScienceAlert, Sir Andrew Pollard, T-Cell Immunity, Transmissibility, University of East Anglia, Vaccinations, Vaccine Hesitancy

My last post had a simple message about the meaning of immunity: you won’t get very sick or die from an infection to which you are immune, including COVID-19. As with any other airborne virus, that does NOT mean you won’t get it lodged in your eyeballs, sinuses, throat, or lungs. If you do, you are likely to test positive, though your immunity means the “case” is likely to be inconsequential.

As noted in that last post, we’ve seen increasing COVID case counts with the so-called Delta variant, which is more highly transmissible than earlier variants. (This has been abetted by an uncontrolled southern border as well.) However, as we’d expect with a higher level of immunity in the population, the average severity of these cases is low relative to last year’s COVID waves. But then I saw this article in ScienceAlert quoting Sir Andrew Pollard, a scientist affiliated with AstraZeneca and the University of Oxford. He says with Delta, herd immunity “is not a possibility” — everyone will get it.

Maybe everyone will, but that doesn’t mean everyone will get sick. His statement raises an obvious question about the meaning of herd immunity. If our working definition of the term is that the virus simply disappears, then Pollard is correct: we know that COVID is endemic. But the only human virus we’ve ever completely eradicated is smallpox. Would Pollard say we’ve failed to achieve herd immunity against all other viruses? I doubt it. Endemicity and herd immunity are not mutually exclusive. The key to herd immunity is whether a virus does or does not remain a threat to the health of the population generally.

Active COVID infections will be relatively short-lived in individuals with “immunity”. Moreover, viral loads tend to be lower in immune individuals who happen to get infected. Therefore, the “infected immune” have less time and less virus with which to infect others. That creates resistance to further contagion and contributes to what we know as herd immunity. While immune individuals can “catch” the virus, they won’t get sick. Likewise, a large proportion of the herd can be immune and still catch the virus without getting sick. That is herd immunity.

One open and controversial question is whether uninfected individuals will require frequent revaccination to maintain their immunity. A further qualification has to do with asymptomatic breakthrough infections. Those individuals won’t see any reason to quarantine, and they may unwittingly transmit the virus.

I also acknowledge that the concept of herd immunity is often discussed strictly in terms of transmission, or rather its failure. The more contagious a new virus, like the Delta variant, the more difficult it is to achieve herd immunity. Models predicting low herd immunity thresholds due to heterogeneity in the population are predicated on a given level of transmissibility. Those thresholds would be correspondingly higher given greater transmissibility.
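The relationship between transmissibility and the threshold follows from the standard textbook formula, which assumes homogeneous mixing and is therefore a simplification relative to the heterogeneity models mentioned above:

```python
# Textbook sketch (assumes homogeneous mixing -- a simplification):
# with basic reproduction number R0 and immunity that blocks transmission
# with effectiveness e, the effective rate is R_eff = R0 * (1 - e * p),
# where p is the immune share. Setting R_eff = 1 gives the threshold:
def herd_threshold(R0: float, e: float = 1.0) -> float:
    """Immune share p* at which R_eff falls to 1."""
    return (1 - 1 / R0) / e

# More transmissible variants push the threshold up:
print(f"R0=3: {herd_threshold(3):.0%}")
print(f"R0=6: {herd_threshold(6):.0%}")
# With imperfect transmission blocking, the threshold rises further;
# a result above 100% means this mechanism alone can't reach it:
print(f"R0=6, e=0.8: {herd_threshold(6, 0.8):.0%}")
```

That last case is the nub of the debate: when immunity blocks transmission imperfectly against a highly contagious variant, a transmission-only definition of herd immunity may be unattainable, even while severity-reducing immunity does its job.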

A prominent scientist quoted in this article is Paul Hunter of the University of East Anglia. After backing up Pollard’s dubious take on herd immunity, Hunter drops this bit of real wisdom:

“We need to move away from reporting infections to actually reporting the number of people who are ill. Otherwise we are going to be frightening ourselves with very high numbers that don’t translate into disease burden.”

Hear, hear! Ultimately, immunity has to do with the ability of our immune systems to fight infections. Vaccinations, acquired immunity from infections, and pre-existing immunity all reduce the severity of later infections. They are associated with reductions in transmission, but those immune responses are more basic to herd immunity than transmissibility alone. Herd immunity does not mean that severe cases will never occur. In fact, more muted seasonal waves will come and go, inflicting illness on a limited number of vulnerables, but most people can live their lives normally while viral reproduction is contained. Herd immunity!

Sadly, we’re getting accustomed to hearing misstatements and bad information from public health officials on everything from mask mandates, lockdowns, and school closings to hospital capacity and vaccine hesitancy. Dr. Pollard’s latest musing is not unique in that respect. It’s almost as if these “experts” have become victims of their own flawed risk assessments insofar as their waning appeal to “the herd” is concerned. Professor Hunter’s follow-up is refreshing, however. Public health agencies should quit reporting case counts and instead report only patients who present serious symptoms, COVID ER visits, or hospitalizations.

Effective Immunity Means IF YOU CATCH IT, You Won’t Get Sick

12 Thursday Aug 2021

Posted by Nuetzel in Coronavirus, Uncategorized, Vaccinations

Tags

Acquired Immunity, Aerosols, Alpha Variant, Antibodies, Base Rate Bias, Breakthrough Infections, Covid-19, Delta Variant, Immunity, Issues & Insights, Kappa Variant, Kelly Brown, Lambda Variant, Larry Brilliant, Mayo Clinic, Our World In Data, PCR Tests, Phil Kerpen, T-Cell Immunity, Vaccinations, WHO

Listen very carefully: immunity does NOT mean you won’t get COVID, though an infection is less likely. Immunity simply means your immune system will be capable of dealing with an infection successfully. This is true whether the immunity is a product of vaccination or a prior infection. Immunity means you are unlikely to have worse than mild symptoms, and you are very unlikely to be hospitalized. (My disclaimer: I am opposed to vaccine mandates, but vaccination is a good idea if you’ve never been infected.)

I emphasize this because the recent growth in case numbers has prompted all sorts of nonsensical reactions. People say, “See? The vaccines don’t work!” That is a brazenly stupid response to the facts. Even more dimwitted are claims that the vaccines are killing everyone! Yes, there are usually side effects, and the jabs carry a risk of serious complications, but that risk is minuscule.

Vaccine Efficacy

Right out of the gate, we must recognize that our PCR testing protocol is far too sensitive to viral remnants, so the current surge in cases is probably exaggerated by false positives, as was true last year. Second, if a large share of the population is vaccinated, then vaccinated individuals will almost certainly account for a large share of infected individuals even if they have a lower likelihood of being infected. It’s simple math, as this explanation of base rate bias shows. In fact, according to the article at the link:

“… vaccination confers an eightfold reduction in the risk of getting infected in the first place; a 25-fold reduction in risk of getting hospitalized; and a 25-fold reduction in the risk for death.”
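The base rate math can be sketched directly, using the eightfold infection-risk reduction quoted above and hypothetical coverage rates (the helper name below is illustrative, not from the linked article):

```python
# Hypothetical coverage rates and the eightfold infection-risk reduction
# quoted above; vaccinated_share_of_cases is an illustrative helper name.
def vaccinated_share_of_cases(coverage: float, risk_reduction: float) -> float:
    """Expected share of infections occurring in vaccinated people."""
    vax_cases = coverage * (1 / risk_reduction)
    unvax_cases = 1 - coverage
    return vax_cases / (vax_cases + unvax_cases)

# At 80% coverage, a third of infections are still vaccinated people,
# even though each vaccinated person is 8x less likely to be infected:
print(f"{vaccinated_share_of_cases(0.80, 8):.0%}")
# At 95% coverage the share rises toward 70% -- base rate bias at work:
print(f"{vaccinated_share_of_cases(0.95, 8):.0%}")
```

In other words, a high count of vaccinated cases is exactly what the arithmetic predicts when coverage is high; it says nothing, by itself, about vaccine failure.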

The upshot is that if you are vaccinated, or if you have acquired immunity from previous exposure, or if you have pre-existing immunity from contact with an earlier COVID strain, you can still “catch” the virus AND you can still spread it. Both are less likely, and you don’t have as much to worry about for your own health as those having no immunity.

As for overall vaccine efficacy in preventing death, here are numbers from the UK, courtesy of Phil Kerpen:

The vertical axis is a log scale, so each successive gridline is a fatality rate 100x as large as the one below it. Obviously, as the chart title asserts, the “vaccines have made COVID-19 far less lethal.” Also, at the bottom, see the information on fatality among children under age 18: it is almost zero! This reveals the absurdity of claims that children must be masked for schools to reopen! In any case, masks offer little protection to anyone against a virus that spreads via fine aerosols. Nevertheless, many school officials are pushing unnecessary but politically expedient masking policies.

Delta

Ah, but we have the so-called Delta variant, which is now dominant and said to be far more transmissible than earlier variants. Yet the Delta variant is not as dangerous as earlier strains, as this UK report demonstrates. Delta’s case fatality rate among unvaccinated individuals was at least 40% lower than that of the so-called Alpha variant. This is a typical pattern of viral mutation: variants tend to become less lethal over time, since a virus can only keep circulating in the long run by NOT killing its hosts! The decline in lethality is roughly demonstrated by Kelly Brown with data on in-hospital fatality rates from Toronto, Canada:

The case numbers in the U.S. have been climbing over the past few weeks, but as epidemiologist Larry Brilliant of WHO said recently, Delta spreads so fast it essentially “runs out of candidates.” In other words, the current surge is likely to end quickly. This article in Issues & Insights shows the more benign nature of recent infections. I think a few of their charts contain biases, but the one below on all-cause mortality by age group is convincing:

The next chart from Our World In Data shows the infection fatality rate continuing its decline in the U.S. The great majority of recent infections have been of the Delta variant, which also was much less virulent in the UK than earlier variants.

Furthermore, it turns out that the vaccines are roughly as effective against Delta and other new variants as against earlier strains. And the newest “scary” variants, Kappa and Lambda, do not appear to be making strong inroads in the U.S. 

Fading Efficacy?

There have been questions about whether the effectiveness of the vaccines is waning, which is behind much of the hand-wringing about booster shots. For example, Israeli health officials are insisting that the effectiveness of vaccines is “fading”, though I’ll be surprised if there isn’t some sort of confounding influence on the data they’ve cited, such as age and co-morbidities. 

Here is a new Mayo Clinic study of so-called “breakthrough” cases in the vaccinated population in Minnesota. It essentially shows that the rate of case diagnosis among the vaccinated rose between February and July of this year (first table below, courtesy of Phil Kerpen). However, the vaccines appear only marginally less effective against hospitalization than in March (second table below).

The bulk of the vaccinated population in the U.S. received their jabs three to six months ago, and according to this report, evidence of antibodies remains strong after seven months. In addition, T-cell immunity may continue for years, as it does for those having acquired immunity from an earlier infection. 

Breakthroughs

It’s common to hear misleading reports of high numbers of “breakthrough” cases. Not only will these cases be less menacing, but the reports often exaggerate their prevalence by taking the numbers out of context. Relative to the size of the vaccinated population, breakthrough cases are about where we’d expect based on the original estimates of vaccine efficacy. This report on Massachusetts breakthrough hospitalizations and deaths confirms that the most vulnerable among the vaxed population are the same as those most vulnerable in the unvaxed population: elderly individuals with comorbidities. But even that subset is at lower risk post-vaccination. It just so happens that the elderly are more likely to have been vaccinated in the first place, which implies that the vaccinated should be over-represented in the case population.

Conclusion

The COVID-19 vaccines do what they are supposed to do: reduce the dangers associated with infection. The vaccines remain very effective in reducing the severity of infection. However, they cannot and were not engineered to prevent infection. They also pose risks, but individuals should be able to rationally assess the tradeoffs without coercion. Poor messaging from public health authorities and the crazy distortions promoted in some circles do nothing to promote public health. Furthermore, there is every reason to believe that the current surge in Delta infections will be short-lived, with less deadly consequences than earlier variants.

The Anti-CRT Revolt: Banning a Racist Curriculum

16 Wednesday Jun 2021

Posted by Nuetzel in Critical Race Theory, Education, racism, Uncategorized

Tags

1619 Project, Black Lives Matter, Critical Race Theory, Disparate impact, Food Deserts, Jim Crow, Living Wage, New York Times, racism, Systemic Racism, Unconscious Bias, Zinn Education Project

Suddenly it’s dawned on many people of good faith that our educational, business, and other institutions have been commandeered by adherents to critical race theory (CRT), which teaches that all social interactions and outcomes must be viewed through the lens of racial identity and exploitation. In fact, it teaches that racism is endemic, whether conscious or unconscious, among people deemed to have privilege. They are labeled as oppressors, especially anyone with white skin. Furthermore, CRT holds that racism is systemic, and therefore the “system”, meaning all of our institutions and social arrangements, must be radically transformed. Some or all of these tenets are taught to our children in public and private schools, and they are embedded in anti-bias and diversity training delivered to employees of government, non-profits, and private companies.

Standing Up To It

It’s easy to see why many have come to view CRT as a racist philosophy in its own right. Teaching children that they are either “oppressors” or “victims” based on the color of their skin is a deeply flawed and dangerous practice. The revelation of CRT’s cultural inroads has prompted an angry counter-revolution by parents who hope to purge CRT from the curricula in their children’s schools… schools that they PAY FOR as taxpayers. Many other fair-minded people are offended by the sweeping racism and identity politics inherent in CRT. And yet its proponents continue in attempts to gaslight the public. More on that below.

The groundswell of opposition to CRT is evident in explosive meetings of school boards across the country, as well as recent school board elections in which slates of candidates opposed to the teaching of CRT have been victorious (see here, here, and here).

In addition, we’ve seen a number of recent legislative or administrative initiatives at the state level. There are now, or recently have been, efforts in 22 states to ban or restrict the instruction of CRT. In some cases, institutions found to be in violation of the new laws are subject to deadlines to remedy the situation. Otherwise, funding disbursed by their state’s Department of Education may be cut by ten percent, for example.

But It’s Speech

As happy as I am to witness the pushback, it’s fair to ask whether the most severe restrictions are reasonable from an educational point of view. For example, as a social philosophy, and as wrong-headed as I believe it to be, there is no reason CRT can’t be discussed alongside other social philosophies, failed and otherwise, without endorsement. For that matter, we should not insist that schools shield children from the fact that racism exists, and CRT certainly has its place along the spectrum of racism.

For my own part, I believe elective classes covering CRT as one philosophical position among others should be defended, as should instruction in the history of American slavery and Jim Crow laws, for example. However, mandatory training in CRT is unacceptable and, to the extent that students or employees are required to accept its tenets, it constitutes compelled speech. To the extent that certain groups of students are identified as inherently biased, it is a form of defamation and a personal attack. 

Legislation

Some states are attempting to ban CRT outright. Others have imposed strictures on certain messages arising from the CRT curriculum. The Florida Department of Education just passed an extremely brief rule stating: 

“Instruction on the required topics must be factual and objective, and may not suppress or distort significant historical events, such as the Holocaust, and may not define American history as something other than the creation of a new nation based largely on universal principles stated in the Declaration of Independence.”

The Florida rule prohibits teaching the 1619 Project as part of the history curriculum. This revised “history” of our nation’s founding was sponsored by the New York Times. It insists that the Revolutionary War was fought to preserve American slavery, an assertion that has been condemned as false by many historians (see here and here), though the Left still desperately clings to it. I have no problem with a prohibition on false histories, though again, it’s important for students to learn that slavery was the subject of much debate at the nation’s founding and that it persisted beyond that time. No one kept those facts from us when I was a child. And they didn’t brand white students as oppressors.

While a rulemaking by a state Department of Education is better than nothing, it’s a far cry from an actual piece of legislation. A bill signed into law in Idaho in late March contained substantially the same provisions as the rule promulgated in Florida, but it didn’t proscribe the 1619 Project. The same is true of the bill signed into law in Oklahoma in early May. 

In Texas, the state senate passed a bill in May that would ban instruction in any public school or state agency of any of the following:

“… one race or sex is inherently superior to another race or sex

an individual, by virtue of his or her race or sex, is inherently racist, sexist, or oppressive, whether consciously or unconsciously;

an individual, by virtue of his or her race or sex, bears responsibility for actions committed in the past by other members of the same race or sex;

meritocracy or traits such as a hard work ethic are racist or sexist, or were created by … members of a particular race to oppress members of another race.”

A new law in Iowa and a bill signed by the governor of Tennessee in late May contained similar provisions, essentially banning instruction of some highly objectionable tenets of CRT. However, the Iowa and Tennessee laws are careful to spell out what the law should not be construed to do. For example, these laws do not:

“—Inhibit or violate the first amendment rights of students or faculty, or undermine a school district’s duty to protect to the fullest degree intellectual freedom and free expression.
—Prohibit discussing specific defined concepts as part of a larger course of academic instruction.
—Prohibit the use of curriculum that teaches the topics of sexism, slavery, racial oppression, racial segregation, or racial discrimination, including topics relating to the enactment and enforcement of laws resulting in sexism, racial oppression, segregation, and discrimination.”

A bill in the Missouri House mentions a few such protections. However, the Missouri bill is broader in the sense that it explicitly bans the instruction of CRT by name, rather than simply blocking a few unsavory messages of CRT, as Texas and a few other states have done. Utah’s legislation, which is awaiting the governor’s signature, is also quite brief and explicit in its prohibition of CRT. I greatly prefer the Texas approach, however, as it makes clear that classroom discussions of CRT are not precluded, as might be inferred from the language of the Missouri bill.

But, But… You Just Don’t Get It!

Protests against these legislative actions have shown a certain tone-deaf belligerence. According to an organization called Black Lives Matter at School and the Zinn Education Project, all the protesters want is a curriculum that illuminates:

“… full and accurate U.S. history and current events … rais[ing] awareness of the dangers of lying to students about systemic racism and other forms of oppression.”

One advocate says they must be free to teach the “truth” of our nation’s foundational and ongoing structural racism. The Missouri bill, they say, “fails to note ‘a single lesson’ which is ‘inaccurate’ or ‘misleads’ students.” It’s not as if legislation must provide a series of examples, but be that as it may, these CRT advocates know exactly what many find objectionable. Essentially, their response is, “You don’t understand CRT! WE are the experts on systemic, institutional racism.” What they believe is that, somehow, every negative outcome is actuated by racism of one kind or another, past or present.

Divining the “Fault” Line

Are you below the poverty line? Earning less than a “living wage”? Are you unemployed? Is your credit score lousy? Do you live in a high crime area? In a “food desert”? Are you a single parent? Did you receive a failing grade? Is your rent going up? Did someone fail to defer to you? Did they “disrespect” you, whatever your definition? Were you scolded for being late? 

Of course, none of those “outcomes” is exclusive to people of color or minorities. But wait! Someone else is earning a decent income. They got good grades. They have a high credit score. They drive a nice car. They have skills. 

Does any of that make them guilty of oppression? Does this have something to do with YOU?

Well, you see, CRT teaches us that every unequal outcome must be the consequence of unjust, “disparate impacts” inherent to the social and economic order. To be clear, outcomes are a legitimate subject of policy debate, and we should aim for improved well-being across the board. The point that defenders of CRT miss is that unequal outcomes are seldom diabolic in and of themselves. Real indications of injustice, past or present, do not imply that any one class of individuals is inherently racist or behaves in a discriminatory manner.

Critical Theory Is a Fraud

Critical race “theory” is nothing but blame in fraudulent “search” of perpetrators. It is fraudulent because the perps are already identified in advance. It is “critical” because someone or something deserves blame. The real exercise is to spin a tale of misused privilege and biased conduct by the privileged perps against a set of oppressed victims.

CRT is not just one theory, but a whole slew of theories of blame. The very attitudes of its purveyors show they do not believe their “theories” are falsifiable. And indeed, allegations of unconscious bias are impossible to falsify. Thus, CRT is not a theory, as such. It amounts to a polemic, and it should only be discussed as such. It certainly shouldn’t be taught as “truth” to children, university students, or employees. More states should jump on board to restrict the CRT putsch to propagandize.

An Internet for Users, Not Gatekeepers and Monopolists

09 Wednesday Jun 2021

Posted by Nuetzel in Censorship, Social Media, Uncategorized

Tags

Alphabet, Amazon, Anti-Trust, Biden v. Knight First Amendment Institute, Big Tech, Censor Track, Censorship, Clarence Thomas, Clubhouse, Common Carrier, Communications Decency Act, Daniel Oliver, Department of Justice, Exclusivity, Facebook, Fairness Doctrine, Gab, Google, Google Maps, Internet Accountability Project, Josh Hawley, Katherine Mangu-Ward, Media Research Center, MeWe, monopoly, Muhammadu Buhari, Murray Rothbard, MySpace, Net Neutrality, Public Accommodation, Public Forum, Quillette, Right to Exclude, Ron DeSantis, Scholar, Section 230, Social Media, Statista, Street View, Telegram, TikTok, Twitter, Tying Arrangement

Factions comprising a majority of the public want to see SOMETHING done to curb the power of Big Tech, particularly Google/Alphabet, Facebook, Amazon, and Twitter. The apprehensions center around market power, censorship, and political influence, and many of us share all of those concerns. The solutions proposed thus far generally fall into the categories of antitrust action and legislative changes with the intent to protect free speech, but it is unlikely that anything meaningful will happen under the current administration. That would probably require an opposition super-majority in Congress. Meanwhile, some caution the problem is blown out of proportion and that we should not be too eager for government to intervene. 

Competition

There are problems with almost every possible avenue for reining in the tech oligopolies. From a libertarian perspective, the most ideal solution to all dimensions of this problem is organic market competition. Unfortunately, the task of getting competitive platforms off the ground seems almost insurmountable. In social media, the benefits to users of a large, incumbent network are nearly overwhelming. That’s well known to anyone who’s left Facebook and found how difficult it is to gain traction on other social media platforms. Hardly anyone you know is there!

Google is the dominant search engine by far, and the reasons are not quite as wholesome as the “don’t-be-evil” mantra goes. There are plenty of other search engines, but some are merely shells using Google’s engine in the background. Others have privacy advantages and perhaps more balanced search results than Google, but with relatively few users. Google’s array of complementary offerings, such as Google Maps, Street View, and Scholar, make it hard for users to get away from it entirely.

Amazon has been very successful in gaining retail market share over the years. It now accounts for an estimated 50% of retail e-commerce sales in the U.S., according to Statista. That’s hardly a monopoly, but Amazon’s scale and ubiquity in the online retail market create massive advantages for buyers in terms of cost, convenience, and the scope of offerings. It creates advantages for online sellers as well, so long as Amazon itself doesn’t undercut them, which it is known to do. As a buyer, you almost have to be mad at them to bother with other online retail platforms or with shopping direct. I’m mad, of course, but I STILL find myself buying through Amazon more often than I’d like. But yes, Amazon has competition.

Anti-Trust

Quillette favors antitrust action against Big Tech. Amazon and Alphabet are most often mentioned in the context of anti-competitive behavior, though the others are hardly free of complaints along those lines. Amazon routinely discriminates in favor of products in which it has a direct or indirect interest, and Google discriminates in favor of its own marketplace and has had several costly run-ins with EU antitrust enforcers. Small businesses are often cited as victims of Google’s cut-throat business tactics.

The Department of Justice filed suit against Google in October 2020 for anti-competitive and exclusionary practices in the search and search advertising businesses. The main thrust of the charges is:

  • Exclusivity agreements prohibiting preinstallation of other search engines;
  • Tying arrangements forcing preinstallation of Google and no way to delete it;
  • Suppressing competition in advertising.

There are two other antitrust cases, filed by state attorneys general against Google, alleging monopolistic practices that benefit its own services at the expense of sellers in various lines of business. All of these cases, state and federal, are likely to drag on for years, and the outcomes could take any number of forms: fines, structural separation of different parts of the business, and divestiture are all possibilities. Or perhaps nothing. But one can hope that the threat of antitrust challenges, and of prolonged battles defending against such charges, will help temper anti-competitive tendencies, quite apart from the discipline imposed by the need for efficiency and good service.

These cases illustrate the fundamental tension between our desire to see successful businesses rewarded and the impulse toward antitrust enforcement. As free market economists such as Murray Rothbard have said, there is something “arbitrary and capricious” about almost any antitrust action. Legal thought on the matter has evolved to recognize that monopoly itself cannot be viewed as a crime, though the effort to monopolize might be. But as Rothbard asserted, claims along those lines tend to be rather arbitrary, and he was quite right to insist that the only true monopoly is one granted by government. In this case, many conservatives believe Section 230 of the Communications Decency Act of 1996 was the enabling legislation. But that is something antitrust judgments cannot rectify.

Revoking Immunity

Section 230 gives internet service providers immunity from liability for content posted by users on their platforms. While this provision is troublesome (see below), it is not at all clear why it might have encouraged monopolization, especially in web search. At the time of the Act’s passage, Larry Page and Sergey Brin had barely begun work on Backrub, the forerunner to Google. Several other search engines already existed, and others have sprung up since then with varying degrees of success. Presumably, all of them have benefitted from Section 230 immunity, as have all social media platforms: not just Facebook, but Twitter, MeWe, Gab, Telegram, and others long forgotten, like MySpace.

Nevertheless, while private companies have free speech rights of their own, Section 230 confers undeserved protection against liability on the tech giants. That protection was predicated on the absence of editorial positioning and/or viewpoint curation of content posted by users. Instead, Section 230 often seems designed to put private companies in charge of censoring the kind of speech that government might like to censor. Outright repeal has been used as a threat against these companies, but what would it accomplish? The tech giants insist it would mean even more censorship, and that is the likely result.

Other Legislative Options

Other legislative solutions might hold the key to establishing true freedom of speech on the internet, a project that might have seemed pointless a decade ago. Justice Clarence Thomas’s concurring opinion in Biden v. Knight First Amendment Institute suggested the social media giants might be treated as common carriers or made accountable under laws on public accommodation. This seems reasonable in light of the strong network effects under which social media platforms operate as “public squares.” Common carrier law or a law designating a platform as a public accommodation would prohibit the platform from discriminating on the basis of speech.

Unlike some, I do not view such restrictions in the same light as so-called net neutrality. The latter requires carriers of data to treat all traffic equally in terms of priority and pricing of network resources, despite the outsized demands created by some services. Net neutrality is a resource-allocation issue, not at all akin to managing traffic based on its political content.

The legislation contemplated by free speech activists with respect to big tech has to do with prohibiting viewpoint discrimination. That could be accomplished by laws asserting protections similar to those granted under the so-called Fairness Doctrine. As Daniel Oliver explains:

“A law prohibiting viewpoint discrimination (Missouri Senator Josh Hawley has introduced one such bill) would be just as constitutional as the Fairness Doctrine, an FCC policy which adjusted the overall balance of broadcast programming, or the Equal Time Rule, which first emerged in the Radio Act of 1927 and was established by the Communications Act of 1934. Under such a law, a plaintiff could sue for viewpoint discrimination. That plaintiff would be someone whose message had been suppressed by a tech company or whose account had been blocked or cancelled….”

Ron DeSantis just signed a new law giving the state of Florida or individuals the right to sue social media platforms for limiting, altering or deleting content posted by users, as well as daily fines for blocking candidates for political office. It will be interesting to see whether any other states pass similar legislation. However, the fines amount to a pittance for the tech giants, and the law will be challenged by those who say it compels speech by social media companies. That argument presupposes an implicit endorsement of all user content, which is absurd and flies in the face of the very immunity granted by Section 230. 

Justice Thomas took pains to point out that when the government restricts a platform’s “right to exclude,” the accounts of public officials can more clearly be delineated as public forums. But in an act we wouldn’t wish to emulate, the government of Nigeria just shut down Twitter for blocking President Buhari’s tweet threatening force against rebels in one part of the country. Still, any law directly restricting a platform’s editorial discretion must be enforceable, whether that involves massive financial penalties for violations or some other form of discipline.

Private Action

There are private individuals who care enough about protecting speech online to do something about it. For example, these tech executives are fighting against internet censorship. You can also complain directly to the platforms when they censor content, and there are ways to react to censored posts by following prompts — tell them the information provided on their decision was NOT helpful and why. You can follow and support groups like the Media Research Center and its Censor Track service, or the Internet Accountability Project. Complain to your state and federal legislators about censorship and tell them what kind of changes you want to see. Finally, if you are serious about weakening the grip of Big Tech, ditch them. Close your accounts on Facebook and Twitter. Stop using Google. Cancel your Prime membership. Join networks that are speech friendly and stick it out.

Individual action and a sense of perspective are what Katherine Mangu-Ward urges in this excellent piece:

“Ousted from Facebook and Twitter, Trump has set up his own site. This is a perfectly reasonable response to being banned—a solution that is available to virtually every American with access to the internet. In fact, for all the bellyaching over the difficulty of challenging Big Tech incumbents, the video-sharing app TikTok has gone from zero users to over a billion in the last five years. The live audio app Clubhouse is growing rapidly, with 10 million weekly active users, despite being invite-only and less than a year old. Meanwhile, Facebook’s daily active users declined in the last two quarters. And it’s worth keeping in mind that only 10 percent of adults are daily users of Twitter, hardly a chokehold on American public discourse.

Every single one of these sites is entirely or primarily free to use. Yes, they make money, sometimes lots of it. But the people who are absolutely furious about the service they are receiving are, by any definition, getting much more than they paid for. The results of a laissez-faire regime on the internet have been remarkable, a flowering of innovation and bountiful consumer surplus.”

Conclusion

The fight over censorship by Big Tech will continue, but legislation will almost certainly be confined to the state level in the short term. It might be some time before federal law recognizes social media platforms as the public forums most users think they should be. Federal legislation might someday call for the wholesale elimination of Section 230 or an adjustment to its language. A more direct defense of First Amendment rights would be strict prohibitions on online censorship, but that won’t happen. Instead, the debate will become mired in controversy over appropriate versus inappropriate moderation, as Mangu-Ward suggests. Antitrust action should always be viewed with suspicion, though some argue it is necessary to establish a more competitive environment, one in which free speech and fair search-engine treatment can flourish.

Organic competition is the best outcome of all, but users must be willing to vote with their digital feet, as it were, rejecting the large tech incumbents and trying new platforms. And when you do, try to bring your friends along with you!

Note: This post also appears at The American Reveille.

The Futility and Falsehoods of Climate Heroics

01 Tuesday Jun 2021

Posted by Nuetzel in Climate science, Environmental Fascism, Global Warming, Uncategorized

≈ Leave a comment

Tags

Atmospheric Carbon, Biden Administration, Carbon forcing, Carbon Mitigation, Climate Change, Climate Sensitivity, ExxonMobil, Fossil fuels, global warming, Green Energy, Greenhouse Gas, IPCC, John Kerry, Judith Curry, Natural Gas, Netherlands Climate Act, Nic Lewis, Nuclear power, Putty-Clay Technology, Renewables, Ross McKitrick, Royal Dutch Shell, Social Cost of Carbon, William Nordhaus

The world’s gone far astray in attempts to battle climate change through forced reductions in carbon emissions. Last Wednesday, in an outrageously stupid ruling, a Dutch court ordered Royal Dutch Shell to reduce its emissions by 45% by 2030 relative to 2019 levels. The ruling has nothing to do with Shell’s historical record on the environment. Rather, the Court said Shell’s existing climate action plans did not meet “the company’s own responsibility for achieving a CO2 reduction.” The decision will be appealed, but it appears that “industry agreements” under the Netherlands’ Climate Act of 2019 are in dispute.

Later that same day, a shareholder dissident group supporting corporate action on climate change won at least two ExxonMobil board seats. And then we have the story of John Kerry’s effort to stop major banks from lending to the fossil fuel industry. Together with the Biden Administration’s other actions on energy policy, we are witnessing the greatest attack on conventional power sources in history, and we’ll all pay dearly for it. 

The Central Planner’s Conceit

Technological advance is a great thing, and we’ve seen it in the development of safe nuclear power generation, but the environmental left has successfully placed roadblocks in the way of its deployment. Instead, they favor the mandated adoption of what amount to beta versions of technologies that might never be economic and that create extreme environmental hazards of their own (see here, here, here, and here). Green energy installations are often subsidized by the government, which disguises their underlying inefficiencies from private adopters. These premature beta versions are then embedded in our base of productive capital and often remain even as they are made obsolete by subsequent advances. The “putty-clay” nature of technology decisions should caution us against premature adoptions of this kind. This is just one of the many curses of central planning.

Not only have our leftist planners forced the deployment of inferior technologies; they are also actively seeking to bring more viable alternatives to ruination. I mentioned that nuclear power and even natural gas offer paths for reducing carbon emissions, yet climate alarmists wage war against natural gas as fiercely as against other fossil fuels. We have Kerry’s plot to deny funding to the fossil fuel industry, and even activist “woke” investors attempting to override management expertise and divert internal resources to green energy. It’s not as if renewable energy sources are absent from these energy firms’ development portfolios. Allocations of capital and staff to these projects usually depend upon a company’s professional and technical expertise, market forces, and (less propitiously) incentives decreed by the government. Yet the activist investors are there to impose their will.

Placing Faith and Fate In Models

All these attempts to remake our energy complex and the economy are based on the presumed external costs associated with carbon emissions. Those costs, and the potential savings achievable through the mitigation efforts of government and private greenies around the globe, have been wildly exaggerated.

The first thing to understand about the climate “science” relied upon by the environmental left is that it is almost exclusively model-dependent. In other words, it is based on mathematical relationships specified by the researchers. Their projections depend on those specifications, the selection of parameter values, and the scenarios to which the models are subjected. The models are usually calibrated to be roughly consistent with outcomes over some historical time period, but as modelers in almost any field can attest, that is not hard to do, and it is still possible to produce extreme results out-of-sample. The point is that these models are generally not estimated statistically from a lengthy sample of historical data. Even when sound statistical methodologies are employed, the samples are vanishingly short on climatological timescales. That means the results are highly sample-specific and likely to propagate large errors out-of-sample. But most of these are what might be called “toy models” specified by the researcher. And what are often billed as “findings” are merely projections based on scenarios that are themselves manufactured by imaginative climate “researchers” cum grant-seeking partisans. In fact, it’s much worse than that, because even historical climate data is subject to manipulation, but that’s a topic for another day.

Key Assumptions

What follows are basic components of the climate apocalypse narrative as supported by “the science” of man-made or anthropogenic global warming (AGW):

(A) The first kind of model output to consider is the increase in atmospheric carbon concentration over time, measured in parts per million (PPM). This is a function of many natural processes, including volcanism and other kinds of outgassing from oceans and decomposing biomass, as well as absorption by carbon sinks like vegetation and various geological materials. But the primary focus is human carbon-generating activity, which depends on the carbon-intensity of production technology. As Ross McKitrick shows (see chart below), projections from these kinds of models have demonstrated significant upside bias over the years. Whether that is because of slower-than-expected economic growth, unexpected technological efficiencies, an increase in the service-orientation of economic activity worldwide, or feedback from carbon-induced greening or other processes, most of the models have over-predicted atmospheric carbon PPM. Those errors tend to increase with the passage of time, of course.

(B) Most of the models promoted by climate alarmists are carbon forcing models, meaning that carbon emissions are the primary driver of global temperatures and other phenomena like storm strength and increases in sea level. With increases in carbon concentration predicted by the models in (A) above, the next stage of models predicts that temperatures must rise. But the models tend to run “hot.” This chart shows the mean of several prominent global temperature series contrasted with 1990 projections from the Intergovernmental Panel on Climate Change (IPCC).

The following is even more revealing, as it shows the dispersion of various model runs relative to three different global temperature series:

And here’s another, which is a more “stylized” view, showing ranges of predictions. The gaps show errors of fairly large magnitude relative to the mean trend of actual temperatures of 0.11 degrees Celsius per decade.

(C) Climate sensitivity to “radiative forcing” is a key assumption underlying all of the forecasts of AGW. A simple explanation: increases in the atmosphere’s carbon concentration strengthen the greenhouse effect, so more solar energy is “trapped” within our “greenhouse” and less is radiated back into space. Climate sensitivity is usually measured in degrees Celsius relative to a doubling of atmospheric carbon.
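As a stylized sketch of this assumption (a textbook simplification, not a formula taken from the post), the equilibrium temperature response is typically modeled as logarithmic in concentration:

```latex
% S: equilibrium climate sensitivity (degrees C per doubling of CO2)
% C: atmospheric CO2 concentration; C_0: a reference concentration
\Delta T = S \cdot \log_2\!\left(\frac{C}{C_0}\right)
```

Under this form, each doubling of concentration adds the same increment S to equilibrium temperature, which is why the assumed value of S dominates long-run projections.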

And how large is the climate’s sensitivity to a doubling of carbon PPM? The IPCC says it’s in a range of 1.5C to 4.5C. However, findings published by Nic Lewis and Judith Curry are close to the low end of that range, as are those found by the author of the paper described here.

In separate efforts, Finnish and Japanese researchers have asserted that the primary cause of recent warming is an increase in low cloud cover, which the Japanese team attributes to increases in the Earth’s bombardment by cosmic rays due to a weakening magnetic field. The Finnish authors note that most of the models used by the climate establishment ignore cloud formation, an omission they believe leads to a massive overstatement (10x) of sensitivity to carbon forcings. Furthermore, they assert that carbon forcings are mainly attributable to ocean discharge as opposed to human activity.

(D) Estimates of the Social Cost of Carbon (SCC) per ton of emissions are used as a rationale for carbon abatement efforts. The SCC was pioneered by economist William Nordhaus in the 1990s, and today there are a number of prominent models that produce distributions of possible SCC values, which tend to have high dispersion and extremely long upper tails. Of course, the highest estimates are driven by the same assumptions about extreme climate sensitivities discussed above. The Biden Administration is using an SCC of $51 per ton. Some recommend the adoption of even higher values for regulatory purposes in order to achieve net-zero emissions at an early date, revealing the manipulative purposes to which the SCC concept is put. This is a raw attempt to usurp economic power, not any sort of exercise in optimization, as this admission from a “climate expert” shows. In the midst of a barrage of false climate propaganda (hurricanes! wildfires!), he tells 60 Minutes that an acceptable limit on warming of 1.5C is just a number they “chose” as a “tipping point.”
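Because the SCC is, at bottom, a discounted sum of assumed future marginal damages from an extra ton of emissions, its level is extremely sensitive to the damage path and discount rate chosen. A minimal toy calculation (hypothetical numbers, not any official model) illustrates the point:

```python
# Toy illustration only: the SCC concept is the present value of the
# marginal damages caused by emitting one additional ton of CO2.
# The damage stream and discount rates here are hypothetical, chosen
# solely to show how strongly the discount rate drives the result.

def toy_scc(annual_damage, years, discount_rate):
    """Discounted sum of a constant hypothetical damage stream ($/ton per year)."""
    return sum(annual_damage / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Same assumed damages; only the discount rate differs.
scc_low_rate = toy_scc(annual_damage=2.0, years=100, discount_rate=0.02)
scc_high_rate = toy_scc(annual_damage=2.0, years=100, discount_rate=0.07)
print(round(scc_low_rate), round(scc_high_rate))  # the 2% rate yields roughly 3x the 7% figure
```

The same mechanics apply, with far greater force, to the long-tailed damage scenarios that generate the highest published SCC estimates.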

As a measurement exercise, more realistic climate sensitivities yield much lower SCCs. McKitrick presents a chart from Lewis-Curry comparing their estimates of the SCC at lower climate sensitivities to an average of earlier estimates used by IPCC:

High levels of the SCC are used as a rationale for high-cost carbon abatement efforts. If the SCC is overstated, however, then costly abatements represent waste. And there is no guarantee that spending an amount on abatements equal to the SCC will eliminate the presumed cost of a ton’s worth of anthropogenic warming. Again, there are strong reasons to believe that the warming experienced over the past several decades has had multiple causes, and that human carbon emissions might have played a relatively minor role.

Crisis Is King

Some people just aren’t happy unless they have a crisis over which to harangue the rest of us. But try as they might, the vast resources dedicated to carbon reduction are largely wasted. I hesitate to call their effort quixotic, because they actually want more windmills and are completely lacking in gallantry. As McKitrick notes, it takes many years for abatement to have a meaningful impact on carbon concentrations, and since emissions mix globally, unilateral efforts are practically worthless. Worse yet, the resource costs of abatement and lost economic growth are unacceptable, especially when some of the most promising alternative sources of “clean” energy are dismissed by activists. So we forego economic growth, rush to adopt immature energy alternatives, and make very little progress toward the stated goals of the climate alarmists.

What’s To Like About Income Inequality?

22 Saturday May 2021

Posted by Nuetzel in Uncategorized

≈ 2 Comments

Tags

Capital Gains, David Splinter, Emmanuel Saez, Fiscal Income, Founders, Gerald Auten, Hoover Institution, Income Redistribution, Inequality, Inheritance, Joel Kotkin, John Cochrane, Joint Committee on Taxation, Omitted Income, Paul Graham, Progressive Taxes, Thomas Piketty, Transfer Payments

What’s to like about inequality?

That depends on how it happened and on the conditions governing its future evolution. Inequality is a fact of life, and no social or economic system known to man can avoid or eliminate it. It’s “bad” in the sense that “not everybody gets a prize,” but inequality in a free market economic system arises out of the same positive dynamic that fosters achievement in any kind of competition. Even the logic underlying the view that inequality is “bad” is not consistent: we can be more equal if the rich all lose $1,000,000 and the poor all lose $1,000, but that won’t make anyone happy.

Unequal Rewards Are Natural

Many activities contribute to general prosperity and create unequal rewards as a by-product. A capitalist system rewards knowledge, effort, creativity, and risk-taking. Those who are very good at creating value earn commensurate rewards, and in turn, they often create rewarding opportunities for others who might participate in their enterprises. A system of just incentives and rewards also requires that property rights be secure, and that implies that wealth can be accumulated more readily by those earning the greatest rewards.

Equality can be decreed only by severely restricting the rewards to productive effort, and that requires a massive imbalance of power. The state, and those who direct its actions, always have a monopoly on legal coercion. In practice, the power to commandeer value created by others means that economic benefits will waft under the noses of apparatchiks. The raw power and economic benefits usurped under such an authoritarian regime cannot be competed away, and efficiency and value are seldom prioritized by state monopolists. The egalitarian pretense thus masks its own form of extreme inequality and decline. Inequality is unavoidable in a very real sense.

Measuring Trends in Inequality

Beyond those basic truths, the facts do not support the conventional wisdom that inequality has grown more extreme. A research paper by Gerald Auten and David Splinter corrects many of the shortcomings of commonly-cited sources on income inequality. Auten works for the U.S. Treasury, and Splinter is employed by the congressional Joint Committee on Taxation. They find that higher transfer payments and growing tax progressivity since the early 1960s kept the top after-tax income share stable.

John Cochrane shares the details of a recent presentation made by Auten and Splinter (AS) at the Hoover Institution. A few interesting charts follow:

The blue “Piketty-Saez” (PS) line at the top uses an income measure from well-known research by Thomas Piketty and Emmanuel Saez that contributed to the narrative of growing income inequality. The PS line is based on tax return information (fiscal income), but it embeds several distortions.

Realized capital gains are counted there, which misrepresents income shares because the realization of gains does not mark the point at which the true gains occur. Typically, the wealth exists before and after the gains are realized. Realized gains are often a function of changes in tax law and investor reaction to those changes. Moreover, neither realized nor unrealized gains represent income earned in production; instead, they capture changes in asset prices.

Income earned in production is about a third more than the income measure used by PS, even with the capital gains distortion. This omitted income and its allocation across earners is the subject of detailed analysis by AS. Their analysis is consistent in its focus on individual taxpayers, rather than households, which eliminates another upward bias in the PS line created by a secular decline in marriage rates. Then, AS consider the reallocation of income shares due to taxes and transfer payments. After all that, the income share of the top 1% shifts all the way down to the red line in the chart. The most recent observations put the share about where it was in the 1960s. 

The next chart shows income shares for broader segments: the top, middle, and lowest 20% of the income distribution. Taxes and transfers cause massive changes in the calculated shares and their trends over time. Again, these shares remain about where they were in the 1960s, contradicting the popular narrative that high earners are gobbling up ever larger pieces of the pie.
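The size of these tax-and-transfer adjustments is easy to see with a toy example (hypothetical numbers, not the Auten-Splinter data): a progressive tax whose revenue is transferred to lower-income groups mechanically compresses the measured top share:

```python
# Toy example with hypothetical numbers (not the AS data): a progressive
# tax whose revenue is transferred to lower-income groups mechanically
# compresses the measured top income share.
pretax = [10, 20, 30, 40, 100]                 # stylized quintile incomes
tax_rates = [0.00, 0.05, 0.10, 0.15, 0.30]     # stylized progressive rates
taxes = [y * r for y, r in zip(pretax, tax_rates)]
transfer_shares = [0.5, 0.3, 0.2, 0.0, 0.0]    # all revenue goes to lower groups
transfers = [sum(taxes) * s for s in transfer_shares]
posttax = [y - t + tr for y, t, tr in zip(pretax, taxes, transfers)]

top_share_pretax = pretax[-1] / sum(pretax)     # 0.50
top_share_posttax = posttax[-1] / sum(posttax)  # 0.35
print(round(top_share_pretax, 2), round(top_share_posttax, 2))
```

Whether the pre-tax or post-tax-and-transfer measure is the relevant one is precisely the methodological dispute between the PS and AS approaches.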

If income shares have remained about the same since the 1960s, that means high and low earners have made roughly equivalent income gains over that time. The next chart demonstrates that the bottom half of the income distribution has indeed seen significant growth in real incomes, despite the false impression created by PS and the common misperception of stagnant income growth among the working class. 

More Distributional Tidbits 

In a sense, all this is misleading because there is so much migration across the income distribution over time. Traditional calculations of income shares are “cross-sectional”, meaning they compare the same slices of the distribution at different points in time. But people near the low end in 1990 are not the same people near the bottom today. The same is true of those near the top and those in the middle. Income grows over time, and those lower in the distribution typically migrate upward as they age and acquire skills and work experience. Upward migration in income share is the general tendency, but there is some downward migration as well. Abandoning the cross-sectional view causes the typical story-line of rising income inequality to unravel.

There are many other interesting facts (and some great charts) in the AS paper and in Cochrane’s post. One in particular shows that the average federal tax rate paid by the top 1% trended upward from the 1960s through the mid-1990s before flattening and trending slightly downward. This contradicts the assertion that high earners paid much higher taxes before the 1960s than today. In fact, the tax base broadened over that time, more than compensating for declines in marginal tax rates.

Given that more exacting measures of inequality haven’t changed much over the years, does that imply that redistributional policies have worked to keep the income distribution from worsening? That seems plausible on its face. If anything, taxes on high earners have increased, as have transfers to low earners and non-earners. Those changes appear to have offset other factors that would have led to greater inequality. However, the framing of the question is inappropriate. Maintaining a given income distribution is not a good thing if it inhibits economic growth. In fact, faster growth in production and greater well-being might well have led to a more unequal distribution of income. In other words, the whole question of offsetting inequality via redistribution is something of a chimera in the absence of a reliable counterfactual.

Wealth

Cochrane has a related post on the sources of wealth in America. Increasingly over the past few decades, wealth has been accumulated by self-made entrepreneurs, rather than through inheritance. That might come as a surprise to many on the left, to the extent that they care. Cochrane quotes Paul Graham on this point:

“In 1982 the most common source of wealth was inheritance. Of the 100 richest people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By 2020 the number of heirs had been cut in half, accounting for only 27 of the biggest 100 fortunes.

Why would the percentage of heirs decrease? Not because inheritance taxes increased. In fact, they decreased significantly during this period. The reason the percentage of heirs has decreased is not that fewer people are inheriting great fortunes, but that more people are making them.

How are people making these new fortunes? Roughly 3/4 by starting companies and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders’ or early employees’ equity (52 founders, 2 early employees, and 2 wives of founders), and 17 from managing investment funds.”

The picture that emerges is one of great opportunity and dynamism. While the accumulation of massive fortunes might enrage the Left, these are the kinds of outcomes we should hope for, especially because the success of these new titans of industry is inextricably linked to tremendous value captured by their customers and lucrative opportunities for their employees. 

Here’s the best part of Cochrane’s post:

“We should not think about more or less inequality, we should think about the right amount of inequality, or productive vs. rent-seeking sources of inequality. Or, better, whether inequality is a symptom of health or sickness in the economy. Take Paul’s picture of the US economy at face value. What’s a better economy and society? One in which a few oligopolies … , deeply involved with government, run everything — think GM, Ford, IBM, AT&T, defense contractors — and it’s hard to start new innovative fast growing companies? Or the world in which the Bill Gates and Steve Jobs of the world can start new companies, deliver fabulous products and get insanely rich in the process? “

No doubt about it! However, today’s tremendously successful tech entrepreneurs also give us something to worry about. They have become oligarchs capable of suppressing competitive forces through sheer market power, influence, and even control over politicians and regulators. As I said at the top, whether inequality is benign depends upon the conditions governing its evolution. And today, we see the ominous development of a corporate-state tyranny, as decried by Joel Kotkin in this excellent post. Many of the daring tech entrepreneurs who benefitted from advantages endowed by our capitalist system have become autocrats who seek to plan our future with their own ideologies and self-interest in mind.

Conclusion

For too long we’ve heard the Left bemoan an increasingly “unfair” distribution of income. This includes propaganda intended to distort poverty levels in the U.S. The fine points of measuring shifts in the income distribution show that narrative to be false. Moreover, attempting to equalize the distribution of income, or even preventing changes that might occur as a natural consequence of innovation and growth, is not a valid policy objective if our goal is to maximize economic well-being.

The worst thing about inequality is that the poorest individuals are likely to be destitute, with neither the ability nor the means to support themselves. There is certainly such an underclass in the U.S., and our social safety net helps keep the poorest and least capable individuals above the poverty line after transfer payments. But too often our efforts to provide support interfere with incentives for those who are capable of productive work, which is both demeaning for them and a drain on everyone else. The best prescription for improving the well-being of all is economic growth, regardless of its impact on the distribution of income or wealth.

Allocating Vaccine Supplies: Lives or “Justice”?

29 Tuesday Dec 2020

Posted by Nuetzel in Pandemic, Public Health, Uncategorized, Vaccinations

≈ 1 Comment

Tags

Alex Tabarrok, CDC, Chicago, Co-Morbidities, Covid-19, Emma Woodhouse, Essential Workers, Historical Inequities, Infection Fatality Rate, Long-Term Care, Megan McArdle, Super-Spreaders, Transmission, Vaccinations, Vaccine Allocation, Vaccine Passports

There are currently two vaccines in limited distribution across the U.S. from Pfizer and Moderna, but the number and variety of different vaccines will grow as we move through the winter. For now, the vaccine is in short supply, but that's as much a matter of administering doses in a timely way as it is the quantity on hand. There are competing theories about how best to allocate the available doses, which is the subject of this post. I won’t debate the merits of refusing to take a vaccine except to say that I support anyone’s right to refuse it without coercion by public authorities. I also note that certain forms of discrimination on that basis are not necessarily unreasonable.

The vaccines in play all seem to be highly effective (> 90%, which is incredible by existing standards). There have been a few reports of side effects — certainly not in large numbers — but it remains to be seen whether the vaccines will have any long-term side effects. I’m optimistic, but I won’t dismiss the possibility.

Despite competing doctrines about how the available supplies of vaccine should be allocated, there is widespread acceptance that health care workers should go first. I have some reservations about this because, like Emma Woodhouse, I believe staff and residents at long-term care facilities should have at least equal priority. Yet they do not in the City of Chicago and probably in other areas. I have to wonder whether unionized health care workers there are the beneficiaries of political favoritism.

Beyond that question, we have the following competing priorities: 1) the vulnerable in care homes and other elderly individuals (75+, while younger individuals with co-morbidities come later); 2) “essential” workers of all ages (from police to grocery store clerks — decidedly arbitrary); and 3) basically the same as #2 with priority given to groups who have suffered historical inequities.

#1 is clearly the way to save the most lives, at least in the short-run. Over 40% of the deaths in the U.S. have been in elder-care settings, and COVID infection fatality rates mount exponentially with age:
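As a rough illustration of that exponential age gradient, published meta-analysis estimates put the infection fatality rate (IFR) on an approximately log-linear path in age, something like log10(IFR%) ≈ -3.27 + 0.0524 × age. The coefficients below are drawn from that literature, not from this post's charts, so treat them as an illustrative assumption:

```python
# Illustrative log-linear IFR-by-age curve. The intercept and slope are
# assumptions borrowed from published meta-analysis estimates, used here
# only to show how steeply risk climbs with age.

def ifr_percent(age: float) -> float:
    """Approximate infection fatality rate (in percent) at a given age."""
    return 10 ** (-3.27 + 0.0524 * age)

for age in (25, 55, 85):
    print(f"age {age}: IFR ~ {ifr_percent(age):.2f}%")
```

Under this curve, an 85-year-old faces a fatality risk well over a hundred times that of a 25-year-old, which is the crux of the prioritization argument.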

To derive the implications of #1 and #2, it’s more convenient to look at the share of deaths within each age cohort, since it incorporates the differences in infection rates and fatality rates across age groups (the number of “other” deaths is much larger than COVID deaths, of course, despite similar death shares):

The 75+ age group has accounted for about 58% of all COVID deaths in the U.S., and ages 25 – 64 accounted for about 20% (an approximate age range for essential workers). This implies that nearly three times as many lives can be saved by prioritizing the elderly, at least if deaths among so-called essential workers mimic deaths in the 25 – 64 age cohorts. However, the gap would be smaller and perhaps reversed in terms of life-years saved.
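The "nearly three times" figure follows directly from the death shares just cited. A minimal sketch, using those shares as inputs (this is short-run arithmetic, not a transmission model):

```python
# Back-of-the-envelope comparison of lives saved under priority #1
# (elderly first) vs. #2 (essential workers first), using the death
# shares cited in the text.

share_deaths_75_plus = 0.58  # 75+ share of U.S. COVID deaths
share_deaths_25_64 = 0.20    # share in the approximate essential-worker age range

# If a fixed supply fully covers one group or the other, the short-run
# advantage of prioritizing the elderly is roughly the ratio of shares:
advantage = share_deaths_75_plus / share_deaths_25_64
print(f"Lives-saved advantage of #1 over #2: {advantage:.1f}x")
```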

Furthermore, this is a short-run calculation. Over a longer time frame, if essential workers are responsible for more transmission across all ages than the elderly, then it might throw the advantage to prioritizing essential workers over the elderly, but it would take a number of transmission cycles for the differential to play out. Yes, essential workers are more likely to be “super-spreaders” than work-at-home corporate employees, or even the unemployed, but identifying true super-spreaders would require considerable luck. Moreover, care homes generally house a substantial number of elderly individuals and staff in a confined environment, where spread is likely to be rampant. So the transmission argument for #2 over #1 is questionable.

The overriding problem is that of available supply. Suppose enough vaccine is available for all elderly individuals within a particular time frame. That’s about 6.6% of the total U.S. population. The same supply would cover only about 13% of the younger age group identified above. Essential workers are a subset of that group, but the same supply would fall far short of vaccinating all of them; lives saved under #2 would then fall far short of the lives saved under #1. Quantities of the vaccine are likely to increase over the course of a few months, but limited supplies at the outset force us to focus the allocation decision on the short-term, making #1 the clear winner.
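The coverage comparison is simple division over population shares. A sketch, where the 6.6% figure is from the text and the ~51% share for the 25–64 range is an assumed round number consistent with the 13% result:

```python
# Coverage arithmetic for a fixed vaccine supply. The 75+ share is from
# the text; the 25-64 share is an assumed approximation.

us_population = 330e6
share_75_plus = 0.066  # 75+ as a share of total population
share_25_64 = 0.51     # assumed share of population in the 25-64 range

supply = share_75_plus * us_population  # doses sufficient to cover all 75+

coverage_of_younger = supply / (share_25_64 * us_population)
print(f"Same supply covers {coverage_of_younger:.0%} of ages 25-64")
```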

Now let’s talk about #3, minority populations, historical inequities, and the logic of allocating vaccine on that basis. Minority populations have suffered disproportionately from COVID, so this is really a matter of objective risk, not historical inequities… unless the idea is to treat vaccine allocations as a form of reparation. Don’t laugh — that might not be far from the intent, and it won’t count as a credit toward the next demand for “justice”.

For the sake of argument, let’s assume that minorities have 3x the fatality rate of whites from COVID (a little high). Roughly 40% of the U.S. population is non-white or Hispanic. That’s more than six times the size of the full 75+ population. If all of the available doses were delivered to essential workers in that group, it would cover less than half of them and save perhaps 30% of minority COVID deaths over a few months. In contrast, minorities might account for up to two-thirds of the deaths among the elderly. Therefore, vaccinating all of the elderly would save 58% of elderly COVID deaths and about 39% of minority deaths overall!
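The "more than six times" claim is easy to verify from the two population shares. A quick check, using the text's figures:

```python
# Relative group sizes from the shares cited in the text: non-white or
# Hispanic share of the U.S. population vs. the 75+ share.

share_minority = 0.40  # non-white or Hispanic share of U.S. population
share_75_plus = 0.066  # 75+ share of U.S. population

ratio = share_minority / share_75_plus
print(f"Minority population is ~{ratio:.1f}x the size of the 75+ population")
```

The larger the target group relative to the fixed supply, the smaller the fraction of its deaths that can be averted, which is why #3 fares poorly even on its own terms.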

The COVID mortality risk to the average white individual in the elderly population is far greater than that faced by the average minority individual in the working age population. Therefore, no part of #3 is sensible from a purely mathematical perspective. Race/ethnicity overlaps significantly with various co-morbidities and the number of co-morbidities with which individuals are afflicted. Further analysis might reveal whether there is more to be gained by prioritizing by co-morbidities rather than race/ethnicity.

Megan McArdle has an interesting column on the CDC’s vaccination guidelines issued in November, which emphasized equity, like #3 above. But the CDC walked back that decision in December. The initial November decision was merely the latest of the agency’s fumbles on COVID policy. In her column, McArdle notes that the public has understood that the priority was to save lives since the very start of the pandemic. Ideally, if objective measures show that identifiable characteristics are associated with greater vulnerability, then those should be considered in prioritizing individuals who desire vaccinations. This includes age, co-morbidities, race/ethnicity, and elements of occupational risk. But lesser associations with risk should not take precedence over greater associations with risk unless an advantage can be demonstrated in terms of lives saved, historical inequities or otherwise.

The priorities for the early rounds of vaccinations may differ by state or jurisdiction, but they are all heavily influenced by the CDC’s guidelines. Some states pay lip service to equity considerations (if they simply said race/ethnicity, they’d be forced to operationalize it), while others might actually prioritize doses by race/ethnicity to some degree. Once the initial phase of vaccinations is complete, there are likely to be more granular prioritizations based on different co-morbidities, for example, as well as race/ethnicity. Thankfully, the most severe risk gradient, advanced age, will have been addressed by then.

One last point: the Pfizer and Moderna vaccines both require two doses. Alex Tabarrok points out that first doses appear to be highly effective on their own. In his opinion, while supplies are short, the second dose should be delayed until all groups at substantially elevated risk can be vaccinated, doubling the supply of initial doses! The idea has merit, but it is unlikely to receive much consideration in the U.S. except to the extent that supply chain problems make it unavoidable, and they might.
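The arithmetic behind the first-doses-first idea is trivial but worth making explicit: deferring second doses doubles the number of people who can receive an initial dose from a fixed supply.

```python
# First-doses-first arithmetic: a fixed supply covers twice as many
# people with one dose as with two. The supply figure is a placeholder.

doses_on_hand = 1_000_000

two_dose_coverage = doses_on_hand // 2   # people fully vaccinated
first_dose_coverage = doses_on_hand      # people with an initial dose

print(two_dose_coverage, first_dose_coverage)
```

Whether that doubling saves lives on net then turns on how much protection a single dose provides relative to two, which is the empirical question Tabarrok raises.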

COVID Testing: Cycle Thresholds and Coffee Grounds

19 Saturday Dec 2020

Posted by Nuetzel in Coronavirus, Public Health, Uncategorized

≈ 2 Comments

Tags

Andrew Bostom, Coffee Grounds Test, Covid-19, Ct, Cycle Threshold, False Positives, FDA, PCR Test, Rapid Tests, Rhode Island, Viral RNA

Here’s some incredible data on PCR tests demonstrating a radically excessive lab practice that generates false positives. I’m almost tempted to say we’d do just as well using a thermometer and the coffee ground test. Open a coffee tin and take a sniff. Can you smell the distinct aroma of the grounds? If not, and if you have other common symptoms, there’s a decent chance you have an active COVID infection. That test is actually in use in some parts of the globe!

The data shown below on PCR tests are from the Rhode Island Department of Health and the Rhode Island State Health Lab. They summarize over 5,000 positive COVID PCR tests (collected via deep nasal swabs) taken from late March through early July. The vertical axis in the chart measures the cycle threshold (Ct) value of each positive test. Ct is the number of times the RNA in a sample must be replicated before any COVID-19 (or COVID-like) RNA is detected. It might be from a live virus or perhaps a fragment of a dead virus. A positive test with a low Ct value indicates that the subject is likely infected with billions of live COVID-19 viruses, while a high Ct value indicates perhaps a handful or no live virus at all.

The range of red dots in the chart (< 28 Ct) indicates relatively low Ct values and active infections. The yellow range of dots, for which 28 < Ct <= 32, indicates possible infections, and the upper range of green dots, where Ct > 32, indicates that active infections were highly unlikely. It’s important to note that all of these tests were recorded as new COVID cases, so the range of Ct values suggests that testing in Rhode Island was unreasonably sensitive. That’s broadly true across the U.S. as well, which means that COVID cases are over-counted by perhaps 30% or more. And yet it is extremely difficult for subjects testing positive to learn their Ct values. You can ask, but you probably won’t get an answer, which is absurd and counterproductive.
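The color bands just described amount to a simple classification rule on Ct. A minimal sketch using the Rhode Island chart's cutoffs (the function name is my own, for illustration):

```python
# Map a PCR cycle-threshold (Ct) value to the chart's color bands:
# red (< 28) likely active infection, yellow (28-32) possible,
# green (> 32) active infection highly unlikely.

def classify_ct(ct: float) -> str:
    """Classify a Ct value using the cutoffs from the Rhode Island data."""
    if ct < 28:
        return "red"     # likely active infection
    elif ct <= 32:
        return "yellow"  # possible infection
    else:
        return "green"   # active infection highly unlikely

print(classify_ct(22))  # red
print(classify_ct(30))  # yellow
print(classify_ct(35))  # green
```

A lab reporting Ct alongside each positive result could apply exactly this kind of banding, which is what makes the refusal to disclose Ct values so frustrating.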

Notice that the concentration of red dots diminished over time, and we know that the spring wave of the virus in the Northeast was waning as the summer approached. The share of positive tests with high Ct values increased over that time frame, however. This is borne out by the next chart, which shows the daily mean Ct of these positive tests. The chart shows that active infections became increasingly rare over that time frame both because positive tests decreased and the average Ct value rose. What we don’t know is whether labs bumped up the number of cycles or replications to which samples were subjected. Still, the trend is rather disturbing because most of the positive cases in May and the first half of June were more likely to be virus remnants than live viruses.

It’s also worth noting that COVID deaths declined in concert with the upward trend in Ct values. This is shown in the chart below (where the Ct scale is inverted). This demonstrates the truly benign nature of positive tests having high Ct values.

This is also demonstrated by the following data from a New York City academic hospital, which was posted by Andrew Bostom. It shows that a more favorable “clinical status” of COVID patients is associated with higher Ct values.

It’s astounding that the U.S. has relied so heavily on a diagnostic tool that gets so many subjects wrong. And it’s nearly impossible for subjects testing positive to obtain their Ct values. Instead, they are subject to self-quarantine for up to two weeks. Even worse, until recently there were delays in reporting the results of these tests of up to a week or more. That made them extremely unhelpful. On the other hand, the coffee ground test is fast and cheap, and it might enhance the credibility of a subsequent positive PCR test, if one is necessary … and especially if the lab won’t report the Ct value.

The PCR test has identified far too many false infections, but it wouldn’t have been quite so damaging if 1) a reasonably low maximum cycle threshold had been established; 2) test results had not been subject to such long delays; and 3) rapid retests had been available for confirmation. The cycle threshold issue is starting to receive more attention, quite belatedly, and more rapid tests have become available. As I’ve emphasized in the past, cheap, rapid tests exist. But having dithered in February and March in approving even the PCR test, the FDA has remained extremely grudging in approving newer tests, and it persists in creating obstacles to their use. The FDA needs to wake up and smell the coffee!

November Pandemic Perspective

18 Wednesday Nov 2020

Posted by Nuetzel in Coronavirus, Pandemic, Uncategorized

≈ Leave a comment

Tags

@tlowdon, Actual Date of Death, COVID, COVID Testing, COVID-Like Illness, Don Wolt, Excess Deaths, False Positives, Hospitalizations, ILI, Influenza-Like Illness, PCR Tests, Reported Deaths

I hope readers share my compulsion to see updated COVID numbers. It’s become a regular feature on this blog and will probably remain one until infections subside, whether by vaccine or otherwise. Or maybe when people get used to the idea of living normally again in the presence of an endemic pathogen, as they have with many other pathogens and myriad risks of greater proportions, and as they should. That might require more court challenges, political changes, and plain old civil disobedience.

So here, then, is an update on the U.S. COVID numbers released over the past few days. The charts below are attributable to Don Wolt (@tlowdon on Twitter).

First, reported deaths began to creep up again in the latter half of October and have escalated in November. They’ve now reached the highs of the mid-summer wave in the south, but this time the outbreak is concentrated in the midwest and especially the upper midwest.

Reported deaths are the basis of claims that we are seeing 1,500 people dying every day, which is an obvious exaggeration. There have been recent days when reported deaths exceeded that level, but the weekly average of reported deaths is now between 1,100 and 1,200 a day.

It’s important to understand that deaths reported in a given week actually occurred earlier, sometimes eight or more weeks before the week in which they are reported. Most occur within three weeks of reporting, but sometimes the numbers added from four-plus weeks earlier are significant.

The following chart reproduces weekly reported deaths from above using blue bars, ending with the week of November 14th. Deaths by actual date-of-death (DOD) are shown by the orange bars. The most recent three-plus weeks always show less than complete counts of deaths by DOD. But going back to mid-October, actual weekly deaths were running below reported deaths. If the pattern were to follow the upswings of the first two waves of infections, then actual weekly deaths would exceed reported deaths by perhaps the end of October. However, it’s doubtful that will occur, in part because we’ve made substantial progress since the spring and summer in treating the disease.

To reinforce the last point, it’s helpful to view deaths relative to COVID case counts. Deaths by DOD are plotted below by the orange line using the scale on the right-hand vertical axis. New positive tests are represented by the solid blue line, using the left-hand axis, along with COVID hospitalizations. There is no question that the relationship between cases, hospitalizations, and deaths has weakened over time. My suspicions were aroused somewhat by the noticeable compression of the right axis for deaths relative to the two charts above, but on reviewing the actual patterns (peak relative to troughs) in those charts, I’m satisfied that the relationships have indeed “decoupled”, as Wolt puts it.

Cases are going through the roof, but there is strong evidence that a large share of these cases are false positives. COVID hospitalizations are up as well, but their apparent co-movement with new cases appears to be dampening with successive waves of the virus. That’s at least partly a consequence of the low number of tests early in the pandemic.

So where is this going? The next chart again shows COVID deaths by DOD using orange bars. Wolt has concluded, and I have reported here, that the single-best short-term predictor of COVID deaths by DOD is the percentage of emergency room visits at which patients presented symptoms of either COVID-like illness (CLI) or influenza-like illness (ILI). The sum of these percentages, CLI + ILI, is shown below by the dark blue line, but the values are shifted forward by three weeks to better align with deaths. This suggests that actual COVID deaths by DOD will be somewhere around 7,000 a week by the end of November, or about 1,000 a day. Beyond that time, the path will depend on a number of factors, including the weather, prevalence and immunity levels, and changes in mobility.
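The three-week forward shift Wolt applies can be sketched in a few lines. The weekly values below are made up purely for illustration; the point is the alignment of week t's CLI+ILI reading with deaths in week t + 3:

```python
# Align a leading indicator (weekly CLI+ILI % of ER visits) with a lagged
# outcome (deaths by date-of-death) by shifting the indicator forward.
# The series values are illustrative, not actual data.

def shift_forward(series, lead):
    """Shift a weekly series forward by `lead` weeks, padding with None."""
    return [None] * lead + series[:-lead]

cli_ili = [2.0, 2.4, 3.1, 3.8, 4.5, 5.2]  # % of ER visits, by week
aligned = shift_forward(cli_ili, 3)
print(aligned)  # [None, None, None, 2.0, 2.4, 3.1]
```

With the two series aligned this way, the current CLI+ILI reading serves as a rough three-week-ahead projection of deaths by DOD.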

I am highly skeptical that lockdowns have any independent effect in knocking down the virus, though interventionists will try to take credit if the wave happens to subside soon for any other reason. They won’t take credit for the grim lockdown deaths reaped by their policies.

Despite the bleak prospect of 1,000 or more COVID-attributed deaths a day by the end of November, the way in which these deaths are counted is suspect. Early in the pandemic, the CDC significantly altered guidelines for the completion of death certificates for COVID such that deaths are often improperly attributed to the virus. Some COVID deaths stem from false-positive PCR tests, and again, almost since the beginning of the pandemic, hospitals were given a financial incentive to classify inpatients as COVID-infected.

It’s also important to remember that while any true COVID fatality is premature, they are generally not even close to the prematurity of lockdown deaths. That’s a simple consequence of the age profile of COVID deaths, which indicates relatively few life-years lost, and the preponderance of co-morbidities among COVID fatalities.

Again, COVID deaths are bad enough, but we are seeing an unacceptable and ongoing level of lockdown deaths. This is now to the point where they may account for almost all of the continuing excess deaths, even with the fall COVID wave. It’s probable that public health would be better served with reduced emphasis on COVID-mitigation for the general population and more intense focus on protecting the vulnerable, including the distribution of vaccines.
