Sacred Cow Chips

Search results for: Fitting Data

Fitting Data To Models At NOAA

08 Monday Jun 2015

Posted by Nuetzel in Global Warming

≈ 6 Comments

Tags

AGW, Anthony Watts, Anthropogenic Global Warming, buoy vs ship temperatures, Carl Beisner, Global Mean Temperature, Global Warming Hiatus, Judith Curry, National Oceanic and Atmospheric Administration, NOAA, Ross McKitrick, Temperature adjustments, Watts Up With That?

(Dilbert cartoon: made-up numbers)

If the facts don’t suit your agenda, change them! The 18-year “hiatus” in global warming, which has made a shambles of climate model predictions, is now said to have been based on “incorrect data”, according to researchers at the National Oceanic and Atmospheric Administration (NOAA). Translation: they have created new data “adjustments” that tell a story more consistent with their preferred narrative, namely, that man-made carbon emissions are forcing global temperatures upward, more or less steadily. The New York Times’ report on the research took a fairly uncritical tone, despite immediate cautions and rebuttals from a number of authorities. On balance, the NOAA claims seem rather laughable.

Ross McKitrick has an excellent discussion of the NOAA adjustments on the Watts Up With That? blog (WUWT). His post reinforces the difficulty of aggregating temperature data in a meaningful way. A given thermometer in a fixed location can yield drifting temperatures over time due to changes in the surrounding environment, such as urbanization. In addition, weather stations are dispersed in irregular ways with extremely uneven coverage, and even worse, they have come and gone over time. There are gaps in the data that must be filled. There might be international differences in reporting practices as well. Sea surface temperature measurements are subject to even greater uncertainty. They can be broadly classified into temperatures collected by buoys and those collected by ships, and the latter have been taken in a variety of ways: samples collected in various kinds of buckets, hull sensors, engine room intakes, and deck readings. The satellite readings, which are a recent development, are accurate in tracking changes, but the levels must be calibrated to other data. Here’s McKitrick on the measurements taken on ships:

“… in about half the cases people did not record which method was used to take the sample (Hirahara et al. 2014). In some cases they noted that, for example, ERI readings were obtained but they did not indicate the depth. Or they might not record the height of the ship when the MAT reading is taken.“

The upshot is that calculating a global mean temperature is a statistical exercise fraught with uncertainty. A calculated mean at any point in time is an estimate of a conceptual value. The estimate is one of many possible estimates around the “true” value. Given the measurement difficulties, any meaningful confidence interval for the true mean would likely be so broad as to render inconsequential the much-discussed temperature trends of the past 50 years.

McKitrick emphasizes the three major changes made by NOAA, all having to do with sea surface temperatures:

  1. NOAA has decided to apply an upward adjustment to bring buoy temperature records into line with ship temperatures. This is curious, because most researchers have concluded that the ship temperatures are subject to greater bias. Also, the frequency of buoy records has been rising as a share of total sea temperature readings.
  2. NOAA added extra weight to the buoy readings, a decision which was unexplained.
  3. They applied a relatively large downward adjustment to temperatures collected by ships during 1998-2000.

Even the difference between the temperatures measured by ships and buoys (0.12 degrees Celsius), taken at face value, has a confidence interval (95%?) that is about 29 times as large as the difference. That adjustments such as those above are made with a straight face is nothing short of preposterous.
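To see what those numbers imply, consider a minimal back-of-the-envelope sketch in Python. It assumes, for illustration only, that the quoted interval is symmetric around the point estimate:

    # Illustrative arithmetic only; the 0.12 C difference and the ~29x
    # interval width come from the discussion above.
    diff = 0.12                # ship-minus-buoy difference, deg C
    ci_width = 29 * diff       # approximate width of the 95% CI

    lower = diff - ci_width / 2
    upper = diff + ci_width / 2
    print(f"Point estimate: {diff:+.2f} C")
    print(f"Approx. 95% CI: [{lower:+.2f} C, {upper:+.2f} C]")

The interval runs from roughly -1.6 C to +1.9 C, spanning zero many times over; an adjustment of 0.12 C is statistically indistinguishable from no adjustment at all.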

A number of other researchers have weighed in on the NOAA adjustments. Carl Beisner summarizes some of this work. He quotes McKitrick as well as Judith Curry:

“I think that uncertainties in global surface temperature anomalies is [sic] substantially understated. The surface temperature data sets that I have confidence in are the UK group and also Berkeley Earth. This short paper in Science is not adequate to explain and explore the very large changes that have been made to the NOAA data set. The global surface temperature datasets are clearly a moving target.“

There are a number of other posts this week on WUWT regarding the NOAA adjustments. Some of the experts, like Judith Curry, emphasize the new disparities created by NOAA’s adjustments with other well-regarded temperature series. It will be interesting to see how these differences are debated. Let’s hope that the discussion is driven wholly by science and not politics, but I fear that the latter will have a major impact on the debate. It has already.

Climate Summit Success? Let’s Talk In Five Years

02 Wednesday Dec 2015

Posted by Nuetzel in Global Warming, Human Welfare

≈ 1 Comment

Tags

AGW, Benny Peiser, Carbon Emissions, Carbon Verification, Climate Alarmism, Climate and Terrorism, Climate Hysteria, Climate Summit, COP 21, global warming, IPCC, Joel Kotkin, Matt Ridley, Regressive Climate Policy

(Moudakis cartoon)

Misplaced priorities are on full display in Paris for the next ten days at the climate conference known as COP-21 (“Conference of the Parties”). Joel Kotkin makes note of the hysteria in evidence among climate activists, fostered by political opportunists, economic illiteracy and fraudulent climate research. Of course, climate alarmism offers handsome rewards for politician-cronyists and rent-seeking corporatists. With that seemingly in mind, President Barack Obama is playing the role of opportunist-in-chief, claiming that climate change is the biggest threat to U.S. security while blithely asserting that climate change is responsible for the growing danger from terrorism. Here is Kotkin on such tenuous claims:

“… this reflects the growing tendency among climate change activists to promote their cause with sometimes questionable assertions. Generally level-headed accounts, such as in the Economist and in harder-edge publications like the Daily Telegraph, have demonstrated that many claims of climate change activists have already been disproven or are somewhat exaggerated.“

“Somewhat exaggerated” is an understatement, given the scandals that have erupted in the climate research community, the miserable predictive record of carbon forcing models, and the questionable practices employed by NASA and NOAA researchers in adjusting surface temperature data (see below for links). When it comes to climate activism, the Orwellian aspect of Groupthink is palpable:

“Rather than address possible shortcomings in their models, climate change activists increasingly tend to discredit critics as dishonest and tools of the oil companies. There is even a move to subject skeptics to criminal prosecution for deceiving the public.“

This is thoroughly contrary to the spirit of scientific inquiry, to say nothing of free speech. As if to parody their questionable approach to an issue of science, climate-change devotees have come out in full force to attack the excellent Matt Ridley, a sure sign that they find his message threatening to the power of their mantra. Ridley and Benny Peiser have an op-ed in the Wall Street Journal this week entitled “Your Complete Guide to the Climate Debate” (should be ungated for now). The authors discuss the weakness of the scientific case for anthropogenic global warming (AGW); the fact that they use findings of the Intergovernmental Panel on Climate Change (IPCC) to make this critique must be particularly galling to the alarmists. Ridley and Peiser cover the correspondingly flimsy case for draconian environmental policies to deal with the perceived threat of AGW. Also, they emphasize the regressive nature of the demands made by the environmental left, who are either ignorant of or unfazed by the following truths:

“… there are a billion people with no grid electricity whose lives could be radically improved—and whose ability to cope with the effects of weather and climate change could be greatly enhanced—with the access to the concentrated power of coal, gas or oil that the rich world enjoys. Aid for such projects has already been constrained by Western institutions in the interest of not putting the climate at risk. So climate policy is hurting the poor.“

Finally, Ridley and Peiser explain the economic incentives that are likely to undermine any meaningful international agreement in Paris. Less developed countries have been asked to reduce their carbon emissions, which they can ill afford, and to agree to a verification framework. Those parties might agree if they view the framework as sufficiently easy to game (and it will be), and if they are compensated handsomely by the developed world. The latter will represent an insurmountable political challenge for the U.S. and other developed countries, which are already attempting to promulgate costly new restrictions on carbon emissions.

“Concerned about the loss of industrial competitiveness, the Obama administration is demanding an international transparency-and-review mechanism that can verify whether voluntary pledges are met by all countries. Developing countries, however, oppose any outside body reviewing their energy and industrial activities and carbon-dioxide emissions on the grounds that such efforts would violate their sovereignty.

… China, India and the ‘Like-Minded Developing Countries’ group are countering Western pressure by demanding a legally binding compensation package of $100 billion a year of dedicated climate funds, as promised by President Obama at the U.N. climate conference in Copenhagen in 2009.

However, developing nations are only too aware that the $100 billion per annum funding pledge is never going to materialize, not least because the U.S. Congress would never agree to such an astronomical wealth transfer. This failure to deliver is inevitable, but it will give developing nations the perfect excuse not to comply with their own national pledges.“

These conflicting positions may mean that the strongest point of accord at the Paris conference will be to meet again down the road.

“Expect an agreement that is sufficiently vague and noncommittal for all countries to sign and claim victory. Such an agreement will also have to camouflage deep and unbridgeable divisions while ensuring that all countries are liberated from legally binding targets a la Kyoto.“

This morning, an apparently sleepy and deluded President Obama spoke at the Paris conference before heading back to the U.S. He insisted again that the agreement he expects to come out of Paris will be a “powerful rebuke” to terrorists. Yeah, that’ll show ’em! Even a feeble agreement will be trumpeted as a great victory by the conference parties; Obama and the Left will attempt to wield it as a political cudgel, a brave accomplishment if it succeeds in any way, and a vehicle for blame if it is blocked by the principled opponents of climate alarmism. The media will play along without considering scientific evidence running contrary to the hysterical global warming narrative. Meanwhile, the frailty of the agreement will represent something of a win for humanity.

Here are some links to previous posts on this topic from Sacred Cow Chips:

Climate Negotiators To Discuss Economic Cannibalism

A Cooked Up Climate Consensus

Fitting Data To Models At NOAA

Carbon Farce Meets Negative Forcings

Subsidized Waste: The Renewable Irony

Manipulating Temperatures, People & Policy

Record Hot Baloney

Alluring Apocalypse Keeps Failing To Materialize

The Stench of Green Desperation

Cut CO2, But What About the Environment?

Live Long and Prosper With Fossil Fuels

Divesting of Human Well-Being

Manipulating Temperatures, People & Policy

21 Friday Aug 2015

Posted by Nuetzel in Global Warming, Tyranny

≈ 2 Comments

Tags

Bob Tisdale, Climate fraud, crony capitalism, global warming, Matt Ridley, NASA, NOAA, Robert Brown, Ronald Bailey, Satellite Temperatures, Surface Temperatures, Temperature adjustments, UK Met Office, Werner Brozek

The heavily-manipulated global surface temperatures quoted by NOAA and NASA point to another “hottest month on record” in July, but the satellite temperature measurements do not agree. Nor do several other widely-followed global temperature series, such as the one maintained by the UK Meteorological Office (UK Met Office). I wrote about the manipulation of surface temperatures by NOAA and NASA in January in “Record Hot Baloney“, and in “Fitting Data To Models At NOAA” in June:

“If the facts don’t suit your agenda, change them! The 18-year “hiatus” in global warming, which has made a shambles of climate model predictions, is now said to have been based on “incorrect data”, according to researchers at the National Oceanic and Atmospheric Administration (NOAA). Translation: they have created new data “adjustments” that tell a story more consistent with their preferred narrative, namely, that man-made carbon emissions are forcing global temperatures upward, more or less steadily.“

The last link provides detail on the nature of the manipulations. Perhaps surprisingly, rather large downward adjustments have been made to historical temperature data, reinforcing any upward trend in the late 20th century and hiding the current 18-year pause in that trend. Suffice it to say that the “adjustments” made by these agencies are at fairly detailed levels; some of the before-and-after comparisons shown by gifs at this link are rather astonishing. Some climate researchers have started to refer to the temperature series as “reconstructions” instead of “data”, out of respect for the legitimacy of actual data.

In the meantime, the “warmist” propaganda keeps flowing from NOAA and NASA, and it is hungrily swallowed and then regurgitated by media alarmists. The media love a good scare story. They are so complicit in reinforcing the warmist narrative that they ignored the revelation of a faulty temperature sensor at National Airport in Washington, D.C. (another hat tip to John Crawford), which had been recording temperatures averaging 1.7 degrees Fahrenheit too warm for the past 19 months. Now that the sensor has been replaced, NOAA states that it will not make any adjustments to the past 19 months of recorded temperatures from the National Airport station, despite the fact that it has routinely made many other changes, often without any real explanation.

Here is a recent opinion from Duke University Professor Robert Brown on the divergence of satellite and NASA/NOAA surface temperatures and the adjustments to the latter:

“The two data sets should not be diverging, period, unless everything we understand about atmospheric thermal dynamics is wrong. That is, I will add my “opinion” to Werner’s and point out that it is based on simple atmospheric physics taught in any relevant textbook. …

This does not mean that they cannot and are not systematically differing; it just means that the growing difference is strong evidence of bias in the computation of the surface record.“

Every new report issued by NOAA/NASA on record warm temperatures should be severely discounted. These agencies are toiling in the service of a policy agenda: one that will cost you dearly, that will severely punish the less fortunate here and especially in less developed parts of the world, and that will reward the statist elite, bureaucrats and Green crony capitalists. Ronald Bailey in Reason recently weighed in on the consequences of this “apocalyptic anti-progress ideology“. Or read the wise words of Matt Ridley on “The recurrent problem of green scares that don’t live up to the hype“. Hey greens, relax! And don’t waste our resources and our well-being on precautions against exaggerated risks.

Cassandras Feel An Urgent Need To Crush Your Lifestyle

12 Thursday Jan 2023

Posted by Nuetzel in Climate science, Environmental Fascism

≈ 1 Comment

Tags

Atmospheric Aerosols, Capacity Factors, Carbon Emissions, Carbon-Free Buildings, Chicken Little, Climate Alarmism, Coercion, Electric Vehicles, Elon Musk, Extreme Weather Events, Fossil fuels, Gas Stoves, Judith Curry, Land Use, Model Bias, Nuclear power, Paul Ehrlich, Renewable energy, rent seeking, Sea Levels, Settled Science, Solar Irradiance, Solar Panels, Subsidies, Temperature Manipulation, Toyota Motors, Urban Heat Islands, Volcanic activity, Wind Turbines

Appeals to reason and logic are worthless in dealing with fanatics, so it’s too bad that matters of public policy are so often subject to fanaticism. Nothing is more vulnerable on this scale than climate policy. Why else would anyone continue to listen to prognosticators of such distinguished failure as Paul Ehrlich? Perhaps most infamously, his 1970s forecasts of catastrophe due to population growth were spectacularly off-base. He’s a man without any real understanding of human behavior and how markets deal efficiently and sustainably with scarcity. Here’s a little more detail on his many misfires. And yet people believe him! That’s blind faith.

The foolish acceptance of chicken-little assertions leads to coercive and dangerous policy prescriptions. These are both unnecessary and very costly in direct and hidden ways. But we hear a frantic chorus that we’d better hurry or… we’re all gonna die! Ironically, the fate of the human race hardly matters to the most radical of the alarmists, who are concerned only that the Earth itself be in exactly the same natural state that prevailed circa 1800. People? They don’t belong here! One just can’t take this special group of fools too seriously, except that they seem to have some influence on an even more dangerous group of idiots called policymakers.

Judith Curry, an esteemed but contrarian climate expert, writes of the “faux urgency” of climate action, and how the rush to implement supposed climate mitigations is a threat to our future:

“Rapid deployment of wind and solar power has invariably increased electricity costs and reduced reliability, particularly with increasing penetration into the grid. Allegations of human rights abuses in China’s Xinjiang region, where global solar voltaic supplies are concentrated, are generating political conflicts that threaten the solar power industry. Global supply chains of materials needed to produce solar and wind energy plus battery storage are spawning new regional conflicts, logistical problems, supply shortages and rising costs. The large amount of land use required for wind and solar farms plus transmission lines is causing local land use conflicts in many regions.”

Curry also addresses the fact that international climate authorities have “moved the goalposts” in response to the realization that the so-called “crisis” is not nearly as severe as we were told not too long ago. And she has little patience for delusions that authorities can reliably force adjustments in human behavior so as to reduce weather disasters:

“Looking back into the past, including paleoclimatic data, there has been more extreme weather [than today] everywhere on the planet. Thinking that we can minimize severe weather through using atmospheric carbon dioxide as a control knob is a fairy tale.”

The lengths to which interventionists are willing to go should make consumer/taxpayers break out their pitchforks. It’s absurd to entertain mandates forcing vehicles powered by internal combustion engines (ICEs) off the road, and automakers know it. Recently, the head of Toyota Motors acknowledged his doubts that electric vehicles (EVs) can meet our transportation demands any time soon:

“People involved in the auto industry are largely a silent majority. That silent majority is wondering whether EVs are really OK to have as a single option. But they think it’s the trend so they can’t speak out loudly. Because the right answer is still unclear, we shouldn’t limit ourselves to just one option.”

In the same article, another Toyota executive says that neither the market nor the infrastructure is ready for a massive transition to EVs, a conclusion only a dimwit could doubt. Someone should call the Big 3 American car companies!

No one is a bigger cheerleader for EVs than Elon Musk. In the article about Toyota, he is quoted thusly:

“At this time, we actually need more oil and gas, not less. Realistically I think we need to use oil and gas in the short term, because otherwise civilization will crumble. One of the biggest challenges the world has ever faced is the transition to sustainable energy and to a sustainable economy. That will take some decades to complete.”

Of course, for the foreseeable future, EVs will be powered primarily by electricity generated from burning fossil fuels. So why the fuss? But as one wag said, that’s only until the government decides to shut down those power plants. After that, good luck with your EV!

Gas stoves are a new target of our energy overlords, but this can’t be about fuel efficiency, and it’s certainly not about the quality of food preparation. The claim by an environmental think tank called “Carbon-Free Buildings” is that gas stoves are responsible for dangerous indoor pollutants. Of course, the Left was quick to rally around this made-up problem, despite the fact that they all seem to use gas stoves and didn’t know anything about the issue until yesterday! And, they insist, racial minorities are hardest hit! Well, they might consider using exhaust fans, but the racialist rejoinder is that minorities aren’t adequately informed about the dangers and mitigants. Okay, start a safe-use info campaign, but keep government away from an embedded home technology that is arguably superior to the electric alternative in several respects.

Renewable energy mandates are a major area of assault. If we were to fully rely on today’s green energy technologies, we’d not just threaten our future, but our immediate health and welfare. Few people, including politicians, have any awareness of the low rates at which green technologies are actually utilized under real-world conditions.

“Worldwide average solar natural capacity factor (CF) reaches about ~11-13%. Best locations in California, Australia, South Africa, Sahara may have above 25%, but are rare. (see www.globalsolaratlas.info, setting direct normal solar irradiance)

Worldwide average wind natural capacity factors (CF) reach about ~21-24%. Best off-shore locations in Northern Europe may reach above 40%. Most of Asia and Africa have hardly any usable wind and the average CF would be below 15%, except for small areas on parts of the coasts of South Africa and Vietnam. (see www.globalwindatlas.info, setting mean power density)”

Those CFs are natural capacity factors (i.e., the wind doesn’t always blow, or blow at “optimal” speeds, and the sun doesn’t always shine, or shine at the best angle). They don’t even account for “non-natural” shortfalls in actual utilization and other efficiency losses. It would be impossible for investors to make these technologies profitable without considerable assistance from taxpayers, but investors couldn’t care less whether their profits are driven by markets or by government fiat. You see, they really aren’t capitalists. They are rent seekers playing a negative-sum game at the expense of the broader society.
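To put capacity factors in concrete terms, here is a minimal Python sketch. The solar and wind CFs are the worldwide averages quoted above; the ~85% CF for a conventional dispatchable plant is my own assumption, included only for comparison:

    # Annual energy delivered = nameplate capacity x capacity factor x hours.
    HOURS_PER_YEAR = 8760

    def annual_mwh(nameplate_mw: float, cf: float) -> float:
        """Energy actually delivered over a year at a given capacity factor."""
        return nameplate_mw * cf * HOURS_PER_YEAR

    for name, cf in [("Solar (worldwide avg)", 0.12),
                     ("Wind (worldwide avg)", 0.22),
                     ("Dispatchable plant (assumed)", 0.85)]:
        print(f"{name:30} 100 MW nameplate -> {annual_mwh(100, cf):>9,.0f} MWh/yr")

A 100 MW solar farm at a 12% CF delivers about 105,000 MWh per year; the same nameplate capacity running at an assumed 85% delivers roughly seven times that.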

There are severe environmental costs associated with current wind and solar technologies. Awful aesthetics and the huge inefficiencies of land use are bad enough. Then there are deadly consequences for wildlife. Producing inputs to these technologies requires resource-intensive and environmentally degrading mining activities. Finally, the costs of disposing of spent, toxic components of wind turbines and solar panels are conveniently ignored in most public discussions of renewables.

There is still more hypocritical frosting on the cake. Climate alarmists are largely opposed to nuclear power, a zero-carbon and very safe energy source. They also fight to prevent the development of fossil fuel power plants for impoverished peoples around the world, plants that would greatly aid economic development efforts and foster better and safer living conditions. Apparently, they don’t care. Climate activists can only be counted upon to insist on wasteful and unreliable renewable energy facilities.

Before concluding, it’s good to review just a few facts about the “global climate”:

1) The warming we’ve seen in forecasts and in historical surface temperature data has been distorted by urban heat island effects; weather instruments are too often situated in local environments rich in concrete and pavement.

2) Satellite temperatures are only available for the past 43 years, and they have to be calibrated to surface measurements, so they are not independent measures. But the trend in satellite temperatures over the past seven years has been flat or negative at a time when global carbon emissions are at all-time highs.

3) There have been a series of dramatic adjustments to historical data that have “cooled the past” relative to more recent temperatures.

4) The climate models producing catastrophic long-term temperature forecasts have proven to be biased to the high side, having drastically over-predicted temperature trends over the past two to three decades.

5) Sea levels have been rising for thousands of years, and we’ve seen an additional mini-rebound since the mini-ice age of a few hundred years ago. Furthermore, the rate of increase in sea levels has not accelerated in recent decades, contrary to the claims of climate alarmists.

6) Storms and violent weather have shown no increase in frequency or severity, yet models assure us that they must!

Despite these facts, climate change fanatics will only hear of climate disaster. We should be unwilling to accept the climatological nonsense now passing for “settled science”, itself a notion at odds with the philosophy of science. I’m sad to say that climate researchers are often blinded by the incentives created by publication bias and grant money from power-hungry government bureaucracies and partisan NGOs. They are so blinded, in fact, that research within the climate establishment now almost completely ignores other climatological drivers such as solar irradiance, volcanic activity, and the role and behavior of atmospheric aerosols. Yes, only the global carbon dial seems to matter!

No one is more sympathetic to “the kids” than me, and I’m sad that so much of the “fan base” for climate action is dominated by frightened members of our most youthful generations. It’s hard to blame them, however. Their fanaticism has been inculcated by a distinctly non-scientific community of educators and journalists who are willing to accept outrageous assertions based on “toy models” concocted on weak empirical grounds. That’s not settled science. It’s settled propaganda.

The Great Unmasking: Take Back Your Stolen Face!

28 Friday Jan 2022

Posted by Nuetzel in Masks, Pandemic

≈ Leave a comment

Tags

Aerosols, Anthony Fauci, City Journal, Cloth Masks, Cochrane Library, Dr. Robert Lending, Filtration Efficiency, Influenza, Jeffrey H. Anderson, Joe Biden, KN95, Mask Efficacy, Mask Fit, Mask Leaks, Mask Mandates, N95, Omicron Variant, OSHA, P95, Physics of Fluids, R95, Randomized Control Trial, RCT, Surgical Masks, Teachers Unions, Viral Transmission

Right at the start of the pandemic, Dr. Anthony Fauci insisted that masks were unnecessary, which was in line with the preponderance of earlier evidence. Later, he sowed confusion — and distrust — by claiming he said that to discourage a run on masks, thus preserving supplies for the medical community. That mix-up put a stain on his credibility among those who were paying attention, and the reversal was simply bad policy given what is well established by the evidence on mask efficacy.

No Mas, No Mask!

Despite my own doubts about the efficacy of masks, I went along with masking for a while. It gave me a chuckle to see people wearing them outside, especially runners or solo drivers. We knew by then that contracting Covid outside was highly unlikely. I was also amused by the idiotic protocols in place at many restaurants, where it was just fine to remove them once you walked a few feet to sit at your table, as if aerosols indoors were bound within narrow bands of altitude. Finally, I had reservations about the health consequences of frequent masking, which have certainly been borne out. Restricting air flow is generally not good for human health! Neither is trapping bits of sputum and hot, exhaled moisture rich in microbes right up against one’s muzzle. Still, I thought it polite to wear a mask in places of business, and I did so for a number of months.

In time it became apparent that the cloth and paper masks we were all wearing were a waste of effort. Covid is spread via fine aerosols, generally not droplets. That’s important because the masks in common use cannot block a sufficient share of Covid particles from escaping, nor from penetrating through gaps and through the fibers themselves. Neither can N95s if not fitted properly, as so many are not. And none of these masks can protect your eyeballs! When tens of thousands of tiny beads of aerosol are released with each cough or exhalation, a mask that stops 70% of them will not accomplish much.
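A rough sketch of that arithmetic, using an assumed (purely illustrative) count of aerosol beads per cough:

    # Back-of-the-envelope only: the 30,000-bead cough is an assumed figure.
    beads_per_cough = 30_000

    for blocked in (0.40, 0.70, 0.95):
        escaped = beads_per_cough * (1 - blocked)
        print(f"Mask blocking {blocked:.0%} -> ~{escaped:,.0f} beads escape per cough")

Even a mask that blocks 95% of beads lets some 1,500 escape with every cough, and most masks in common use do far worse than that.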

The evidence began to accumulate that mask mandates were completely ineffective at “stopping the spread” of Covid. I then became an ardent anti-masker. I generally don’t wear them anywhere except medical buildings, and then only because I refuse to defer normal medical care, the consequences of which have been tragic during the pandemic. I have told clerks “I don’t need a mask”, which is true, and they have backed off. I have turned on my heel at stores that refuse to give on the issue, but like masks themselves, the signs on the doors are usually more for show than anything else. So I walk right past them.

Now, the Biden Administration has decided to provide 400 million N95 masks to the public — on the taxpayer’s dime! It’s a waste of time and money. But the timing is incredible, just as the Omicron wave crashes on its own. It will be one more worthless act of theatre. But don’t doubt for a moment that Joe Biden, when no one remembers the timing, will claim that this action helped defeat Omicron.

Mask Varieties

What is the real efficacy of masks in stopping the spread of Covid aerosol emissions? Cloth masks, including bandanas and scarves, are still the most popular masks. Based on casual observation, I suspect most of those masks aren’t washed as frequently as they should be. People hang them from their rear view mirrors for God knows how long. Beyond that, cloth masks tend to fit loosely and protect from aerosols about as well as the disposable medical or surgical masks that are now so common. Which is to say they don’t provide much protection at all.

But can that be? Don’t surgeons think they help? Well yes, because operating rooms can be very splattery places. Besides, it’s rude to sneeze into your patient’s chest cavity. Protection against fine aerosols? Not so much. “Oh, but should I double mask?”, you might ask? Gross! Just Shut*Up!

Face shields are “transparently” useless, offering no barrier against floating aerosols whatsoever except a fleeting moment’s protection against those blown directly into the wearer’s face. Then there are respirator masks: N95 and KN95, which are essentially the same thing. The difference is that KN95s must meet Chinese performance standards rather than U.S. standards. Both must filter and capture 95% of airborne particles as small as 0.3 microns. Covid particles are smaller than that, but the aerosol “beadlets” in which they are swathed may be larger, so the respirators would appear to be a big step up from cloth or surgical masks. R95 and P95 masks are made for protection against oil-based particles. They seem to be better overall due to thicker material and tighter fit with an overhead strap and extra padding.

Measuring Mask Efficacy

A thorough assessment of these mask types is documented in a 2021 paper published in Physics of Fluids. Here are the baseline filtration efficiencies measured by the authors with an ideal mask fit, for exhaled 1-micron aerosols:

  • Cloth_______40%
  • Surgical____47%
  • KN95_______95%
  • R95_________96%

These are simply the filtration efficiencies of the respective barrier materials used in each type of mask, as measured by the researchers’ tests. Obviously, cloth and surgical masks don’t do too well. Unfortunately, even the N95 and KN95 masks never fit perfectly:

“It is important to note that, while masks … decrease the forward momentum of the respiratory jet, a significant fraction of aerosol escapes the masks, particularly at the bridge of the nose.”

Next, the authors assess the “apparent” filtration efficiencies of masks, measured by relative aerosol concentrations in an enclosed space two meters from the source after an extended period. This is a tough test for a mask, but it amounts to what people hope masks can accomplish: trapping aerosols containing bits of crap on material surrounding the nose and mouth, and for many hours. Here are the results:

  • Cloth___________9.8%
  • Surgical_______12.4%
  • KN95__________46.3%
  • R95____________60.2%
  • KN95-Gap______3.4%
  • KN95-Valve____20.3%

Cloth and surgical masks don’t do much to reduce the aerosol concentrations. Both the KN95 and R95 masks capture a meaningful share of the aerosols, but the R95 is a bit more effective. Remember, however, that the uncaptured share is a stand-in for the many thousands of virus particles that would remain suspended within the indoor space, so the filtration efficiency of the R95, while far superior to cloth or surgical masks, would do little to mitigate the spread of the virus. The KN95-Gap case is a test of a more “loosely fitted” mask with 3 mm gaps, which the authors say is realistic. Under those circumstances, the KN95 is about as good as nothing. Finally, the authors tested a well-fitted KN95 equipped with a one-way discharge valve. While its efficiency was better than cloth or surgical masks, it still performed poorly. The authors also found that various degrees of air filtration were far more effective in reducing aerosol concentrations than masks.
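Here is a small Python sketch that restates those apparent efficiencies in terms of aerosols left in the room. The efficiencies are the figures quoted above; the 10,000-particle emission count is an assumption used only to make the percentages concrete:

    # Apparent filtration efficiencies from the tests described above.
    apparent_efficiency = {
        "Cloth":            0.098,
        "Surgical":         0.124,
        "KN95":             0.463,
        "R95":              0.602,
        "KN95 (3 mm gaps)": 0.034,
        "KN95 (valve)":     0.203,
    }

    emitted = 10_000  # assumed aerosol count from the source
    for mask, eff in apparent_efficiency.items():
        remaining = emitted * (1 - eff)
        print(f"{mask:18} ~{remaining:5,.0f} of {emitted:,} aerosols remain suspended")

Even the best performer, the R95, leaves roughly 4,000 of every 10,000 aerosols suspended in the room, which is why none of these masks can do much to stop indoor spread.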

On the subject of mask fit, I quote Dr. Robert Lending, who has regularly chronicled pandemic developments for patients in his practice since the start of the pandemic:

“N95 type masks cannot be worn by men with beards. They must be so tightly fitted that they leave deep creases in your face. Prior to Covid-19, when hospital employees had to wear them for TB exposure prevention, they were told not to wear them for more than 3 hours at a time. They had to be fit-tested and gas leak-tested. … The N95 knockoffs such as the KN95s are not as good. N95 with valves do not protect others from you. There are now many counterfeit N95s for sale. … Obviously, N95s were never meant to be worn for 8-12 hours; and certainly not by youth and school children. If you are wearing an N95 and you can smell anything, such as aroma in a restaurant when you walk in, perfume, cologne, coffee, citrus, foul odors, etc.; then your fit is not correct and that N95 is worthless.”

Other Evidence

Another kind of evidence on mask efficacy is offered by randomized control trials (RCTs) of mitigating transmission of the influenza virus across a variety of settings, including hospital wards, schools, and neighborhoods of varying characteristics. A meta-analysis of 44 such RCTs published in the Cochrane Library in late 2020 found that surgical masks make little or no difference to the spread of the virus. In a small set of RCTs from health care settings, the authors found that N95 and P95 masks perform about as well as surgical masks in limiting transmission.

An excellent review of research on mask efficacy appeared in City Journal last August. The author, Jeffrey H. Anderson, was fairly awestruck at the uniformity of RCT evidence that masks are ineffective. One well-publicized RCT purporting to show the opposite relied on effects that were negligible. Meanwhile, other research has shown that state-level mask mandates are ineffective at reducing the spread of the virus. Finally, here is a nice “cheat sheet” containing links to a number of mask studies.

Children

Children in many parts of the country are forced to wear masks at school. It’s well-established, however, despite wailing from teachers’ unions, that Covid poses extremely low risks to children. And there is no shortage of evidence that constant masking has extremely negative effects on children. The stupidity has reached grotesque proportions. Now, some school districts are proposing that children wear N95 masks! This is unnecessary and cruel, and it is ineffective precisely because children will be even less likely to use them properly than adults, who are generally not very good at it. From the last link:

“If N95s filter so well, why are respirators an ineffective intervention? Because masking is a behavioral intervention as much as a physical one. For respirators to work, they must be well fitting, must be tested by OSHA, and must be used for only short time windows as their effectiveness diminishes as they get wet from breathing.

“Fit requirements and comfort issues are untenable in children who have small faces and are required to wear masks for six or more hours each day. For these reasons, NIOSH specifically states that children should not use respirators, and there are no respirators that are approved for children. These views are shared by the California Department of public health. Concerns about impaired breathing and improper use outweigh potential benefits. There are no studies on the effectiveness of respirators on children because they are not approved for pediatric use.”

Rip It Off

At this point in the Omicron wave, which appears to have crested, we’re basically dealing with a virus that is less lethal than the flu and, for most people, comparable to the common cold. It’s a good time for the timid to shed their masks, which don’t help contain the spread of the virus to begin with. And masks do more harm than has generally been acknowledged, especially to children. So stop the bullshit. Take off your mask, and leave it off!

Climate Alarmism and Junk Science

02 Thursday Dec 2021

Posted by Nuetzel in Climate, Research Bias, Uncategorized

≈ 7 Comments

Tags

Carbon Forcing Models, Climate Alarmism, Green Subsidies, Intergovernmental Panel on Climate Change, IPCC, Kevin Trenberth, Model Bias, Model Ensembles, National Center for Atmospheric Research, Norman Rogers, Redistribution, rent seeking

The weak methodology and poor accuracy of climate models are the subject of an entertaining Norman Rogers post. I want to share just a few passages along with a couple of qualifiers.

Rogers quotes Kevin Trenberth, former Head of Climate Analysis at the National Center for Atmospheric Research, with apparent approval. Oddly, Rogers does not explain that Trenberth is a strong proponent of the carbon-forcing models used by the UN’s Intergovernmental Panel on Climate Change (IPCC). He should have made that clear, but Trenberth actually did say the following:

“‘[None of the] models correspond even remotely to the current observed climate [of the Earth].’“

I’ll explain the context of this comment below, but it constitutes a telling admission of the poor foundations on which climate alarmism rests. The various models used by the IPCC are all a little different, and they are calibrated differently. I’ve noted elsewhere that their projections are consistently biased toward severe over-predictions of temperature trends. Rogers goes on from there:

“The models can’t properly model the Earth’s climate, but we are supposed to believe that, if carbon dioxide has a certain effect on the imaginary Earths of the many models it will have the same effect on the real earth.”

But how on earth can a modeler accept the poor track record of these models? It’s not as if the bias is difficult to detect! On this question, Rogers says:

“The climate models are an exemplary representation of confirmation bias, the psychological tendency to suspend one’s critical facilities in favor of welcoming what one expects or desires. Climate scientists can manipulate numerous adjustable parameters in the models that can be changed to tune a model to give a ‘good’ result.“

And why are calamitous projections desirable from the perspective of climate modelers? Follow the money and the status rewards of reinforcing the groupthink:

“Once money and status started flowing into climate science because of the disaster its denizens were predicting, there was no going back. Imagine that a climate scientist discovers gigantic flaws in the models and the associated science. Do not imagine that his discovery would be treated respectfully and evaluated on its merits. That would open the door to reversing everything that has been so wonderful for climate scientists. Who would continue to throw billions of dollars a year at climate scientists if there were no disasters to be prevented? “

Indeed, it has been a gravy train. Today, it is reinforced by green-preening politicians, the many billions of dollars committed by investors seeking a continuing flow of public subsidies for renewables, tempting opportunities for international redistribution (and graft), and a mainstream media addicted to peddling scare stories. The parties involved all rely on, and profit by, alarmist research findings.

Rogers’ use of the Trenberth quote above might suggest that Trenberth is a critic of the climate models used by the IPCC. However, the statement was in line with Trenberth’s long-standing insistence that the IPCC models are exclusively for constructing “what-if” scenarios, not actual forecasting. Perhaps his meaning also reflected his admission that climate models are “low resolution” relative to weather forecasting models. Or maybe he was referencing longer-term outcomes that are scenario-dependent. Nevertheless, the quote is revealing to the extent that one would hope these models are well-calibrated to initial conditions. That is seldom the case, however.

As a modeler, I must comment on a point made by Rogers about the use of ensembles of models, which essentially means averaging the predictions of multiple models that differ in structure. Rogers denigrates the approach, and while it is agnostic with respect to theories of the underlying data-generating process, it certainly has its uses in forecasting. Averaging the statistically independent, unbiased predictions of two different models will generally produce more accurate forecasts than either model alone (see the simulation sketch after the quote below). Rogers may or may not be aware of this, but he has my sympathies in this case because the IPCC is averaging across a large number of models that are clearly biased in the same direction! Rogers adds this interesting tidbit on the IPCC’s use of model ensembles:

“There is a political reason for using ensembles. In order to receive the benefits flowing from predicting a climate catastrophe, climate science must present a unified front. Dissenters have to be canceled and suppressed. If the IPCC were to select the best model, dozens of other modeling groups would be left out. They would, no doubt, form a dissenting group questioning the authority of those that gave the crown to one particular model.”
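Setting the politics aside, the statistical point about ensembles is easy to demonstrate. Here is a minimal simulation sketch (all numbers are assumed for illustration): averaging independent, unbiased models shrinks random error by roughly the square root of the ensemble size, but no amount of averaging removes a bias that the models share.

    import numpy as np

    rng = np.random.default_rng(42)
    truth = 1.0                    # the "true" value the models try to predict
    n_models, n_trials = 10, 100_000

    # Case 1: independent, unbiased models; errors average out.
    unbiased = truth + rng.normal(0.0, 0.5, size=(n_trials, n_models))
    rmse_single = np.sqrt(np.mean((unbiased[:, 0] - truth) ** 2))
    rmse_ensemble = np.sqrt(np.mean((unbiased.mean(axis=1) - truth) ** 2))

    # Case 2: models sharing a common warm bias; averaging cannot remove it.
    biased = truth + 0.4 + rng.normal(0.0, 0.5, size=(n_trials, n_models))
    rmse_biased = np.sqrt(np.mean((biased.mean(axis=1) - truth) ** 2))

    print(f"Single unbiased model RMSE:    {rmse_single:.3f}")    # ~0.50
    print(f"Unbiased ensemble RMSE:        {rmse_ensemble:.3f}")  # ~0.16
    print(f"Commonly biased ensemble RMSE: {rmse_biased:.3f}")    # ~0.43

The unbiased ensemble cuts the error by a factor of about three, but the commonly biased ensemble can never do better than its shared bias, which is precisely the situation Rogers describes.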

Rogers discusses one more aspect of the underpinnings of climate models, one that I’ve covered several times on this blog. That is the extent to which historical climate data is either completely lacking, plagued by discontinuities and gaps in coverage, or distorted by imperfections in measurement. The data used to calibrate climate models has been manipulated, adjusted, infilled, and estimated over lengthy periods by various parties to produce “official” and unofficial temperature series. While these efforts might seem valiant as exercises in understanding the past, they are fraught with uncertainty. Rogers provides a link to the realclimatescience blog, which details many of the data shortcomings, as well as shenanigans perpetrated by researchers and agencies who have massaged, imputed, or outright created these historical data sets out of whole cloth. Rogers aptly notes:

“The purported climate catastrophe ahead is 100% junk science. If the unlikely climate catastrophe actually happens, it will be coincidental that it was predicted by climate scientists. Most of the supporting evidence is fabricated.”

An Internet for Users, Not Gatekeepers and Monopolists

09 Wednesday Jun 2021

Posted by Nuetzel in Censorship, Social Media, Uncategorized

≈ Leave a comment

Tags

Alphabet, Amazon, Anti-Trust, Biden v. Knight First Amendment Institute, Big Tech, Censor Track, Censorship, Clarence Thomas, Clubhouse, Common Carrier, Communications Decency Act, Daniel Oliver, Department of Justice, Exclusivity, Facebook, Fairness Doctrine, Gab, Google, Google Maps, Internet Accountability Project, Josh Hawley, Katherine Mangu-Ward, Media Research Center, MeWe, monopoly, Muhammadu Buhari, Murray Rothbard, MySpace, Net Neutrality, Public Accommodation, Public Forum, Quillette, Right to Exclude, Ron DeSantis, Scholar, Section 230, Social Media, Statista, Street View, Telegram, TikTok, Twitter, Tying Arrangement

Factions comprising a majority of the public want to see SOMETHING done to curb the power of Big Tech, particularly Google/Alphabet, Facebook, Amazon, and Twitter. The apprehensions center around market power, censorship, and political influence, and many of us share all of those concerns. The solutions proposed thus far generally fall into the categories of antitrust action and legislative changes intended to protect free speech, but it is unlikely that anything meaningful will happen under the current administration. That would probably require an opposition super-majority in Congress. Meanwhile, some caution that the problem is blown out of proportion and that we should not be too eager for government to intervene.

Competition

There are problems with almost every possible avenue for reining in the tech oligopolies. From a libertarian perspective, the most ideal solution to all dimensions of this problem is organic market competition. Unfortunately, the task of getting competitive platforms off the ground seems almost insurmountable. In social media, the benefits to users of a large, incumbent network are nearly overwhelming. That’s well known to anyone who’s left Facebook and found how difficult it is to gain traction on other social media platforms. Hardly anyone you know is there!

Google is the dominant search engine by far, and the reasons are not quite as wholesome as the “don’t-be-evil” mantra goes. There are plenty of other search engines, but some are merely shells using Google’s engine in the background. Others have privacy advantages and perhaps more balanced search results than Google, but with relatively few users. Google’s array of complementary offerings, such as Google Maps, Street View, and Scholar, make it hard for users to get away from it entirely.

Amazon has been very successful in gaining retail market share over the years. It now accounts for an estimated 50% of retail e-commerce sales in the U.S., according to Statista. That’s hardly a monopoly, but Amazon’s scale and ubiquity in the online retail market creates massive advantages for buyers in terms of cost, convenience, and the scope of offerings. It creates advantages for online sellers as well, as long as Amazon itself doesn’t undercut them, which it is known to do. As a buyer, you almost have to be mad at them to bother with other online retail platforms or shopping direct. I’m mad, of course, but I STILL find myself buying through Amazon more often than I’d like. But yes, Amazon has competition.

Anti-Trust

Quillette favors antitrust action against Big Tech. Amazon and Alphabet are most often mentioned in the context of anti-competitive behavior, though the others are hardly free of complaints along those lines. Amazon routinely discriminates in favor of products in which it has a direct or indirect interest, and Google discriminates in favor of its own marketplace and has had several costly run-ins with EU antitrust enforcers. Small businesses are often cited as victims of Google’s cut-throat business tactics.

The Department of Justice filed suit against Google in October 2020 for anti-competitive and exclusionary practices in the search and search advertising businesses. The main thrust of the charges is:

  • Exclusivity agreements prohibiting preinstallation of other search engines;
  • Tying arrangements forcing preinstallation of Google and no way to delete it;
  • Suppressing competition in advertising;

There are two other antitrust cases filed by state attorneys general against Google alleging monopolistic practices benefitting its own services at the expense of sellers in various lines of business. All of these cases, state and federal, are likely to drag on for years and the outcomes could take any number of forms: fines, structural separation of different parts of the business, and divestiture are all possibilities. Or perhaps nothing. But I suppose one can hope that the threat of anti-trust challenges, and of prolonged battles defending against such charges, will have a way of tempering anti-competitive tendencies, that is, apart from actual efficiency and good service.

These cases illustrate the fundamental tension between our desire for successful businesses to be rewarded and antitrust. As free market economists such as Murray Rothbard have said, there is something “arbitrary and capricious” about almost any anti-trust action. Legal thought on the matter has evolved to recognize that monopoly itself cannot be viewed as a crime, but the effort to monopolize might be. But as Rothbard asserted, claims along those lines tend to be rather arbitrary, and he was quite right to insist that the only true monopoly is one granted by government. In this case, many conservatives believe Section 230 of the Communications Decency Act of 1996 was the enabling legislation. But that is something anti-trust judgements cannot rectify.

Revoking Immunity

Section 230 gives internet service providers immunity against prosecution for any content posted by users on their platforms. While this provision is troublesome (see below), it is not at all clear why it might have encouraged monopolization, especially for web search services. At the time of the Act’s passage, Larry Page and Sergey Brin had barely begun work on BackRub, the forerunner to Google. Several other search engines already existed, and others have sprung up since then with varying degrees of success. Presumably, all of them have benefitted from Section 230 immunity, as have all social media platforms: not just Facebook, but Twitter, MeWe, Gab, Telegram, and others long forgotten, like MySpace.

Nevertheless, while private companies have free speech rights of their own, Section 230 confers undeserved protection against liability for the tech giants. That protection was predicated on the absence of editorial positioning and/or viewpoint curation of content posted by users. Instead, Section 230 often seems designed to put private companies in charge of censoring the kind of speech that government might like to censor. Outright repeal has been used as a threat against these companies, but what would it accomplish? The tech giants insist it would mean even more censorship, which is likely to be the result. 

Other Legislative Options

Other legislative solutions might hold the key to establishing true freedom of speech on the internet, a project that might have seemed pointless a decade ago. Justice Clarence Thomas’s concurring opinion in Biden v. Knight First Amendment Institute suggested the social media giants might be treated as common carriers or made accountable under laws on public accommodation. This seems reasonable in light of the strong network effects under which social media platforms operate as “public squares.” Common carrier law or a law designating a platform as a public accommodation would prohibit the platform from discriminating on the basis of speech.

I do not view such restrictions in the same light as so-called net neutrality, as some do. The latter requires carriers of data to treat all traffic equally in terms of priority and pricing of network resources, despite the out-sized demands created by some services. It is more of a resource allocation issue and not at all like managing traffic based on its political content.

The legislation contemplated by free speech activists with respect to big tech has to do with prohibiting viewpoint discrimination. That could be accomplished by laws asserting protections similar to those granted under the so-called Fairness Doctrine. As Daniel Oliver explains:

“A law prohibiting viewpoint discrimination (Missouri Senator Josh Hawley has introduced one such bill) would be just as constitutional as the Fairness Doctrine, an FCC policy which adjusted the overall balance of broadcast programming, or the Equal Time Rule, which first emerged in the Radio Act of 1927 and was established by the Communications Act of 1934. Under such a law, a plaintiff could sue for viewpoint discrimination. That plaintiff would be someone whose message had been suppressed by a tech company or whose account had been blocked or cancelled….”

Ron DeSantis just signed a new law giving the state of Florida or individuals the right to sue social media platforms for limiting, altering or deleting content posted by users, as well as daily fines for blocking candidates for political office. It will be interesting to see whether any other states pass similar legislation. However, the fines amount to a pittance for the tech giants, and the law will be challenged by those who say it compels speech by social media companies. That argument presupposes an implicit endorsement of all user content, which is absurd and flies in the face of the very immunity granted by Section 230. 

Justice Thomas went to pains to point out that when the government restricts a platform’s “right to exclude,” the accounts of public officials can more clearly be delineated as public forums. But in an act we wouldn’t wish to emulate, the government of Nigeria just shut down Twitter for blocking President Buhari’s tweet threatening force against rebels in one part of the country. Still, any law directly restricting a platform’s editorial discretion must be enforceable, whether that involves massive financial penalties for violations or some other form of discipline.

Private Action

There are private individuals who care enough about protecting speech online to do something about it. For example, these tech executives are fighting against internet censorship. You can also complain directly to the platforms when they censor content, and there are ways to react to censored posts by following prompts — tell them the information provided on their decision was NOT helpful and why. You can follow and support groups like the Media Research Center and its Censor Track service, or the Internet Accountability Project. Complain to your state and federal legislators about censorship and tell them what kind of changes you want to see. Finally, if you are serious about weakening the grip of Big Tech, ditch them. Close your accounts on Facebook and Twitter. Stop using Google. Cancel your Prime membership. Join networks that are speech friendly and stick it out.

Individual action and a sense of perspective are what Katherine Mangu-Ward urges in this excellent piece:

“Ousted from Facebook and Twitter, Trump has set up his own site. This is a perfectly reasonable response to being banned—a solution that is available to virtually every American with access to the internet. In fact, for all the bellyaching over the difficulty of challenging Big Tech incumbents, the video-sharing app TikTok has gone from zero users to over a billion in the last five years. The live audio app Clubhouse is growing rapidly, with 10 million weekly active users, despite being invite-only and less than a year old. Meanwhile, Facebook’s daily active users declined in the last two quarters. And it’s worth keeping in mind that only 10 percent of adults are daily users of Twitter, hardly a chokehold on American public discourse.

Every single one of these sites is entirely or primarily free to use. Yes, they make money, sometimes lots of it. But the people who are absolutely furious about the service they are receiving are, by any definition, getting much more than they paid for. The results of a laissez-faire regime on the internet have been remarkable, a flowering of innovation and bountiful consumer surplus.”

Conclusion

The fight over censorship by Big Tech will continue, but legislation will almost certainly be confined to the state level in the short term. It might be some time before federal law ever recognizes social media platforms as the public forums most users think they should be. Federal legislation might someday call for the wholesale elimination of Section 230 or an adjustment to its language. A more direct defense of First Amendment rights would be a strict prohibition of online censorship, but that won't happen. Instead, the debate will become mired in controversy over appropriate versus inappropriate moderation, as Mangu-Ward suggests. Antitrust action should always be viewed with suspicion, though some argue that it is necessary to establish a more competitive environment, one in which free speech and fair search-engine treatment can flourish.

Organic competition is the best outcome of all, but users must be willing to vote with their digital feet, as it were, rejecting the large tech incumbents and trying new platforms. And when you do, try to bring your friends along with you!

Note: This post also appears at The American Reveille.

The Futility and Falsehoods of Climate Heroics

01 Tuesday Jun 2021

Posted by Nuetzel in Climate science, Environmental Fascism, Global Warming, Uncategorized

≈ Leave a comment

Tags

Atmospheric Carbon, Biden Administration, Carbon forcing, Carbon Mitigation, Climate Change, Climate Sensitivity, ExxonMobil, Fossil fuels, global warming, Green Energy, Greenhouse Gas, IPCC, John Kerry, Judith Curry, Natural Gas, Netherlands Climate Act, Nic Lewis, Nuclear power, Putty-Clay Technology, Renewables, Ross McKitrick, Royal Dutch Shell, Social Cost of Carbon, William Nordhaus

The world's gone far astray in attempts to battle climate change through forced reductions in carbon emissions. Last Wednesday, in an outrageously stupid ruling, a Dutch court ordered Royal Dutch Shell to reduce its emissions by 45% by 2030 relative to 2019 levels. The ruling has nothing to do with Shell's historical record on the environment. Rather, the Court said Shell's existing climate action plans did not meet “the company's own responsibility for achieving a CO2 reduction.” The decision will be appealed, but it appears that “industry agreements” under the Netherlands' Climate Act of 2019 are in dispute.

Later that same day, a dissident shareholder group supporting corporate action on climate change won at least two ExxonMobil board seats. And then we have the story of John Kerry's effort to stop major banks from lending to the fossil fuel industry. Together with the Biden Administration's other actions on energy policy, we are witnessing the greatest attack on conventional power sources in history, and we'll all pay dearly for it.

The Central Planner’s Conceit

Technological advance is a great thing, and we've seen it in the development of safe nuclear power generation, but the environmental left has successfully placed roadblocks in the way of its deployment. Instead, they favor the mandated adoption of what amount to beta versions of technologies that might never be economic and that create extreme environmental hazards of their own (see here, here, here, and here). For private adopters, green energy installations are often subsidized by the government, disguising their underlying inefficiencies. These premature beta versions are then embedded in our base of productive capital and often remain even as they are made obsolete by subsequent advances. The “putty-clay” nature of technology decisions should caution us against premature adoptions of this kind. This is just one of the many curses of central planning.

Not only have our leftist planners forced the deployment of inferior technologies; they are actively seeking to bring more viable alternatives to ruination. I mentioned nuclear power, and even natural gas offers a path for reducing carbon emissions, yet climate alarmists wage war against it as fiercely as against other fossil fuels. We have Kerry's plot to deny funding to the fossil fuel industry, and even activist “woke” investors attempting to override management expertise and divert internal resources to green energy. It's not as if renewable energy sources are not already part of these energy firms' development portfolios. Allocations of capital and staff to these projects usually depend upon a company's professional and technical expertise, market forces, and (less propitiously) incentives decreed by the government. Yet the activist investors are there to impose their will.

Placing Faith and Fate In Models

All these attempts to remake our energy complex and the economy are based on the presumed external costs associated with carbon emissions. Those costs, and the potential savings achievable through the mitigation efforts of government and private greenies around the globe, have been wildly exaggerated.

The first thing to understand about the climate “science” relied upon by the environmental left is that it is almost exclusively model-dependent. In other words, it is based on mathematical relationships specified by the researchers. Their projections depend on those specifications, the selection of parameter values, and the scenarios to which they are subjected. The models are usually calibrated to be roughly consistent with outcomes over some historical time period, but as modelers in almost any field can attest, that is not hard to do, and it's still possible to produce extreme results out-of-sample. The point is that these models are generally not estimated statistically from a lengthy sample of historical data. Even when sound statistical methodologies are employed, the samples are vanishingly short on climatological timescales. That means the estimates are highly sample-specific and likely to propagate large errors out-of-sample. But most of these are what might be called “toy models” specified by the researcher. And what are often billed as “findings” are merely projections based on scenarios that are themselves manufactured by imaginative climate “researchers” cum grant-seeking partisans. In fact, it's much worse than that, because even historical climate data is subject to manipulation, but that's a topic for another day.

Key Assumptions

What follows are basic components of the climate apocalypse narrative as supported by “the science” of man-made or anthropogenic global warming (AGW):

(A) The first kind of model output to consider is the increase in atmospheric carbon concentration over time, measured in parts per million (PPM). This is a function of many natural processes, including volcanism and other kinds of outgassing from oceans and decomposing biomass, as well as absorption by carbon sinks like vegetation and various geological materials. But the primary focus is human carbon-generating activity, which depends on the carbon-intensity of production technology. As Ross McKitrick shows (see chart below), projections from these kinds of models have demonstrated significant upside bias over the years. Whether that is because of slower than expected economic growth, unexpected technological efficiencies, an increase in the service-orientation of economic activity worldwide, or feedback from carbon-induced greening or other processes, most of the models have over-predicted atmospheric carbon PPM. Those errors tend to increase with the passage of time, of course.
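To make the accounting concrete, here is a deliberately crude sketch of the bookkeeping such models perform. The conversion of roughly 2.13 gigatonnes of carbon per PPM is standard, but the fixed airborne fraction and the constant emissions path are simplifying assumptions of mine; real models treat sinks and emissions dynamically.

```python
GTC_PER_PPM = 2.13         # gigatonnes of carbon per 1 PPM of atmospheric CO2
AIRBORNE_FRACTION = 0.45   # assumed share of emissions not absorbed by sinks

def project_ppm(start_ppm: float, annual_emissions_gtc: float, years: int) -> float:
    """Project atmospheric CO2 concentration under constant annual emissions."""
    ppm = start_ppm
    for _ in range(years):
        ppm += AIRBORNE_FRACTION * annual_emissions_gtc / GTC_PER_PPM
    return ppm

# Ten years at ~10 GtC per year, starting from ~420 PPM, adds roughly 21 PPM.
print(round(project_ppm(420.0, 10.0, 10), 1))  # ~441.1
```

Every parameter in a projection like this is a modeler's choice, which is precisely why such projections can drift so far from observations.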

(B) Most of the models promoted by climate alarmists are carbon forcing models, meaning they treat carbon emissions as the primary driver of global temperatures and of other phenomena like storm strength and increases in sea level. Given the increases in carbon concentration predicted by the models in (A) above, the next stage of models predicts that temperatures must rise. But the models tend to run “hot.” This chart shows the mean of several prominent global temperature series contrasted with 1990 projections from the Intergovernmental Panel on Climate Change (IPCC).

The following is even more revealing, as it shows the dispersion of various model runs relative to three different global temperature series:

And here’s another, which is a more “stylized” view, showing ranges of predictions. The gaps show errors of fairly large magnitude relative to the mean trend of actual temperatures of 0.11 degrees Celsius per decade.

(C) Climate sensitivity to “radiative forcing” is a key assumption underlying all of the forecasts of AGW. A simple explanation is that increases in the atmosphere's carbon concentration strengthen the greenhouse effect, causing more solar energy to be “trapped” within our “greenhouse” and less to be radiated back into space. Climate sensitivity is usually measured in degrees Celsius relative to a doubling of atmospheric carbon.

And how large is the climate's sensitivity to a doubling of carbon PPM? The IPCC says it's in a range of 1.5C to 4.5C. However, findings published by Nic Lewis and Judith Curry are close to the low end of that range, as are those found by the author of the paper described here.
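The arithmetic shows why this single assumption dominates the forecasts. Under the standard logarithmic approximation, warming scales with the sensitivity multiplied by the number of doublings of CO2; the 280 and 420 PPM endpoints below are round figures I've assumed for illustration.

```python
import math

def equilibrium_warming(sensitivity_c: float, ppm_start: float, ppm_end: float) -> float:
    """Warming implied by a sensitivity quoted in degrees C per doubling of CO2,
    using the standard logarithmic approximation to carbon forcing."""
    return sensitivity_c * math.log2(ppm_end / ppm_start)

# From a pre-industrial ~280 PPM to ~420 PPM, at both ends of the IPCC range:
for s in (1.5, 4.5):
    print(f"sensitivity {s}C -> {equilibrium_warming(s, 280.0, 420.0):.2f}C of warming")
```

A threefold difference in assumed sensitivity translates directly into a threefold difference in projected warming, which is why the low-end estimates matter so much.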

In separate efforts, Finnish and Japanese researchers have asserted that the primary cause of recent warming is an increase in low cloud cover, which the Japanese team attributes to increases in the Earth’s bombardment by cosmic rays due to a weakening magnetic field. The Finnish authors note that most of the models used by the climate establishment ignore cloud formation, an omission they believe leads to a massive overstatement (10x) of sensitivity to carbon forcings. Furthermore, they assert that carbon forcings are mainly attributable to ocean discharge as opposed to human activity.

(D) Estimates of the Social Cost of Carbon (SCC) per ton of emissions are used as a rationale for carbon abatement efforts. The SCC was pioneered by economist William Nordhaus in the 1990s, and today there are a number of prominent models that produce distributions of possible SCC values, which tend to have high dispersion and extremely long upper tails. Of course, the highest estimates are driven by the same assumptions about extreme climate sensitivities discussed above. The Biden Administration is using an SCC of $51 per ton. Some recommend the adoption of even higher values for regulatory purposes in order to achieve net-zero emissions at an early date, revealing the manipulative purposes to which the SCC concept is put. This is a raw attempt to usurp economic power, not any sort of exercise in optimization, as this admission from a “climate expert” shows. In the midst of a barrage of false climate propaganda (hurricanes! wildfires!), he tells 60 Minutes that an acceptable limit on warming of 1.5C is just a number they “chose” as a “tipping point.”
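To see how those long upper tails arise, consider a minimal Monte Carlo sketch. Both the right-skewed sensitivity distribution and the damage function below are assumptions of mine, chosen purely to illustrate the mechanism; they are not taken from any official model.

```python
import random
import statistics

random.seed(1)

def toy_scc(sensitivity_c: float) -> float:
    """Hypothetical damages per ton: rising with the square of warming,
    scaled so that a 3C sensitivity yields roughly $50/ton."""
    return 50.0 * (sensitivity_c / 3.0) ** 2

# Right-skewed sensitivity draws (median ~2.7C) produce a long-tailed SCC.
draws = [random.lognormvariate(1.0, 0.4) for _ in range(100_000)]
sccs = sorted(toy_scc(s) for s in draws)
print("median SCC:     ", round(statistics.median(sccs), 1))
print("95th percentile:", round(sccs[int(0.95 * len(sccs))], 1))
```

Skew in the sensitivity assumption, compounded by a convex damage function, is enough to stretch the distribution's upper tail, and the highest published SCC values live in exactly that tail.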

As a measurement exercise, more realistic climate sensitivities yield much lower SCCs. McKitrick presents a chart from Lewis-Curry comparing their estimates of the SCC at lower climate sensitivities to an average of earlier estimates used by IPCC:

High levels of the SCC are used as a rationale for high-cost carbon abatement efforts. If the SCC is overstated, however, then costly abatements represent waste. And there is no guarantee that spending an amount on abatements equal to the SCC will eliminate the presumed cost of a ton's worth of anthropogenic warming. Again, there are strong reasons to believe that the warming experienced over the past several decades has had multiple causes, and human carbon emissions might have played a relatively minor role.

Crisis Is King

Some people just aren’t happy unless they have a crisis over which to harangue the rest of us. But try as they might, the vast resources dedicated to carbon reduction are largely wasted. I hesitate to say their effort is quixotic because they want more windmills and are completely lacking in gallantry. As McKitrick notes, it takes many years for abatement to have a meaningful impact on carbon concentrations, and since emissions mix globally, unilateral efforts are practically worthless. Worse yet, the resource costs of abatement and lost economic growth are unacceptable, especially when some of the most promising alternative sources of “clean” energy are dismissed by activists. So we forego economic growth, rush to adopt immature energy alternatives, and make very little progress toward the stated goals of the climate alarmists.

Myth Makers in Lab Coats

02 Friday Apr 2021

Posted by Nuetzel in Climate science, Research Bias, Science

≈ Leave a comment

Tags

Cambridge, Canonization Effect, Citation Bias, Climate Change, Climatology, Lee Jussim, Medical Science, Model Calibration, National Oceanic and Atmospheric Administration, Pandemic, Political Bias, Psychology Today, Publication Bias, Replication Crisis, Reporting Bias, Spin

The prestige of some elements of the science community has taken a beating during the pandemic due to hugely erroneous predictions, contradictory pronouncements, and misplaced confidence in interventions that have proven futile. We know that medical science has suffered from a replication crisis, and other areas of inquiry like climate science have been compromised by politicization. So it seemed timely when a friend sent me this brief exposition of how “scientific myths” are sometimes created, authored by Lee Jussim in Psychology Today. It’s a real indictment of the publication process in scientific journals, and one can well imagine the impact these biases have on journalists, who themselves are prone to exaggeration in their efforts to produce “hot” stories.

The graphic above appears in Jussim’s article, taken from a Cambridge study of reporting and citation biases in research on treatments for depression. But as Jussim asserts, the biases at play here are not “remotely restricted to antidepressant research”.

The first column of dots represents trial results submitted to journals for publication. A green dot signifies a positive result: the treatment or intervention was associated with significantly improved patient outcomes. The red dots are trials in which the results were either inconclusive or the treatment was associated with detrimental outcomes. The trials were split about equally between positive and non-positive findings, but far fewer of the trials with non-positive findings were published. From the study:

“While all but one of the positive trials (98%) were published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative.“

The third column shows that even within the set of published trials, certain negative results were NOT reported or secondary outcomes were elevated to primary emphasis:

“Ten negative trials, however, became ‘positive’ in the published literature, by omitting unfavorable outcomes or switching the status of the primary and secondary outcomes.“

The authors went further by classifying whether the published narrative put a “positive spin” on inconclusive or negative results (yellow dots):

“… only four (5%) of 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial.“

Finally, the last column represents citations of the published trials in subsequent research, where the size of the dots corresponds to different levels of citation:

“Compounding the problem, positive trials were cited three times as frequently as negative trials (92 v. 32 citations). … Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.“
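Stringing the quoted figures together shows how the stages compound. The trial counts below are inferred from the study's percentages, so they may be off by a trial or two.

```python
# Counts inferred from the percentages quoted above.
positive, negative = 53, 52      # submitted trials: roughly an even split
pub_pos, pub_neg = 52, 25        # published: ~98% of positives, ~48% of negatives
switched = 10                    # negative trials rewritten as "positive"
plainly_negative = 4             # published trials unambiguously reported as negative

published = pub_pos + pub_neg    # 77
print(f"positive as conducted:           {positive / (positive + negative):.0%}")
print(f"positive as published:           {pub_pos / published:.0%}")
print(f"after outcome switching:         {(pub_pos + switched) / published:.0%}")
print(f"not plainly negative after spin: {(published - plainly_negative) / published:.0%}")
```

A literature that began as roughly a coin flip reads, by the final stage, as though the treatments nearly always work.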

As Jussim concludes, it’s safe to say these biases are not confined to antidepressant research. He also writes of the “canonization effect”, which occurs when certain conclusions become widely accepted by scientists:

“It is not that [the] underlying research is ‘invalid.’ It is that [the] full scope of findings is mixed, but that the mixed nature of those findings does not make it into what gets canonized.“

I would say canonization applies more broadly across areas of research. For example, in climate research, empirics often take a back seat to theoretical models “calibrated” over short historical records. The theoretical models often incorporate “canonized” climate change doctrine which, on climatological timescales, can only be classified as speculative. Of course, the media and the public have difficulty distinguishing this practice from real empirics.

All this is compounded by the institutional biases introduced by the grant-making process, the politicization of certain areas of science (another source of publication bias), and mission creep within government bureaucracies. In fact, some of these agencies control the very data upon which much research is based (the National Oceanic and Atmospheric Administration, for example), and there is credible evidence that this information has been systematically distorted over time.

The authors of the Cambridge study discuss efforts to mitigate the biases in published research. Unfortunately, reforms have met with mixed success at best. The antidepressant research reflects tendencies that are all too human and perhaps financially motivated. Add to that the political motivation underlying the conduct of broad areas of research, and the dimensions of the problem seem almost insurmountable without a fundamental revolution of ethics within the scientific community. For now, the biases have made “follow the science” into something of a joke.

Social Media and the Antitrust Reflex

10 Tuesday Apr 2018

Posted by Nuetzel in Antitrust, Regulation, Social Media

≈ Leave a comment

Tags

Anticompetitive Behavior, Antitrust, Brendan Kirby, Cambridge Analytica, Data Privacy, EconTalk, Facebook, Fact-Checking, Geoffrey A. Fowler, Information Fiduciary, John O. McGinnis, Jonathan Zittrain, Judicial Restraint, Mark Zuckerberg, Matt Stoller, MeWe, Navneet Alang, Predatory Pricing, Social Media, Trust-Busting

Falling Zuckerberg

Facebook is under fire for weak privacy protections, its exploitation of users' data, and the dangers it is said to pose to Americans' free speech rights. Last week, Mark Zuckerberg, who controls a majority of the voting power in Facebook, attempted to address those issues before a joint hearing of the Senate Judiciary and Commerce Committees. It represented a major event in the history of the social media company, and it happened at a time when discussion of antitrust action against social media conglomerates like Facebook, Google, and Amazon is gaining support in some quarters. I hope this intervention does not come to pass.

The Threat

At the heart of the current uproar are Facebook's data privacy policy and a significant data breach. The recent scandal involving Cambridge Analytica arose because Facebook, at one time, allowed application developers to access user data, and the practice continued for a few developers. Unfortunately, at least one didn't keep the data to himself. There have been accusations that the company has violated privacy laws in the European Union (EU) and in some U.S. states. Facebook has also raised ire among privacy advocates by lobbying against stronger privacy laws in some states, but it is within its legal rights to do so. Violations of privacy laws must be adjudicated, but antitrust laws were not intended to address such a threat. Rather, they were intended to prevent dominant producers from monopolizing or restraining trade in a market and harming consumers in the process.

Matt Stoller, in an interview with Russ Roberts on EconTalk, says antitrust action against social media companies may be necessary because they are so pervasive in our lives, have built such dominant market positions, and have made a practice of buying nascent competitors over the years. Steps must be taken to “oxygenate” the market, according to Stoller, promoting competition and protecting new entrants.

Tim Wu, the attorney who coined the misleading term “network neutrality”, is a critic of Facebook, though Wu is more skeptical of the promise of antitrust or regulatory action:

“In Facebook’s case, we are not speaking of a few missteps here and there, the misbehavior of a few aberrant employees. The problems are central and structural, the predicted consequences of its business model. From the day it first sought revenue, Facebook prioritized growth over any other possible goal, maximizing the harvest of data and human attention. Its promises to investors have demanded an ever-improving ability to spy on and manipulate large populations of people. Facebook, at its core, is a surveillance machine, and to expect that to change is misplaced optimism.”

Google has already been subject to antitrust action in the EU due to the alleged anti-competitive nature of its search algorithm, and Facebook’s data privacy policy is under fire there. But the prospect of traditional antitrust action against a social media company like Facebook seems rather odd, as acknowledged by the author at the first link above, Navneet Alang:

“… Facebook specifically doesn’t appear to be doing anything that actively violates traditional antitrust rules. Instead, it’s relying on network effects, that tendency of digital networks to have their own kind of inertia where the more people get on them the more incentive there is to stay. It’s also hard to suggest that Facebook has a monopoly on advertising dollars when Google is also raking in billions of dollars.“

Competition

The size of Facebook's user base gives it a massive advantage over most of the other platforms in terms of network effects. I offer myself as an example of the inertia described by Alang: I've been on Facebook for a number of years. I use it to keep in touch with friends and as a vehicle for attracting readers to my blog. As I contemplated this post, I experimented by opening a MeWe account, where I joined a few user groups. It has a different “feel” than Facebook and is more oriented toward group chats. I like it, and I have probably spent as much time on MeWe in the last week as on Facebook. I sent MeWe invitations to about 20 of my friends, nearly all of whom have Facebook accounts, and a few days later I posted a link to MeWe on my Facebook wall. Thus far, however, only three of my friends have joined MeWe. Of course, none of us has deactivated our Facebook accounts, and I speculate that none of us will any time soon. This behavior is consistent with the “platform inertia” described by Alang. Facebook users are largely a captive market.

But Facebook is not a monopoly and it is not a necessity. Neither is Google. Neither is Amazon. All of these firms have direct competitors for both users and advertising dollars. It’s been falsely claimed that Google and Facebook together control 90% of online ad revenue, but the correct figure for 2017 is estimated at less than 57%. That’s down a bit from 2016, and another decline is expected in 2018. There are many social media platforms. Zuckerberg claims that the average American already uses eight different platforms, which may include Facebook, Google+, Instagram, LinkedIn, MeWe, Reddit, Spotify, Tinder, Tumblr, Twitter, and many others (also see here). Some of these serve specialized interests such as professional networking, older adults, hook-ups, and shopping. Significant alternatives for users exist, some offering privacy protections that might have more appeal now than ever.

Antitrust vs. Popular, Low-Priced Service Providers

Facebook’s business model does not fit comfortably into the domain of traditional antitrust policy. The company’s users pay a price, but one that is not easily calculated or even perceived: the value of the personal data they give away on a daily basis. Facebook is monetizing that data by allowing advertisers to target individuals who meet specific criteria. Needless to say, many observers are uncomfortable with the arrangement. The company must maintain a position of trust among its users befitting such a sensitive role. No doubt many have given Facebook access to their data out of ignorance of the full consequences of their sacrifice. Many others have done so voluntarily and with full awareness. Perhaps they view participation in social media to be worth such a price. It is also plausible that users benefit from the kind of targeted advertising that Facebook facilitates.

Does Facebook’s business model allow it to engage in an ongoing practice of predatory pricing? It is far from clear that its pricing qualifies as “anti-competitive behavior”, and courts have been difficult to persuade that low prices run afoul of U.S. antitrust law:

“Predatory pricing occurs when companies price their products or services below cost with the purpose of removing competitors from the market. … the courts use a two part test to determine whether they have occurred: (1) the violating company’s production costs must be higher than the market price of the good or service and (2) there must be a ‘dangerous probability’ that the violating company will recover the loss …”

Applying this test to Facebook is troublesome because, as we have seen, users exchange something of value for their use of the platform, which Facebook then exploits to cover costs quite easily. Fee-based competitors who might complain that Facebook's pricing is “unfair” would be better advised to preach the benefits of privacy and data control, and some of them do just that as part of their value proposition.

More Antitrust Skepticism

John O. McGinnis praises the judicial restraint that has characterized antitrust law over the past 30 years. This practice recognizes that it is not always a good thing for consumers when the government denies a merger, for example, or busts up a firm deemed by authorities to possess “too much power”. An innovative firm might well bring new value to its products by integrating them with features possessed by another firm's products. Or a growing firm may be able to create economies of scale and scope that are passed along to consumers. Antitrust action too often presumes that a larger market share, however defined, is unequivocally bad beyond some point. Intervention on those grounds can have a chilling effect on innovation and on the value such firms bring to the market and to society.

There are more fundamental reasons to view antitrust enforcement skeptically. For one thing, a product market can be defined in various ways. The more specific the definition, the greater the appearance of market dominance by larger firms. Or worse, the availability of real alternatives is ignored. For example, would an airline be a monopolist if it were the only carrier serving a particular airport or market? In a narrow sense, yes, but that airline would not hold a monopoly over intercity transportation, for which many alternatives exist. Is an internet service provider (ISP) a monopoly if it is the only ISP offering a 400+ Mbps download speed in a certain vicinity? In a very narrow sense, yes, but there may be other ISPs offering slower speeds that most consumers view as adequate. And in all cases, consumers always have one very basic alternative: not buying. Even so-called natural monopolies, such as certain public utilities, offer services for which there are broad alternatives. In those cases, however, a grant of a monopoly franchise is typically seen as a good solution if exchanged for public oversight and regulation, so antitrust is generally not at issue.

One other basic objection that can be made to antitrust is that it violates private property rights. A business that enjoys market dominance usually gets that way by pleasing customers. Its rewards for excellent performance are the rightful property of its owners, or should be. Antitrust action then becomes a form of confiscation, punishing such a firm and its owners for success.

Political Bias

Another major complaint against Facebook is political bias. It is accused of selectively censoring users and their content and manipulating user news feeds to favor particular points of view. Promises to employ fact-check mechanisms are of little comfort, since the concern involves matters of opinion. Any person or organization held to be in possession of the unadulterated truth on issues of public debate should be regarded with suspicion.

Last Tuesday at the joint session, Zuckerberg acted as if such a bias were quite natural, given that Facebook's employee base is concentrated in the San Francisco Bay area. But his nonchalance over the matter partly reflects the fact that Facebook is, after all, a private company. It is free to host whatever views it chooses, and that freedom is for the better. Facebook is not like a public square. Instead, the scope of a user's speech is largely discretionary: users select their own network of friends; they can choose to limit access to their posts to that group or to a broader group of “friends of friends”; they can limit posts to subgroups of friends; or they can allow the entire population of users to see their posts, if interested. No matter how many users it has, Facebook is still a private community. If its “community standards” or their enforcement are objectionable, then users can and should find alternative outlets. And again, as a private company, Facebook can choose to feature particular news sources and censor others without running afoul of the First Amendment.

Revisiting Facebook’s Business Model

The greatest immediate challenge for Facebook is data privacy. Trust among users has been eroded by the improprieties in Facebook’s exploitation of data. It’s as if everyone in the U.S. has suddenly awoken to the simple facts of its business model and the leveraging of user data it requires. But it is not of great concern to some users, who will be happy to continue to use the platform as they have in the past. Zuckerberg did not indicate a willingness to yield on the question of Facebook’s business model in his congressional testimony, but there is a threat that regulation will require steps to protect data that might be inconsistent with the business model. If users opt-out of data sharing in droves, then Facebook’s ability to collect revenue from advertisers will be diminished.

As Jonathan Zittrain points out, Facebook might find new opportunity as an information fiduciary for users. That would require a choice between paying a monthly fee and allowing Facebook to continue targeted advertising on one's news feed. Geoffrey A. Fowler writes that the idea of paying for Facebook is not an outrageous proposition:

“Facebook collected $82 in advertising for each member in North America last year. Across the world, it’s about $20 per member. … Netflix costs $11 and Amazon Prime is $13 per month. Facebook would need $7 per month from everyone in North America to maintain its current revenue from targeted advertising.”
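Fowler's $7 figure is simply the annual North American number spread over twelve months, as a quick check confirms:

```python
# Fowler's numbers: $82 of ad revenue per North American member per year.
annual_ad_revenue = 82
monthly_equivalent = annual_ad_revenue / 12
print(round(monthly_equivalent, 2))  # ~6.83, i.e., roughly the $7/month cited
```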

Given a choice, not everyone would choose to pay, and I doubt that a fee of $7 per month would cost Facebook much in terms of membership anyway. It could probably charge slightly more for regular memberships and price discriminate to attract students and seniors. Fowler contends that a user-paid Facebook would be a better product. It might sharpen the focus on user-provided and user-linked content, rather than content provided by advertisers. As Tim Wu says, “… payment better aligns the incentives of the platform with those of its users.” Fowler also asserts that regulatory headaches would be less likely for the social network because it would not be reliant on exploiting user data.

A noteworthy aspect of Zuckerberg’s testimony at the congressional hearing was his stated willingness to consider regulatory solutions: the “right regulations“, as he put it. That might cover any number of issues, including privacy and political advertising. But as Brendan Kirby warns, regulating Facebook might not be a great idea. Established incumbents are often capable of bending regulatory bodies to their will, ultimately using them to gain a stronger market position. A partnership between the data-rich Facebook and an agency of the government is not one that I’d particularly like to see. Tim Wu believes that what we really need are competitive alternatives to Facebook: he floats a few ideas about how a Facebook competitor might be structured, most prominently the fee-based alternative.

Let It Evolve 

Like many others, I’m possessed by an anxiety about the security of my data on social media, an irritation with the political bias that pervades social media firms, and a suspicion that certain points of view are favored over others on their platforms. But I do not favor government intervention against these firms. Neither antitrust action nor regulation is likely to improve the available platforms or their services, and instead might do quite a bit of damage. “Trust-busting” of social media platforms would present technological challenges, but even worse, it would disrupt millions of complex relationships between firms and users and attempt to replace them with even more numerous and complex relationships, all dictated by central authorities rather than market forces. Significant mergers and acquisitions will continue to be reviewed by the DOJ and the FTC, preferably tempered by judicial restraint. I also oppose the regulatory option. Compliance is costly, of course, but even worse, the social media giants can afford it and will manipulate it. Those costs would inevitably present barriers to market entry by upstart competitors. The best regulation is imposed by customers, who should assert their sovereignty and exercise caution in the relationships they establish with social media platforms … and remember that nothing comes for free.
