Playing Pretend Science Over Cocktails

It’s a great irony that our educated and affluent classes have been largely zombified on the subject of climate change. Their brainwashing by the mainstream media has been so effective that these individuals are unwilling to consider more nuanced discussions of the consequences of higher atmospheric carbon concentrations, or any scientific evidence to suggest contrary views. I recently attended a party at which I witnessed several exchanges on the topic. It was apparent that these individuals are conditioned to accept a set of premises while lacking real familiarity with supporting evidence. Except in one brief instance, I avoided engaging on the topic, despite my bemusement. After all, I was there to party, and I did!

The zombie alarmists express their views within a self-reinforcing echo chamber, reacting to each other’s virtue signals with knowing sarcasm. They also seem eager to avoid any “denialist” stigma associated with a contrary view, so there is a sinister undercurrent to the whole dynamic. These individuals are incapable of citing real sources and evidence; they cite anecdotes or general “news-say” at best. They confuse local weather with climate change. Most of them haven’t the faintest idea how to find real research support for their position, even with powerful search engines at their disposal. Of course, the search engines themselves are programmed to prioritize the very media outlets that profit from climate scare-mongering. Catastrophe sells! Those media outlets, in turn, are eager to quote the views of researchers in government who profit from alarmism in the form of expanding programs and regulatory authority, as well as researchers outside of government who profit from government grant-making authority.

The Con in the “Consensus”

Climate alarmists take assurance in their position by repeating the false claim that 97% of climate scientists believe that human activity is the primary cause of warming global temperatures. The basis for this strong assertion comes from an academic paper that reviewed other papers, the selection of which was subject to bias. The 97% figure was not a share of “scientists”. It was the share of the selected papers stating agreement with the anthropogenic global warming (AGW) hypothesis. And that figure is subject to other doubts, in addition to the selection bias noted above: the categorization into agree/disagree groups was made by “researchers” who were, in fact, environmental activists, who counted several papers written by so-called “skeptics” among the set that agreed with the strong AGW hypothesis. So the “97% of scientists” claim is a distortion of the actual findings, and the findings themselves are subject to severe methodological shortcomings. On the other hand, there are a number of widely-recognized, natural reasons for climate change, as documented in this note on 240 papers published over just the first six months of 2016.

Data Integrity

It’s rare to meet a climate alarmist with any knowledge of how temperature data is actually collected. What exactly is the “global temperature”, and how can it be measured? It is a difficult undertaking, and it wasn’t until 1979 that it could be done with any reliability. According to Roy Spencer, that’s when satellite equipment began measuring:

… the natural microwave thermal emissions from oxygen in the atmosphere. The intensity of the signals these microwave radiometers measure at different microwave frequencies is directly proportional to the temperature of different, deep layers of the atmosphere.

Prior to the deployment of weather satellites, and starting around 1850, temperature records came only from surface temperature readings. These are taken at weather stations on land and collected at sea, and they are subject to quality issues that are generally unappreciated. Weather stations are unevenly distributed and they come and go over time; many of them produce readings that are increasingly biased upward by urbanization. Sea surface temperatures are collected in different ways with varying implications for temperature trends. Aggregating these records over time and geography is a hazardous undertaking, and these records are, unfortunately, the most vulnerable to manipulation.

The urbanization bias in surface temperatures is significant. According to this paper by Ross McKitrick, the number of weather stations counted in the three major global temperature series declined by more than 4,500 since the 1970s (over 75%), and most of those losses were rural stations. From McKitrick’s abstract:

“The collapse of the sample size has increased the relative fraction of data coming from airports to about 50% (up from about 30% in the late 1970s). It has also reduced the average latitude of source data and removed relatively more high altitude monitoring sites. Oceanic data are based on sea surface temperature (SST) instead of marine air temperature (MAT)…. Ship-based readings changed over the 20th century from bucket-and-thermometer to engine-intake methods, leading to a warm bias as the new readings displaced the old.”

Data Manipulation

It’s rare to find an alarmist with any awareness of the scandal at East Anglia University, which involved data falsification by prominent members of the climate change “establishment”. That scandal also shed light on corruption of the peer-review process in climate research, including a bias against publishing work skeptical of the accepted AGW narrative. Fewer still are aware of a very recent scandal involving manipulation of temperature data at NOAA, in which retroactive adjustments were applied in an effort to make the past look cooler and more recent temperatures warmer. There is currently an outstanding FOIA request for communications between the Obama White House and a key scientist involved in the scandal. Here are Judith Curry’s thoughts on the NOAA temperature manipulation.

Think about all that the next time you hear about temperature records, especially NOAA reports on a “new warmest month on record”.

Other Warming Whoppers

Last week on social media, I noticed a woman emoting about the way hurricanes used to frighten her late mother. This woman was sharing an article about the presumed negative psychological effects that climate change was having on the general public. The bogus premises: that we are experiencing an increase in the frequency and severity of storms, that climate change is causing the storms, and that people are scared to death about it! Just to be clear, I don’t think I’ve heard much in the way of real panic, and real estate prices and investment flows don’t seem to be under any real pressure. In fact, the frequency and severity of severe weather have been in decline even as atmospheric carbon concentrations have increased over the past 50 years.

I heard another laughable claim at the party: that maps are showing great areas of the globe becoming increasingly dry, mostly at low latitudes. I believe the phrase “frying” was used. That is patently false, but I believe it’s another case in which climate alarmists have confused model forecasts with fact.

The prospect of rising sea levels is another matter that concerns alarmists, who always fail to note that sea levels have been increasing for a very long time, well before carbon concentrations could have had any impact. In fact, the sea level increases in the past few centuries are a rebound from lows during the Little Ice Age, and levels are now back to where the seas were during the Medieval Warm Period. But even those fluctuations look minor by comparison to the increases in sea levels that occurred over 8,000 years ago. Sea levels are rising at a very slow rate today, so slowly that coastal construction is proceeding as if there is little if any threat to new investments. While some of this activity may be subsidized by governments through cheap flood insurance, real money is on the line, and that probably represents a better forecast of future coastal flooding than any academic study can provide.

Old Ideas Die Hard

Two enduring features of the climate debate are 1) the extent to which so-called “carbon forcing” models of climate change have erred in over-predicting global temperatures, and 2) the extent to which those errors have gone unnoticed by the media and the public. The models have been plagued by a number of issues; the climate is not a simple system. One basic shortcoming, however, has to do with the existence of strong feedback effects: the alarmist community has asserted that feedbacks are positive, on balance, magnifying the warming impact of a given carbon forcing. In fact, the opposite seems to be true: second-order responses due to cloud cover, water vapor, and circulation effects are negative, on balance, at least partially offsetting the initial forcing.

Fifty Years Ain’t History

One other amazing thing about the alarmist position is an insistence that the past 50 years should be taken as a permanent trend. On a global scale, our surface temperature records are sketchy enough today, but recorded history is limited to the very recent past. There are recognized methods for estimating temperatures in the more distant past by using various temperature proxies. These are based on measurements of other natural phenomena that are temperature-sensitive, such as ice cores, tree rings, and the pollen and other organic compounds found within successive sediment layers.

The proxy data has been used to create temperature estimates into the distant past. A basic finding is that the world has been this warm before, and even warmer, as recently as 1,000 years ago. This demonstrates the wide range of natural variation in the climate, and today’s global temperatures are well within that range. At the party I mentioned earlier, I was amused to hear a friend say, “Ya’ know, Greenland isn’t supposed to be green”, and he meant it! He is apparently unaware that Greenland was given that name by Viking settlers around 1000 AD, who inhabited the island during a warm spell lasting several hundred years… until it got too cold!

Carbon Is Not Poison

The alarmists take the position that carbon emissions are unequivocally bad for people and the planet. They treat carbon as if it is the equivalent of poisonous air pollution. The popular press often illustrates carbon emissions as black smoke pouring from industrial smokestacks, but like oxygen, carbon dioxide is a colorless gas, and one upon which life itself depends.

Our planet’s vegetation thrives on carbon dioxide, and increasing carbon concentrations are promoting a “greening” of the earth. Crop yields are increasing as a result; reforestation is proceeding as well. The enhanced vegetation provides an element of climate feedback against carbon “forcings” by serving as a carbon sink, absorbing increasing amounts of carbon and converting it to oxygen.

Matt Ridley has noted one of the worst consequences of the alarmists’ carbon panic and its influence on public policy: the vast misallocation of resources toward carbon reduction, much of it dedicated to subsidies for technologies that cannot pass economic muster. Consider that those resources could be devoted to many other worthwhile purposes, like bringing electric power to third-world families who otherwise must burn dung inside their huts for heat; for that matter, perhaps the resources could be left under the control of taxpayers, who can put them to the uses they value most highly. The regulatory burdens imposed by these policies on carbon-intensive industries represent lost output that can’t ever be recouped, and all in the service of goals that are of questionable value. And of course, the anti-carbon efforts almost certainly reflect a diversion of resources to the detriment of more immediate environmental concerns, such as mitigating truly toxic industrial pollutants.

The priorities underlying the alarm over climate change are severely misguided. The public should demand better evidence than consistently erroneous model predictions and manipulated climate data. Unfortunately, a media eager for drama and statism is complicit in the misleading narrative.

FYI: The cartoon at the top of this post refers to the climate blog climateaudit.org. The site’s blogger Steve McIntyre did much to debunk the “hockey stick” depiction of global temperature history, though it seems to live on in the minds of climate alarmists. McIntyre appears to be on an extended hiatus from the blog.

Courts and Their Administrative Masters

Supreme Court nominee Neil Gorsuch says the judicial branch should not be obliged to defer to government agencies within the executive branch in interpreting law. Gorsuch’s opinion, however, is contrary to an established principle guiding courts since the 1984 Supreme Court ruling in Chevron U.S.A. v. Natural Resources Defense Council. Under what is known as Chevron deference, courts apply a test of judgment as to whether the administrative agency’s interpretation of the law is “reasonable”, even if other “reasonable” interpretations are possible. This gets particularly thorny when the original legislation is ambiguous with respect to a certain point. Gorsuch believes the Chevron standard subverts the intent of Constitutional separation of powers and judicial authority, a point of great importance in an age of explosive growth in administrative rule-making at the federal level.

Ilya Somin offers a defense of Gorsuch’s position on Chevron deference, stating that it violates the text of the Constitution authorizing the judiciary to decide matters of legal dispute without ceding power to the executive branch. The agencies, for their part, seem to be adopting increasingly expansive views of their authority:

“Some scholars argue that in many situations, agencies are not so much interpreting law, but actually making it by issuing regulations that often have only a tenuous basis in congressional enactments. When that happens, Chevron deference allows the executive to usurp the power of Congress as well as that of the judiciary.”

Jonathan Adler quotes a recent decision by U.S. Appeals Court Judge Kent Jordan in which he expresses skepticism regarding the wisdom of Chevron deference:

Deference to agencies strengthens the executive branch not only in a particular dispute under judicial review; it tends to the permanent expansion of the administrative state. Even if some in Congress want to rein an agency in, doing so is very difficult because of judicial deference to agency action. Moreover, the Constitutional requirements of bicameralism and presentment (along with the President’s veto power), which were intended as a brake on the federal government, being ‘designed to protect the liberties of the people,’ are instead, because of Chevron, ‘veto gates’ that make any legislative effort to curtail agency overreach a daunting task.

In short, Chevron ‘permit[s] executive bureaucracies to swallow huge amounts of core judicial and legislative power and concentrate federal power in a way that seems more than a little difficult to square with the Constitution of the [F]ramers’ design.’

The unchecked expansion of administrative control is a real threat to the stability of our system of government, our liberty, and the health of our economic system. It imposes tremendous compliance costs on society and often violates individual property rights. Regulatory actions are often taken without performing a proper cost-benefit analysis, and the decisions of regulators may be challenged initially only within a separate judicial system in which courts are run by the agencies themselves! I covered this point in more detail one year ago in “Hamburger Nation: An Administrative Nightmare”, based on Philip Hamburger’s book “Is Administrative Law Unlawful?”.

Clyde Wayne Crews of the Competitive Enterprise Institute gives further perspective on the regulatory-state-gone-wild in “Mapping Washington’s Lawlessness: An Inventory of Regulatory Dark Matter“. He mentions some disturbing tendencies that may go beyond the implementation of legislative intent: agencies sometimes choose to wholly ignore some aspects of legislation; agencies tend to apply pressure on regulated entities on the basis of interpretations that stretch the meaning of such enabling legislation as may exist; and as if the exercise of extra-legislative power were not enough, administrative actions have a frequent tendency to subvert the price mechanism in private markets, disrupting the flow of accurate information about resource-scarcity and the operation of incentives that give markets their great advantages. All of these behaviors fit Crews’ description of “regulatory dark matter.”

Chevron deference represents an unforced surrender by the judicial branch to the exercise of power by the executive. As Judge Jordan notes in additional quotes provided by Adler at a link above, this does not deny the usefulness or importance of an agency’s specialized expertise. Nevertheless, the courts should not abdicate their role in reviewing the evidence an agency develops in support of any action, and the reasonableness of the agency’s application of that evidence relative to alternative courses of action. Nor should the courts abdicate their role in ruling on the law itself. Judge Gorsuch is right: Chevron deference should be re-evaluated by the courts.

Benefit Mandates Bar Interstate Competition

The lack of interstate competition in health insurance does not benefit consumers, but promoting that kind of competition requires steps that are not widely appreciated. Most of those steps must take place at the state level. In fact, it is not well known that it is already legal for states to jointly create interstate “compacts” under Obamacare, though none have done so.

The chief problem is that states regulate insurance carriers and the policies they offer in a variety of ways. Coverage mandates vary from state to state, as do rules governing the coverage of pre-existing conditions, renewability, dependents, costs, and risk rating. John Seiler, writing at the Foundation for Economic Education, offers a great perspective on the fractured character of state regulations. Incumbent insurers within a state have natural advantages due to their existing relationships with local providers. Between the difficulty of forming a new network and the costs of customizing policies and obtaining approval in multiple states, there are significant barriers to entry at state lines.

Federalism is a principle I often support, but state benefit mandates and other regulations are perverse examples because they restrict the otherwise voluntary and victimless choices available to a state’s consumers. Well, victimless except perhaps for in-state monopolists and their cronyist protectors in state government. Many powers are reserved to states under the Constitution, while the powers of the federal government are strictly limited. That’s well and good unless state governments infringe on the rights of individuals protected by the Constitution. In particular, the Commerce Clause prohibits state governments from obstructing the flow of interstate commerce.

Here is a bit of history surrounding the evolution of state versus federal control over insurance markets, as told by Pennsylvania Insurance Commissioner Teresa Miller (as quoted by reporter Steve Esack):

Since the 1800s, the U.S. Supreme Court held individual states, not Congress, had the power to regulate insurance companies. The high court overturned that precedent, however, in a 1944 ruling, United States v. South-Eastern Underwriters, that said insurance sales constituted interstate trade and Congress could regulate insurance under the U.S. Constitution’s Commerce Clause.

But states cried foul. In response, Congress passed and President Harry S. Truman in 1945 signed the McCarran-Ferguson Act to grant a limited anti-trust provision so states could keep regulating insurance carriers. The law does not preclude cross-border sales. It means insurance companies must abide by different sets of rules and regulations and laws in 50 states.

Congress obviously recognized that state regulation of health insurance would create monopoly power and restrain trade, even if states place bridles on insurers and impose ostensible consumer protections. The solution was to exempt health insurers from broad federal regulation and anti-trust prosecution by the Department of Justice.

Last week, the House of Representatives passed a bill that would repeal McCarran-Ferguson for health insurers. However, that would do little to encourage cross-border competition as long as the tangle of state mandates and other regulations remains in place. The regulatory landscape would have to change under this kind of federal legislation, but how that would happen is an open question. Could court challenges be brought against state regulators and coverage mandates as anti-competitive? Would anti-trust actions be brought against incumbent carriers?

Robert Laszewski has strong objections to any new law that would allow interstate sales of health insurance as long as state benefit mandates remain in place for “local legacy” carriers. In particular, he believes it would encourage “cherry picking” of the best risks by market entrants who would be free of the mandates. Many of the healthiest individuals would jump at the chance to purchase stripped down, catastrophic coverage. That would leave the legacy carriers under the burden of mandates and deteriorating risk pools. Would states do this to their incumbent insurers without prodding by the courts? Would they simply drop the mandates? I doubt it.

No matter the end-state, there is likely to be a contentious transition. Promoting interstate competition in the health insurance market is a laudable goal, but it is not as simple as some health-care reformers would have us believe. Real competition requires action by states to eliminate or liberalize regulations on benefit mandates, risk-rating and pre-existing conditions. Ultimately, the cost of coverage for high-risk individuals might have to be subsidized, whether means-tested or not, through a combination of support from the states, the federal government, and private charities. And of course, interstate competition really does require repeal of the health insurance provisions of McCarran-Ferguson.

Governments at any level can act against the well-being of consumers, despite the acknowledged benefits of decentralized governance over central control. Benefit mandates, whether imposed at the federal or state levels, are inimical to consumer choice, competition, efficient pricing, and often to the very concept of insurance. Those aren’t the sort of purposes federalism was intended to serve.

The CBO’s Obamacare Fantasy Forecast

The Congressional Budget Office (CBO) is still predicting strong future growth in the number of insured individuals under Obamacare, despite their past, drastic over-predictions for the exchange market and slim chances that the Affordable Care Act’s expansion of Medicaid will be adopted by additional states. Now that Republican leaders have backed away from an unpopular health care plan they’d hoped would pass the House and meet the Senate’s budget reconciliation rules, it will be interesting to see how the CBO’s predictions pan out. The “decremental” forecasts it made for the erstwhile American Health Care Act (AHCA) were based on its current Obamacare “baseline”. A figure cited often by critics of the GOP plan was that 24 million fewer individuals would be insured by 2026 than under the baseline.

It was fascinating to see many supporters of the AHCA accept this “forecast” uncritically. With the AHCA’s failure, however, we’ve been given an opportunity to witness the distortion in what would have been a CBO counterfactual. What a wonderful life! We’re stuck with Obamacare for the time being, but this glimpse into the CBO’s delusions will be one of several silver linings for me.

Again, the projected 24 million loss in the number of insured under the AHCA was based on an actual predicted loss of about 5 – 6 million and the absence of an Obamacare gain of 18 – 19 million. Those figures are from an excellent piece by Avik Roy in Forbes. I drew on that article extensively in my post on the AHCA prior to its demise. Here are some key points I raised then, which I’ve reworded slightly to put more emphasis on the Obamacare forecasts:

  1. The CBO has repeatedly erred by a large margin in its forecasts of Obamacare exchange enrollment, overestimating 2016 enrollment by over 100% as recently as 2014.
  2. The AHCA changes relative to Obamacare were taken from CBO’s 2016 forecast, which is likely to over-predict Obamacare enrollment on the exchanges by at least 7 million, according to Roy.
  3. The CBO also assumes that all states will opt to participate in expanded Medicaid under Obamacare going forward. That is highly unlikely, and Roy estimates its impact on the CBO’s forecast at about 3 million individuals.
  4. The CBO believes that the Obamacare individual mandate has encouraged millions to opt for insurance. Roy says that assumption accounts for as much as 9 million of total enrollment across the individual and employer markets, as well as Medicaid.

Thus, Roy believes the CBO’s estimate of the coverage loss of 24 million individuals under the AHCA was too high by about 19 million!

In truth, Obamacare will be watered down by regulatory and other changes instituted by the Trump Administration, which has said it will not enforce Obamacare’s individual mandate. Coverage under the “new” Obamacare will devolve quickly if the CBO is correct about the impact of the individual mandate.

The CBO’s job is to “score” proposed legislation relative to current law; traditionally, it made no attempt to account for dynamic effects that might arise from the changed incentives under a law. The results show it, and the Obamacare projections are no exception. In the case of Obamacare, however, the CBO seems to have applied certain incentive effects selectively. The supporters of the AHCA might have helped their case by focusing on the flaws in the CBO’s baseline assumptions. We should keep that in mind with respect to any future health care legislation, not to mention tax reform!

Lighten Up For Human Achievement Hour!

Tonight, Saturday March 25th from 8:30 to 9:30, I’ll be doing my part to celebrate humankind’s ascendance over the bare subsistence and misery that was ubiquitous until just the last few centuries. Human Achievement Hour is sponsored by the Competitive Enterprise Institute (CEI) to celebrate the incredible technological miracles brought forth by human ingenuity and free markets:

“Originally launched as the counter argument to the World Wide Fund for Nature’s Earth Hour, where participants renounce the environmental impacts of modern technology by turning off their lights for an hour, Human Achievement Hour challenges people to look forward rather than back to the dark ages.

Symbolically or not, Earth Hour is a misguided effort that completely ignores how modern technology allows societies to develop new and more sustainable practices, like helping people around the world be more eco-friendly and better conserve our natural resources.

While Earth Hour supporters may suggest rolling brown-outs in India are desirable, we respectfully disagree. Instead of sitting in the dark, Human Achievement Hour promotes new ideas and celebrates the technology and innovation that will help solve the world’s environmental challenges.”

The following are suggestions from CEI as to how you can participate in the celebration. I’ll take them up on the third and sixth items on this list, just as I have for the past several years.

  • Use your phone or computer to connect with friends and family
  • Watch a movie or your favorite television show
  • Drink a beer or cocktail
  • Drive a car or take a ride-sharing service
  • Take a hot shower
  • Or, in true CEI fashion, celebrate reliable electricity that has saved lives, by bringing heat and air conditioning to people around the world, and keep your lights on for an hour

Light up the night! Here are a couple of links with information on the worldwide progress in improving human living conditions:

The Human Progress Blog

Thank Fracking For Reduced Emissions

We are winning the war against starvation, disease and poverty around the globe, though progress can seem frustratingly gradual in real time. Nevertheless, over the sweep of history, we are winning the battle in a dramatic way.

Risks, Costs and the Sharing Kind

Of all the health care buffoonery we’ve witnessed since the Affordable Care Act (ACA, or Obamacare) was first introduced in Congress in 2009, one of the most egregious is the strengthening of the notion that health insurance should cover a variety of wholly predictable and, strictly speaking, non-insurable events. Charlie Martin recently posted some interesting comments on insurance and why it works, and why public perceptions and public policy are often at odds with good insurance practices. He says that “Insurance Is Always Just Gambling”. True, real insurance is like any other rational hedge against risk, and that can be called a gamble. Unfortunately, public policy often interferes with our ability to hedge these risks efficiently.

Hedged Risk Or Prepaid Expenses?

To begin with, insurance is a mechanism for individuals to manage the financial impact of events that are unpredictable and potentially costly. These are insurable risks. But if an event recurs regularly, like an annual physical exam, a breast exam, or a pap smear, or if an event is largely within the individual’s control, like whether an ugly mole should be removed, then it is not an insurable risk. Paying for such “coverage” through a third-party insurer amounts to prepaying for services for which you’d otherwise pay directly when the time comes. We’ve essentially adopted this prepayment scheme on a national scale through Obamacare’s mandated benefits: we get broad coverage of non-insurable events in exchange for premiums and/or deductibles high enough to cover the prepayments! Big win, huh?

The rationale for a broad coverage mandate is that it will induce healthy behaviors like, well… getting an annual checkup. Therefore, it is said to be in the interests of insurers to include such benefits in basic coverage. That might well be, but the insurers don’t do it for free! Indeed, premiums and deductibles are correspondingly higher as a result, and the mandate introduces a “middle man”, the insurer, who adds cost to the process of executing a relatively simple transaction.

Unlike these prepaid health care expenses, real insurance is really a sort of gamble. An insurer makes a bet that you won’t have a major, unanticipated health care need, and you put up the “premium” as your bet that you will have such a need. If you are healthy, then the odds are low, so it’s a fairly cheap bet for you, but you have to put up a little extra to pay for your insurer’s administrative costs. Down the road, if you need acute care, your bet pays off. Yippee! You’ll be covered.

But who knows the odds that you’ll need expensive care? And why would an insurer take the risk of losing big if you get sick?

The insurer can estimate those odds via actuarial data and experience, and they can assume your risk by playing the law of large numbers: if they make similar bets with many individuals, their actual losses will be more than covered by premium revenue (most of the time… as Martin explains, it’s possible for an insurer to make a bet with a so-called reinsurer as a hedge against the small risk of a huge loss on its book of business, beyond some threshold).
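The law-of-large-numbers point can be sketched with a toy simulation. Every number below (the claim probability, claim size, and loading) is an invented assumption for illustration, not actuarial data:

```python
import random

random.seed(42)

def insurer_result(n_policies, p_claim=0.05, claim_cost=100_000, loading=0.10):
    """Premium income minus claim payouts for a book of identical policies.

    Each policy pays out claim_cost with probability p_claim; the premium
    is the actuarially fair price (p_claim * claim_cost) plus a loading
    for administrative costs.
    """
    premium = p_claim * claim_cost * (1 + loading)
    payouts = sum(claim_cost for _ in range(n_policies)
                  if random.random() < p_claim)
    return n_policies * premium - payouts

# A handful of bets leaves the insurer exposed to wild swings...
small_book = [insurer_result(10) for _ in range(5)]

# ...but across a large pool, actual losses hug their expected value,
# and the loading reliably covers them (the law of large numbers).
large_book = [insurer_result(100_000) for _ in range(5)]

print(small_book)
print(large_book)
```

With 100,000 policies, the variation in total payouts is tiny relative to the loading, so the insurer’s margin is positive with near certainty; with only ten, a single claim wipes out years of premium income.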

Shared Risk Or Shared Cost?

Martin objects to the use of the term “shared risk” in this context. Many individuals make similar bets, which makes the insurer’s aggregate payout more predictable. That allows the insurer to offer such bets on reasonable monetary terms, and they are all voluntary contracts sought out by people facing risks of the same character. If an individual seeks to insure against a demonstrably heightened risk, an insurer might or might not agree to the “bet” voluntarily, but if it does, the risk is not truly “shared” by individuals who face lower risks. The high-risk bet is reasonable for the insurer only to the extent that: (1) the premium is actuarially fair in conjunction with a larger pool of high-risk bets, or (2) it can be cross-subsidized by more profitable lines of coverage. If the answer is (2), then premiums for healthy individuals must rise to cover risks they do not share. That is one basis on which Obamacare operates, and it is a subtle aspect of Martin’s argument against the notion of “shared risks”. Perhaps we can avoid the semantic difficulty by speaking of “sharing the costs of risks that are not shared”.

A more obvious aspect of Martin’s objection to “shared risk” relates to the expectation that predictable medical costs must be “covered” by health insurance, as discussed above. If so, no risk is shared because there is no risk! Yet we often speak of health insurance “needs” as if they combine a variety of such things, and as if all those “needs” embody risks that are shared. They are not.

Sharing the Cost of Prenatal Care

In another post, Martin tackles the question of whether certain people should be expected to pay a premium that includes the cost of prenatal care. Martin was prompted by a tweet from the National Association for the Repeal of Abortion Laws (NARAL), which read:

WOW. The #GOP’s reason to object to insurance covering prenatal care? ‘Why should men pay for it?’ #Trumpcare #ProtectOurCare

There was a link in the tweet to a video, which was captioned by NARAL as follows:

The GOP reasoning to object to prenatal insurance
Two male Republicans object to prenatal care coverage under the ACA because—while it ensures women have healthy pregnancies—it means men pay *a tiny bit more* for insurance. WOW.

To the extent that pregnancy can be considered a risk, it is certainly not shared by seniors, gays and lesbians, and infertile individuals, let alone unattached males. And from an insurance perspective, an obvious difficulty with NARAL’s point is that many pregnancies are planned. As such, they are not insurable events (though complications of pregnancy clearly are insurable). Yet people speak as though others must “share” the costs. That is fundamentally unfair and economically inefficient. Subsidies for couples who might wish to have children lead to higher fertility than those couples could otherwise afford, saddling society with the medical bill. Incentives are no joke.

There are also unplanned pregnancies among singles and married couples, however. That sounds more like an insurable event, but it’s usually impossible for a third party to determine whether a pregnancy is planned or unplanned, so moral hazard is an issue (except in extreme circumstances like rape or incest). The risk of pregnancy is confined to a subset of the population, so sharing these costs more broadly is inefficient to the extent that it subsidizes some pregnancies (oops!) that individuals cannot otherwise afford. Individuals and couples who face pregnancy risk can manage that risk in any way they choose, and they might wish to purchase a form of coverage that helps them smooth the cost of pregnancies over their fertile years. It’s not clear that coverage of that nature is better for the prospective parent(s) than a line of credit, but it is a form of insurance only because of the “unplanned” component, and at least it allows them to spread the cost ex ante as well as ex post.

Sharing Costs of Common Risks 

The basic point here is that sharing a risk across all individuals, whether they do or do not actually face the risk, is not a natural characteristic of private insurance. In fact, the idea that this cost should be shared broadly is a collectivist notion. The major flaws are that 1) individuals and couples at risk are not financially responsible for certain cost-causing decisions they might make; and 2) it forces individuals and couples not at risk to pay for others’ risks, which is an act of coercion. NARAL feels that individuals who subscribe to these sound principles are worthy of rebuke. And NARAL asserts that “men pay a tiny bit more“, without providing quantification. Of course, it’s not just men, but this is a variation on the old statist argument that diffuse costs are not meaningful and should be disregarded, ad infinitum.

Public Aid Dressed As Insurance

There are segments of society that are often depicted as incapable of managing risks like pregnancy and unable to afford the consequences of mistakes. Subsidizing those individuals is a second collectivist front for “risk sharing”. Those subsidies can and do take the form of “family planning”, as well as prenatal care and childbirth. That’s part of the social safety net, and while it is perhaps more tolerable as aid, it entails the same kinds of bad incentives as discussed earlier.

The welfare state has seldom been praised for its impact on incentives. Most studies have found a link between public aid and higher fertility, and mixed effects on the dissolution of marriage (see here and here, and for international evidence, see here). But aid for health care expenses should not interfere with the sound operation of the insurance market. Vouchers for catastrophic coverage would be far preferable, and that aid could even cover some regularly recurring health care costs, despite their non-insurable nature, but that would be a compromise.

The misgivings voiced by Martin are partly driven by two fundamental issues: guaranteed issue and community rating. The former means that an insurer must take your bet regardless of the risks you present; the latter means that the insurer cannot charge premiums commensurate with the risk inherent in the various bets it takes. As David Henderson writes, both underpin the ACA. In other words, the ACA imposes cost sharing. Here is Henderson:

As I wrote over 20 years ago, the combination of guaranteed issue and community rating, a key feature of Obamacare, leads to the destruction of insurance markets. No one would advocate forcing insurance companies to issue house insurance policies to people whose houses are burning, at premiums equal to those paid by others whose houses aren’t burning. And the twin requirements would cause more and more people to refrain from buying insurance until their houses are on fire. Insurance companies, knowing this, would charge astronomically high premiums.

Cleaving the Health Care Knot… Or Not


Republican leadership has succeeded in making its 2017 health care reform plans even more confusing than the ill-fated reforms enacted by Congress and signed by President Obama in 2010. Republican leaders in both houses have outlined a three-phase process, beginning with the initial rollout of the American Health Care Act (AHCA), now billed as “Phase 1”. The AHCA was greeted with little enthusiasm by the GOP faithful, however.

As a strictly political matter, there is a certain logic to the intent of the “three-phase plan”: limiting the provisions of the AHCA to issues having an impact on the federal budget. That would allow the bill to be addressed under “budget reconciliation” rules requiring only 51 votes for passage in the Senate. Phase 2 would involve regulatory rule-making, or rule-rescinding, as the case may be. The putative Phase 3 would require additional legislation to address such unfinished business as allowing health insurance competition across state lines, eliminating anti-trust protection for insurers, and medical tort reform. How the sponsors would get 60 Senate votes for Phase 3 reforms is an unanswered question.

Legislative Priorities

Yuval Levin wrote a great analysis of the AHCA last week in which he described the structure of the House bill as a paranoid reaction to the demands of an “imaginary parliamentarian”. By that he means that the reforms in the bill conform to a rigid and potentially flawed interpretation of Senate budget reconciliation rules. Levin’s view is that the House should not twist itself up over what might be negotiated prior to a Senate vote. In other words, the House should concern itself at this stage with passing a bill that at least makes sense as reform, without bowing to any of the awful legacy provisions in Obamacare.

Medicaid reform is one piece of the proposed legislation and is reasonably straightforward. It imposes caps on federal funding to states after 2020, but it grants more flexibility to the states in managing the program. It also involves a tradeoff by allowing Medicaid funding to increase over the first few years, in line with the expansion under Obamacare, in exchange for capped growth later. The expectation is that long-term costs of the program will be reduced through a combination of the caps and better management at the state level.

The more complex aspects of the AHCA attempt to effect changes in the individual market. Levin offers a good perspective on these measures. First, he describes the general character of earlier Republican reform proposals from which the AHCA descends:

Those various proposals all involved bringing premium costs down by enabling insurers to sell catastrophic coverage plans (along with more comprehensive plans) and enabling everyone in the individual market to afford at least those catastrophic coverage plans. This would enable far greater competition and let anyone not otherwise covered by insurance enter the individual market as a consumer.  …

The House proposal bears a clear resemblance to this approach. It involves some deregulation from Obamacare, it includes a refundable tax credit for coverage, it gestures toward incentives for continuous coverage. But it is also fundamentally different from this approach, because it functions within the core insurance rules established by Obamacare, which means it can’t really achieve most of the key aims of the conservative reforms it is modeled on.”

The rules established by Obamacare to which Levin refers include the form of community rating, which is merely loosened somewhat by the AHCA. However, the AHCA would impose a 30% penalty for those who fail to enroll while still healthy. This is a poorly designed incentive meant to substitute for Obamacare’s individual mandate, and it is likely to backfire. Levin is clear that this feature could have been avoided by scrapping the old rules and introducing a new form of community rating available only to the continuously insured.

The AHCA also fails to cap the tax benefits of employer-provided coverage, which retains a potential imbalance between the incentives for employer versus individual coverage. Levin believes, however, that some of these shortcomings can be fixed through a negotiation process in either the House or the Senate, if and when the bill goes there.

The CBO’s Report

As it is, the bill was “scored” by the Congressional Budget Office (CBO) with results that are widely viewed as unsatisfactory. The CBO’s report states that the AHCA would reduce the federal budget deficit, but the ugly headline is that, relative to Obamacare, it would cause 24 million people to lose their coverage by 2024. That number is drastically inflated, as Avik Roy demonstrated in his Forbes column this week. Here are the issues laid out by Roy:

  1. The CBO has repeatedly erred by a large margin in its forecasts of Obamacare exchange enrollment, overestimating 2016 enrollment by over 100% as recently as 2014.
  2. The AHCA changes relative to Obamacare are taken from CBO’s 2016 forecast, which still appears to over-predict Obamacare enrollment substantially. Roy estimates that this difference alone would shave at least 7 million off the 24 million loss of coverage quoted by the CBO.
  3. The CBO also assumes that all states will opt to participate in expanded Medicaid going forward. That is highly unlikely, and it inflates CBO’s estimate of the AHCA’s negative impact on coverage by another 3 million individuals, according to Roy.
  4. Going forward, the CBO expects the Obamacare individual mandate to encourage millions more to opt for insurance than would do so under the AHCA. Roy estimates that this assumption adds as much as 9 million to the CBO’s estimate of lost coverage across the individual and employer markets, as well as Medicaid.

Thus, Roy believes the CBO’s estimate of lost coverage for 24 million individuals is too high by about 19 million! And remember, these hypothetical losses are voluntary to the extent that individuals refuse to avail themselves of AHCA tax credits to purchase catastrophic coverage, or to enroll in Medicaid. The latter will be no less generous under the AHCA than it is today. The tax credits are refundable, which means that you qualify regardless of your pre-credit tax liability.
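Roy’s adjustments net out as simple arithmetic. The figures below are as quoted above; the tally itself is mine:

```python
# Netting Roy's adjustments against the CBO's headline coverage-loss figure.
cbo_loss_estimate = 24  # millions losing coverage by 2024, per the CBO

roy_adjustments = {
    "stale 2016 enrollment forecast": 7,      # "at least 7 million"
    "Medicaid expansion assumption": 3,       # "another 3 million"
    "overstated mandate effect": 9,           # "as much as 9 million"
}

overstatement = sum(roy_adjustments.values())    # about 19 million
implied_loss = cbo_loss_estimate - overstatement
print(implied_loss)  # → 5 (million), consistent with "too high by about 19"
```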

Fixes

Despite Roy’s initial skepticism about the AHCA, he thinks it can be fixed, in part by means-testing the tax credits, rather than offering the flat credit in the bill. He also believes the transition away from the individual mandate should be more gradual, allowing more time for markets to bring premiums down, but I find this position rather puzzling given Roy’s skepticism that the mandate has a strong impact on enrollment. Perhaps gradualism would convince the CBO to score the bill more favorably, but that’s a bad reason to make such a change.

It’s impossible to say how the bill will evolve, but certainly improvements can be made. It is also impossible to know whether Phases 2 and 3 will ultimately bring a more complete set of cost-reducing regulatory and competitive reforms. Phase 3, of course, is a political wild card.

Michael Tanner notes a few other advantages to the AHCA. Even the CBO says the cost of health insurance would fall, and the AHCA will bring greater choice to the individual market. It also promises over $1 trillion in tax cuts and lower federal deficits.

Alternatives

The GOP faced alternatives that should have received more consideration, but those alternatives might not be politically viable at this point. Some of them contain features that might be negotiated into the final legislation. Rand Paul’s plan has not attracted many advocates. Paul took the courageous position that there should be no entitlements in a reform plan (i.e., subsidies); instead, he insisted, with liberalized market forces, premium costs would decline sufficiently to allow affordable coverage to be purchased by a broad cross-section of Americans. Paul is obviously unhappy about the widespread support in the GOP for refundable tax credits as a replacement for existing Obamacare subsidies.

John C. Goodman has advocated a much simpler solution: take every federal penny now dedicated to health care and insurance subsidies, including every penny of taxes now avoided via tax deductions on employer-provided coverage, and pay it out to households as a tax credit contingent on the purchase of health insurance or health care expenses. This is essentially the plan put forward by Rep. Pete Sessions and Sen. Bill Cassidy in the Patient Freedom Act, described here. While I admire the simplicity of one program to replace the existing complexities in the federal funding of health care coverage, my objection is that a health care “dividend” of this nature resembles the flat tax credit in the AHCA. Neither is means-tested, amounting to a “Universal Basic Health Insurance Benefit”. Regular readers will recall my recent criticism of the Universal Basic Income, which is the sort of program that smacks of “universal state dependency”. But let’s face it: we’re already in a state of federal health care dependency. In this case, there is no incremental cost to taxpayers because the credit would replace existing outlays and tax expenditures. In that sense, it would eliminate many of the distortions currently embedded in federal health care policy.

A more drastic approach, at this point, is to simply repeal Obamacare, perhaps with a lengthy phase-out, and attempt to replace it later in the hope that support will coalesce around a reasonable set of measures leveraging market forces, and with accommodations for high-risk individuals and the economically disadvantaged. Michael Cannon writes that CBO estimated a simple repeal would increase the number of uninsured by 23 million over ten years, slightly less than the 24 million estimate for the AHCA! Of course, neither of these estimates is likely to be remotely accurate, as both are distorted by the CBO’s rosy assumptions about the future of Obamacare.

Where To Go?

Tanner reminds us that the real alternative to Republican legislation, whatever form it might take, is not a health care utopia. It is Obamacare, and it is collapsing. That plan cannot be effectively reformed by piling on additional subsidies for insurers and consumers; that path leads to a continuing premium spiral. The needed reforms to Obamacare would resemble changes contemplated in some of the GOP proposals. While I cannot endorse the AHCA legislation in its current form, or as a standalone reform, I believe it can be improved, and the later phases of reform we are told to anticipate might ultimately vindicate the approach taken by GOP leadership. I am most skeptical about the promise of subsequent legislation in Phase 3. I’ll have to keep my fingers crossed that by then, the path to additional reforms will be more attractive to Democrats.

Trump Versus the Holocaust Trivializers


George Mason University Law Professor David Bernstein observed this week that many in the American Jewish community are panicked by Donald Trump’s election because they perceive Trump and his followers as anti-Semitic. That perception was seemingly reinforced by recent anti-Semitic acts, such as bomb threats at Jewish Community Centers and the desecration of graves at Jewish cemeteries in St. Louis, MO and Philadelphia, PA. Bernstein, who is Jewish and not a Trump supporter, wrote a piece entitled “The Great Anti-Semitism Panic of 2017“, which appeared in the Volokh Conspiracy blog sponsored by the Washington Post.

Like Bernstein, I’ve seen a number of indignant posts by Jewish friends connecting Trump and anti-Semitism, complete with comparisons to Adolf Hitler. My quick reaction is that such comparisons are not only irresponsible, they are idiotic. The ghastly implication is that Trump might entertain the idea of exterminating Jews, or any other opposition group, and it is complete nonsense.

Taking a step back, perhaps all this is related to Trump’s nationalism and his views on border security. That includes “extreme vetting” of refugees, deportation of illegal immigrants, and even the dubious argument for a border wall. While that’s not about Jews, those policies appeal to certain fringe, racist elements on the extreme right where anti-Semitism is commonplace. However, those policies also appeal to a much broader and diverse audience of voters who harbor anxieties about economic and national security, and who are neither racists nor anti-Semites.

Bernstein takes progressive Jews to task for tying any of this to anti-Semitism on the part of Trump, his Administration, or his broader base of support:

…  the origins of the fear bear only a tangential relationship to the actual Trump campaign. For example, I’ve lost track of how many times Jewish friends and acquaintances in my Facebook feed have asserted, as a matter of settled fact, that Bannon’s website Breitbart News is a white-supremacist, anti-Semitic site. I took the liberty of searching for every article published at Breitbart that has the words Jew, Jewish, Israel or anti-Semitism in it, and can vouch for the fact that the website is not only not anti-Semitic, but often criticizes anti-Semitism (though it is quite ideologically selective in which types of anti-Semitism it chooses to focus on). I’ve invited Bannon’s Facebook critics to actually look at Breitbart and do a similar search on the site, and each has declined, generally suggesting that it would be beneath them to look at such a site, when they already know it’s anti-Semitic.

There is … a general sense among Jews, at least liberal Jews, that Trump’s supporters are significantly more anti-Semitic than the public at large. I have many times asked for empirical evidence that supports this proposition, and have so far come up empty. I don’t rule out the possibility that it’s true, but there doesn’t seem to be any survey or other evidence supporting it. Given that American subgroups with the highest proportions of anti-Semites — African Americans, first-generation Hispanic immigrants, Muslims and high school dropouts — are strong Democratic constituencies (though the latter group appears to have gone narrowly for Trump this time), one certainly can’t simply presume that Trump has a disproportionate number of anti-Semitic supporters.

Bernstein goes on to discuss the hostility to Trump from groups like the Anti-Defamation League (ADL), hostility which he characterizes as essentially opportunistic:

The ADL’s reticent donors are no longer reticent in the age of Trump, with the media reporting that donations have been pouring in since Trump’s victory. It’s therefore hardly in the ADL’s interest to objectively assess the threat from Trump and his supporters. Indeed, I’m almost impressed that an ADL official managed just the other day to link the JCC bomb threats to emboldened white supremacists, even though the only suspect caught so far is an African American leftist.

He also notes the irony that progressive Jews have been shunned by many leftists, who almost uniformly condemn Zionism. Now, progressive Jews hope to renew common cause with those whose political purposes are defined by membership in groups with a history of marginalized treatment, and who now believe they are threatened by Trump. Will they be happy together? Bernstein attests that many Jews privately acknowledge the danger of “changing demographics”:

… which is a euphemism for a growing population of Arab migrants to the United States. Anti-Semitism is rife in the Arab world, with over 80 percent of the public holding strongly anti-Semitic views in many countries.

Some would say that, as a non-Jew, I lack the bona fides to comment on how Jews “should” feel about Donald Trump. I was raised Catholic, but I attended a high school at which over 60% of the student population was Jewish. I was a member of a traditionally Jewish fraternity in college, where I witnessed occasional anti-Semitism from certain members of non-Jewish fraternities, and I felt victimized by it to some degree. My late brother married a Jewish woman, and he was buried according to Jewish custom. I was once stunned by a brief anti-Semitic wisecrack I overheard in the restroom at a community theatre production of the great musical Fiddler On the Roof!

So, I am connected and strongly sympathetic to the Jewish community. I am also well acquainted with white Gentiles who have had much less interaction with Jews. Those individuals span the political spectrum, and there is no doubt that racists and anti-Semites reside at both ends. I will state unequivocally that among this population, I have observed as much racism and denigration of Jews from the left as from the right. It partly reflects anti-Zionism, but there have been leftists in my acquaintance who seem to regard Jews as Shylockian, as greedy moneychangers and crooked lawyers, or as “hopelessly bourgeois”. Jews should not be blind to the hatred that still exists for them in certain quarters on the left, even if it’s easier to pretend that right-wing religious nuts are their only enemies.

Bernstein’s column was met with outrage by some Jewish progressives. In the Jewish Journal, Rob Eshman accused Bernstein of making apologies for Trumpian anti-Semitic behavior. Here is Bernstein’s response, in which he castigates Eshman for distorting both his thesis and the reaction of the Jewish community to Trump. He also notes that Eshman assigns guilt for the recent spate of anti-Semitic acts to Trump supporters where no evidence exists. That implication is a constant refrain from certain Jewish friends on my Facebook news feed. But there is ample evidence of “fake” hate crimes by progressives, as documented last week by Kevin Williamson.

Finally, it is hard to square the idea that Trump and his leadership team (which includes his Jewish son-in-law) are anti-Semitic with other evidence, such as the unequivocal support they have pledged to Israel, and their hard stand on vetting refugees from nations that are avowed enemies of the Jewish people. Yes, Bernstein is well aware of the anti-Semitic, fringe-right elements that have supported Trump, but those are not the sentiments of anyone serving in the administration, including Steve Bannon. The left has become quite blithe about confirming Godwin’s Law, which holds that the longer a political argument continues, the more certain it becomes that someone will be compared to Hitler or the Nazis. Progressive Jews have taken the cue without much thought: the frequent comparisons of Donald Trump to Hitler are awful and are not compatible with healthy discourse. As Stefan Kanfer writes in City Journal in his review of the book “A Tale of Three Cities” (my emphasis added):

… those who persist in comparing Adolf Hitler with any U.S. politician reveal themselves as members of a group just to the side of the Holocaust denier—the Holocaust trivializer. There are no lower categories.

The Taxing Logic of Carbon Cost Guesswork


An article by three prominent economists* in the New York Times this week summarized the Climate Leadership Council’s “Conservative Case for Climate Action”. The “four pillars” of this climate plan include: (1) a revenue-neutral tax on carbon emissions, the proceeds of which are used to fund… (2) quarterly “carbon dividend” payments to all Americans; (3) border tax adjustments to account for carbon emissions and carbon taxes abroad; and (4) elimination of all other regulations on carbon emissions. The “Case” is thus a shift from traditional environmental regulation to a policy based on tax incentives, wrapped around a redistributive universal income mechanism.

I’ll dispense with the latter “feature” by referencing my recent post on the universal basic income: bad idea! The economists advocate for the carbon dividend sincerely, but also perhaps as a political inducement to the left and confused centrists.

The Limits of Our Knowledge

The most interesting aspect of the “Case” is how it demonstrates uncertainty around the wisdom of carbon restrictions of any kind: traditional regulations, market-oriented trading, or tax incentives. Those all involve assumptions about the extent to which carbon emissions should be restricted, and it’s not clear that any one form of restriction is more ham-handed than another. Traditional regulation may restrict output in various ways. For example, standards on fuel efficiency are an indirect way of restricting output. A carbon market, with private trading in assigned “rights” to emit carbon, is more economically efficient in the sense that a tradeoff is involved for any decision having carbon implications at the margin. However, the establishment of a carbon market ultimately means that a limit must be imposed on the total quantity of rights available for trading.

A carbon tax imputes a cost of carbon emissions to society. It also imposes tradeoffs, so it is similar to carbon trading in being more economically efficient than traditional regulation. A producer can attempt to adjust a production process such that it emits less carbon, and the incidence of the tax falls partly on final consumers, who adjust the carbon intensity of their behavior accordingly. For our purposes here, a tax is more illuminating in the sense that we can assess inputs to the cost imputation. Even a cursory examination shows that the cost estimate can vary widely given reasonable differences in the inputs. So, in a sense, a tax helps to reveal the weakness of the case against carbon and the carbon-based rationale for allowing a coercive environmental authority to sclerose the arteries of the market system.

The three economists propose an initial tax of $40 per metric ton of emitted carbon. The basis for that figure is the so-called “social cost of carbon” (SCC), a theoretical construct that is not readily measured. Economists have long subscribed to the theory of social costs, or negative externalities, and to the legitimacy of government action to force cost causers to internalize social costs via corrective taxation. However, the wisdom of allowing the state to intrude upon markets in this way depends on our ability to actually measure specific external costs.

Fatuous Forecasts

The SCC is based on the presumed long-run costs of an incremental ton of carbon in the environment. I do not use the word “presumed” lightly. The $40 estimate subsumes a variety of speculative assumptions about the climate’s response to carbon emissions, the future economic impact of that response, and the rate at which society should be willing to trade those future costs against present costs. The figure only counts costs, without considering the huge potential benefits of warming, should it actually occur.

Ronald Bailey at Reason illustrates the many controversies surrounding the calculation of the SCC. He notes the tremendous uncertainty surrounding an Obama Administration estimate of $36 a ton in 2007 dollars. It used an outdated climate sensitivity figure much higher than more recent estimates, which would bring the calculated SCC down to just $16.

A discount rate of 3% was applied to projected future carbon costs to produce an SCC in present value terms. The idea is that today’s “collective” would be indifferent between paying this cost today and suffering the burden of future costs inflicted by carbon emissions. This presumes that 3% is the expected return society can earn for the future by investing resources today. Unfortunately, the SCC is tremendously sensitive to the discount rate. Together with the more realistic estimate of climate sensitivity, a discount rate of 7% (the Office of Management and Budget’s regulatory guidance) would actually make the SCC negative!
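The sensitivity is easy to demonstrate with a stylized present-value calculation. The $100 damage figure and 75-year horizon below are my own hypotheticals, not numbers from Bailey; only the two discount rates come from the discussion above:

```python
# Present value of a fixed future climate damage under different discount
# rates. The damage amount and horizon are hypothetical; the point is the
# extreme sensitivity of the result to the rate.
def present_value(future_cost, rate, years):
    """Discount a cost incurred `years` from now back to today."""
    return future_cost / (1.0 + rate) ** years

damage, horizon = 100.0, 75  # $100 of assumed damage, 75 years out

pv_low = present_value(damage, 0.03, horizon)   # roughly $11
pv_high = present_value(damage, 0.07, horizon)  # well under $1
print(f"3%: ${pv_low:.2f}  7%: ${pv_high:.2f}")
```

Moving from 3% to 7% shrinks the present value by more than an order of magnitude, which is why the choice of discount rate effectively decides the policy question before any climate science enters the calculation.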

Another U.S. regulatory standard, according to Bailey, is that calculations of social cost are confined to costs borne domestically. However, the SCC attempts to encompass global costs, inflating the estimate by a factor of 4 to 14. The justification for the global calculation is the apparent righteousness of owning up to the costs we cause as a nation, as well as the example it sets for other countries in crafting their own carbon policies. Unfortunately, it also magnifies the great uncertainties inherent in this messy calculation.

Lack of Evidence

This guest essay on the Watts Up With That? web site by Paul Driessen and Roger Bezdek takes a less gracious view of the SCC than Bailey, if that is possible. As they note, in addition to climate sensitivity, the SCC must come to grips with the challenge of measuring the economic damage caused by each degree of warming. This includes factors far into the future that simply cannot be projected with any confidence. We are expected to place faith in distant cost estimates of heat-related deaths, widespread crop failures, severe storm damage, coastal flooding, and many other calamities that are little more than scare stories. For example, the widely reported connection between atmospheric carbon concentration and severe weather is demonstrably false, as are reports that Pacific islands have been swallowed by the sea due to global warming.

Ignoring the Benefits

The SCC makes no allowance for the real benefits of burning fossil fuels, which have been a powerful engine of economic growth and still hold the potential to lift the underdeveloped world out of poverty and environmental distress. The benefits of carbon also include fewer cold-related deaths, higher agricultural output, and a greener environment. It isn’t surprising that these benefits are ignored in the SCC calculation, as any recognition of that promise would undermine the narrative that fossil fuels are unambiguously evil. Indeed, an effort to calculate only the net costs of carbon emissions would likely expose the entire exercise as a sham.

The “four pillars” of the Climate Leadership Council’s case for climate action rest upon an incredibly flimsy foundation. Like anthropogenic climate change itself, appropriate measurement of a social cost of carbon is an unsettled issue. Its magnitude is far too uncertain to use as a tool of public policy: as either a tax or a rationale for carbon regulation of any kind. And let’s face it, taxation and regulation are coercive acts that had better be undertaken with respect for the distortions they create. In this case, it’s not even clear that carbon emissions should be treated as an external cost in many applications, as opposed to an external benefit. So much for the corrective wisdom of authorities. The government is not well-equipped to centrally plan the economy, let alone the environment.

  • The three economists are Martin Feldstein, Ted Halstead and Greg Mankiw.

National Endowment for Rich Farts

Wailing has begun over the possible defunding and demise of the National Endowment for the Arts (NEA). How could those cretins propose to eliminate an institution so very critical to promoting artistic expression? If that’s your reaction, you haven’t thought much about the main beneficiaries of federal sinkholes like the NEA. Granted, at $146 million annually, it is not a major federal budget item, but I’d rather not stoop to defend a lousy program because it’s small. So what’s my beef with the NEA, you ask? Read on.

First, any implication that the NEA is the lifeblood of the arts is laughable. No, the arts won’t die if federal funding is denied. Jeff Jacoby quotes figures suggesting that grants from the NEA represented less than 1% of all support for the arts and culture in the U.S. in 2015. Great art was created prior to the establishment of the NEA in 1965. Without the NEA, such bungles as “Piss Christ” would have met with less acclaim. As such a minor funding vehicle, eliminating the NEA won’t make much difference to artists, but it will end a subsidy for wealthy patrons, who can and do provide support for worthy projects, but also derive essentially private benefits from the federal arts spigot.

A large share of NEA grant money goes to non-profit organizations that are already subsidized to the extent that they are not taxed. (Let’s face it: the term “non-profit” itself is often a term of art.) Large arts organizations, which receive a significant share of NEA grants, often have highly-paid administrators and sumptuous facilities. Contributions to those organizations are tax-deductible for the donors. And few of those organizations provide art to the public for free or at a discount. Indeed, as noted at the last link, they often charge significant prices for attendance, and their audiences include a disproportionate percentage of high-income patrons.

Lawrence Reed argues persuasively that government need not subsidize the arts in an article in his series on the Cliches of Progressivism. Here are the highlights:

  • “Government funding of the arts… carries with it all the downsides of dependence on politics.
  • Claims that arts spending is magically “multiplied” are specious and usually self-serving, and never look at alternative uses of the same money.
  • Culture arises naturally and spontaneously among people who choose to interact with each other. Art is part of that, but it also competes with all sorts of other things people choose to do with their time and money.
  • If art is truly important, then the last thing we should want to do is politicize it or divert it toward those things that people with power think we should see or hear.”

Reed’s comment regarding “multipliers” might need some explanation in this context. The NEA’s defenders often claim that each dollar of NEA grant money results in multiple additional grants from other sources, but there is absolutely no evidence to support this claim except for a requirement that NEA grants be matched at the state level (not to mention a requirement for a state-level arts agency). Obviously, that represents another cost to taxpayers. It is quite possible, in fact, that the NEA and matching state grants act as substitutes for, and depress, private arts giving. See this piece in Forbes for more background. This NBER research utilized a large panel data set on individual charities and found only mixed support for the proposition that government grants encourage private contributions. In fact, the estimated effect was ambiguous for individual categories of charitable giving (which did not explicitly address the arts as a category). In any case, a positive cross-sectional effect of government grants on private giving for individual charities is consistent with a negative effect on other charities that do not receive public grants.

In a 20-year-old report from the Heritage Foundation, Stuart Butler offered a list of reasons to defund the NEA, which have held up well. Here, I provide eight that seem relevant:

  1. The arts will have more than enough support without the NEA: See above.
  2. Welfare for cultural elitists: See above. NEA grants fund a number of large and decidedly elite organizations, but the agency would have you believe that it runs a veritable welfare program for the arts. That is a huge distortion. There is no question that the distribution of patrons of these organizations skews to the wealthy.
  3. Discourages charitable gifts to the arts: See above. Is the award of an NEA grant the equivalent of establishing a credit record for an arts organization? This might hold up for a few small organizations with projects the NEA has funded, but again, the support for this proposition is anecdotal and self-serving, and the numbers are small. And is there an implied stain on the legitimacy of any organization unable to win such a grant?
  4. Lowers the quality of American art: Committee decisions and central planning are not conducive to the spirit of creativity. Public institutions are often guided by political agendas, and government-sanctioned art stands in sharp contradiction to the ideal of free expression. Butler quotes Ralph Waldo Emerson: “Beauty will not come at the call of the legislature…. It will come, as always, unannounced, and spring up between the feet of brave and earnest men.”
  5. Funds pornography: This is not my hot button… it’s an issue only to the extent that public funds should not be used for purposes that are only flimsily in the public interest and that many taxpayers find morally repugnant.
  6. Promotes politically correct art: See #4 above. The merits are then judged on the basis of criteria like race, ethnicity, and gender identity, not the quality of the art itself.
  7. Wastes resources: Butler offers a few examples of the waste at the NEA, a shortcoming common to all bureaucracies. The NEA funds organizations that behave as non-profit cronyists, engaging in lobbying efforts for more support. Butler also cites evidence that recipients of government grants in the UK hire more administrative staff than non-recipients, and tend not to reduce ticket prices.
  8. Funding the NEA disturbs the U.S. tradition of limited government: I suppose this goes without saying….

The federal government in the U.S. was granted a set of enumerated powers in the Constitution, and promoting the arts was not one of them. It wasn’t as if the subject didn’t come up at the Constitutional Convention. It did, and it was voted down. Today, entrenched interests at organizations like the NEA and National Public Radio distort the character of the constituencies they serve. In reality, those constituencies are heavily concentrated among the cultural and economic elite. The NEA and NPR also promote the fiction that they are all that stands between the public and a bleak, artless dystopia. Give them credit for creating a fantasy about which the political left readily suspends disbelief.