Imprecision and Unsettled Science

Last week I mentioned some of the inherent upward biases in the earth’s more recent surface temperature record. Measuring a “global” air temperature at the surface is an enormously complex task, requiring the aggregation of measurements taken using different methods and instruments (land stations, buoys, water buckets, ship water intakes, different kinds of thermometers) at points that are unevenly distributed across latitudes, longitudes, altitudes, and environments (sea, forest, mountain, and urban). Those measurements must be extrapolated to surrounding areas that are usually large and environmentally diverse. The task is made all the more difficult by the changing representation of measurements taken at these points, and changes in the environments at those points over time (e.g., urbanization). The spatial distribution of reports may change systematically and unsystematically with the time of day (especially onboard ships at sea).

The precision with which anything can be measured depends on the instrument used. Beyond that, there is often natural variation in the thing being measured. Some thermometers are better than others, and the quality of these instruments has varied tremendously over the roughly 165-year history of recorded land temperatures. The temperature itself at any location is subject to variation as the air shifts, but temperature readings are like snapshots taken at points in time, and may not be representative of areas nearby. In fact, the number of land weather stations used in constructing global temperatures has declined drastically since the 1970s, which implies an increasing error in approximating temperatures within each expanding area of coverage.

The point is that a statistical range of variation exists around each temperature measurement, and the vagaries of the aggregation process introduce additional error. David Henderson and Charles Hooper discuss the handling of temperature measurement errors in aggregation and in discussions of climate change. The upward trend in the “global” surface temperature between 1856 and 2004 was about 0.8° C, but a 95% confidence interval around that change is ±0.98° C. (If anything, that interval is probably too narrow, given the sketchiness of the early records.) In other words, from a statistical perspective, one cannot reject the hypothesis that the global surface temperature was unchanged over the full period.
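
To make the statistical point concrete, here is a minimal sketch in Python using only the two figures just quoted; the 1.96 multiplier assumes the usual normal approximation:

```python
# Implied significance test for the 1856-2004 warming trend,
# using only the two figures quoted above.
trend = 0.8           # estimated change, degrees C
ci_half_width = 0.98  # 95% confidence interval half-width, degrees C

# Under normality, a 95% CI half-width equals 1.96 standard errors.
std_err = ci_half_width / 1.96
z_score = trend / std_err

print(f"z = {z_score:.2f}")                         # about 1.60, below 1.96
print("Reject 'no change'?", abs(z_score) >= 1.96)  # False
```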

Henderson and Hooper make some other salient points: the energy impulse from carbon forcings is negligible relative to the massive impact of variations in solar energy, and great uncertainty surrounds the behavior of cloud formation. It’s little wonder that climate models relying on a carbon-forcing impact have erred so widely and consistently.

In addition to reinforcing the difficulty of measuring surface temperatures and modeling the climate, the implication of the Henderson and Hooper article is that policy should not be guided by measurements and models subject to so much uncertainty and such minor impulses or “signals”. The sheer cost of abating carbon emissions is huge, though some alternative means of doing so are better than others. Costs increase with the degree of abatement (or of replacement with low-carbon alternatives), and I suspect that the incremental benefit decreases. Strict limits on carbon emissions reduce economic output. On a broad scale, that would impose a sacrifice of economic development and incomes in the non-industrialized world, not to mention among low-income minorities in the developed world. One well-known estimate by William Nordhaus involved a 90% reduction in world carbon emissions by 2050. He calculated a total long-run cost of between $17 trillion and $22 trillion. Annually, the cost was about 3.5% of world GDP. The climate model Nordhaus used suggested that the reduction in global temperatures would be between 1.3º and 1.6º C, but in view of the foregoing, that range is highly speculative and likely an extreme exaggeration. And note the narrow width of that “confidence interval”. It is not a confidence interval in the usual sense; it is a “stab” at the uncertainty in a forecast of something many years hence. Nordhaus could not possibly have considered all sources of uncertainty in arriving at that range of temperature change, least of all the errors in measuring global temperature to begin with.
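
As a quick back-of-the-envelope, a sketch dividing Nordhaus’s cost figures by his projected temperature reduction (using only the numbers quoted above) shows the implied price of each degree of avoided warming:

```python
# Implied cost per degree C of avoided warming, from Nordhaus's figures above.
total_cost_low, total_cost_high = 17e12, 22e12  # dollars, long-run total
cooling_low, cooling_high = 1.3, 1.6            # degrees C, per his model

cheapest = total_cost_low / cooling_high   # best case
costliest = total_cost_high / cooling_low  # worst case
print(f"${cheapest/1e12:.1f} trillion to ${costliest/1e12:.1f} trillion per degree C")
# roughly $10.6 to $16.9 trillion per degree -- and that takes the
# speculative temperature range at face value
```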

Climate change activists would do well to spend their Earth Day educating themselves about the facts of surface temperature measurement. Their usual prescription is to extract resources and coercively deny future economic gains in exchange for steps that might or might not solve a problem they insist is severe. The realities are that the “global temperature” is itself subject to great uncertainty, and its long-term trend over the historical record cannot be distinguished statistically from zero. In terms of impacting the climate, natural forces are much more powerful than carbon forcings. And the models on which activists depend are so rudimentary, and so error prone and biased historically, that taking your money to solve the problem implied by their forecasts is utter foolishness.

Better Bids and No Bumpkins

United Airlines’ mistreatment of a passenger last week in Chicago had nothing to do with overbooking, but commentary on the issue of overbooking is suddenly all the rage. The fiasco in Chicago began when four United employees arrived at the gate after a flight to Louisville had boarded. The flight was not overbooked, just full, but the employees needed to get to Louisville. United decided to “bump” four passengers to clear seats for the employees. They used an algorithm to select four passengers to be bumped based on factors like lowest-fare-paid and latest purchase. The four passengers were offered vouchers for a later flight and a free hotel night in Chicago. Three of the four agreed, but the fourth refused to budge. United enlisted the help of Chicago airport security officers, who dragged the unwilling victim off the flight, bloodying him in the process. It was a terrible day for United’s public relations, and the airline will probably end up paying an expensive out-of-court settlement to the mistreated passenger.
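
United has not published its selection logic, so the following is purely a hypothetical sketch of how such an algorithm might rank passengers using the two factors mentioned above; the tie-breaking rule and all data are my own assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Passenger:
    name: str
    fare_paid: float     # dollars
    purchased: datetime  # when the ticket was bought

def bump_order(passengers, seats_needed):
    """Hypothetical ranking for involuntary bumps: lowest fare first,
    with later purchase breaking ties."""
    ranked = sorted(passengers,
                    key=lambda p: (p.fare_paid, -p.purchased.timestamp()))
    return ranked[:seats_needed]

roster = [
    Passenger("A", 420.00, datetime(2017, 3, 1)),
    Passenger("B", 189.00, datetime(2017, 4, 7)),
    Passenger("C", 189.00, datetime(2017, 4, 2)),
    Passenger("D", 310.00, datetime(2017, 2, 14)),
]
print([p.name for p in bump_order(roster, 2)])  # ['B', 'C']
```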

Putting the unfortunate Chicago affair aside, is overbooking a big problem? Airlines always have cancellations and no-shows, so they overbook in order to keep seats filled. That means higher revenue and lower costs on a per-passenger basis. Passengers are rarely bumped from flights involuntarily: about 0.005% of passengers in the fourth quarter of 2016, according to the U.S. Department of Transportation. “Voluntarily denied boardings” are much higher: about 0.06%. Both of these figures seem remarkably low as “error rates”, in a manner of speaking.

Issues like the one in Chicago do not arise under normal circumstances because “bumps” are usually resolved before boarding takes place, albeit not always to everyone’s satisfaction. Still, if airlines were permitted (and willing) to bid sufficiently high rates of compensation to bumped ticket-holders, there would be no controversy at all. All denied boardings would be voluntary. There are a few other complexities surrounding the rules for compensation, which depend on estimates of the extra time necessary for a bumped traveler to reach their final destination. If less than an extra hour, for example, then no compensation is required. In other circumstances, the maximum compensation level allowed by the government is $1,300. These limits can create an impasse if a passenger is unwilling to accept the offer (or non-offer when only an hour is at stake). The only way out for the airline, in that case, is an outright taking of the passenger’s boarding rights. Of course, this possibility is undoubtedly in the airline’s “fine print” at the time of the original purchase.

No cap on a bumped traveler’s compensation was anticipated when economist Julian Simon first proposed such a scheme in 1968:

The solution is simple. All that need happen when there is overbooking is that an airline agent distributes among the ticket-holders an envelope and a bid form, instructing each person to write down the lowest sum of money he is willing to accept in return for waiting for the next flight. The lowest bidder is paid in cash and given a ticket for the next flight. All other passengers board the plane and complete the flight to their destination.
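
Simon’s scheme is a sealed-bid reverse auction. Here is a minimal sketch in Python; generalizing to several open seats is my extension of Simon’s single-seat example, and winners are paid their own bids:

```python
def simon_auction(bids, seats_needed):
    """Sealed-bid reverse auction for voluntary bumps.
    bids maps each ticket-holder to the lowest payout he will accept."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])  # cheapest volunteers first
    return dict(ranked[:seats_needed])                   # each winner paid his bid

bids = {"Ann": 250, "Bo": 800, "Cal": 400, "Dee": 1500}
print(simon_auction(bids, 2))  # {'Ann': 250, 'Cal': 400}
```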

Today’s system is a simplified version of Simon’s suggestion, and somewhat bastardized, given the federal caps on compensation. If the caps were eliminated without other offsetting rule changes, would the airlines raise their bids sufficiently to eliminate most involuntary bumps? There would certainly be pressure to do so. Of course, the airlines already get to keep the fares paid on no-shows if they are non-refundable tickets.

John Cochrane makes another suggestion: limit ticket sales to the number of seats on the plane and allow a secondary market in tickets to exist, just as resale markets exist for concert and sports tickets. Bumps would be a thing of the past, or at least they would all be voluntary and arranged for mutual gain by the buyers and sellers. Some say the peculiarities of the airline industry mean that the airlines themselves would have to manage any resale market in their own tickets (see the comments on Cochrane’s post). Those peculiarities include security issues; tickets with special accommodations for disabilities, meals, or children; the transfer of frequent flier miles along with the tickets; and senior discounts.

Conceivably, trades on such a market could take place right up to the moment before the doors are closed on the plane. Buyers would still have to go through security, however, and a valid boarding pass is needed to get through security. That might limit the ability of the market to clear in the final moments before departure: potential buyers would simply not be on hand. Only those already through security, on layovers, or attempting to rebook on the concourse could participate without changes in the security rules. Perhaps this gap could be minimized if last-minute buyers qualified for TSA pre-check. Also, with the airline’s cooperation, electronic boarding passes would have to be made changeable so that the new passenger’s name matches his or her identification. Clearly, the airlines would have to be active participants in arranging these trades, but a third-party platform for conducting trades is not out of the question.

Could other concerns about secondary trading be resolved on a third-party platform? Probably, but again, solutions would require participation by the airlines. Trading miles along with the ticket could be made optional (after all, the miles would have a market value), but any transfer of miles would have to be recorded by the airline. The tickets themselves could trade just as they were sold originally by the airline, whether the special accommodations are still necessary or not. The transfer of a discounted ticket might obligate the buyer to pay the airline a sum equal to the discount unless the buyer qualified under the same discount program. All of these problems could be resolved.

Would the airlines want a secondary market in their tickets? Probably not. If there are gains to be made on resale, they would rather capture as much of those gains as they possibly can. The federal caps on compensation to bumped fliers give the airlines a break in that regard, and those caps should be eliminated in the interests of consumer welfare. Let’s face it: the airlines know that a seat on an overbooked flight is a scarce resource; the owner (the original ticket buyer) should be paid fair market value if the airline wants to take the ticket for someone else. Without caps, airlines would have to increase their bids until the market clears, which means that fliers would never be bumped involuntarily. A secondary market in tickets, however, would obviate the practice of overbooking and allow fliers to capture the gain in exchange for surrendering their tickets. Once purchased, a ticket belongs to its owner.

Playing Pretend Science Over Cocktails

It’s a great irony that our educated and affluent classes have been largely zombified on the subject of climate change. Their brainwashing by the mainstream media has been so effective that these individuals are unwilling to consider more nuanced discussions of the consequences of higher atmospheric carbon concentrations, or any scientific evidence to suggest contrary views. I recently attended a party at which I witnessed several exchanges on the topic. It was apparent that these individuals are conditioned to accept a set of premises while lacking real familiarity with supporting evidence. Except in one brief instance, I avoided engaging on the topic, despite my bemusement. After all, I was there to party, and I did!

The zombie alarmists express their views within a self-reinforcing echo chamber, reacting to each other’s virtue signals with knowing sarcasm. They also seem eager to avoid any “denialist” stigma associated with a contrary view, so there is a sinister undercurrent to the whole dynamic. These individuals are incapable of citing real sources and evidence; they cite anecdotes or general “news-say” at best. They confuse local weather with climate change. Most of them haven’t the faintest idea how to find real research support for their position, even with powerful search engines at their disposal. Of course, the search engines themselves are programmed to prioritize the very media outlets that profit from climate scare-mongering. Catastrophe sells! Those media outlets, in turn, are eager to quote the views of researchers in government who profit from alarmism in the form of expanding programs and regulatory authority, as well as researchers outside of government who profit from government grant-making authority.

The Con in the “Consensus”

Climate alarmists take assurance in their position by repeating the false claim that 97% of climate scientists believe that human activity is the primary cause of warming global temperatures. The basis for this strong assertion comes from an academic paper that reviewed other papers, the selection of which was subject to bias. The 97% figure was not a share of “scientists”. It was the share of the selected papers stating agreement with the anthropogenic global warming (AGW) hypothesis. And that figure is subject to other doubts, in addition to the selection bias noted above: the categorization into agree/disagree groups was made by “researchers” who were, in fact, environmental activists, and who counted several papers written by so-called “skeptics” among the set agreeing with the strong AGW hypothesis. So the “97% of scientists” claim is a distortion of the actual findings, and the findings themselves are subject to severe methodological shortcomings. On the other hand, there are a number of widely-recognized, natural reasons for climate change, as documented in this note on 240 papers published over just the first six months of 2016.

Data Integrity

It’s rare to meet a climate alarmist with any knowledge of how temperature data is actually collected. What exactly is the “global temperature”, and how can it be measured? It is a difficult undertaking, and it wasn’t until 1979 that it could be done with any reliability. According to Roy Spencer, that’s when satellite equipment began measuring:

… the natural microwave thermal emissions from oxygen in the atmosphere. The intensity of the signals these microwave radiometers measure at different microwave frequencies is directly proportional to the temperature of different, deep layers of the atmosphere.

Prior to the deployment of weather satellites, and starting around 1850, temperature records came only from surface temperature readings. These are taken at weather stations on land and collected at sea, and they are subject to quality issues that are generally unappreciated. Weather stations are unevenly distributed and they come and go over time; many of them produce readings that are increasingly biased upward by urbanization. Sea surface temperatures are collected in different ways with varying implications for temperature trends. Aggregating these records over time and geography is a hazardous undertaking, and these records are, unfortunately, the most vulnerable to manipulation.

The urbanization bias in surface temperatures is significant. According to this paper by Ross McKitrick, the number of weather stations counted in the three major global temperature series declined by more than 4,500 since the 1970s (over 75%), and most of those losses were rural stations. From McKitrick’s abstract:

“The collapse of the sample size has increased the relative fraction of data coming from airports to about 50% (up from about 30% in the late 1970s). It has also reduced the average latitude of source data and removed relatively more high altitude monitoring sites. Oceanic data are based on sea surface temperature (SST) instead of marine air temperature (MAT)…. Ship-based readings changed over the 20th century from bucket-and-thermometer to engine-intake methods, leading to a warm bias as the new readings displaced the old.”

Think about that the next time you hear about temperature records, especially NOAA reports on a “new warmest month on record”.

Data Manipulation

It’s rare to find alarmists with any awareness of the scandal at East Anglia University, which involved data falsification by prominent members of the climate change “establishment”. That scandal also shed light on corruption of the peer-review process in climate research, including a bias against publishing work skeptical of the accepted AGW narrative. Fewer still are aware of a very recent scandal involving manipulation of temperature data at NOAA, in which retroactive adjustments were applied in an effort to make the past look cooler and more recent temperatures warmer. A FOIA request is currently outstanding for communications between the Obama White House and a key scientist involved in the scandal. Here are Judith Curry’s thoughts on the NOAA temperature manipulation.

Think about all that the next time you hear about temperature records, especially NOAA reports on a “new warmest month on record”.

Other Warming Whoppers

Last week on social media, I noticed a woman emoting about the way hurricanes used to frighten her late mother. She was sharing an article about the presumed negative psychological effects that climate change is having on the general public. The bogus premises: that we are experiencing an increase in the frequency and severity of storms, that climate change is causing those storms, and that people are scared to death about it! Just to be clear, I don’t think I’ve heard much in the way of real panic, and real estate prices and investment flows don’t seem to be under any real pressure. In fact, the frequency and severity of extreme weather have been in decline even as atmospheric carbon concentrations have increased over the past 50 years.

I heard another laughable claim at the party: that maps are showing great areas of the globe becoming increasingly dry, mostly at low latitudes. I believe the phrase “frying” was used. That is patently false, but I believe it’s another case in which climate alarmists have confused model forecasts with fact.

The prospect of rising sea levels is another matter that concerns alarmists, who always fail to note that sea levels have been increasing for a very long time, well before carbon concentrations could have had any impact. In fact, the sea level increases in the past few centuries are a rebound from lows during the Little Ice Age, and levels are now back to where the seas were during the Medieval Warm Period. But even those fluctuations look minor by comparison to the increases in sea levels that occurred over 8,000 years ago. Sea levels are rising at a very slow rate today, so slowly that coastal construction is proceeding as if there is little if any threat to new investments. While some of this activity may be subsidized by governments through cheap flood insurance, real money is on the line, and that probably represents a better forecast of future coastal flooding than any academic study can provide.

Old Ideas Die Hard

Two enduring features of the climate debate are 1) the extent to which so-called “carbon forcing” models of climate change have erred in over-predicting global temperatures, and 2) the extent to which those errors have gone unnoticed by the media and the public. The models have been plagued by a number of issues, not least that the climate is not a simple system. One basic shortcoming, however, has to do with the treatment of feedback effects: the alarmist community asserts that feedbacks are positive, on balance, magnifying the warming impact of a given carbon forcing. In fact, the opposite seems to be true: second-order responses due to cloud cover, water vapor, and circulation effects are negative, on balance, at least partially offsetting the initial forcing.
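
The arithmetic of feedbacks is simple enough to sketch. In the standard linear-feedback formulation, a net feedback fraction f scales the no-feedback response: positive f amplifies it, negative f damps it. The numbers below are illustrative only:

```python
# Linear feedback algebra: delta_T = delta_T0 / (1 - f).
# delta_T0 is the no-feedback response; f is the net feedback fraction.
delta_T0 = 1.1  # degrees C, a commonly cited no-feedback response to doubled CO2

for f in (0.5, 0.0, -0.5):  # positive, zero, and negative net feedback
    print(f"f = {f:+.1f}  ->  delta_T = {delta_T0 / (1 - f):.2f} deg C")
# f = +0.5 amplifies the response to 2.20; f = -0.5 damps it to 0.73
```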

Fifty Years Ain’t History

One other amazing thing about the alarmist position is an insistence that the past 50 years should be taken as a permanent trend. On a global scale, our surface temperature records are sketchy enough today, but recorded history is limited to the very recent past. There are recognized methods for estimating temperatures in the more distant past using temperature proxies: measurements of other natural phenomena that are temperature-sensitive, such as ice cores, tree rings, and matter within successive sediment layers, including pollen and other organic compounds.

The proxy data has been used to create temperature estimates into the distant past. A basic finding is that the world has been this warm before, and even warmer, as recently as 1,000 years ago. This demonstrates the wide range of natural variation in the climate, and today’s global temperatures are well within that range. At the party I mentioned earlier, I was amused to hear a friend say, “Ya’ know, Greenland isn’t supposed to be green”, and he meant it! He is apparently unaware that Greenland was given that name by Viking settlers around 1000 AD, who inhabited the island during a warm spell lasting several hundred years… until it got too cold!

Carbon Is Not Poison

The alarmists take the position that carbon emissions are unequivocally bad for people and the planet. They treat carbon as if it were the equivalent of poisonous air pollution. The popular press often illustrates carbon emissions as black smoke pouring from industrial smokestacks, but like oxygen, carbon dioxide is a colorless gas, and one upon which life itself depends.

Our planet’s vegetation thrives on carbon dioxide, and increasing carbon concentrations are promoting a “greening” of the earth. Crop yields are increasing as a result; reforestation is proceeding as well. The enhanced vegetation provides an element of climate feedback against carbon “forcings” by serving as a carbon sink, absorbing increasing amounts of carbon and converting it to oxygen.

Matt Ridley has noted one of the worst consequences of the alarmists’ carbon panic and its influence on public policy: the vast misallocation of resources toward carbon reduction, much of it dedicated to subsidies for technologies that cannot pass economic muster. Consider that those resources could be devoted to many other worthwhile purposes, like bringing electric power to third-world families who otherwise must burn dung inside their huts for heat; for that matter, the resources could simply be left under the control of taxpayers, who can put them to the uses they value most highly. The regulatory burdens imposed by these policies on carbon-intensive industries represent lost output that can never be recouped, and all in the service of goals of questionable value. And of course, the anti-carbon efforts almost certainly divert resources to the detriment of more immediate environmental concerns, such as mitigating truly toxic industrial pollutants.

The priorities underlying the alarm over climate change are severely misguided. The public should demand better evidence than consistently erroneous model predictions and manipulated climate data. Unfortunately, a media eager for drama and statism is complicit in the misleading narrative.

FYI: The cartoon at the top of this post refers to the climate blog climateaudit.org. The site’s blogger Steve McIntyre did much to debunk the “hockey stick” depiction of global temperature history, though it seems to live on in the minds of climate alarmists. McIntyre appears to be on an extended hiatus from the blog.

Courts and Their Administrative Masters

Supreme Court nominee Neil Gorsuch says the judicial branch should not be obliged to defer to government agencies within the executive branch in interpreting law. Gorsuch’s opinion, however, is contrary to an established principle guiding courts since the 1984 Supreme Court ruling in Chevron U.S.A. v. Natural Resources Defense Council. Under what is known as Chevron deference, courts ask only whether the administrative agency’s interpretation of the law is “reasonable”, even if other “reasonable” interpretations are possible. This gets particularly thorny when the original legislation is ambiguous with respect to a certain point. Gorsuch believes the Chevron standard subverts the intent of Constitutional separation of powers and judicial authority, a point of great importance in an age of explosive growth in administrative rule-making at the federal level.

Ilya Somin offers a defense of Gorsuch’s position on Chevron deference, stating that it violates the text of the Constitution authorizing the judiciary to decide matters of legal dispute without ceding power to the executive branch. The agencies, for their part, seem to be adopting increasingly expansive views of their authority:

“Some scholars argue that in many situations, agencies are not so much interpreting law, but actually making it by issuing regulations that often have only a tenuous basis in congressional enactments. When that happens, Chevron deference allows the executive to usurp the power of Congress as well as that of the judiciary.”

Jonathan Adler quotes a recent decision by U.S. Appeals Court Judge Kent Jordan in which he expresses skepticism regarding the wisdom of Chevron deference:

Deference to agencies strengthens the executive branch not only in a particular dispute under judicial review; it tends to the permanent expansion of the administrative state. Even if some in Congress want to rein an agency in, doing so is very difficult because of judicial deference to agency action. Moreover, the Constitutional requirements of bicameralism and presentment (along with the President’s veto power), which were intended as a brake on the federal government, being ‘designed to protect the liberties of the people,’ are instead, because of Chevron, ‘veto gates’ that make any legislative effort to curtail agency overreach a daunting task.

In short, Chevron ‘permit[s] executive bureaucracies to swallow huge amounts of core judicial and legislative power and concentrate federal power in a way that seems more than a little difficult to square with the Constitution of the [F]ramers’ design.’

The unchecked expansion of administrative control is a real threat to the stability of our system of government, our liberty, and the health of our economic system. It imposes tremendous compliance costs on society and often violates individual property rights. Regulatory actions are often taken without performing a proper cost-benefit analysis, and the decisions of regulators may be challenged initially only within a separate judicial system in which courts are run by the agencies themselves! I covered this point in more detail one year ago in “Hamburger Nation: An Administrative Nightmare“, based on Philip Hamburger’s book “Is Administrative Law Unlawful?“.

Clyde Wayne Crews of the Competitive Enterprise Institute gives further perspective on the regulatory-state-gone-wild in “Mapping Washington’s Lawlessness: An Inventory of Regulatory Dark Matter“. He mentions some disturbing tendencies that may go beyond the implementation of legislative intent: agencies sometimes choose to wholly ignore some aspects of legislation; agencies tend to apply pressure on regulated entities on the basis of interpretations that stretch the meaning of such enabling legislation as may exist; and as if the exercise of extra-legislative power were not enough, administrative actions have a frequent tendency to subvert the price mechanism in private markets, disrupting the flow of accurate information about resource-scarcity and the operation of incentives that give markets their great advantages. All of these behaviors fit Crews’ description of “regulatory dark matter.”

Chevron deference represents an unforced surrender by the judicial branch to the exercise of power by the executive. As Judge Jordan notes in additional quotes provided by Adler at the link above, this does not deny the usefulness or importance of an agency’s specialized expertise. Nevertheless, the courts should not abdicate their role in reviewing the evidence an agency develops to support an action, or the reasonableness of the agency’s application of that evidence relative to alternative courses of action. Nor should the courts abdicate their role in ruling on the law itself. Judge Gorsuch is right: Chevron deference should be re-evaluated by the courts.

Benefit Mandates Bar Interstate Competition

The lack of interstate competition in health insurance does not benefit consumers, but promoting that kind of competition requires steps that are not widely appreciated. Most of those steps must take place at the state level. In fact, it is not well known that it is already legal for states to jointly create interstate “compacts” under Obamacare, though none have done so.

The chief problem is that states regulate insurance carriers and the policies they offer in a variety of ways. Coverage mandates vary from state to state, as do rules governing the coverage of pre-existing conditions, renewability, dependents, costs, and risk rating. John Seiler, writing at the Foundation for Economic Education, offers a great perspective on the fractured character of state regulations. Incumbent insurers within a state have natural advantages due to their existing relationships with local providers. Between the difficulty of forming a new network and the costs of customizing policies and obtaining approval in multiple states, there are significant barriers to entry at state lines.

Federalism is a principle I often support, but state benefit mandates and other regulations are perverse examples because they restrict the otherwise voluntary and victimless choices available to a state’s consumers. Well, victimless except perhaps for in-state monopolists and their cronyist protectors in state government. Many powers are reserved to states under the Constitution, while the powers of the federal government are strictly limited. That’s well and good unless state governments infringe on the rights of individuals protected by the Constitution. In particular, the Commerce Clause prohibits state governments from obstructing the flow of interstate commerce.

Here is a bit of history surrounding the evolution of state versus federal control over insurance markets, as told by Pennsylvania Insurance Commissioner Teresa Miller (as quoted by reporter Steve Esack):

Since the 1800s, the U.S. Supreme Court held individual states, not Congress, had the power to regulate insurance companies. The high court overturned that precedent, however, in a 1944 ruling, United States v. South-Eastern Underwriters, that said insurance sales constituted interstate trade and Congress could regulate insurance under the U.S. Constitution’s Commerce Clause.

But states cried foul. In response, Congress passed and President Harry S. Truman in 1945 signed the McCarran-Ferguson Act to grant a limited anti-trust provision so states could keep regulating insurance carriers. The law does not preclude cross-border sales. It means insurance companies must abide by different sets of rules and regulations and laws in 50 states.

Congress obviously recognized that state regulation of health insurance would create monopoly power and restrain trade, even if states place bridles on insurers and impose ostensible consumer protections. The solution was to exempt health insurers from broad federal regulation and anti-trust prosecution by the Department of Justice.

Last week, the House of Representatives passed a bill that would repeal McCarran-Ferguson for health insurers. However, that would do little to encourage cross-border competition as long as the tangle of state mandates and other regulations remain in place. The regulatory landscape would have to change under this kind of federal legislation, but how that would happen is an open question. Could court challenges be brought against state regulators and coverage mandates as anti-competitive? Would anti-trust actions be brought against incumbent carriers?

Robert Laszewski has strong objections to any new law that would allow interstate sales of health insurance as long as state benefit mandates remain in place for “local legacy” carriers. In particular, he believes it would encourage “cherry picking” of the best risks by market entrants who would be free of the mandates. Many of the healthiest individuals would jump at the chance to purchase stripped down, catastrophic coverage. That would leave the legacy carriers under the burden of mandates and deteriorating risk pools. Would states do this to their incumbent insurers without prodding by the courts? Would they simply drop the mandates? I doubt it.

No matter the end-state, there is likely to be a contentious transition. Promoting interstate competition in the health insurance market is a laudable goal, but it is not as simple as some health-care reformers would have us believe. Real competition requires action by states to eliminate or liberalize regulations on benefit mandates, risk rating, and pre-existing conditions. Ultimately, the cost of coverage for high-risk individuals might have to be subsidized, whether means-tested or not, through a combination of support from the states, the federal government, and private charities. And of course, interstate competition really does require repeal of the health insurance provisions of McCarran-Ferguson.

Governments at any level can act against the well-being of consumers, despite the acknowledged benefits of decentralized governance over central control. Benefit mandates, whether imposed at the federal or state levels, are inimical to consumer choice, competition, efficient pricing, and often to the very concept of insurance. Those aren’t the sort of purposes federalism was intended to serve.

The CBO’s Obamacare Fantasy Forecast

The Congressional Budget Office (CBO) is still predicting strong future growth in the number of insured individuals under Obamacare, despite their past, drastic over-predictions for the exchange market and slim chances that the Affordable Care Act’s expansion of Medicaid will be adopted by additional states. Now that Republican leaders have backed away from an unpopular health care plan they’d hoped would pass the House and meet the Senate’s budget reconciliation rules, it will be interesting to see how the CBO’s predictions pan out. The “decremental” forecasts it made for the erstwhile American Health Care Act (AHCA) were based on its current Obamacare “baseline”. A figure cited often by critics of the GOP plan was that 24 million fewer individuals would be insured by 2026 than under the baseline.

It was fascinating to see many supporters of the AHCA accept this “forecast” uncritically. With the AHCA’s failure, however, we’ve been given an opportunity to witness the distortion in what would have been a CBO counterfactual. What a wonderful life! We’re stuck with Obamacare for the time being, but this glimpse into the CBO’s delusions will be one of several silver linings for me.

Again, the projected 24 million loss in the number of insured under the AHCA was based on an actual predicted loss of about 5 – 6 million and the absence of an Obamacare gain of 18 – 19 million. Those figures are from an excellent piece by Avik Roy in Forbes. I drew on that article extensively in my post on the AHCA prior to its demise. Here are some key points I raised then, which I’ve reworded slightly to put more emphasis on the Obamacare forecasts:

  1. The CBO has repeatedly erred by a large margin in its forecasts of Obamacare exchange enrollment, overestimating 2016 enrollment by over 100% as recently as 2014.
  2. The AHCA changes relative to Obamacare were taken from CBO’s 2016 forecast, which is likely to over-predict Obamacare enrollment on the exchanges by at least 7 million, according to Roy.
  3. The CBO also assumes that all states will opt to participate in expanded Medicaid under Obamacare going forward. That is highly unlikely, and Roy estimates its impact on the CBO’s forecast at about 3 million individuals.
  4. The CBO believes that the Obamacare individual mandate has encouraged millions to opt for insurance. Roy says that assumption accounts for as much as 9 million of total enrollment across the individual and employer markets, as well as Medicaid.

Thus, Roy believes the CBO’s estimate of the coverage loss of 24 million individuals under the AHCA was too high by about 19 million!
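
A quick tally of Roy’s adjustments against the CBO headline, using only the figures in the list above:

```python
# Roy's adjustments to the CBO's headline coverage-loss figure.
cbo_headline = 24e6  # CBO: fewer insured under the AHCA vs. the baseline

adjustments = {
    "exchange enrollment over-prediction": 7e6,   # item 2
    "assumed full Medicaid expansion": 3e6,       # item 3
    "individual mandate enrollment effect": 9e6,  # item 4
}

implied_loss = cbo_headline - sum(adjustments.values())
print(f"Implied actual coverage loss: about {implied_loss/1e6:.0f} million")
# about 5 million -- consistent with the 5-6 million figure cited above
```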

In truth, Obamacare will be watered down by regulatory and other changes instituted by the Trump Administration, which has said it will not enforce Obamacare’s individual mandate. Coverage under the “new” Obamacare will devolve quickly if the CBO is correct about the impact of the individual mandate.

The CBO’s job is to “score” proposed legislation relative to current law; traditionally, it has made no attempt to account for dynamic effects that might arise from the changed incentives under a law. The results show it, and the Obamacare projections are no exception. In the case of Obamacare, however, the CBO seems to have applied certain incentive effects selectively. The supporters of the AHCA might have helped their case by focusing on the flaws in the CBO’s baseline assumptions. We should keep that in mind with respect to any future health care legislation, not to mention tax reform!

Lighten Up For Human Achievement Hour!

Tonight, Saturday March 25th, from 8:30 to 9:30, I’ll be doing my part to celebrate humankind’s ascent from the bare subsistence and misery that was ubiquitous until just the last few centuries. Human Achievement Hour is sponsored by the Competitive Enterprise Institute (CEI) to celebrate the incredible technological miracles brought forth by human ingenuity and free markets:

“Originally launched as the counter argument to the World Wide Fund for Nature’s Earth Hour, where participants renounce the environmental impacts of modern technology by turning off their lights for an hour, Human Achievement Hour challenges people to look forward rather than back to the dark ages.

Symbolically or not, Earth Hour is a misguided effort that completely ignores how modern technology allows societies to develop new and more sustainable practices, like helping people around the world be more eco-friendly and better conserve our natural resources.

While Earth Hour supporters may suggest rolling brown-outs in India are desirable, we respectfully disagree. Instead of sitting in the dark, Human Achievement Hour promotes new ideas and celebrates the technology and innovation that will help solve the world’s environmental challenges.”

The following are suggestions from CEI as to how you can participate in the celebration. I’ll take them up on the third and sixth items on this list, just as I have for the past several years.

  • Use your phone or computer to connect with friends and family
  • Watch a movie or your favorite television show
  • Drink a beer or cocktail
  • Drive a car or take a ride-sharing service
  • Take a hot shower
  • Or, in true CEI fashion, celebrate reliable electricity, which has saved lives by bringing heat and air conditioning to people around the world, and keep your lights on for an hour

Light up the night! Here are a couple of links with information on the worldwide progress in improving human living conditions:

The Human Progress Blog

Thank Fracking For Reduced Emissions

We are winning the war against starvation, disease, and poverty around the globe, though progress can seem frustratingly gradual in real time. Over the sweep of history, however, the gains have been dramatic.

Risks, Costs and the Sharing Kind

Of all the health care buffoonery we’ve witnessed since the Affordable Care Act (ACA, or Obamacare) was first introduced in Congress in 2009, one of the most egregious is the strengthening of the notion that health insurance should cover a variety of wholly predictable, and strictly speaking, non-insurable events. Charlie Martin recently posted some interesting comments on insurance and why it works, and why public perceptions and public policy are often at odds with good insurance practices. He says that “Insurance Is Always Just Gambling“. True, real insurance is like any other rational hedge against risk, and that can be called a gamble. Unfortunately, public policy often interferes with our ability to hedge these risks efficiently.

Hedged Risk Or Prepaid Expenses?

To begin with, insurance is a mechanism for individuals to manage the financial impact of events that are unpredictable and potentially costly. These are insurable risks. But if an event recurs regularly, like an annual physical exam, a breast exam, or a pap smear, or if an event is largely within the individual’s control, like whether an ugly mole should be removed, then it is not an insurable risk. Paying for such “coverage” through a third-party insurer amounts to prepaying for services for which you’d otherwise pay directly when the time comes. We’ve essentially adopted this prepayment scheme on a national scale through Obamacare’s mandated benefits: we get broad coverage of non-insurable events in exchange for premiums and/or deductibles high enough to cover the prepayments! Big win, huh?

The rationale for a broad coverage mandate is that it will induce healthy behaviors like, well… getting an annual checkup. Therefore, it is said to be in the interests of insurers to include such benefits in basic coverage. That might well be, but the insurers don’t do it for free! Premiums and deductibles are correspondingly higher as a result, and the mandate introduces a “middle man”, the insurer, who adds cost to the process of executing a relatively simple transaction.
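
To illustrate the point, here is a sketch with hypothetical numbers; the $200 exam price and the 20% administrative load are my assumptions, not figures from any study:

```python
# Routing a fully predictable expense through an insurer vs. paying directly.
# All numbers are hypothetical.
exam_price = 200.00  # direct price of an annual checkup
admin_load = 0.20    # insurer's administrative margin on claims it processes

premium_needed = exam_price * (1 + admin_load)
print(f"Pay the clinic directly:      ${exam_price:.2f}")
print(f"Premium needed to 'cover' it: ${premium_needed:.2f}")
# The extra $40 buys no risk transfer at all -- the event was certain.
```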

Unlike these prepaid health care expenses, real insurance is really a sort of gamble. An insurer makes a bet that you won’t have a major, unanticipated health care need, and you put up the “premium” as your bet that you will have such a need. If you are healthy, then the odds are low, so it’s a fairly cheap bet for you, but you have to put up a little extra to pay for your insurer’s administrative costs. Down the road, if you need acute care, your bet pays off. Yippee! You’ll be covered.

But who knows the odds that you’ll need expensive care? And why would an insurer take the risk of losing big if you get sick?

The insurer can estimate those odds via actuarial data and experience, and they can assume your risk by playing the law of large numbers: if they make similar bets with many individuals, their actual losses will be more than covered by premium revenue (most of the time… as Martin explains, it’s possible for an insurer to make a bet with a so-called reinsurer as a hedge against the small risk of a huge loss on its book of business, beyond some threshold).
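
A minimal simulation of that law-of-large-numbers logic, with a hypothetical 1% loss probability, $50,000 loss size, and 10% administrative load:

```python
import random

# Per-policy outcomes are wild gambles; pooled outcomes are predictable.
p_loss, loss_size, load = 0.01, 50_000, 0.10
premium = p_loss * loss_size * (1 + load)  # $550: expected loss plus load

random.seed(42)
for n in (100, 10_000, 1_000_000):
    claims = sum(loss_size for _ in range(n) if random.random() < p_loss)
    print(f"n = {n:>9,}: premiums ${premium * n:>13,.0f}, claims ${claims:>13,.0f}")
# As the book of business grows, total claims converge toward expected
# losses, and the load reliably covers costs.
```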

Shared Risk Or Shared Cost?

Martin objects to the use of the term “shared risk” in this context. Many individuals make similar bets, which makes the insurer’s aggregate payout more predictable. That allows the insurer to offer such bets on reasonable monetary terms, and they are all voluntary contracts sought out by people facing risks of the same character. If an individual seeks to insure against a demonstrably heightened risk, an insurer might or might not agree to the “bet” voluntarily, but if it does, the risk is not truly “shared” by individuals who face lower risks. The high-risk bet is reasonable for the insurer only to the extent that: (1) the premium is actuarially fair in conjunction with a larger pool of high-risk bets, or (2) it can be cross-subsidized by more profitable lines of coverage. If the answer is (2), then premiums for healthy individuals must rise to cover risks they do not share. That is one basis on which Obamacare operates, and it is a subtle aspect of Martin’s argument against the notion of “shared risks”. Perhaps we can avoid the semantic difficulty by speaking of “sharing the costs of risks that are not shared”.
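
A sketch of the cross-subsidy arithmetic under community rating; the population shares and expected costs below are hypothetical:

```python
# One community-rated premium across two risk classes (hypothetical numbers).
groups = {                    # (share of pool, expected annual cost)
    "healthy":   (0.80,  2_000),
    "high-risk": (0.20, 14_000),
}

pooled_premium = sum(share * cost for share, cost in groups.values())
print(f"Community-rated premium: ${pooled_premium:,.0f}")
for name, (share, cost) in groups.items():
    print(f"{name:>9}: fair premium ${cost:>6,}, "
          f"overpayment {pooled_premium - cost:+,.0f}")
# healthy overpay by $2,400 each; high-risk underpay by $9,600 each
```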

A more obvious aspect of Martin’s objection to “shared risk” relates to the expectation that predictable medical costs must be “covered” by health insurance, as discussed above. If so, no risk is shared because there is no risk! Yet we often speak of health insurance “needs” as if they combine a variety of such things, and as if all those “needs” embody risks that are shared. They are not.

Sharing the Cost of Prenatal Care

In another post, Martin tackles the question of whether certain people should be expected to pay a premium that includes the cost of prenatal care. Martin was prompted by a tweet from the National Association for the Repeal of Abortion Laws (NARAL), which read:

WOW. The #GOP’s reason to object to insurance covering prenatal care? ‘Why should men pay for it?’ #Trumpcare #ProtectOurCare”

There was a link in the tweet to a video, which was captioned by NARAL as follows:

The GOP reasoning to object to prenatal insurance
Two male Republicans object to prenatal care coverage under the ACA because—while it ensures women have healthy pregnancies—it means men pay *a tiny bit more* for insurance. WOW.”

To the extent that pregnancy can be considered a risk, it is certainly not shared by seniors, gays and lesbians, and infertile individuals, let alone unattached males. And from an insurance perspective, an obvious difficulty with NARAL’s point is that many pregnancies are planned. As such, they are not insurable events (though complications of pregnancy clearly are insurable). Yet people speak as though others must “share” the costs. That is fundamentally unfair and economically inefficient. Subsidies for couples who might wish to have children lead to greater rates of fertility than those couples can otherwise afford, saddling society with the medical bill. Incentives are no joke.

There are also unplanned pregnancies among singles and married couples, however. That sounds more like an insurable event, but it’s usually impossible for a third party to determine whether a pregnancy was planned or unplanned, so moral hazard is an issue (except in extreme circumstances like rape or incest). The risk of pregnancy is confined to a subset of the population, so sharing these costs more broadly is inefficient to the extent that it subsidizes some pregnancies (oops!) that individuals cannot otherwise afford. Individuals and couples who face pregnancy risk must manage that risk in any way they choose, and they might wish to purchase a form of coverage that helps them smooth the cost of pregnancies over their fertile years. It’s not clear that coverage of that nature is better for the prospective parent(s) than a line of credit, but it is a form of insurance only because of the “unplanned” component, and at least it allows them to spread the cost ex ante as well as ex post.

Sharing Costs of Common Risks 

The basic point here is that sharing a risk across all individuals, whether they do or do not actually face the risk, is not a natural characteristic of private insurance. In fact, the idea that this cost should be shared broadly is a collectivist notion. The major flaws are that 1) individuals and couples at risk are not financially responsible for certain cost-causing decisions they might make; and 2) it forces individuals and couples not at risk to pay for others’ risks, which is an act of coercion. NARAL feels that individuals who subscribe to these sound principles are worthy of rebuke. And NARAL asserts that “men pay a tiny bit more“, without providing quantification. Of course, it’s not just men, but this is a variation on the old statist argument that diffuse costs are not meaningful and should be disregarded, ad infinitum.

Public Aid Dressed As Insurance

There are segments of society that are often depicted as incapable of managing risks like pregnancy and unable to afford the consequences of mistakes. Subsidizing those individuals is a second collectivist front for “risk sharing”. Those subsidies can and do take the form of “family planning”, as well as prenatal care and childbirth. That’s part of the social safety net, and while it is perhaps more tolerable as aid, it entails the same kinds of bad incentives as discussed earlier.

The welfare state has seldom been praised for its impact on incentives. Most studies have found a link between public aid and higher fertility, and mixed effects on the dissolution of marriage (see here and here, and for international evidence, see here). But aid for health care expenses should not interfere with the sound operation of the insurance market. Vouchers for catastrophic coverage would be far preferable, and that aid could even cover some regularly recurring health care costs, despite their non-insurable nature, but that would be a compromise.

The misgivings voiced by Martin are partly driven by two fundamental issues: guaranteed issue and community rating. The former means that an insurer must take your bet regardless of the risks you present; the latter means that the insurer cannot charge premiums commensurate with the risk inherent in the various bets it takes. As David Henderson writes, both underpin the ACA. In other words, the ACA imposes cost sharing. Here is Henderson:

As I wrote over 20 years ago, the combination of guaranteed issue and community rating, a key feature of Obamacare, leads to the destruction of insurance markets. No one would advocate forcing insurance companies to issue house insurance policies to people whose houses are burning, at premiums equal to those paid by others whose houses aren’t burning. And the twin requirements would cause more and more people to refrain from buying insurance until their houses are on fire. Insurance companies, knowing this, would charge astronomically high premiums.
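
Henderson’s “destruction of insurance markets” is the classic adverse-selection spiral, and its mechanics are easy to simulate. In this sketch, the cost distribution and the assumption that anyone whose expected cost falls below the premium drops coverage are both hypothetical simplifications:

```python
# Adverse-selection spiral under guaranteed issue plus community rating.
# Hypothetical: 20 people with known expected costs from $500 to $10,000;
# anyone whose expected cost is below the pooled premium drops out.
pool = [500 * k for k in range(1, 21)]

for year in range(1, 6):
    premium = sum(pool) / len(pool)           # community-rated premium
    pool = [c for c in pool if c >= premium]  # the healthiest exit
    print(f"Year {year}: premium ${premium:>6,.0f}, enrollees left: {len(pool)}")
    if len(pool) <= 1:
        break
# Each round's premium hike drives out the next-healthiest group,
# until only the costliest risks remain.
```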

Cleaving the Health Care Knot… Or Not

Republican leadership has succeeded in making its 2017 health care reform plans even more confusing than the ill-fated reforms enacted by Congress and signed by President Obama in 2010. A three-phase process has been outlined by Republican leaders in both houses after the initial rollout of the American Health Care Act (AHCA), now billed as “Phase 1”. The AHCA was greeted with little enthusiasm by the GOP faithful, however.

As a strictly political matter, there is a certain logic to the intent of the “three-phase plan”: limiting the provisions of the AHCA to issues having an impact on the federal budget. That would allow the bill to be addressed under “budget reconciliation” rules requiring only 51 votes for passage in the Senate. Phase 2 would involve regulatory rule-making, or rule-rescinding, as the case may be. The putative Phase 3 would require additional legislation to address such unfinished business as allowing health insurance competition across state lines, eliminating anti-trust protection for insurers, and medical tort reform. How the sponsors will get 60 Senate votes for Phase 3 reforms is an unanswered question.

Legislative Priorities

Yuval Levin wrote a great analysis of the AHCA last week in which he described the structure of the House bill as a paranoid reaction to the demands of an “imaginary parliamentarian”. By that he means that the reforms in the bill conform to a rigid and potentially flawed interpretation of Senate budget reconciliation rules. Levin’s view is that the House should not twist itself up over what might be negotiated prior to a Senate vote. In other words, the House should concern itself at this stage with passing a bill that at least makes sense as reform, without bowing to any of the awful legacy provisions in Obamacare.

Medicaid reform is one piece of the proposed legislation and is reasonably straightforward. It imposes caps on federal funding to states after 2020, but it grants more flexibility to the states in managing the program. It also involves a tradeoff by allowing Medicaid funding to increase over the first few years, in line with the expansion under Obamacare, in exchange for capped growth later. The expectation is that long-term costs of the program will be reduced through a combination of the caps and better management at the state level.

The more complex aspects of the AHCA attempt to effect changes in the individual market. Levin offers a good perspective on these measures. First, he describes the general character of earlier Republican reform proposals from which the AHCA descends:

“Those various proposals all involved bringing premium costs down by enabling insurers to sell catastrophic coverage plans (along with more comprehensive plans) and enabling everyone in the individual market to afford at least those catastrophic coverage plans. This would enable far greater competition and let anyone not otherwise covered by insurance enter the individual market as a consumer. …

The House proposal bears a clear resemblance to this approach. It involves some deregulation from Obamacare, it includes a refundable tax credit for coverage, it gestures toward incentives for continuous coverage. But it is also fundamentally different from this approach, because it functions within the core insurance rules established by Obamacare, which means it can’t really achieve most of the key aims of the conservative reforms it is modeled on.”

The rules established by Obamacare to which Levin refers include the form of community rating, which is merely loosened somewhat by the AHCA. However, the AHCA would impose a 30% penalty for those who fail to enroll while still healthy. This is a poorly designed incentive meant to substitute for Obamacare’s individual mandate, and it is likely to backfire. Levin is clear that this feature could have been avoided by scrapping the old rules and introducing a new form of community rating available only to the continuously insured.
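
To see why the 30% surcharge invites gaming, consider a sketch with hypothetical numbers; the $4,800 premium and the assumption that the surcharge applies for a single year are mine, for illustration:

```python
# A healthy person weighing continuous coverage against waiting until sick.
# All numbers hypothetical.
premium = 4_800    # annual premium, dollars
years_skipped = 3  # years a healthy person goes without coverage

cost_if_continuous = premium * (years_skipped + 1)  # insured the whole time
cost_if_waiting = premium * 1.30                    # enroll late, pay 30% more
print(f"Stay insured for {years_skipped + 1} years: ${cost_if_continuous:,}")
print(f"Skip {years_skipped} years, then enroll:    ${cost_if_waiting:,.0f}")
# Waiting saves $12,960 here -- the penalty is far too weak to keep
# healthy people in the pool, inviting the very adverse selection
# it was meant to prevent.
```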

The AHCA also fails to cap the tax benefits of employer-provided coverage, which retains a potential imbalance between the incentives for employer versus individual coverage. Levin believes, however, that some of these shortcomings can be fixed through a negotiation process in either the House or the Senate, if and when the bill goes there.

The CBO’s Report

As it is, the bill was “scored” by the Congressional Budget Office (CBO) with results that are widely viewed as unsatisfactory. The CBO’s report states that the AHCA would reduce the federal budget deficit, but the ugly headline is that, relative to Obamacare, it would cause 24 million people to lose their coverage by 2024. That number is drastically inflated, as Avik Roy demonstrated in his Forbes column this week. Here are the issues laid out by Roy:

  1. The CBO has repeatedly erred by a large margin in its forecasts of Obamacare exchange enrollment, overestimating 2016 enrollment by over 100% as recently as 2014.
  2. The AHCA changes relative to Obamacare are taken from CBO’s 2016 forecast, which still appears to over-predict Obamacare enrollment substantially. Roy estimates that this difference alone would shave at least 7 million off the 24 million loss of coverage quoted by the CBO.
  3. The CBO also assumes that all states will opt to participate in expanded Medicaid going forward. That is highly unlikely, and it inflates CBO’s estimate of the AHCA’s negative impact on coverage by another 3 million individuals, according to Roy.
  4. Going forward, the CBO expects the Obamacare individual mandate to encourage millions more to opt for insurance than would do so under the AHCA. Roy estimates that this assumption adds as much as 9 million to the CBO’s estimate of lost coverage across the individual and employer markets, as well as Medicaid.

Thus, Roy believes the CBO’s estimate of lost coverage for 24 million individuals is too high by about 19 million! And remember, these hypothetical losses are voluntary to the extent that individuals refuse to avail themselves of AHCA tax credits to purchase catastrophic coverage, or to enroll in Medicaid. The latter will be no less generous under the AHCA than it is today. The tax credits are refundable, which means that you qualify regardless of your pre-credit tax liability.
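Taking Roy’s hedged adjustments at their stated values, the reconciliation is simple arithmetic (the tally below is mine, not Roy’s, and each component is itself only an estimate):

$$24 - (7 + 3 + 9) = 5 \text{ million}$$

In other words, on Roy’s numbers, a more defensible estimate of lost coverage under the AHCA would be on the order of 5 million, not 24 million.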

Fixes

Despite Roy’s initial skepticism about the AHCA, he thinks it can be fixed, in part by means-testing the tax credits rather than offering the bill’s flat credit. He also believes the transition away from the individual mandate should be more gradual, allowing more time for markets to bring premiums down, but I find this position rather puzzling given Roy’s skepticism that the mandate has a strong impact on enrollment. Perhaps gradualism would convince the CBO to score the bill more favorably, but that’s a bad reason to make such a change.

It’s impossible to say how the bill will evolve, but improvements can certainly be made. Nor is it possible to know whether Phases 2 and 3 will ultimately bring a more complete set of cost-reducing regulatory and competitive reforms. Phase 3, of course, is a political wild card.

Michael Tanner notes a few other advantages of the AHCA. Even the CBO projects that the cost of health insurance would fall, and the AHCA would bring greater choice to the individual market. It also promises over $1 trillion in tax cuts and lower federal deficits.

Alternatives

The GOP faced alternatives that should have received more consideration, but those alternatives might not be politically viable at this point. Some of them contain features that might be negotiated into the final legislation. Rand Paul’s plan has not attracted many advocates. Paul took the courageous position that a reform plan should contain no entitlements (i.e., no subsidies); instead, he insisted that liberalized market forces would drive premiums down far enough to put coverage within reach of a broad cross-section of Americans. Paul is obviously unhappy about the widespread support in the GOP for refundable tax credits as a replacement for existing Obamacare subsidies.

John C. Goodman has advocated a much simpler solution: take every federal penny now dedicated to health care and insurance subsidies, including every penny of taxes now avoided via the tax deduction on employer-provided coverage, and pay it out to households as a tax credit contingent on the purchase of health insurance or on direct health care expenses. This is essentially the plan put forward by Rep. Pete Sessions and Sen. Bill Cassidy in the Patient Freedom Act, described here. While I admire the simplicity of one program to replace the existing complexities in the federal funding of health care coverage, my objection is that a health care “dividend” of this nature resembles the flat tax credit in the AHCA. Neither is means-tested, amounting to a “Universal Basic Health Insurance Benefit”. Regular readers will recall my recent criticism of the Universal Basic Income, the sort of program that smacks of “universal state dependency”. But let’s face it: we’re already in a state of federal health care dependency. In this case, there would be no incremental cost to taxpayers, because the credit would replace existing outlays and tax expenditures. In that sense, it would eliminate many of the distortions currently embedded in federal health care policy.

A more drastic approach, at this point, is simply to repeal Obamacare, perhaps with a lengthy phase-out, and attempt to replace it later in the hope that support will coalesce around a reasonable set of measures that leverage market forces while accommodating high-risk individuals and the economically disadvantaged. Michael Cannon writes that the CBO estimated a simple repeal would increase the number of uninsured by 23 million over ten years, slightly less than the 24 million estimate for the AHCA! Of course, neither of these estimates is likely to be remotely accurate, as both are distorted by the CBO’s rosy assumptions about the future of Obamacare.

Where To Go?

Tanner reminds us that the real alternative to Republican legislation, whatever form it might take, is not a health care utopia. It is Obamacare, and it is collapsing. That plan cannot be effectively reformed by piling on additional subsidies for insurers and consumers; that route leads to a continuing premium spiral. The reforms Obamacare actually needs would resemble changes contemplated in some of the GOP proposals. While I cannot endorse the AHCA legislation in its current form, or as a standalone reform, I believe it can be improved, and the later phases of reform we are told to anticipate might ultimately vindicate the approach taken by the GOP leadership. I am most skeptical about the promise of subsequent legislation in Phase 3. I’ll have to keep my fingers crossed that by then, the path to additional reforms will be more attractive to Democrats.

Trump Versus the Holocaust Trivializers



George Mason University Law Professor David Bernstein observed this week that many in the American Jewish community are panicked by Donald Trump’s election because they perceive Trump and his followers as anti-Semitic. That perception was seemingly reinforced by recent anti-Semitic acts, such as bomb threats at Jewish Community Centers and the desecration of graves at Jewish cemeteries in St. Louis, MO and Philadelphia, PA. Bernstein, who is Jewish and not a Trump supporter, wrote a piece entitled “The Great Anti-Semitism Panic of 2017”, which appeared in the Volokh Conspiracy blog hosted by the Washington Post.

Like Bernstein, I’ve seen a number of indignant posts by Jewish friends connecting Trump and anti-Semitism, complete with comparisons to Adolf Hitler. My quick reaction is that such comparisons are not only irresponsible, they are idiotic. The ghastly implication is that Trump might entertain the idea of exterminating Jews, or any other opposition group, and it is complete nonsense.

Taking a step back, perhaps all this is related to Trump’s nationalism and his views on border security. That includes “extreme vetting” of refugees, deportation of illegal immigrants, and even the dubious argument for a border wall. While that’s not about Jews, those policies appeal to certain fringe, racist elements on the extreme right where anti-Semitism is commonplace. However, those policies also appeal to a much broader and diverse audience of voters who harbor anxieties about economic and national security, and who are neither racists nor anti-Semites.

Bernstein takes progressive Jews to task for tying any of this to anti-Semitism on the part of Trump, his Administration, or his broader base of support:

…  the origins of the fear bear only a tangential relationship to the actual Trump campaign. For example, I’ve lost track of how many times Jewish friends and acquaintances in my Facebook feed have asserted, as a matter of settled fact, that Bannon’s website Breitbart News is a white-supremacist, anti-Semitic site. I took the liberty of searching for every article published at Breitbart that has the words Jew, Jewish, Israel or anti-Semitism in it, and can vouch for the fact that the website is not only not anti-Semitic, but often criticizes anti-Semitism (though it is quite ideologically selective in which types of anti-Semitism it chooses to focus on). I’ve invited Bannon’s Facebook critics to actually look at Breitbart and do a similar search on the site, and each has declined, generally suggesting that it would be beneath them to look at such a site, when they already know it’s anti-Semitic.

There is … a general sense among Jews, at least liberal Jews, that Trump’s supporters are significantly more anti-Semitic than the public at large. I have many times asked for empirical evidence that supports this proposition, and have so far come up empty. I don’t rule out the possibility that it’s true, but there doesn’t seem to be any survey or other evidence supporting it. Given that American subgroups with the highest proportions of anti-Semites — African Americans, first-generation Hispanic immigrants, Muslims and high school dropouts — are strong Democratic constituencies (though the latter group appears to have gone narrowly for Trump this time), one certainly can’t simply presume that Trump has a disproportionate number of anti-Semitic supporters.

Bernstein goes on to discuss the hostility to Trump from groups like the Anti-Defamation League (ADL), hostility which he characterizes as essentially opportunistic:

The ADL’s reticent donors are no longer reticent in the age of Trump, with the media reporting that donations have been pouring in since Trump’s victory. It’s therefore hardly in the ADL’s interest to objectively assess the threat from Trump and his supporters. Indeed, I’m almost impressed that an ADL official managed just the other day to link the JCC bomb threats to emboldened white supremacists, even though the only suspect caught so far is an African American leftist.

He also notes the irony that progressive Jews have been shunned by many leftists, who almost uniformly condemn Zionism. Now, progressive Jews hope to renew common cause with those whose politics are defined by membership in groups with a history of marginalization, and who now believe they are threatened by Trump. Will they be happy together? Bernstein attests that many Jews privately acknowledge the danger of “changing demographics”:

… which is a euphemism for a growing population of Arab migrants to the United States. Anti-Semitism is rife in the Arab world, with over 80 percent of the public holding strongly anti-Semitic views in many countries.

Some would say that, as a non-Jew, I lack the bona fides to comment on how Jews “should” feel about Donald Trump. I was raised Catholic, but I attended a high school at which over 60% of the student population was Jewish. I was a member of a traditionally Jewish fraternity in college, where I witnessed occasional anti-Semitism from certain members of non-Jewish fraternities, and I felt victimized by it to some degree. My late brother married a Jewish woman, and he was buried according to Jewish custom. I was once stunned by a brief anti-Semitic wisecrack I overheard in the restroom at a community theatre production of the great musical Fiddler On the Roof!

So, I am connected and strongly sympathetic to the Jewish community. I am also well acquainted with white Gentiles who have had much less interaction with Jews. Those individuals span the political spectrum, and there is no doubt that racists and anti-Semites reside at both ends. I will state unequivocally that among this population, I have observed as much racism and denigration of Jews from the left as from the right. It partly reflects anti-Zionism, but there have been leftists of my acquaintance who seem to regard Jews as Shylockian, as greedy moneychangers and crooked lawyers, or as “hopelessly bourgeois”. Jews should not be blind to the hatred that still exists for them in certain quarters on the left, even if it’s easier to pretend that right-wing religious nuts are their only enemies.

Bernstein’s column was met with outrage by some Jewish progressives. In the Jewish Journal, Rob Eshman accused Bernstein of making apologies for Trumpian anti-Semitic behavior. Here is Bernstein’s response, in which he castigates Eshman for distorting both his thesis and the reaction of the Jewish community to Trump. He also notes that Eshman assigns guilt for the recent spate of anti-Semitic acts to Trump supporters where no evidence exists. That implication is a constant refrain from certain Jewish friends on my Facebook news feed. But there is ample evidence of “fake” hate crimes by progressives, as documented last week by Kevin Williamson.

Finally, it is hard to square the idea that Trump and his leadership team (which includes his Jewish son-in-law) are anti-Semitic with other evidence, such as the unequivocal support they have pledged to Israel, and their hard stand on vetting refugees from nations that are avowed enemies of the Jewish people. Yes, Bernstein is well aware of the anti-Semitic, fringe-right elements that have supported Trump, but those are not the sentiments of anyone serving in the administration, including Steve Bannon. The left has become quite blithe about confirming Godwin’s Law, the adage that as a political argument wears on, a comparison of one’s opponents to Hitler or the Nazis becomes inevitable. Progressive Jews have taken the cue without much thought: the frequent comparisons of Donald Trump to Hitler are awful and are not compatible with healthy discourse. As Stefan Kanfer writes in City Journal in his review of the book “A Tale of Three Cities” (my emphasis added):

… those who persist in comparing Adolf Hitler with any U.S. politician reveal themselves as members of a group just to the side of the Holocaust denier—the Holocaust trivializer. There are no lower categories.