Musings II: Avik Roy on Health Insurance Reform



Vox carried an excellent Dylan Scott interview with Avik Roy this week. Roy is a health care policy expert for whom I have great respect. Among other health care issues, I have quoted him in the recent past on the faulty Congressional Budget Office (CBO) projections for Obamacare enrollment, which have consistently overshot actual enrollment. In this interview, Roy explains his current views on the health care insurance reform process and, in particular, the American Health Care Act (AHCA), the bill passed by the House of Representatives last month. The interview provides a good follow-up to my “musings” post on Sacred Cow Chips earlier this week.

Roy provides good explanations of some of the AHCA’s regulatory changes that have merit. These include:

  1. relaxation of Obamacare’s community rating standards, meaning that insurers have more flexibility to charge premia based on age and other risk factors, thus mitigating the pricing distortions caused by cross-subsidies on the individual market;
  2. a rollback in the required minimum actuarial value (AV) of an insurance plan (the ratio of plan-paid medical expenses to total medical expenses);
  3. elimination of federal essential benefits requirements.

Roy provides context for these proposed changes relative to Obamacare. For example, regarding AV, he says:

[In] the old individual market, prior to Obamacare, the typical actuarial value of a plan was about 40 percent. Obamacare drives that up effectively to 70 percent. That has a corresponding effect on premiums; it makes premiums a lot more expensive. In the AHCA, those actuarial value mandates are repealed. Which should provide a lot more opportunity for plans to design more affordable insurance policies for individuals.
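
To make the AV arithmetic concrete, here is a minimal sketch in Python. The 40% and 70% AV figures are Roy's; the expected-cost figure is a purely hypothetical number for illustration, and loading and administrative costs are ignored:

```python
# Actuarial value (AV) = plan-paid medical expenses / total medical expenses.
# The dollar amount below is hypothetical, for illustration only.
total_expected_expenses = 6000.0  # assumed expected annual medical costs per enrollee

for av in (0.40, 0.70):
    plan_paid = av * total_expected_expenses              # expected insurer outlay
    enrollee_paid = total_expected_expenses - plan_paid   # deductibles, coinsurance, etc.
    # Ignoring loading and admin costs, the actuarially fair premium tracks plan_paid.
    print(f"AV {av:.0%}: insurer pays ~${plan_paid:,.0f}, enrollee pays ~${enrollee_paid:,.0f}")
```

Driving the required AV from 40% to 70% raises the insurer's expected outlay per enrollee by three-quarters in this example, which is the sense in which the mandate makes premiums "a lot more expensive".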

Even with Obamacare’s high AV requirements, an insurer could make money by virtue of the law’s “risk corridors”, which were intended to cover losses for insurers as they adjusted to the new regulations and as the exchange market matured, but those bailouts were temporary, and development of the exchanges did not go exactly as hoped. Insurers have been ending their participation in the exchange market, leaving even less than the limited choices available under Obamacare and little competition to restrain pricing.

On essential benefits, Roy reminds us that every state has essential benefit regulations of its own. These mandates create an unfortunate obstacle to interstate competition, as I discussed in March in “Benefit Mandates Bar Interstate Competition“. Nevertheless, the federal mandates added complexity and cost by requiring coverage of risks that are not common to the risk pool, and of benefits that are not risk-related and therefore inappropriate as insurance.

Roy also defends the AHCA’s protection of individuals with pre-existing conditions. One fact often overlooked is that burdening the individual market with coverage of pre-existing conditions made Obamacare less workable from the start, simultaneously driving up premiums and sending insurers for the hills. These risks can and should be handled separately, and the AHCA offers subsidies that should be up to the task:

… if you look at Obamacare, the mechanisms in Obamacare’s exchanges that served as a way to fund coverage for sick people, they were spending $8 billion a year on that program. If you look at it that way, if $8 billion was enough under Obamacare, then maybe $15 billion a year is enough. I really don’t think that’s the problem with this bill.

Roy contends that the big weakness in the AHCA is inadequate assistance to the poor in arranging affordable coverage. While highly critical of the CBO’s wild estimate of lost coverage (24 million), he does believe that the AHCA, as it stands, would involve some loss of coverage. He favors means-tested subsidies as a way of closing the gap, but acknowledges the incentive problems inherent in means testing. With time and a growing economy, and if the final legislation (and the purported stages 2 and 3 of reform) is successful in reducing the growth of health care costs relative to income, the subsidies would constitute a smaller drain on taxpayers.

As for Medicaid reform, Roy defends the AHCA’s approach:

You start with the fact that access to care under Medicaid and health outcomes under Medicaid are very poor, far underperforming other health insurance programs and certainly way underperforming private insurance. Why does that problem exist? It exists because states have very little flexibility in how they managed their Medicaid costs. They’re basically not able to do anything to keep Medicaid costs under control, except pay doctors and hospitals less money for the same amount of care. As a result of that, people have poor access. By moving to a system in which you put Medicaid on a clear budget and you give states more flexibility in how they manage their Medicaid costs, you actually can end up with much better access to care and much better coverage.

One point that deserves reemphasis is that a final plan, should one actually pass in both houses of Congress, will be different from the AHCA. From my perspective, the changes could be more aggressive in terms of deregulation on both the insurance side and in health care delivery. The health care sector has been overwhelmed by compliance costs and incentives for consolidation under Obamacare. Nobody bends cost curves downward by creating monopolies.

I’ve hardly done justice to the points made by Roy in this interview, but do read the whole thing!

Musings On Health Insurance Reform


An acquaintance of mine is a cancer patient who just made the following claim on Facebook: the only people complaining about Obamacare are hypocrites because they don’t have to purchase their health insurance on the exchanges. That might be her experience. It certainly isn’t mine. I know several individuals who purchase their coverage on the exchanges and complain bitterly about Obamacare. But her assertion reveals its own bit of hypocrisy: it’s apparently okay to defend Obamacare if you are a net beneficiary, but you may not complain if you are a net payer. Of course, I would never begrudge this woman the care she needs, but it is possible to arrange for that care without destroying the health care industry and insurance markets in the process. Forgive me for thinking that Obamacare was designed with the cynical intent to do exactly that! Well, at least insurance markets. The damage to the health care industry was brought on by simple buffoonery and rent seeking.

Depending on developments in Congress over the next few months (3? 6? 9?), Obamacare could be a thing of the past. We’ve all probably heard hyperbolic claims that the new health care bill “will kill people”, which is another absurdity given the law’s dislocations. That was the subject of “Death By Obamacare“, posted in January on Sacred Cow Chips. AHCA detractors base their accusations of murderous intent on a fictitious notion of reduced access to care under the plan, as well as a Congressional Budget Office (CBO) report that viewed the future of Obamacare through rose-colored glasses. I discussed the CBO report at greater length in “The CBO’s Obamacare Fantasy Forecast“.

Before anyone gets too excited about what they like or dislike about the health care bill passed by the House of Representatives last week, remember that a final health care bill, should one actually get through Congress, is unlikely to bear a close resemblance to the House bill. The next step will be the drafting of a Senate bill, which might be assembled from parts of the House’s American Health Care Act (AHCA) and other ideas, or it might take a different form. It could take a while. Then, the House and Senate will attempt to shape a compromise in conference committee and bring it to a vote in both houses. President Trump, looking for a “win”, is likely to sign whatever gets through, even if he has to bargain with democrats to win votes.

So relax! If your legislators are democrats, tell them to participate in the shaping of new policies, rather than throwing petulant barbs from the sidelines. First, of course, you’ll have to face up to the fact that Obamacare is a failed policy.

Another recent post on Sacred Cow Chips, “Cleaving the Health Care Knot… Or Not“, covered some of the most important provisions of the AHCA. By the time of the vote, a few new provisions had been added to the House bill. The MacArthur Amendment allows states to waive the Obamacare essential benefits requirements. Fewer mandated benefits would allow insurance companies to offer simpler policies covering truly insurable health care events, as opposed to predictable health maintenance costs. Let’s face it: if you must have insurance coverage for your annual checkup, then it is not really insurance against risk; either the premium or the deductible must rise to cover the expenses, ceteris paribus.
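
The point about routing predictable expenses through insurance can be put in simple arithmetic. A minimal sketch, with hypothetical numbers (the checkup cost and the insurer's loading factor are both assumptions):

```python
# Covering a predictable expense through "insurance" is just prepayment plus overhead.
checkup_cost = 200.0  # hypothetical annual checkup expense, incurred with near-certainty
loading = 0.20        # assumed insurer admin/overhead, as a fraction of claims paid

premium_increase = checkup_cost * (1 + loading)
print(f"Routing a ${checkup_cost:.0f} checkup through insurance raises the annual "
      f"premium by about ${premium_increase:.0f}, ceteris paribus.")
```

There is no risk transfer here; the enrollee simply pays the checkup cost plus the insurer's overhead.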

The other change in the AHCA is an additional $8 billion allocated to state high-risk pools for pre-existing conditions, for a total of $138 billion. These risks are too high to blend with standard risks in a well-functioning insurance market. (In a perfect insurance market, there would be no cross-subsidies between groups on an ex ante basis.) As a separate risk pool, these high-risk individuals would face very high premia, so the idea is to allow states the latitude to subsidize their health care costs in ways they see fit. This is a federalist approach to the problem of subsidizing coverage for pre-existing conditions, and it has the advantage of restoring the ability of insurers to underwrite standard risks at reasonable rates, correcting one of Obamacare’s downfalls. However, some GOP senators are advocating a combination of standard risks and those with pre-existing conditions, which obviously distorts the efficient pricing of risk and exaggerates the need for broader subsidies.
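
A small sketch illustrates why blending high-risk individuals into the standard pool distorts pricing. All counts and costs below are hypothetical:

```python
# Separate pools vs. community-rated blending. Hypothetical numbers only.
n_standard, cost_standard = 95, 4000.0   # standard risks: expected annual claims each
n_high, cost_high = 5, 40000.0           # high-risk (pre-existing conditions) claims each

# Separate pools: each group pays its own actuarially fair premium.
# Blended pool: everyone pays the weighted average.
blended = (n_standard * cost_standard + n_high * cost_high) / (n_standard + n_high)

print(f"Separate pools: standard ${cost_standard:,.0f}, high-risk ${cost_high:,.0f}")
print(f"Blended pool:   everyone ${blended:,.0f} "
      f"(a ${blended - cost_standard:,.0f} cross-subsidy per standard enrollee)")
```

In the blended pool, every standard-risk enrollee pays $1,800 above the actuarially fair rate in this example, which is exactly the cross-subsidy that a separate, state-subsidized high-risk pool avoids.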

And what about the uninsured poor? A major focus of health care insurance reform, now and in the past, has been to find a way for the poor to afford coverage. Obamacare fell far short of its goals in this respect, as any enthusiasm for subsidized (though high) premia was dampened by shockingly high deductibles. This week, Tyler Cowen reported on some research suggesting that low-income individuals place a low value on insurance. Their responsiveness to subsidies is so low that few are persuaded to pay anything close to the premium required: even a 90% subsidy would leave about 25% of this population unwilling to pay the balance. Cowen quotes the study’s authors:

‘We conclude that the size of uncompensated care for low-income populations provides a plausible explanation for their low [willingness-to-pay].’ In other words, many of the poor do not value health insurance nearly as much as many planners feel they ought to, in large part because they are already getting some health care.

This has several implications. First, these individuals are not without health care, regardless of their coverage status. One of the great misapprehensions among Obamacare supporters is that the poor had no access to care before the law’s passage. Never mind that emergency room utilization is still quite high. Uninsured individuals can go to a public hospital and get treatment in the emergency room and get admitted if that is deemed medically necessary. If the illness causes a loss of income, the individual might qualify for Medicaid if they hadn’t before, and Medicaid has no exclusion for pre-existing conditions. In fact, I’m told the hospital staff might even help you apply right there at the hospital! So who needs insurance before a health crisis?

Many of the poor have continued to do what they did before: go without coverage. Obamacare’s complex system of subsidies is almost beside the point, as is almost any other effort to sign up everyone prior to the onset of major health care needs. Eventual enrollment in Medicaid will pay some of the hospital bills, though it’s true that not all can qualify for the program. Either way, the hospital will swallow a share of the cost — that is, the taxpayer will. Providers would rather not rely on low Medicaid reimbursement rates or perform charity work. Providers and taxpayers alike will grapple with the failure of many low-income individuals to arrive at their emergency room doors with coverage as long as we rely on direct subsidies as an inducement to purchase insurance. Unfortunately, a policy offering a separate guarantee of financial health for providers would create another set of awful incentives.

The unfortunate truth is that Medicaid is unsustainable at current funding levels. The AHCA would convert the federal share of the program to one of block grants to states, which have always managed the program under federal mandates. The AHCA would free the states to manage the program more flexibly, but caps on the grants would create pressure to manage costs. It is not yet clear whether the Senate will offer a different approach to Medicaid reform, but the Medicaid expansion was the primary driver of increased health care coverage under Obamacare.

Finally, there are certain individuals with higher incomes who can afford to pay for coverage but prefer to freeload. Those who experience catastrophic health problems will be a burden to others, not necessarily through distortions in insurance pricing, but via taxes and deficits. To an extent, the situation is a classic problem of the commons. In this case, the “commons” is an invention of government and the presumed “right to health care”: there is no solution to the freeloader problem faced by taxpayers short of denying the existence of that right to those who can afford catastrophic coverage but would refuse to pay. Only then would the burdens be internalized to the cost-causers. Charity can and should go partway to relieving individuals of the consequences of their bad decisions, but EMS will still arrive if called, providers will render care, and a chunk of the costs will be on the public dime.

 

A Trump Tax Reform Tally



The Trump tax plan has some very good elements and several that I dislike strongly. For reference, this link includes the contents of an “interpretation” of the proposal from Goldman Sachs, based on the one-page summary presented by the Administration last week as well as insights that the investment bank might have gleaned from its connections within the administration. At the link, click on the chart for an excellent summary of the plan relative to current law and other proposals.

At the outset, I should state that most members of the media do not understand economics, tax burdens, or the dynamic effects of taxes on economic activity. First, they seem to forget that in the first instance, taxpayers do not serve at the pleasure of the government. It is their money! Second, Don Boudreaux’s recent note on the media’s “taxing” ignorance is instructive:

In recent days I have … heard and read several media reports on Trump’s tax plan…. Nearly all of these reports are juvenile: changes in tax rates are evaluated by the media according to changes in the legal tax liabilities of various groups of people. For example, Trump’s proposal to cut the top federal personal income-tax rate from 39.6% to 35% is assessed only by its effect on high-income earners. Specifically, of course, it’s portrayed as a ‘gift’ to high-income earners.

… taxation is not simply a slicing up of an economic pie the size of which is independent of the details of the system of taxation. The core economic case for tax cuts is that they reduce the obstacles to creative and productive activities.

Boudreaux ridicules those who reject this “supply-side” rationale, despite its fundamental and well-established nature. Thomas Sowell makes the distinction between tax rates and tax revenues, and provides some history on tax rate reductions and particularly “tax cuts for the rich“:

… higher-income taxpayers paid more — repeat, MORE tax revenues into the federal treasury under the lower tax rates than they had under the previous higher tax rates. … That happened not only during the Reagan administration, but also during the Coolidge administration and the Kennedy administration before Reagan, and under the G.W. Bush administration after Reagan. All these administrations cut tax rates and received higher tax revenues than before.

More than that, ‘the rich’ not only paid higher total tax revenues after the so-called ‘tax cuts for the rich,’ they also paid a higher percentage of all tax revenues afterwards. Data on this can be found in a number of places …

In some cases, a proportion of the increased revenue may have been due to short-term incentives for asset sales in the wake of tax rate reductions. In general, however, Sowell’s point stands.

Kevin Williamson offers thoughts that could be construed as exactly the sort of thing about which Boudreaux is critical:

It is nearly impossible to cut federal income taxes in a way that primarily benefits low-income Americans, because high-income Americans pay most of the federal income taxes. … The 2.4 percent of households with incomes in excess of $250,000 a year pay about half of all federal income taxes; the bottom half pays about 3 percent.

The first sentence of that quote highlights the obvious storyline pounced upon by simple-minded journalists, and it also emphasizes the failing political appeal of tax cuts when a decreasing share of the population actually pays taxes. After all, there is some participatory value in spreading the tax burden in a democracy. I believe Williamson is well aware of the second-order, dynamic consequences of tax cuts that spread benefits more broadly, but he is also troubled by the fact that significant spending cuts are not on the immediate agenda: the real resource cost of government will continue unabated. We cannot count on that from Trump, and that should not be a big surprise. Greater accumulation of debt is a certainty without meaningful future reductions in the growth rate of spending.

Here are my thoughts on the specific elements contained in the proposal, as non-specific as they might be:

What I like about the proposal:

  • Lower tax rate on corporate income (less double-taxation): The U.S. has the highest corporate tax rates in the developed world, and the corporate income tax represents double-taxation of income: it is taxed at the corporate level and again at the individual level, perhaps not all at once, but when it is actually received by owners.
  • Adoption of a territorial tax system on corporate income: The U.S. has a punishing system of taxing corporate income wherever it is earned, unlike most of our trading partners. It’s high time we shifted to taxing only the corporate income that is earned in the U.S., which should discourage the practice of tax inversion, whereby firms transfer their legal domicile overseas.
  • No Border Adjustment Tax (BAT): What a relief! The BAT would have taxed imports while exempting exports. Whatever populist/nationalist appeal this might have had would have quickly evaporated with higher import prices and the crushing blow to import-dependent businesses. Let’s hope it doesn’t come back in congressional negotiations.
  • Lower individual tax rates: I like it.
  • Fewer tax brackets: Simplification, and somewhat lower compliance costs.
  • Fewer deductions from personal income, a broader tax base, and lower compliance costs. Scrapping deductions for state and local taxes in exchange for lower rates will end federal tax subsidies from low-tax to high-tax states.
  • Elimination of the Alternative Minimum Tax: This tax can be rather punitive and it is a nasty compliance cost-causer.

What I dislike about the proposal:

  • The corporate tax rate should be zero (with no double taxation).
  • Taxation of cash held abroad, an effort to encourage repatriation of the cash for reinvestment in the U.S. Taxes on capital of any kind are an act of repeated taxation, as the income used to accumulate capital is taxed to begin with. And such taxes are destructive of capital, which represents a fundamental engine for productivity and economic growth.
  • Retains the mortgage interest and charitable deductions: Both are based on special interest politics. The former leads to an overallocation of resources to owner-occupied housing. Certainly the latter has redeeming virtues, but it subsidizes activities conferring unique benefits to large donors.
  • Increase in the standard deduction: This means fewer “interested” taxpayers. See the discussion of the Kevin Williamson article above.
  • We should have just one personal income tax bracket, not three: A flat tax would be simpler and would reduce distortions to productive incentives.
  • Tax relief for child-care costs: More special interest politics. Subsidizing market income relative to home activity, hired child care relative to parental care, and fertility is not an appropriate role for government. To the extent that public aid payments are made, they should not be contingent on how the money is spent.
  • Many details are missing: Almost anything could happen with this tax “plan” when the real negotiations begin, but that’s politics, I suppose.

Mixed Feelings:

  • Descriptions of the changes to treatment of “pass-through” income seem confused. There is only one kind of tax applied to the income of pass-through entities like S-corporations, and it is the owner’s individual tax rate. Income from C-corporations, on the other hand, is taxed twice: once at a 15% corporate tax rate under the Trump plan, and a second time when it is paid to investors at an individual tax rate, which now ranges from 15% to almost 24% for “qualified dividends” (most dividend payments), but is likely to range up to 35% for “ordinary” dividends under the plan. So effectively, double-taxed C-corporate income would be taxed at total rates ranging from 30% to 50% after tallying both the C-corp tax and the individual tax. (This is a simplification: C-corp income paid as dividends would be taxed to the corporation and then immediately to the shareholder at their individual rate, while retained corporate income would be taxed later.)

Presumably, the Trump tax plan is to reduce the rate on “pass-through” income to just 15% at the individual level, regardless of other income. (It is not clear how that would affect brackets or the rate of taxation on other components of individual income.) Is that good? Yes, to the extent that lower tax rates allow individuals to keep more of their hard-earned income, and to the extent that such a change would help small businesses. S-corps have always had an advantage in avoiding double taxation, however, and this would not end the differential taxation of S and C income, which is distortionary. It might incent business owners to shift income away from salary payments to profit, however, which would increase the negative impact on tax revenue.
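
To see the arithmetic, here is a minimal sketch using the rates cited above. The simple sum is the "tallying" in the text; the compounded figure applies the individual rate only to what remains after the corporate tax:

```python
# Combined tax on C-corp income under the plan's rates vs. pass-through income.
corp_rate = 0.15

def combined_rate(individual_rate, corp=corp_rate):
    # Corporate tax first; the individual rate then applies to the after-tax distribution.
    return corp + (1 - corp) * individual_rate

for label, ind in [("qualified dividends, low rate", 0.15),
                   ("qualified dividends, high rate", 0.238),
                   ("ordinary dividends, top rate", 0.35)]:
    simple_sum = corp_rate + ind
    print(f"{label}: simple sum {simple_sum:.1%}, compounded {combined_rate(ind):.1%}")

# Pass-through income is taxed once, at the owner's individual rate (15% under the plan).
```

The simple sums reproduce the 30% to 50% range in the text; the compounded rates run from about 27.75% to 44.75%.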

  • Interest deductibility and expensing of capital expenditures are in question. Interest deductibility puts debt funding on an equal footing with equity funding only if the double tax on C-corp income is fully repealed. Immediate expensing of “capex” would certainly provide an investment incentive (as long as “excess” expenses can be carried forward), and for C-corporations, it would certainly bring us closer to elimination of the double-tax on income (the accounting matching principle be damned!).
  • There is no commitment to shrink government, but that’s partly (only partly) a function of having abandoned revenue neutrality. It’s also something that has been promised for the next budget year.
  • The tax reform proposal represents a departure from insistence on revenue neutrality: On the whole, I find this appealing, not because I like deficits better than taxes, but because there may be margins along which tax policy can be improved if unconstrained by neutrality, assuming that the damage from the incremental deficits is outweighed by the gains. The political landscape may dictate that desirable changes in tax policy can be made more easily in this way.

Shikha Dalmia wonders whether a real antidote for “Trumpism” might be embedded within the tax reform proposal. If the reforms are successful in stimulating non-inflationary economic growth, a “big if” on the first count, the popular preoccupations inspired by Trump with immigration policy, the “wall” and protectionism might just fade away. But don’t count on it. On the whole, I think the tax reform proposal has promise, though some of the good parts could vanish before a bill hits Trump’s desk, and some of the bad parts could get worse!

What Part of “Free Speech” Did You Not Understand?


The left has adopted an absurdly expansive definition of “hate speech”, and they’d like you to believe that “hate speech” is unconstitutional. Their objective is to establish a platform from which they can ostracize and ultimately censor political opponents on a variety of policy issues, mixed with the pretense of a moral high ground. The constitutional claim is legal nonsense, of course. To be fair, the moral claim may depend on the issue.

John Daniel Davidson writes in The Federalist of the distinction between protected and unprotected speech in constitutional law. The primary exception to protected speech has to do with the use of “fighting words”. Davidson describes one Supreme Court interpretation of fighting words as “a face-to-face insult directed at a specific person for the purpose of provoking a fight.” Obviously threats would fall into the same category, but only to the extent that they imply “imminent lawless action”, according to a major precedent. As such, there is a distinction between fighting words versus speech that is critical, discriminatory, or even hateful, all of which are protected.

Hate speech, on the other hand, has no accepted legal definition. In law, it has not been specifically linked to speech offensive to protected groups under employment, fair housing, hate crime or any other legislation. If we are to accept the parlance of the left, it seems to cover almost anything over which one might take offense. However, unless it qualifies as fighting words, it is protected speech.

The amorphous character of hate speech, as a concept, makes it an ideal vehicle for censoring political opponents, and that makes it extremely dangerous to the workings of a free society. Any issue of public concern has more than one side, and any policy solution will usually create winners and losers. Sometimes the alleged winners and losers are merely ostensible winners and losers, as dynamic policy effects or “unexpected consequences” often change the outcomes. Advocacy for one solution or another seldom qualifies as hate toward those presumed to be losers by one side in a debate, let alone a threat of violence. Yet we often hear that harm is done by the mere expression of opinion. Here is Davidson:

By hate speech, they mean ideas and opinions that run afoul of progressive pieties. Do you believe abortion is the taking of human life? That’s hate speech. Think transgenderism is a form of mental illness? Hate speech. Concerned about illegal immigration? Believe in the right to bear arms? Support President Donald Trump? All hate speech.

Do you support the minimum wage? Do you oppose national reparation payments to African Americans? Do you support health care reform? Welfare reform? Rollbacks in certain environmental regulations? Smaller government? You just might be a hater, according to this way of thinking!

The following statement appears in a recent proposal on free speech. The proposal was recommended as policy by an ad hoc committee created by the administration of a state university:

… Nor does freedom of expression create a privilege to engage in discrimination involving unwelcome verbal, written, or physical conduct directed at a particular individual or group of individuals on the basis of actual or perceived status, or affiliation within a protected status, and so severe or pervasive that it creates an intimidating or hostile environment that interferes with an individual’s employment, education, academic environment, or participation in the University’s programs or activities.

This is an obvious departure from the constitutional meaning of free expression or any legal precedent.

And here is Ulrich Baer, who is New York University‘s vice provost for faculty, arts, humanities, and diversity (and professor of comparative literature), in an opinion piece this week in the New York Times:

The recent student demonstrations [against certain visiting speakers] should be understood as an attempt to ensure the conditions of free speech for a greater group of people, rather than censorship. … Universities invite speakers not chiefly to present otherwise unavailable discoveries, but to present to the public views they have presented elsewhere. When those views invalidate the humanity of some people, they restrict speech as a public good.  …

The idea of freedom of speech does not mean a blanket permission to say anything anybody thinks. It means balancing the inherent value of a given view with the obligation to ensure that other members of a given community can participate in discourse as fully recognized members of that community.

How’s that for logical contortion? Silencing speakers is an effort to protect free speech! As noted by Robby Soave on Reason.com, “... free speech is not a public good. It is an individual right.” This cannot be compromised by the left’s endlessly flexible conceptualization of “hate speech”, which can mean almost any opinion with which they disagree. Likewise, to “invalidate the humanity of some people” is a dangerously subjective standard. Mr. Baer is incorrect in his assertion that speakers must balance the “inherent” value of their views with an obligation to be “inclusive”. The only obligation is not to threaten or incite “imminent lawless action”. Otherwise, freedom of speech is a natural and constitutionally unfettered right to express oneself. Nothing could be more empowering!

Note that the constitution specifically prohibits the government from interfering with free speech. That includes any public institution such as state universities. Private parties, however, are free to restrict speech on their own property or platform. For example, a private college can legally restrict speech on its property and within its facilities. The owner of a social media platform can legally restrict the speech used there as well.

Howard Dean, a prominent if somewhat hapless member of the democrat establishment, recently tweeted this bit of misinformation: “Hate speech is not protected by the first amendment.” To this, Dean later added some mischaracterizations of Supreme Court decisions, prompting legal scholar Eugene Volokh to explain the facts. Volokh cites a number of decisions upholding a liberal view of free speech rights (and I do not use the word liberal lightly). Volokh also cites the “prior restraint doctrine”:

The government generally may not exclude speakers — even in government-owned ‘limited public forums’ — because of a concern that the speakers might violate the rules if they spoke.

If a speaker violates the law by engaging in threats or inciting violence, it is up to law enforcement to step in, ex post, just as they should when antifa protestors show their fascist colors through violent efforts to silence speakers. Volokh quotes from an opinion written by Supreme Court Justice Harry A. Blackmun:

… a free society prefers to punish the few who abuse rights of speech after they break the law than to throttle them and all others beforehand. It is always difficult to know in advance what an individual will say, and the line between legitimate and illegitimate speech is often so finely drawn that the risks of freewheeling censorship are formidable.

Imprecision and Unsettled Science


Last week I mentioned some of the inherent upward biases in the earth’s more recent surface temperature record. Measuring a “global” air temperature at the surface is an enormously complex task, requiring the aggregation of measurements taken using different methods and instruments (land stations, buoys, water buckets, ship water intakes, different kinds of thermometers) at points that are unevenly distributed across latitudes, longitudes, altitudes, and environments (sea, forest, mountain, and urban). Those measurements must be extrapolated to surrounding areas that are usually large and environmentally diverse. The task is made all the more difficult by the changing representation of measurements taken at these points, and changes in the environments at those points over time (e.g., urbanization). The spatial distribution of reports may change systematically and unsystematically with the time of day (especially onboard ships at sea).

The precision with which anything can be measured depends on the instrument used. Beyond that, there is often natural variation in the thing being measured. Some thermometers are better than others, and the quality of these instruments has varied tremendously over the roughly 165-year history of recorded land temperatures. The temperature itself at any location is subject to variation as the air shifts, but temperature readings are like snapshots taken at points in time, and may not be representative of areas nearby. In fact, the number of land weather stations used in constructing global temperatures has declined drastically since the 1970s, which implies an increasing error in approximating temperatures within each expanding area of coverage.

The point is that a statistical range of variation exists around each temperature measurement, and there is additional error introduced by vagaries of the aggregation process. David Henderson and Charles Hooper discuss the handling of temperature measurement errors in aggregation and in discussions of climate change. The upward trend in the “global” surface temperature between 1856 and 2004 was about 0.8° C, but a 95% confidence interval around that change is ±0.98° C. (I suspect even that interval is too narrow, given the sketchiness of the early records.) In other words, from a statistical perspective, one cannot reject the hypothesis that the global surface temperature was unchanged for the full period.
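
The hypothesis-test logic behind that last sentence can be made explicit. A minimal sketch, using the change and confidence interval reported by Henderson and Hooper:

```python
# If the 95% confidence interval around the estimated change includes zero,
# a zero change cannot be rejected at the 5% significance level.
trend_change = 0.80   # estimated change in "global" surface temperature, 1856-2004 (deg C)
ci_half_width = 0.98  # 95% confidence interval half-width (deg C)

low, high = trend_change - ci_half_width, trend_change + ci_half_width
print(f"95% CI: [{low:+.2f}, {high:+.2f}] deg C")
if low <= 0.0 <= high:
    print("Zero lies inside the interval: the measured change is statistically "
          "indistinguishable from no change.")
```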

Henderson and Hooper make some other salient points related to the negligible energy impulse from carbon forcings relative to the massive impact of variations in solar energy and the uncertainty around the behavior of cloud formation. It’s little wonder that climate models relying on a carbon-forcing impact have erred so widely and consistently.

In addition to reinforcing the difficulty of measuring surface temperatures and modeling the climate, the implication of the Henderson and Hooper article is that policy should not be guided by measurements and models subject to so much uncertainty and such minor impulses or “signals”. The sheer cost of abating carbon emissions is huge, though some alternative means of doing so are better than others. Costs increase with the degree of abatement (or the degree of replacement with low-carbon alternatives), and I suspect that the incremental benefit decreases. Strict limits on carbon emissions reduce economic output. On a broad scale, that would impose a sacrifice of economic development and incomes in the non-industrialized world, not to mention low-income minorities in the developed world. One well-known estimate by William Nordhaus involved a 90% reduction in world carbon emissions by 2050. He calculated a total long-run cost of between $17 trillion and $22 trillion. Annually, the cost was about 3.5% of world GDP. The climate model Nordhaus used suggested that the reduction in global temperatures would be between 1.3º and 1.6º C, but in view of the foregoing, that range is highly speculative and likely to be an extreme exaggeration. And note the small width of the “confidence interval”. That range is not at all a confidence interval in the usual sense; it is a “stab” at the uncertainty in a forecast of something many years hence. Nordhaus could not possibly have considered all sources of uncertainty in arriving at that range of temperature change, least of all the errors in measuring global temperature to begin with.

Climate change activists would do well to spend their Earth Day educating themselves about the facts of surface temperature measurement. Their usual prescription is to extract resources and coercively deny future economic gains in exchange for steps that might or might not solve a problem they insist is severe. The realities are that the “global temperature” is itself subject to great uncertainty, and its long-term trend over the historical record cannot be distinguished statistically from zero. In terms of impacting the climate, natural forces are much more powerful than carbon forcings. And the models on which activists depend are so rudimentary, and so error prone and biased historically, that taking your money to solve the problem implied by their forecasts is utter foolishness.

Better Bids and No Bumpkins


United Airlines‘ mistreatment of a passenger last week in Chicago had nothing to do with overbooking, but commentary on the issue of overbooking is suddenly all the rage. The fiasco in Chicago began when four United employees arrived at the gate after a flight to Louisville had boarded. The flight was not overbooked, just full, but the employees needed to get to Louisville. United decided to “bump” four passengers to clear seats for the employees. They used an algorithm to select four passengers to be bumped based on factors like lowest-fare-paid and latest purchase. The four passengers were offered vouchers for a later flight and a free hotel night in Chicago. Three of the four agreed, but the fourth refused to budge. United enlisted the help of Chicago airport security officers, who dragged the unwilling victim off the flight, bloodying him in the process. It was a terrible day for United‘s public relations, and the airline will probably end up paying an expensive out-of-court settlement to the mistreated passenger.
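
United's actual selection algorithm is not public, but the factors reported (lowest fare paid, latest purchase) suggest something like the following sketch. The fields, weights, and ordering here are assumptions for illustration:

```python
# Hypothetical bump-selection rule of the kind described above.
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float
    purchase_order: int       # higher = purchased later
    frequent_flier_tier: int  # higher = more protected (an assumed factor)

def select_bumps(passengers, seats_needed):
    # Bump low-tier, low-fare, late-purchasing passengers first.
    ranked = sorted(passengers,
                    key=lambda p: (p.frequent_flier_tier, p.fare_paid, -p.purchase_order))
    return ranked[:seats_needed]

flight = [Passenger("A", 89.0, 3, 0), Passenger("B", 240.0, 1, 2),
          Passenger("C", 89.0, 5, 0), Passenger("D", 130.0, 2, 1)]
print([p.name for p in select_bumps(flight, 2)])  # -> ['C', 'A']
```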

Putting the unfortunate Chicago affair aside, is overbooking a big problem? Airlines always have cancellations, so they overbook in order to keep the seats filled. That means higher revenue and reduced costs on a per passenger basis. Passengers are rarely bumped from flights involuntarily: about 0.005% in the fourth quarter of 2016, according to the U.S. Department of Transportation. “Voluntarily denied boardings” are much higher: about 0.06%. Both of these figures seem remarkably low as “error rates”, in a manner of speaking.
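
The economics of overbooking follow from no-show rates. Under a simple binomial model of independent show-ups (all numbers below are hypothetical), the chance of an oversale on a modestly overbooked flight is small:

```python
# P(more passengers show up than seats) under independent show-ups.
from math import comb

seats, sold, p_show = 180, 184, 0.95  # assumed 5% no-show rate, 4 seats oversold

def prob_oversale(seats, sold, p_show):
    return sum(comb(sold, k) * p_show**k * (1 - p_show)**(sold - k)
               for k in range(seats + 1, sold + 1))

print(f"P(oversale) = {prob_oversale(seats, sold, p_show):.2%}")
```

The airline collects revenue on the extra tickets almost every flight and pays compensation only in the rare oversale, which is why the practice persists.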

Issues like the one in Chicago do not arise under normal circumstances because “bumps” are usually resolved before boarding takes place, albeit not always to everyone’s satisfaction. Still, if airlines were permitted (and willing) to bid sufficiently high rates of compensation to bumped ticket-holders, there would be no controversy at all. All denied boardings would be voluntary. There are a few other complexities surrounding the rules for compensation, which depend on estimates of the extra time necessary for a bumped traveler to reach their final destination. If less than an extra hour, for example, then no compensation is required. In other circumstances, the maximum compensation level allowed by the government is $1,300. These limits can create an impasse if a passenger is unwilling to accept the offer (or non-offer when only an hour is at stake). The only way out for the airline, in that case, is an outright taking of the passenger’s boarding rights. Of course, this possibility is undoubtedly in the airline’s “fine print” at the time of the original purchase.

No cap on a bumped traveler’s compensation was anticipated when economist Julian Simon first proposed such a scheme in 1968:

The solution is simple. All that need happen when there is overbooking is that an airline agent distributes among the ticket-holders an envelope and a bid form, instructing each person to write down the lowest sum of money he is willing to accept in return for waiting for the next flight. The lowest bidder is paid in cash and given a ticket for the next flight. All other passengers board the plane and complete the flight to their destination.

Today’s system is a simplified version of Simon’s suggestion, and somewhat bastardized, given the federal caps on compensation. If the caps were eliminated without other offsetting rule changes, would the airlines raise their bids sufficiently to eliminate most involuntary bumps? There would certainly be pressure to do so. Of course, the airlines already get to keep the fares paid on no-shows if they are non-refundable tickets.
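
Simon's mechanism is simple enough to sketch directly. A minimal version, paying each winner his own bid (a uniform clearing price paid to all winners is a common variant):

```python
# Julian Simon's reverse auction: each ticket-holder submits the lowest payment
# he would accept to take the next flight; the lowest bids win until enough
# seats are freed. Names and amounts are hypothetical.
def run_bump_auction(bids, seats_needed):
    # bids: {passenger: lowest acceptable payment}
    return sorted(bids.items(), key=lambda kv: kv[1])[:seats_needed]

bids = {"Alice": 300, "Bob": 1500, "Cara": 450, "Dan": 200}
for name, payment in run_bump_auction(bids, 2):
    print(f"{name} volunteers for ${payment}")  # Dan for $200, Alice for $300
```

With no cap on bids, the auction always clears: every denied boarding is voluntary by construction.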

John Cochrane makes another suggestion: limit ticket sales to the number of seats on the plane and allow a secondary market in tickets to exist, just as resale markets exist for concert and sports tickets. Bumps would be a thing of the past, or at least they would all be voluntary and arranged for mutual gain by the buyers and sellers. Some say that peculiarities of the airline industry argue that the airlines themselves would have to manage any resale market in their own tickets (see the comments on Cochrane’s post). That includes security issues, tickets with special accommodations for disabilities, meals, or children, handling transfers of frequent flier miles along with the tickets, and senior discounts.

Conceivably, trades on such a market could take place right up to the moment before the doors are closed on the plane. Buyers would still have to go through security, however, and you need a valid boarding pass to get through security. That might limit the ability of the market to clear in the final moments before departure: potential buyers would simply not be on hand. Only those already through security, on layovers, or attempting to rebook on the concourse could participate without changes in the security rules. Perhaps this gap could be minimized if last-minute buyers qualified for TSA pre-check. Also, with the airline’s cooperation, electronic boarding passes would have to be changeable so that the new passenger’s name would match his or her identification. Clearly, the airlines would have to be active participants in arranging these trades, but a third-party platform for conducting trades is not out of the question.

Could other concerns about secondary trading be resolved on a third-party platform? Probably, but again, solutions would require participation by the airlines. Trading miles along with the ticket could be made optional (after all, the miles would have a market value), but the trade of miles would have to be recorded by the airline. The tickets themselves could trade just as they were sold originally by the airline, whether the accommodations are still necessary or not. The transfer of a discounted ticket might obligate the buyer to pay the airline a sum equal to the discount unless they qualified under the same discount program. All of these problems could be resolved.

Would the airlines want a secondary market in their tickets? Probably not. If there are gains to be made on resale, they would rather capture as much of them as they possibly can. The federal caps on compensation to bumped fliers give the airlines a break in that regard, and they should be eliminated in the interests of consumer welfare. Let’s face it, the airlines know that a seat on an overbooked flight is a scarce resource; the owner (the original ticket buyer) should be paid fair market value if the airline wants to take their ticket for someone else. Airlines must increase their bids until the market clears, which means that fliers would never be bumped involuntarily. A secondary market in tickets, however, would obviate the practice of overbooking and allow fliers to capture the gain in exchange for surrendering their ticket. Once purchased, it belongs to them.

Playing Pretend Science Over Cocktails


It’s a great irony that our educated and affluent classes have been largely zombified on the subject of climate change. Their brainwashing by the mainstream media has been so effective that these individuals are unwilling to consider more nuanced discussions of the consequences of higher atmospheric carbon concentrations, or any scientific evidence to suggest contrary views. I recently attended a party at which I witnessed several exchanges on the topic. It was apparent that these individuals are conditioned to accept a set of premises while lacking real familiarity with supporting evidence. Except in one brief instance, I avoided engaging on the topic, despite my bemusement. After all, I was there to party, and I did!

The zombie alarmists express their views within a self-reinforcing echo chamber, reacting to each others’ virtue signals with knowing sarcasm. They also seem eager to avoid any “denialist” stigma associated with a contrary view, so there is a sinister undercurrent to the whole dynamic. These individuals are incapable of citing real sources and evidence; they cite anecdotes or general “news-say” at best. They confuse local weather with climate change. Most of them haven’t the faintest idea how to find real research support for their position, even with powerful search engines at their disposal. Of course, the search engines themselves are programmed to prioritize the very media outlets that profit from climate scare-mongering. Catastrophe sells! Those media outlets, in turn, are eager to quote the views of researchers in government who profit from alarmism in the form of expanding programs and regulatory authority, as well as researchers outside of government who profit from government grant-making authority.

The Con in the “Consensus”

Climate alarmists take assurance in their position by repeating the false claim that 97% of climate scientists believe that human activity is the primary cause of warming global temperatures. The basis for this strong assertion comes from an academic paper that reviewed other papers, the selection of which was subject to bias. The 97% figure was not a share of “scientists”. It was the share of the selected papers stating agreement with the anthropogenic global warming (AGW) hypothesis. And that figure is subject to other doubts, in addition to the selection bias noted above: the categorization into agree/disagree groups was made by “researchers” who were, in fact, environmental activists, who counted several papers written by so-called “skeptics” among the set that agreed with the strong AGW hypothesis. So the “97% of scientists” claim is a distortion of the actual findings, and the findings themselves are subject to severe methodological shortcomings. On the other hand, there are a number of widely-recognized, natural reasons for climate change, as documented in this note on 240 papers published over just the first six months of 2016.

Data Integrity

It’s rare to meet a climate alarmist with any knowledge of how temperature data is actually collected. What exactly is the “global temperature”, and how can it be measured? It is a difficult undertaking, and it wasn’t until 1979 that it could be done with any reliability. According to Roy Spencer, that’s when satellite equipment began measuring:

… the natural microwave thermal emissions from oxygen in the atmosphere. The intensity of the signals these microwave radiometers measure at different microwave frequencies is directly proportional to the temperature of different, deep layers of the atmosphere.

Prior to the deployment of weather satellites, and starting around 1850, temperature records came only from surface temperature readings. These are taken at weather stations on land and collected at sea, and they are subject to quality issues that are generally unappreciated. Weather stations are unevenly distributed and they come and go over time; many of them produce readings that are increasingly biased upward by urbanization. Sea surface temperatures are collected in different ways with varying implications for temperature trends. Aggregating these records over time and geography is a hazardous undertaking, and these records are, unfortunately, the most vulnerable to manipulation.

The urbanization bias in surface temperatures is significant. According to this paper by Ross McKitrick, the number of weather stations counted in the three major global temperature series declined by more than 4,500 since the 1970s (over 75%), and most of those losses were rural stations. From McKitrick’s abstract:

“The collapse of the sample size has increased the relative fraction of data coming from airports to about 50% (up from about 30% in the late 1970s). It has also reduced the average latitude of source data and removed relatively more high altitude monitoring sites. Oceanic data are based on sea surface temperature (SST) instead of marine air temperature (MAT)…. Ship-based readings changed over the 20th century from bucket-and-thermometer to engine-intake methods, leading to a warm bias as the new readings displaced the old.”

Think about that the next time you hear about temperature records, especially NOAA reports on a “new warmest month on record”.

Data Manipulation

It’s rare to find alarmists having any awareness of the scandal at East Anglia University, which involved data falsification by prominent members of the climate change “establishment”. That scandal also shed light on corruption of the peer-review process in climate research, including a bias against publishing work skeptical of the accepted AGW narrative. Few are aware now of a very recent scandal involving manipulation of temperature data at NOAA in which retroactive adjustments were applied in an effort to make the past look cooler and more recent temperatures warmer. There is currently an FOIA request outstanding for communications between the Obama White House and a key scientist involved in the scandal. Here are Judith Curry’s thoughts on the NOAA temperature manipulation.

Think about all that the next time you hear about temperature records, especially NOAA reports on a “new warmest month on record”.

Other Warming Whoppers

Last week on social media, I noticed a woman emoting about the way hurricanes used to frighten her late mother. This woman was sharing an article about the presumed negative psychological effects that climate change was having on the general public. The bogus premises: that we are experiencing an increase in the frequency and severity of storms, that climate change is causing the storms, and that people are scared to death about it! Just to be clear, I don’t think I’ve heard much in the way of real panic, and real estate prices and investment flows don’t seem to be under any real pressure. In fact, the frequency and severity of severe weather have been in decline even as atmospheric carbon concentrations have increased over the past 50 years.

I heard another laughable claim at the party: that maps are showing great areas of the globe becoming increasingly dry, mostly at low latitudes. I believe the phrase “frying” was used. That is patently false, but I believe it’s another case in which climate alarmists have confused model forecasts with fact.

The prospect of rising sea levels is another matter that concerns alarmists, who always fail to note that sea levels have been increasing for a very long time, well before carbon concentrations could have had any impact. In fact, the sea level increases in the past few centuries are a rebound from lows during the Little Ice Age, and levels are now back to where the seas were during the Medieval Warm Period. But even those fluctuations look minor by comparison to the increases in sea levels that occurred over 8,000 years ago. Sea levels are rising at a very slow rate today, so slowly that coastal construction is proceeding as if there is little if any threat to new investments. While some of this activity may be subsidized by governments through cheap flood insurance, real money is on the line, and that probably represents a better forecast of future coastal flooding than any academic study can provide.

Old Ideas Die Hard

Two enduring features of the climate debate are 1) the extent to which so-called “carbon forcing” models of climate change have erred in over-predicting global temperatures, and 2) the extent to which those errors have gone unnoticed by the media and the public. The models have been plagued by a number of issues, not least that the climate is not a simple system. However, one basic shortcoming has to do with the existence of strong feedback effects: the alarmist community has asserted that feedbacks are positive, on balance, magnifying the warming impact of a given carbon forcing. In fact, the opposite seems to be true: second-order responses due to cloud cover, water vapor, and circulation effects are negative, on balance, at least partially offsetting the initial forcing.
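
The feedback logic has a standard textbook representation: equilibrium warming is the no-feedback response scaled by 1/(1 − f), where f is the net feedback factor. The sketch below uses commonly cited ballpark values (roughly 0.3° C per W/m² for the no-feedback sensitivity and 3.7 W/m² of forcing for doubled CO2); the feedback factors themselves are illustrative only:

```python
# Equilibrium warming dT = lambda0 * F / (1 - f): f > 0 amplifies the
# no-feedback response, f < 0 damps it. Values are ballpark/illustrative.
lambda0 = 0.3  # no-feedback (Planck) sensitivity, deg C per W/m^2 (approximate)
forcing = 3.7  # canonical forcing for a doubling of CO2, W/m^2

for f in (0.5, 0.0, -0.5):  # net positive, zero, and net negative feedback
    dT = lambda0 * forcing / (1 - f)
    print(f"feedback f = {f:+.1f}: equilibrium warming ~ {dT:.2f} deg C")
```

The same 1.1° C no-feedback response becomes 2.2° C if feedbacks are net positive (f = 0.5) and only about 0.7° C if they are net negative (f = −0.5), which is why the sign of the net feedback dominates the debate.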

Fifty Years Ain’t History

One other amazing thing about the alarmist position is an insistence that the past 50 years should be taken as a permanent trend. On a global scale, our surface temperature records are sketchy enough today, but recorded history is limited to the very recent past. There are recognized methods for estimating temperatures in the more distant past by using various temperature proxies. These are based on measurements of other natural phenomena that are temperature-sensitive, such as ice cores, tree rings, and matter within successive sediment layers such as pollen and other organic compounds.

The proxy data has been used to create temperature estimates into the distant past. A basic finding is that the world has been this warm before, and even warmer, as recently as 1,000 years ago. This demonstrates the wide range of natural variation in the climate, and today’s global temperatures are well within that range. At the party I mentioned earlier, I was amused to hear a friend say, “Ya’ know, Greenland isn’t supposed to be green”, and he meant it! He is apparently unaware that Greenland was given that name by Viking settlers around 1000 AD, who inhabited the island during a warm spell lasting several hundred years… until it got too cold!

Carbon Is Not Poison

The alarmists take the position that carbon emissions are unequivocally bad for people and the planet. They treat carbon as if it is the equivalent of poisonous air pollution. The popular press often illustrates carbon emissions as black smoke pouring from industrial smokestacks, but like oxygen, carbon dioxide is a colorless gas, and one upon which life itself depends.

Our planet’s vegetation thrives on carbon dioxide, and increasing carbon concentrations are promoting a “greening” of the earth. Crop yields are increasing as a result; reforestation is proceeding as well. The enhanced vegetation provides an element of climate feedback against carbon “forcings” by serving as a carbon sink, absorbing increasing amounts of carbon and converting it to oxygen.

Matt Ridley has noted one of the worst consequences of the alarmists’ carbon panic and its influence on public policy: the vast misallocation of resources toward carbon reduction, much of it dedicated to subsidies for technologies that cannot pass economic muster. Consider that those resources could be devoted to many other worthwhile purposes, like bringing electric power to third-world families who otherwise must burn dung inside their huts for heat; for that matter, perhaps the resources could be left under the control of taxpayers who can put them to the uses they value most highly. The regulatory burdens imposed by these policies on carbon-intensive industries represent lost output that can’t ever be recouped, and all in the service of goals that are of questionable value. And of course, the anti-carbon efforts almost certainly reflect a diversion of resources to the detriment of more immediate environmental concerns, such as mitigating truly toxic industrial pollutants.

The priorities underlying the alarm over climate change are severely misguided. The public should demand better evidence than consistently erroneous model predictions and manipulated climate data. Unfortunately, a media eager for drama and statism is complicit in the misleading narrative.

FYI: The cartoon at the top of this post refers to the climate blog climateaudit.org. The site’s blogger Steve McIntyre did much to debunk the “hockey stick” depiction of global temperature history, though it seems to live on in the minds of climate alarmists. McIntyre appears to be on an extended hiatus from the blog.

Courts and Their Administrative Masters


Supreme Court nominee Neil Gorsuch says the judicial branch should not be obliged to defer to government agencies within the executive branch in interpreting law. Gorsuch’s opinion, however, is contrary to an established principle guiding courts since the 1984 Supreme Court ruling in Chevron U.S.A. v. Natural Resources Defense Council. Under what is known as Chevron deference, courts ask only whether the administrative agency’s interpretation of the law is “reasonable”, even if other “reasonable” interpretations are possible. This gets particularly thorny when the original legislation is ambiguous on a given point. Gorsuch believes the Chevron standard subverts the Constitutional separation of powers and judicial authority, a point of great importance in an age of explosive growth in administrative rule-making at the federal level.

Ilya Somin offers a defense of Gorsuch’s position on Chevron deference, arguing that the doctrine violates the text of the Constitution, which authorizes the judiciary to decide matters of legal dispute without ceding power to the executive branch. The agencies, for their part, seem to be adopting increasingly expansive views of their authority:

“Some scholars argue that in many situations, agencies are not so much interpreting law, but actually making it by issuing regulations that often have only a tenuous basis in congressional enactments. When that happens, Chevron deference allows the executive to usurp the power of Congress as well as that of the judiciary.”

Jonathan Adler quotes a recent decision by U.S. Appeals Court Judge Kent Jordan in which he expresses skepticism regarding the wisdom of Chevron deference:

Deference to agencies strengthens the executive branch not only in a particular dispute under judicial review; it tends to the permanent expansion of the administrative state. Even if some in Congress want to rein an agency in, doing so is very difficult because of judicial deference to agency action. Moreover, the Constitutional requirements of bicameralism and presentment (along with the President’s veto power), which were intended as a brake on the federal government, being ‘designed to protect the liberties of the people,’ are instead, because of Chevron, ‘veto gates’ that make any legislative effort to curtail agency overreach a daunting task.

In short, Chevron ‘permit[s] executive bureaucracies to swallow huge amounts of core judicial and legislative power and concentrate federal power in a way that seems more than a little difficult to square with the Constitution of the [F]ramers’ design.’

The unchecked expansion of administrative control is a real threat to the stability of our system of government, our liberty, and the health of our economic system. It imposes tremendous compliance costs on society and often violates individual property rights. Regulatory actions are often taken without performing a proper cost-benefit analysis, and the decisions of regulators may be challenged initially only within a separate judicial system in which courts are run by the agencies themselves! I covered this point in more detail one year ago in “Hamburger Nation: An Administrative Nightmare“, based on Philip Hamburger’s book “Is Administrative Law Unlawful?“.

Clyde Wayne Crews of the Competitive Enterprise Institute gives further perspective on the regulatory-state-gone-wild in “Mapping Washington’s Lawlessness: An Inventory of Regulatory Dark Matter“. He mentions some disturbing tendencies that may go beyond the implementation of legislative intent:

  1. agencies sometimes choose to wholly ignore some aspects of legislation;
  2. agencies tend to apply pressure on regulated entities on the basis of interpretations that stretch the meaning of such enabling legislation as may exist;
  3. as if the exercise of extra-legislative power were not enough, administrative actions frequently subvert the price mechanism in private markets, disrupting the flow of accurate information about resource scarcity and the operation of incentives that give markets their great advantages.

All of these behaviors fit Crews’ description of “regulatory dark matter.”

Chevron deference represents an unforced surrender by the judicial branch to the exercise of power by the executive. As Judge Jordan notes in additional quotes provided by Adler at the link above, this does not deny the usefulness or importance of an agency’s specialized expertise. Nevertheless, the courts should not abdicate their role in reviewing the evidence an agency develops to support an action, or the reasonableness of the agency’s application of that evidence relative to alternative courses of action. Nor should the courts abdicate their role in ruling on the law itself. Judge Gorsuch is right: Chevron deference should be re-evaluated by the courts.

Benefit Mandates Bar Interstate Competition


The lack of interstate competition in health insurance does not benefit consumers, but promoting that kind of competition requires steps that are not widely appreciated. Most of those steps must take place at the state level. In fact, it is not well known that it is already legal for states to jointly create interstate “compacts” under Obamacare, though none have done so.

The chief problem is that states regulate insurance carriers and the policies they offer in a variety of ways. Coverage mandates vary from state to state, as do rules governing the coverage of pre-existing conditions, renewability, dependents, costs, and risk rating. John Seiler, writing at the Foundation for Economic Education, offers a great perspective on the fractured character of state regulations. Incumbent insurers within a state have natural advantages due to their existing relationships with local providers. Between the difficulty of forming a new network and the costs of customizing policies and obtaining approval in multiple states, there are significant barriers to entry at state lines.

Federalism is a principle I often support, but state benefit mandates and other regulations are perverse examples because they restrict the otherwise voluntary and victimless choices available to a state’s consumers. Well, victimless except perhaps for in-state monopolists and their cronyist protectors in state government. Many powers are reserved to the states under the Constitution, while the powers of the federal government are strictly limited. That’s well and good unless state governments infringe on the rights of individuals protected by the Constitution. In particular, under the so-called dormant Commerce Clause doctrine, state governments are prohibited from obstructing the flow of interstate commerce.

Here is a bit of history surrounding the evolution of state versus federal control over insurance markets, as told by Pennsylvania Insurance Commissioner Teresa Miller (as quoted by reporter Steve Esack):

Since the 1800s, the U.S. Supreme Court held individual states, not Congress, had the power to regulate insurance companies. The high court overturned that precedent, however, in a 1944 ruling, United States v. South-Eastern Underwriters, that said insurance sales constituted interstate trade and Congress could regulate insurance under the U.S. Constitution’s Commerce Clause.

But states cried foul. In response, Congress passed and President Harry S. Truman in 1945 signed the McCarran-Ferguson Act to grant a limited anti-trust provision so states could keep regulating insurance carriers. The law does not preclude cross-border sales. It means insurance companies must abide by different sets of rules and regulations and laws in 50 states.

Congress obviously recognized that state regulation of health insurance would create monopoly power and restrain trade, even if states place bridles on insurers and impose ostensible consumer protections. The solution was to exempt health insurers from broad federal regulation and anti-trust prosecution by the Department of Justice.

Last week, the House of Representatives passed a bill that would repeal McCarran-Ferguson for health insurers. However, that would do little to encourage cross-border competition as long as the tangle of state mandates and other regulations remains in place. The regulatory landscape would have to change under this kind of federal legislation, but how that would happen is an open question. Could court challenges be brought against state regulators and coverage mandates as anti-competitive? Would anti-trust actions be brought against incumbent carriers?

Robert Laszewski has strong objections to any new law that would allow interstate sales of health insurance as long as state benefit mandates remain in place for “local legacy” carriers. In particular, he believes it would encourage “cherry picking” of the best risks by market entrants who would be free of the mandates. Many of the healthiest individuals would jump at the chance to purchase stripped down, catastrophic coverage. That would leave the legacy carriers under the burden of mandates and deteriorating risk pools. Would states do this to their incumbent insurers without prodding by the courts? Would they simply drop the mandates? I doubt it.

No matter the end-state, there is likely to be a contentious transition. Promoting interstate competition in the health insurance market is a laudable goal, but it is not as simple as some health-care reformers would have us believe. Real competition requires action by the states to eliminate or liberalize regulations on benefit mandates, risk rating, and pre-existing conditions. Ultimately, the cost of coverage for high-risk individuals might have to be subsidized, whether means-tested or not, through a combination of support from the states, the federal government, and private charities. And of course, interstate competition really does require repeal of the health insurance provisions of McCarran-Ferguson.

Governments at any level can act against the well-being of consumers, despite the acknowledged benefits of decentralized governance over central control. Benefit mandates, whether imposed at the federal or state levels, are inimical to consumer choice, competition, efficient pricing, and often to the very concept of insurance. Those aren’t the sort of purposes federalism was intended to serve.

The CBO’s Obamacare Fantasy Forecast


The Congressional Budget Office (CBO) is still predicting strong future growth in the number of insured individuals under Obamacare, despite its past, drastic over-predictions for the exchange market and the slim chances that the Affordable Care Act’s expansion of Medicaid will be adopted by additional states. Now that Republican leaders have backed away from an unpopular health care plan they’d hoped would pass the House and meet the Senate’s budget reconciliation rules, it will be interesting to see how the CBO’s predictions pan out. The “decremental” forecasts it made for the erstwhile American Health Care Act (AHCA) were based on its current Obamacare “baseline”. A figure cited often by critics of the GOP plan was that 24 million fewer individuals would be insured by 2026 than under that baseline.

It was fascinating to see many supporters of the AHCA accept this “forecast” uncritically. With the AHCA’s failure, however, we’ve been given an opportunity to witness the distortion in what would have been a CBO counterfactual. What a wonderful life! We’re stuck with Obamacare for the time being, but this glimpse into the CBO’s delusions will be one of several silver linings for me.

Again, the projected loss of 24 million insured individuals under the AHCA combined a predicted actual loss of about 5 to 6 million with the absence of a projected Obamacare gain of 18 to 19 million. Those figures are from an excellent piece by Avik Roy in Forbes. I drew on that article extensively in my post on the AHCA prior to its demise. Here are some key points I raised then, reworded slightly to put more emphasis on the Obamacare forecasts:

  1. The CBO has repeatedly erred by a large margin in its forecasts of Obamacare exchange enrollment, overestimating 2016 enrollment by over 100% as recently as 2014.
  2. The AHCA changes relative to Obamacare were taken from CBO’s 2016 forecast, which is likely to over-predict Obamacare enrollment on the exchanges by at least 7 million, according to Roy.
  3. The CBO also assumes that all states will opt to participate in expanded Medicaid under Obamacare going forward. That is highly unlikely, and Roy estimates its impact on the CBO’s forecast at about 3 million individuals.
  4. The CBO believes that the Obamacare individual mandate has encouraged millions to opt for insurance. Roy says that assumption accounts for as much as 9 million of total enrollment across the individual and employer markets, as well as Medicaid.

Thus, Roy believes the CBO’s estimate of the coverage loss of 24 million individuals under the AHCA was too high by about 19 million!
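
The arithmetic behind that conclusion is worth making explicit. Assuming Roy’s three adjustments are roughly additive (a simplification on my part; the figures are his, from the list above):

\underbrace{7}_{\text{exchanges}} + \underbrace{3}_{\text{Medicaid}} + \underbrace{9}_{\text{mandate}} \approx 19 \text{ million of overstatement}, \qquad 24 - 19 = 5 \text{ million}

That remainder squares with the predicted actual loss of about 5 to 6 million cited earlier.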

In truth, Obamacare will be watered down by regulatory and other changes instituted by the Trump Administration, which has said it will not enforce Obamacare’s individual mandate. Coverage under the “new” Obamacare will devolve quickly if the CBO is correct about the impact of the individual mandate.

The CBO’s job is to “score” proposed legislation relative to current law; traditionally, it has made no attempt to account for dynamic effects that might arise from the changed incentives under a law. The results show it, and the Obamacare projections are no exception. In the case of Obamacare, however, the CBO seems to have applied certain incentive effects selectively. The supporters of the AHCA might have helped their case by focusing on the flaws in the CBO’s baseline assumptions. We should keep that in mind with respect to any future health care legislation, not to mention tax reform!