Obamanomics and Opportunity Knocked Off

Another Obama fallacy and a new, binding constraint on voluntary private arrangements: in the latest example of administrative rule-making gone berserk, the Obama Administration (via the Department of Labor) is proposing a drastic change in the definition of an exempt employee, raising the salary threshold for the exemption from $23,660 to as much as $52,000. This is likely to change the status of a large number of workers, but as Warren Meyer explains, not in the way the administration hopes.

Obama and his advisors imagine that this change will actually increase the incomes of a large number of workers — that employers will begin paying overtime to hard-working supervisory and administrative employees. Meyer quotes Politico‘s headline: “Barack Obama poised to hike wages for millions.” But employers are not indifferent to the cost of a given labor input.

As Meyer asserts, currently exempt employees earning salaries between the current and proposed thresholds may well be converted to hourly, non-exempt status, and those now putting in extra hours are likely to see their hours, and possibly their incomes, reduced under the new rules. Some employers will be able to automate certain tasks to compensate for the reduction in labor input, as Meyer suggests. Or perhaps more part-time workers will be hired.
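A bit of arithmetic shows why reclassification can leave pay unchanged. The numbers below are purely illustrative (not from Meyer’s post): an employer facing the new rule can set a lower hourly base rate so that 40 straight-time hours plus time-and-a-half overtime add up to exactly the old salary.

```python
salary = 35_000          # hypothetical exempt salary below the new threshold
hours_per_week = 50      # 40 straight-time hours plus 10 overtime hours
weekly_pay = salary / 52

# Solve base * 40 + 1.5 * base * 10 = weekly_pay for the hourly base rate:
straight, overtime = 40, hours_per_week - 40
base = weekly_pay / (straight + 1.5 * overtime)

new_weekly = base * straight + 1.5 * base * overtime
print(round(base, 2))        # 12.24 -- the "overtime-neutral" base rate
print(round(new_weekly, 2))  # 673.08, identical to the old weekly salary
```

The worker’s total pay is unchanged, but the hours are now metered: any cut in hours reduces pay directly, and the nominal “overtime premium” buys the worker nothing.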

There is another issue at stake, however, beyond the mere calculation of workers × hours × the wage rate. Meyer expresses disgust at the way the new threshold could change relationships between employers and certain employees. As he tells it, the change will convert ambitious young managers into clock-punchers. In case that sounds too much like a negative personality change, a more sympathetic view is that many workers do not mind putting in extra hours, even as it reduces their effective wage. They have their reasons, ranging from the non-pecuniary (simple work ethic, enjoyment, and pride in their contribution) to reward-driven competitiveness and ambition. Hours worked give exempt employees an additional margin along which to prove their value to the enterprise. Obama’s proposal takes that away, which may penalize employees with less talent but strong ambition. Opportunity’s knock is getting softer.

OTC Birth Control vs. State Control

Why would the Progressive Left oppose over-the-counter birth control? Let us count the reasons…

Senator Cory Gardner (R-CO) has proposed a bill to eliminate the federal requirement that a doctor’s prescription is needed to obtain birth control. According to Gardner,

Most other drugs with such a long history of safe and routine use are available for purchase over the counter, and contraception should join them.

Six other Republicans have signed on as co-sponsors. The change is sensible on many levels, from improving access to birth control to reducing health care costs, yet the Left and some so-called women’s advocates have reacted with horror. Most of what follows is discussed in two articles: “Why Liberals Oppose Over-the-Counter Birth Control”, by Jillian Kay Melchior, and “Republicans Push For Over-The-Counter Birth Control, Liberals Immediately Oppose The Plan”, from Before It’s News (BIN).

  1. “Free” birth control was offered under Obamacare. The Left claims that the OTC proposal is a conspiracy to eliminate federal funding of birth control and shift the cost burden back to women. Yet the bill does not change the coverage requirement in any way.
  2. The Left claims that the change to OTC will increase the cost of birth control. On one level, this is the same as #1. However, some have argued that the change will actually drive up the cost of contraception, and that’s a whole different level of delusional economics. Filling prescriptions involves much greater use of resources than OTC sales, particularly the time of the physician and staff, the pharmacist, and the buyer. OTC status would also remove a barrier to competition in the provision of birth control, which would reduce costs.
  3. Some physicians require an examination and even tests before they’ll write a birth control prescription, which can run into hundreds of dollars. Naturally, many of them would like to retain this flow of business, yet according to Melchior, “…the World Health Organization and the American Congress of Obstetricians and Gynecologists have confirmed that doctors can safely prescribe the pill without a full examination.” Freeing women of the need for a doctor’s blessing would improve access unambiguously.
  4. Melchior also reports that “Planned Parenthood alone makes around $1.2 billion each year from contraceptive services.” Naturally, Planned Parenthood would like to protect that flow of revenue, but the availability of OTC birth control would expose it to competition.

What nonsense people spout in defense of their political agenda, not to mention their rents! The proposal for OTC birth control should be a slam-dunk liberalization, one that no self-respecting Liberal or Libertarian should oppose. But apparently, for the Progressives, helping women is secondary to preserving state control and the “statist quo”.

Fitting Data To Models At NOAA

If the facts don’t suit your agenda, change them! The 18-year “hiatus” in global warming, which has made a shambles of climate model predictions, is now said to have been based on “incorrect data”, according to researchers at the National Oceanic and Atmospheric Administration (NOAA). Translation: they have created new data “adjustments” that tell a story more consistent with their preferred narrative, namely, that man-made carbon emissions are forcing global temperatures upward, more or less steadily. The New York Times’ report on the research took a fairly uncritical tone, despite immediate cautions and rebuttals from a number of authorities. On balance, the NOAA claims seem rather laughable.

Ross McKitrick has an excellent discussion of the NOAA adjustments on the Watts Up With That? blog (WUWT). His post reinforces the difficulty of aggregating temperature data in a meaningful way. A given thermometer in a fixed location can yield drifting temperatures over time due to changes in the surrounding environment, such as urbanization. In addition, weather stations are dispersed in irregular ways with extremely uneven coverage, and even worse, they have come and gone over time. There are gaps in the data that must be filled. There might be international differences in reporting practices as well. Sea surface temperature measurements are subject to even greater uncertainty. They can be broadly classified into temperatures collected on buoys and those collected by ships, and the latter have been taken in a variety of ways, from samples collected in various kinds of buckets to hull sensors, engine room intakes, and deck temperatures. The satellite readings, which are a recent development, are accurate in tracking changes, but the levels must be calibrated to other data. Here’s McKitrick on the measurements taken on ships:

… in about half the cases people did not record which method was used to take the sample (Hirahari et al. 2014). In some cases they noted that, for example, ERI readings were obtained but they [did] not indicate the depth. Or they might not record the height of the ship when the MAT reading is taken.

The upshot is that calculating a global mean temperature is a statistical exercise fraught with uncertainty. A calculated mean at any point in time is an estimate of a conceptual value. The estimate is one of many possible estimates around the “true” value. Given the measurement difficulties, any meaningful confidence interval for the true mean would likely be so broad as to render inconsequential the much-discussed temperature trends of the past 50 years.

McKitrick emphasizes the three major changes made by NOAA, all having to do with sea surface temperatures:

  1. NOAA has decided to apply an upward adjustment to bring buoy temperature records into line with ship temperatures. This is curious, because most researchers have concluded that the ship temperatures are subject to greater bias. Also, the frequency of buoy records has been rising as a share of total sea temperature readings.
  2. NOAA added extra weight to the buoy readings, a decision which was unexplained.
  3. They applied a relatively large downward adjustment to temperatures collected by ships during 1998-2000.

Even the difference between the temperatures measured by ships and buoys (0.12 degrees Celsius), taken at face value, has a confidence interval (95%?) that is about 29 times as large as the difference. That adjustments such as those above are made with a straight face is nothing short of preposterous.
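Taking the post’s numbers at face value, the point is easy to check. In the sketch below, the 29x figure is interpreted (my assumption) as the full width of an interval centered on the measured difference:

```python
diff = 0.12           # ship-minus-buoy temperature difference, deg C
ci_width = 29 * diff  # interval roughly 29 times the difference itself

lower = diff - ci_width / 2   # about -1.62 deg C
upper = diff + ci_width / 2   # about +1.86 deg C

# The interval easily straddles zero: the data cannot even establish
# the sign of the ship/buoy discrepancy, let alone justify a precise
# adjustment of 0.12 degrees.
print(lower < 0 < upper)  # True
```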

A number of other researchers have weighed in on the NOAA adjustments. Carl Beisner summarizes some of this work. He quotes McKitrick as well as Judith Curry:

I think that uncertainties in global surface temperature anomalies is [sic] substantially understated. The surface temperature data sets that I have confidence in are the UK group and also Berkeley Earth. This short paper in Science is not adequate to explain and explore the very large changes that have been made to the NOAA data set. The global surface temperature datasets are clearly a moving target.

There are a number of other posts this week on WUWT regarding the NOAA adjustments. Some of the experts, like Judith Curry, emphasize the new disparities created by NOAA’s adjustments with other well-regarded temperature series. It will be interesting to see how these differences are debated. Let’s hope that the discussion is driven wholly by science and not politics, but I fear that the latter will have a major impact on the debate. It has already.

Major Mistake: The Minimum Opportunity Wage

City leaders in St. Louis and Kansas City are the latest to fantasize that market manipulation can serve as a pathway to “economic justice”. They want to raise the local minimum wage to $15 by 2020, following similar actions in Los Angeles, Oakland and Seattle. They will harm the lowest-skilled workers in these cities, not to mention local businesses, their own local economies and their own city budgets. Like many populists on the national level with a challenged understanding of market forces (such as Robert Reich), these politicians won’t recognize the evidence when it comes in. If they do, they won’t find it politically expedient to own up to it. A more cynical view is that the hike’s gradual phase-in may be a deliberate attempt to conceal its negative consequences.

There are many reasons to oppose a higher minimum wage, or any minimum wage for that matter. Prices (including wages) are rich with information about demand conditions and scarcity. They provide signals for owners and users of resources that guide them toward the best decisions. Price controls, such as a wage floor like the minimum wage, short-circuit those signals and are notorious for their disastrous unintended (but very predictable) consequences. Steve Chapman at Reason Magazine discusses the mechanics of such distortions here.

Supporters of a higher minimum wage usually fail to recognize the relationship between wages and worker productivity. That connection is why the imposition of a wage floor leads to a surplus of low-skilled labor. Those with the least skills and experience are the most likely to lose their jobs, work fewer hours or not be hired. In another Reason article, Brian Doherty explains that this is a thorny problem for charities providing transitional employment to workers with low skills or marginal employability. He also notes the following:

All sorts of jobs have elements of learning or training, especially at the entry level. Merely having a job at all can have value down the line worth enormously more than the wage you are currently earning in terms of a proven track record of reliable employability or moving up within a particular organization.

The negative employment effects of a higher wage floor are greater if the employer cannot easily pass higher costs along to customers. That’s why firms in highly competitive markets (and their workers) are more vulnerable. This detriment is all the worse when a higher wage floor is imposed within a single jurisdiction, such as the city of St. Louis. Bordering municipalities stand to benefit from the distorted wage levels in the city, but the net effect will be worse than a wash for the region, as adjustments to the new, artificial conditions are not costless. Again, it is likely that the least capable workers and least resourceful firms will be harmed the most.

The negative effects of a higher wage floor are also greater when substitutes for low-skilled labor are available. Here is a video on the robot solution for fast food order-taking. In fact, today there are robots capable of preparing meals, mopping floors, and performing a variety of other menial tasks. Alternatively, more experienced workers may be asked to perform more menial tasks or work longer hours. Either way, the employer takes a hit. Ultimately, the best alternative for some firms will be to close.

The impact of the higher minimum on the wage rates of more skilled workers is likely to be muted. A correspondent of mine mentioned the consequences of wage compression. From the link:

In some cases, compression (or inequity) increases the risk of a fight or flee phenomonon [sic]–disgruntlement culminating in union organizing campaigns or, in the case of flee, higher turnover as the result of employees quitting. … all too often, companies are forced to address the problem by adjusting their entire compensation systems–usually upward and across-the-board. .. While wage adjustments may sound good for those who do not have to worry about profits and losses, the real impact for a company typically means it must either increase productivity or lay people off.

For those who doubt the impact of the minimum wage hike on employment decisions, consider this calculation by Mark Perry:

The pending 67% minimum wage hike in LA (from $9 to $15 per hour by 2020), which is the same as a $6 per hour tax (or $12,480 annual tax per full-time employee and more like $13,500 per year with increased employer payroll taxes…)….
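Perry’s figures are straightforward to reproduce; the 7.65% employer-side payroll tax rate (Social Security plus Medicare) is my assumption for the “more like $13,500” adjustment:

```python
HOURS_PER_YEAR = 40 * 52           # full-time: 2,080 hours per year

old_wage, new_wage = 9.00, 15.00   # LA minimum wage, phased in by 2020
print(round(100 * (new_wage - old_wage) / old_wage))  # 67 (percent hike)

hourly_tax = new_wage - old_wage            # the $6.00-per-hour "tax"
annual_tax = hourly_tax * HOURS_PER_YEAR
print(annual_tax)                           # 12480.0 per full-time employee

# Employer-side payroll taxes scale with the wage, raising the true cost:
print(round(annual_tax * 1.0765))           # 13435, roughly Perry's $13,500
```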

Don Boudreaux offers another interesting perspective, asking whether a change in the way the minimum wage is enforced might influence opinion:

“... if these policies were enforced by police officers monitoring workers and fining those workers who agreed to work at hourly wages below the legislated minimum – would you still support minimum wages?”

Proponents of a higher minimum wage often cite a study from 1994 by David Card and Alan Krueger purporting to show that a higher minimum wage in New Jersey actually increased employment in the fast food industry. Tim Worstall at Forbes discussed a severe shortcoming of the Card/Krueger study (HT: Don Boudreaux): Card and Krueger failed to include more labor-intensive independent operators in their analysis, instead focusing exclusively on employment at fast-food chain franchises. The latter were likely to benefit from the failure of independent competitors.

Another common argument put forward by supporters of higher minimum wages is that economic theory predicts positive employment effects if employers have monopsony power in hiring labor, or power to influence the market wage. This is a stretch: it describes labor market conditions in very few localities. Of course, any employer in an unregulated market is free to offer noncompetitive wages, but they will suffer the consequences: less skilled and less experienced hires, higher labor turnover and ultimately a competitive disadvantage. Such forces lead rational employers to offer competitive wages for the skill levels they require.

Minimum wages are also defended as an anti-poverty program, but this is a weak argument. A recent post at Coyote Blog explains “Why Minimum Wage Increases are a Terrible Anti-Poverty Program“. Among other points:

Most minimum wage earners are not poor. The vast majority of minimum wage jobs are held as second jobs or held by second earners in a household or by the kids of affluent households. …

Most people in poverty don’t make the minimum wage. In fact, the typically [sic] hourly income of the poor appears to be around $14 an hour. The problem is not the hourly rate, the problem is the availability of work. The poor are poor because they don’t get enough job hours. …

Many young workers or poor workers with a spotty work record need to build a reliable work history to get better work in the future…. Further, many folks without much experience in the job market are missing critical skills — by these I am not talking about sophisticated things like CNC machine tool programming. I am referring to prosaic skills you likely take for granted (check your privilege!) such as showing up reliably each day for work, overcoming the typical frictions of working with diverse teammates, and working to achieve management-set goals via a defined process.

Some of the same issues are highlighted by the Show-Me Institute, a Missouri think tank, in “Minimum Wage Increases Not Effective at Fighting Poverty“.

A higher minimum wage is one of those proposals that “sound good” to the progressive mind, but are counter-productive in the extreme. The cities of St. Louis and Kansas City would do well to avoid market manipulation that is likely to backfire.

Haunted By Food Demons

Glyphosate herbicide usage (as in Roundup) in the U.S. has increased dramatically over the last two decades, replacing the use of far more toxic herbicides on many crops. That’s one of the major points in a post at The Credible Hulk blog entitled “About those more caustic herbicides that glyphosate helped replace”. The increased use of glyphosate corresponded to heavier reliance on glyphosate-tolerant strains of genetically-engineered crops. The author provides charts and other details on the changing use of a number of different herbicides both over time and across crop varieties.

… the purpose of this [post] is to show that when opponents of GE technology and of glyphosate claim that GR crops are bad on the grounds that they increased glyphosate use, they are leaving out critical information that would be highly inconvenient for their narrative.

The use of insect-resistant GE crops has also been associated with a decline in total pesticide use. [The links above are all given in The Credible Hulk post].

There is a great deal of distortion prompted by certain interest groups who oppose the use of synthetic herbicides and insecticides and GMOs. Irrational fears among consumers are inflamed by this sort of propaganda. This post and this post give farmers’ perspectives on some of the misinformation with respect to glyphosate. No, farmers do not “drench” their crops with glyphosate prior to harvest. That claim is pure hyperbole.

It’s also important to note that organic crops are not free of treatments. So-called organic pesticides are often just as toxic as synthetic pesticides, and they are often used in heavier quantities. Furthermore, organic foods carry undeniable health risks to consumers. A balanced view must acknowledge the benefits of crop treatments to consumers (whether the treatments are organic or nonorganic), that residues on produce in the U.S. are minimal, that safety still dictates that consumers wash their produce, and that consumers deserve a free choice between crops grown conventionally or organically.

Busted Big Government

Alan Greenspan says we are “way underestimating” the U.S. national debt. His statements on this point make a great follow-up to last night’s post on bailouts. Here are a couple of recent Greenspan quotes from an article by Nicholas Ballasy:

“Largely because we are not including what I would call contingent liabilities, that is the issue of, which is answered by a question: what is the probability that in today’s environment JP Morgan would be allowed to default? The answer is zero or less.”

Now, that means that whole balance sheet is a contingent liability. Now to be sure, while it’s contingent, there’s no interest payments but ultimately that overhangs the structure because we have committed in so many different ways to guarantee this, that and the other thing. It’s not only Fannie and Freddie but it’s a whole series of financial institutions and, regrettably, it is also non-financial institutions.

The bailout barometer I mentioned last night is an eye-opener, but it reflects a very incomplete view of the contingent liabilities faced by the government. Ballasy discusses some massive unfunded liabilities associated with programs like Social Security, which has a trust fund that Greenspan calls “meaningless”:

The Social Security and Medicare Trustees 2014 annual report said while legislation is needed to address all of Social Security’s financial imbalances, ‘the need has become most urgent with respect to the program’s disability insurance component. Lawmakers need to act soon to avoid automatic reductions in payments to DI beneficiaries in late 2016.’

Lawrence Lindsey, an economic official in the Bush Administration, says the real national debt is closer to 300 percent of GDP when unfunded obligations for Social Security and Medicare are added. The fast-dissipating disability insurance fund was the subject of another post here two days ago. It is a case study in irresponsible governance. Here is Ballasy with another Greenspan quote:

According to Greenspan, entitlement spending in the U.S. was 4.7 percent of GDP in 1967 compared to more than 14 percent today. ‘Had we kept it at that level, our productivity would be far higher today. The average wage would be very significantly higher, the standard of living would be higher and what we have to do is think about how we are going to shrink that pie back and, to me, that is the single most important problem that confronts this country,’ he said.

Shrinking the ongoing flow of entitlements is a tall political order. Avoiding the contingencies that would add to existing obligations calls for economic policies that promote stability, rather than boom and bust cycles that follow misguided efforts to stimulate the economy. Still another matter is to deal with the obligations that already exist. Higher taxes, inflation and default do not represent attractive policy options, but our activist government has placed us squarely in that corner.

Bailouts and Destruction: Your Risk, Our Reward

The federal government creates some artificial incentives for financial risk taking. These are mostly guarantees against losses, either explicit or implied by similar, past acts of loss indemnification, i.e., bailouts. Under this regime, successes accrue to private risk takers while failures are borne by taxpayers and others from whom resources are diverted by artificially low user costs. This is a huge source of waste, pure and simple.

The financial crisis that began in 2007 featured bailouts of a variety of privately-owned institutions such as banks and insurers, as well as government-sponsored enterprises (GSEs: Fannie Mae and Freddie Mac). So-called financial reforms enacted in the wake of the crisis, especially those embodied in the Dodd-Frank Act, did nothing to eliminate the expectation that losses, should they occur, would be met by a rescue package.

This approach to financial regulation, while a natural response to a market failure narrative, only increases the vulnerability of the financial system to regulatory failure. Regulatory failure played an important role in the last crisis by concentrating resources in the housing sector, encouraging reliance on credit-rating agencies, and driving financial institutions to concentrate their holdings in mortgage-backed securities. Dodd-Frank gives regulators more authority and broad discretion to shape the financial sector and the firms operating within it.

The Federal Reserve Bank of Richmond’s so-called “bailout barometer” shows the share of implicit and explicit federal guarantees on a large class of financial liabilities. It reached a total of 60% at the end of 2013. When losses are covered, who cares about risk? Did any of this change, as a lesson learned, after the last financial crisis? No. Instead, we have this:

The 2010 Dodd-Frank law has certified various large institutions as “systemically important,” as prelude to burying them in costly regulation ostensibly for safety purposes but partly to divert lending to politically favored sectors. This hasn’t helped the economy. It probably hasn’t done much to make the financial system safer.

That quote is from Holman Jenkins in “Bank Bashing: the Modern Nero’s Fiddle”. Jenkins accepts “too big to fail” (TBTF) and government guarantees as a reality, blaming the financial crisis on other aspects of government regulatory policy. And there is plenty of evidence that the government contributed in a number of ways, contrary to the usual media narrative. I don’t disagree, but federal guarantees have distorted, and still distort, risk-reward tradeoffs in the financial sector. And the guarantees don’t stop there: federal bailouts of large or politically-connected firms in other industries are now more commonplace and they will continue. Today, the expectation of federal bailouts even extends to other levels of government saddled with insolvent pension funds and other debts that can’t be paid.

Even now, the federal government is creating conditions that may lead to another financial crisis: in addition to the high bailout barometer, bank reserves are plentiful thanks to Federal Reserve policy, and the government seems eager to have those reserves invested in new mortgage lending. Here is John Ligon on this point:

Two recent examples: Fannie Mae recently started a program guaranteeing loans with as little as 3 percent down payments, and, earlier this year, the Federal Housing Administration reduced by 50 basis points the annual mortgage insurance premiums it charges borrowers. …

A great irony, though, is that these affordable housing initiatives have had the exact opposite of their intended impact: These programs encourage higher levels of debt, increased housing prices (and lower affordability) in many markets, and greater risk within the overall housing finance system.

There is no doubt that taxpayers will be called upon to cover losses should another financial bubble pop, whether that is in housing or other assets. The one-sided risk this creates represents a transfer of wealth to the financial sector. What’s worse is the contribution of government policy to the sort of economic instability this creates.

Counter-Cyclical Disability Debauchery

Should economic growth drive changes in the Social Security disability insurance rolls? It appears to have done just that over the past ten years, suggesting that the program is, to a degree, a sham. The Political Calculations blog has some fascinating charts and discussion of this phenomenon entitled “The Disability Dumping Ground”. It shows that the number of workers receiving disability benefits rose across many age cohorts, and especially more “mature” cohorts, as the economy entered the Great Recession. Successful claims continued to rise throughout the weak economic recovery, but the increases began to taper as economic activity finally neared and exceeded pre-recession levels. However, the post notes that:

the vast majority of those who were added to Social Security’s disability rolls during the period from 2008 through 2013 are still on them.

One must question whether the Obama Administration had a motive to encourage more latitude in the approval of disability claims during this period:

And because being classified as disabled would remove such individuals from being counted as both unemployed and part of the U.S. civilian labor force, the Obama administration had a strong incentive to get the program’s administrators to look the other way at the disability insurance applications for benefits that were being made as jobless benefits were expiring, as the resulting math would considerably reduce the official unemployment rates reported by the U.S. Bureau of Labor Statistics.

Of course, an intentional effort to bring more of the long-term unemployed onto the disability rolls might be defended as counter-cyclical fiscal policy and on immediate humanitarian grounds. However, the accelerated depletion of the Disability Insurance Trust Fund implies “that the payments to individuals receiving … benefits will be reduced by nearly one-fifth.” Such cuts would be extremely unjust to those suffering from more legitimate disabilities. In any case, this makes the pretext under which payroll taxes are collected highly suspect.

It would be interesting to know whether changes in the disability rolls or benefit payments correlated with economic growth over a longer history. The social gains from pooling risks at this level are easily frittered away through mismanagement and fraudulent activity, faults to which government activity is particularly prone.

Federal Strings and Executive Puppeteers

We often think of government bureaucracy as a force of stasis, but it is unlikely to promote stability. At all levels, government administrative organs have a way of growing, absorbing increasing levels of resources and constricting private activity by imposing increasingly complex rules. A large administrative apparatus tends to calcify the economy, undermining growth or even a sustained level of economic activity. The negative consequences of the administrative state were treated twice on this blog last year.

Federalism, on the other hand, is usually viewed as a check on federal power relative to state governments. That was the perspective of “Nullifying the Federal Blob” last year on SCC. However, in “The Rise of Executive Federalism“, Michael S. Greve discusses forms of federalism that can serve as adjuncts or even alternatives to the exercise of federal legislative power. First, he discusses “cooperative federalism”, whereby lower levels of government receive federal funds and in turn administer federal programs:

With very few exceptions…, virtually all federal domestic programs are administered by state and local governments, often under one of over 1,100 federal funding statutes (such as Medicaid or NCLB). Since its inception under the New Deal, this ‘cooperative’ federalism has proven stupendously successful in doing what it was supposed to do: expand government at all levels.

Greve draws a connection between political and economic developments over recent decades, the coincident decline of cooperative federalism and the rise of a more aggressive “executive federalism”. These developments include constraints on funding at both the federal and state levels, a decline in the willingness of states to cooperate on certain programs, and a divided Congress. No funding, no federal-state cooperation and no federal legislative direction leaves a vacuum to be filled by federal executive initiative:

Thus, to make federal programs ‘work’ under current conditions, agencies rewrite statutes, issue expansive waivers, and negotiate deals with individual states on a one-off basis. That is how the ACA is being ‘administered.’ That is how Secretary of Health and Human Services Sylvia Burwell is trying to expand Medicaid. That is how No Child Left Behind is run. And that is how Environmental Protection Agency is trying to impose its Clean Power Plan: ‘stakeholder meetings’ and assurances of regulatory forbearance for cooperating states; unveiled threats against holdout states. This brand of federalism knows neither statutory compliance nor even administrative regularity. It is executive federalism.

It does not bode well that this perverse form of federalism “is robust to partisan politics.” Greve notes that certain aspects of executive federalism were initiated by the Reagan Administration.

Greve’s advice on combating this trend is to make federalism “less cooperative, one program at a time.” While he’s a little short on specifics, he advises that initiatives such as block grants to states are likely to be counterproductive in restoring traditional federalism. One point on which I part company with Greve is his disparaging reference to “states’ rights” as a battle of “yesterday”. I suspect his underlying objection (which I do not share) is drug legalization at the state level, or any other measure that he might find morally objectionable. Otherwise, I have no issue with what I take to be his favored approach, which involves challenging the exercise of federal administrative power and rule-making, whether through the courts or through nullification by the states. It is promising that so many states are resisting the imposition of additional administrative and funding burdens attendant to expansive federal sweeteners and control.

Heal, You Dogs!

Tags

, , , , , , , , , , ,

Doctor-shortage

In bondage to the State: The Classical Values blog has this interesting quote from Dr. Rand Paul:

With regard to the idea of whether or not you have a right to healthcare, you have to realize what that implies….I’m a physician, that means you have a right to come to my house and conscript me, it means you believe in slavery. It means you’re going to enslave not only me, but the janitor at my hospital, the assistants, the nurses…There’s an implied threat of force, do you have a right to beat down my door with the police, escort me away, and force me to take care of you? That is ultimately what the right to free healthcare would be.

It would be “free” only in nominal terms to the patient, and the care itself would be greatly degraded. The gap between the need for health care and the available supply cannot be closed via “conscription” of providers. And caring for the sick is one thing, but granting a “right” to well-care or health maintenance makes the gap much larger. Inadequate compensation to providers is an important subtext here, and it goes to the heart of the conflict. Basic economics tells us that the gap in access will expand if buyers are subsidized and providers are penalized by artificially low prices. The expanded eligibility for Medicaid in many states under Obamacare only exacerbates shortages, as physician reimbursements remain generally low.

Obamacare may have improved access to health care for a small minority of individuals, but only at the expense of penalizing many others, including providers. The program has fallen far short of its goal of covering the uninsured and has failed to “bend the cost curve” (despite false claims to the contrary, which credit the program, rather than the Great Recession, for the slowdown in costs). Obamacare still looks to be unsustainable, as many have predicted. Insurers are now seeking large rate increases in many states, and going forward, they will not have the cushion of government-funded “risk corridors” when premiums fail to cover claims.

A Supreme Court ruling in the King v. Burwell case is due next month. The case has been discussed on this blog twice this spring. The plaintiffs have challenged federal subsidies in states relying on federal insurance exchanges, arguing that such subsidies contradict the “plain language of the law”. The subsidies were intended to be an inducement to states to set up their own exchanges, but a number of states chose not to do so. A ruling for the plaintiffs would severely damage the Obamacare program, since the subsidies are key to making the relatively extravagant mandated coverage affordable to low-income individuals. However, Joel Zinberg insists that ending federal subsidies will not cause a death spiral.

Still, such a ruling would seem to give Congress and the Republicans an opportunity to craft legislation to replace Obamacare with a more viable program. Republicans seem to have been unable to craft a strategy for dealing with this contingency, but their best course might be to wait, pass an extension of subsidies until 2017, and dare Obama to veto it.