Scarcity, Scarcity Everywhere, And Water Pricing Stinks

What weird irrationality compels water authorities to price “Adam’s Ale” so cheaply, then mercilessly harangue consumers to conserve? The enforcement of sometimes crazy rationing schemes, like watering lawns only on dates ending with the last digit of one’s street address, is but a symptom of this dysfunction. If water is scarce, then it should be priced accordingly. Only then will users voluntarily limit their use to quantities they value at no less than its real resource cost. This might involve changes in agricultural and industrial practices, landscaping and lifestyles. Perhaps there would be fewer lawns and swimming pools installed where water is most scarce. But these actions should be taken voluntarily in response to market incentives.

Water prices are generally regulated and administered, and only rarely established in an actual market. Pricing is usually based on the infrastructure costs of delivering water, as well as the costs of processing required to meet various standards. Again, these prices seldom reflect the real scarcity of water. This is partly due to populist distortions of the idea that water is basic to life, the perception that water is a public good, and the related political appeal of notions like “the water belongs to everyone”. There is also the admirable objective of keeping water affordable for the poor. But unit water prices faced by different users are not uniform: agricultural users sometimes pay as much as 90% less per unit than the generally cheap prices faced by urban consumers. Industrial users are also accorded favorable rates. Needless to say, incentives are way out of line!

When a resource is priced at levels that do not reflect its scarcity, something has to give. The resource will be overused, and overuse of water inflicts severe environmental damage. With water, that can mean draining lakes and killing springs and riverbeds along with the habitat they support, not to mention lower water quality. The waste doesn’t stop there: authorities are sometimes prone to propose costly infrastructure boondoggles to address water needs, such as dams and reservoirs in arid climates from which large quantities of stored water evaporate.

This episode of EconTalk features a discussion of water mis-pricing and its consequences. (A hat tip on this to the estimable John Crawford.) It covers issues in the management of water systems in the U.S. and under-developed countries. It is a very informative discussion, but it neglects one of the most promising methods of pricing, managing and conserving water supplies: marketable permits, or a secondary market in water rights.

Marketable permits involve the assignment of base usage rights using criteria such as estimates of total supplies and the customer’s past usage levels. This base allocation of rights can be dynamic, changing over time with drought conditions or improvements in conservation technology. Usage up to the permitted quantity is priced administratively, as usual, which keeps water affordable to individuals in lower economic strata. Beyond that base level, however, users must acquire additional permits from a willing seller at a mutually agreed-upon price. Trades can take place on a centralized water “exchange” so that prices are observable to all market participants. And trades may take various forms, such as short-term or long-term contracts which may involve prices that differ from “spot”.

How does this help solve the problem of scarcity? The price of water on the secondary market will rise to the point at which users no longer perceive a benefit to marginal flows of water above cost. A higher price encourages voluntary conservation in two ways: it is a direct cash cost of use above one’s base water rights, and it is an opportunity cost of forgoing the sale of permits on water use up to the base assignment. Those best-prepared to conserve can sell excess rights to those least prepared to conserve. The price established by the trade of permits will bear a strong relationship to the actual degree of scarcity.
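The two-part incentive can be made concrete with a minimal sketch. All prices and quantities below are hypothetical, invented for illustration only: the point is that under a base-plus-permit scheme, the marginal cost of one more unit of water is the secondary-market price for every user, whether above or below the base allotment.

```python
# Illustrative sketch with hypothetical numbers (not from any actual water
# authority): the marginal cost of consuming one more unit of water under a
# two-tier permit scheme.

BASE_PRICE = 0.50    # administered price per unit, up to the base allotment
MARKET_PRICE = 2.00  # price of a one-unit permit on the secondary market

def marginal_cost(usage, base_allotment):
    """Cost of consuming one additional unit of water."""
    if usage >= base_allotment:
        # Above the base allotment: an extra permit must be bought outright,
        # a direct cash cost at the market price.
        return MARKET_PRICE
    # Below the base allotment: the unit is billed at the administered price,
    # but consuming it forgoes selling the permit, an opportunity cost of
    # (MARKET_PRICE - BASE_PRICE).
    return BASE_PRICE + (MARKET_PRICE - BASE_PRICE)

# Every user faces the same marginal cost, equal to the market price:
assert marginal_cost(usage=80, base_allotment=100) == MARKET_PRICE
assert marginal_cost(usage=120, base_allotment=100) == MARKET_PRICE
```

The algebra is trivial, which is the point: the direct cost and the opportunity cost collapse to the same number, so conservation incentives are uniform even though low-income users are billed cheaply for their base usage.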

A hallmark of allocative efficiency is that the marginal value of the resource is equalized across different uses. This condition implies that no gains from trade are left unexploited. But in the case of water, this means that gains in efficiency will be limited unless all users face the same “spot” price. To fully exploit the market’s potential for efficient allocation, the base prices faced by large agricultural and industrial users should differ from those faced by residential customers only in terms of infrastructure costs. Granted, voluntary trades between users can take place under specialized contracts as long as the terms are publicly available. This allows intensive users to hedge risks to assure that their needs can be met in the future. However, those users will still have to weigh the marginal benefits of certain crops or industrial processes against prices that more accurately reflect scarcity.

This discussion has ignored certain complexities. For example, assigning rights is complicated by the fact that there are almost always multiple sources of water, such as rivers, public and private wells, lakes and runoff capture. There are sometimes different classes of rights-holders on specific sources. Rights on some sources might not be subject to base pricing by a water authority, but water permits could still be sold by these rights-holders on the secondary market, providing an incentive for them to conserve.

There have been political and legal impediments to the development of water markets in the U.S., some of which are discussed here. A recent effort to promote a water market in the western U.S. has arisen in response to drought conditions. Here is a good article from the last link above, a lengthy abstract of a research paper proposing development of a water market in the American West. Of course, there are many academic papers on this topic, but they are mostly gated. I lived in San Antonio in the 1990s when a controversial proposal to build a large reservoir was under debate. This was intended to relieve demands on the Edwards Aquifer, upon which a large area of Texas depended for water. It was voted down by a coalition that included many libertarians and environmentalists. At about that time, I met a natural resource economist from the University of Texas system who proposed the establishment of a water market in south Texas. He had trouble getting local support for the idea; it was politically taboo due to superstitions about an effort to allocate rights (marketable permits) on what is often perceived as a “public good” (despite the exclusivity of its benefits to customers). Later, in 1998, the San Antonio branch of the Federal Reserve Bank of Dallas published this interesting article on the development of a water market in south Texas. To my knowledge, there is still no water market there, but battles over water use and conservation continue.

Will SCOTUS Grant Executive License To Rewrite Laws?

Can a piece of legislation say any old thing, leaving the executive branch as the arbiter over what the law “should” say? Can the executive decide a law means one thing ex ante and another ex post? That would be bizarre under the U.S. Constitution, but the Obama Administration has arrogated to itself the role of legislator-in-chief in its implementation of the Affordable Care Act (ACA), aka Obamacare, effectively rewriting the law by repeatedly granting waivers and delaying key provisions. And the apparent legal doctrine of “executive license” to rewrite laws would be affirmed if the Supreme Court rules for the government in King v. Burwell.

The case, which was argued before the Court this week, revolves around whether the ACA allows subsidies to be paid on health insurance purchased by qualified consumers on federal exchanges. The plaintiffs say no because, in the “plain language of the statute”, subsidies can be paid only for health insurance purchased on exchanges “established by the state”. A ruling is expected in June.

The provision in question was intended to incent state governments to establish their own exchanges. Most states chose not to do so, however, instead opting to allow their citizens to purchase insurance on a federal exchange. Subsequently, the IRS overrode the provision in question by granting subsidies for purchases on any exchange. The case will be historic if the federal exchange subsidies are overturned, but if not, the ruling will still be historic in setting a precedent that the executive branch can enforce a view of Congressional intent so divergent from written law.

The most interesting aspect of the SCOTUS hearing was Justice Kennedy’s expressed concern that a ruling for the plaintiffs would create a situation in which the federal government coerced states into establishing exchanges, posing a conflict with principles of federalism. The Wall Street Journal was fairly quick to point out that the subsidies were intended as an incentive for states, not unlike many other incentives for state participation incorporated into a wide variety of federal programs:

If Governors decline to establish an exchange, their citizens are not entitled to benefits, but that is not coercion. That is the very trade-off that is supposed to encourage states to participate. If the subsidies will flow no matter what, few if any states would become the partners the Administration wanted.

More to the point, federalism is supposed to protect political accountability. Two-thirds of the states made an informed decision to rebuff ObamaCare, but if voters prefer otherwise, they can elect new Governors who won’t. If federal subsidies flow no matter what, then states aren’t presented with a real choice. That isn’t how federalism works in the American system. As Justice Kennedy rightly noted, the exchange decision was partly ‘a mechanism for states to show they had concerns about the wisdom and workability of the act in the form that it was passed.’

Jonathan Adler has some thoughts on the same issues here and here. At the second link, Adler gives a more detailed explanation of Kennedy’s concern, which involves additional regulatory implications for the states. Adler also covers some court precedents for the kind of “coercion” at issue in King. Of one case, New York v. United States, Adler says:

In the very case that established the current anti-commandeering doctrine, the Court said there was no problem with Congress using its regulatory authority to encourage state cooperation.

The Court would be reluctant to rule for the plaintiffs based on a principle contrary to so many of its own previous rulings. Such a justification would appear to undermine the existing extent of federal direction of state activity — a possible silver lining to a ruling for the government. But Adler also notes that what is so unique about the ACA relative to earlier precedents is that so many states decided to opt out, and there is plenty of evidence that they did so with their eyes wide open. The loss of the federal subsidies was not the only consideration in those decisions:

… while states that choose to forego subsidies are exposing their citizens to an increase in one regulatory burden, they are relieving their citizens of others, and at least some states are perfectly happy to make that choice.

An amusing analogy to the distinction between federal exchanges and state-established exchanges is made by Jonathan Cohn in the Huffington Post. He contends that federal and state exchanges are comparable to the choice between butter and oil in a pancake recipe from The Joy of Cooking. You get pancakes either way, says Cohn. Therefore, he asserts that the case against the government in King is based on a specious distinction. Sean Trende at Real Clear Politics points out that the two kinds of pancakes are not the same. If Congress wishes to reward the use of butter, then one should expect the government to preserve that distinction in distributing rewards.

Trende points to another distinction missed by Cohn: suppose Congress also said that the batter must be whipped by a blender at 300 rpm. In the case of Obamacare, Congress stated that an exchange must be established by a state to qualify buyers for subsidies, and it did so with the full intent of gaining cooperation from states in shouldering the administrative burdens of the law. Of course, different pancakes might be close enough, but in the end, specific language was used by Congress to create incentives for the use of certain ingredients and a particular mixing technique. The meaning of the pancake law is clear enough and is independent of whether administration officials can dream up substitutes, even if they are right out of The Joy of Cooking.

The four statist justices (some claim they are liberal) emphasized the dire consequences that a ruling for the plaintiffs would have on the insurance market and on individual buyers in states using the federal exchange. While the impact could be mitigated by the Court in various ways, the impact itself has been exaggerated by Obamacare supporters. This piece at Zero Hedge examines the likely impact in detail, but it fails to discuss a few significant benefits, related to the employer and individual mandates, that would accrue to residents of states without their own exchanges.

Justice Kennedy is unlikely to side with the government in this case, despite his concerns about coercive federal policy. Chief Justice Roberts was silent for almost the entire hearing, and it is not clear whether he will side with the consequentialists, find another avenue for upholding the subsidies, or defer to the plain language of the law. The Court might engage in a form of avoidance, finding a way to dismiss the case on unexpected grounds such as a lack of standing (though few consider the plaintiffs’ standing to be an issue). That would effectively grant the administration carte blanche in rewriting legislation.

In Praise of Ticket Scalpers

I have been a fan of The Grateful Dead since I was a teenager and have seen the band perform somewhere around 35 times prior to Jerry Garcia’s death in 1995 … I actually lost count. This summer, the four surviving original band members, along with some prominent guest musicians, will perform three reunion shows over the July 4th weekend at Chicago’s Soldier Field. They have said that this will be their last performance together.

Demand for tickets was so high that it surprised the band and the promoter. In January, an initial mail order tallied about 65,000 orders for more than 350,000 tickets, far more than the mail-order allotment and the stadium capacity for three days. On-line requests went mostly unfilled as the system was swamped when tickets went on sale. Chicago Bears season ticket holders had the right of first refusal on a large number of tickets, which is unfortunate given the probably limited intersection between the set of Bears fans and the set of Deadheads. And so there is a problem of scarcity and excess demand, a common occurrence for big concerts and sporting events.

Naturally, a secondary market has arisen to allocate the limited supply of tickets available from brokers and other willing sellers. However, as noted at the links above, asking prices on outlets like StubHub, often well above $1,000 per ticket, have shocked observers. Few transactions will actually take place at those prices. Repricing will occur until enough willing buyers are found. Nevertheless, many “Deadheads” are outraged. There are complaints on Facebook from self-righteous Deadheads, boasting of their honor as music fans and condemning the “greed” of resellers. Needless to say, some of the resellers are, in fact, lucky Deadheads who, having landed tickets, now find the prospect of a pecuniary gain from a resale just too good to pass up!

I am very much in favor of a free secondary market and so-called “ticket scalping.” First and foremost, these transactions are voluntary. There is no coercion involved, just a willing buyer and seller who reach a mutually beneficial deal. A buyer will agree to pay a certain price only if that price is less than the subjective value they assign to the ticket. Of course, a potential secondary buyer would rather have been lucky in what amounted to a lottery for tickets. But if not, they are not shut out altogether. A little patience on the secondary market might bring prices well within reach.

Second, the allocative mechanism in play on the secondary market is little appreciated, but it contributes to social gains. Tickets will be allocated to those who value them most highly. In fact, individuals who value their own time most highly might avoid the time and aggravation of participating in the mail order or joining the on-line sales queue. Instead, these individuals know they can fall back on the secondary market to obtain seats, thereby conserving a valuable resource: their time. Some will contend that all tickets should be made available and allocated via some other, non-price mechanism, such as a lottery or a queue, whereby willingness to pay cash is rendered moot. Unfortunately, such mechanisms have severe drawbacks in the presence of excess demand: they tend to waste time for both the lucky and unlucky participants, they may allocate tickets to buyers who value them less highly, they infringe on personal liberty by preventing individuals from taking part in mutually beneficial exchanges, and they waste scarce law enforcement resources.

Another advantage of the allocative mechanism embodied in the secondary market is its ability to create value in the presence of risk. Performers and promoters are loath to price tickets optimally, partly because there is risk in doing so: damage to goodwill with their fan base and the risk that they will over-price tickets and possibly fail to fill the house. Secondary sellers will gladly accept pricing risk, and the frenzy surrounding an active secondary market can serve as a promotional device for performers. Moreover, by allowing tickets to be allocated to buyers who value them most highly, the venue and the community benefit by bringing in the most appreciative crowd, adding to the success and vibrancy of the local entertainment market. A prohibition on scalping closes off a convenient channel through which some of the most valuable customers can obtain seats to events. Here’s what one ticket market scholar states:

… a curtailment of scalping markets would not only prevent allocation according to maximization of utility, it would also have the dynamic effect of reducing in the long term the supply of cultural events! This is very rarely mentioned, but following the adoption of an anti-scalping law in Quebec, industry experts have indicated that cultural centers like the Bell Centre in Montreal have reduced events and potential audiences by some 6% to 11%.

Finally, the fact that prices are high on the secondary market implies great scarcity. The Grateful Dead may have aggravated the situation by stating unequivocally that these would be their last shows. They could have remained silent or vague on that point. But scarcity can be addressed in other ways by performers and promoters: they can agree to price the tickets more highly; they can arrange to perform more shows and appear at more venues; and they can create imperfect substitutes for the actual concert experience, such as providing live-feeds of the show to other venues, including live streaming.

In this case, the band has taken steps to alleviate the shortage. First, they have reconfigured the plan for the floor of the stadium to allow a larger crowd in a “GA Pit” (presumably standing room), and they are opening up the set and directing sound to accommodate seating behind the band. Second, they are discussing the possibility of providing high-quality, live feeds to other venues. This should help to take some of the pressure off prices in the secondary market.

My wish is that the band would also announce additional performances, either in Chicago or a few other cities. My mail order went out on the first day with an early postmark and it is still unanswered. My hopes remain high, but if I don’t get into the show, I’m sure to attend a viewing party!

The FCC’s Net Brutality Order

Supporters of so-called net neutrality do not understand the contradiction it represents in promoting implicit subsidies to heavy users of scarce internet capacity. And supporters fail to understand the role of incentives in allocating scarce resources. Last week the FCC voted 3-2 to classify internet service providers (ISPs) as common carriers under Title II of the Communications Act of 1934, henceforth subjecting them to regulatory rules applied to telephone voice traffic since the 1930s. With this change, which won’t take place until at least this summer, the FCC will be empowered to impose net neutrality rules, which proponents claim will protect web users with a guarantee of equal treatment of all traffic. ISPs would be prohibited from creating “fast lanes” for certain kinds of traffic and pricing them accordingly. The presumption is that under these rules, small users would not be shut out by those with a greater ability to pay.

Like almost every progressive policy prescription, this regulatory initiative insists on biting the hand that feeds. It reflects a failure to properly identify parties standing to gain from such regulation. The distribution of internet usage is highly unequal: less than 10% of all users account for half of all traffic, and half of users account for 95% of traffic. Data origination on the web is also highly unequal: “Two companies (Netflix and Google) use half the total downstream US bandwidth”.

The neutrality rules will assure that those dominating traffic today can continue to absorb a large share of capacity at subsidized prices. Price regulation may require that high-speed streaming of films and events be priced the same as lower-speed downloads of less data-intensive content. Without always-open, dedicated data lanes, so-called “smart” technologies and the “internet of things” will be degraded or fail to reach their potential, and their safety could be compromised; the same is true of medical applications, which would receive priority in a sane world. Without price incentives:

  1. conservation of existing capacity will not take place in the short-run;
  2. growth in capacity will languish in the short- and long-run;
  3. development of new applications and technologies will be stunted; and
  4. rationing via slowdowns, outages and imposition of usage caps may be necessary. Will these rationing decisions be “neutral”?

The unregulated development of the internet is an incredible success story. FCC commissioner Ajit Pai, who is a critic of net neutrality, makes this point forcefully. In a strong sense, internet development is still in its infancy. New and as yet unimagined web-enabled functionalities will continue to be embedded into everyday objects all around us. This process can only be impeded by government regulation, particularly of a form intended to control one-dimensional services offered by monopolists (i.e., public utilities). Competition in broadband access is growing, and it is enhanced by the ability of providers to co-mingle applications with the so-called “dumb pipe.”

The growth in uses and usage must be enabled by growth in network infrastructure. For that, incentives must be preserved through pricing flexibility and the ability of ISPs to negotiate freely with content providers and application developers. On this point, Pai says:

The record is replete with evidence that Title II regulations will slow investment and innovation in broadband networks. Remember: Broadband networks don’t have to be built. Capital doesn’t have to be invested here. Risks don’t have to be taken. The more difficult the FCC makes the business case for deployment, the less likely it is that broadband providers big and small will connect Americans with digital opportunities.

Pai also asserts that horror stories about greedy ISPs restricting the ability of small users to access the Web are largely a fiction:

The evidence of these … threats? There is none; it’s all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced Facetime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren’t enough to tell a coherent story about net neutrality. The bogeyman never had it so easy.

Then there is the small matter of potential content regulation (see the first link on the list), which some fear could be enabled by the FCC’s action. This would be an obvious threat to an open and free society, and the advent of such rules would discourage growth in internet applications by giving would-be prohibitionists a new way to tie and gag those of whom they disapprove.

Net neutrality and the FCC’s “Open Internet Order” serve the interests of large content providers who would rather not have to pay the long-run marginal cost of the network capacity tied up by their end-users. It represents a distinct form of rent-seeking in data transport services. Allowing ISPs to negotiate with significant content providers allows the transport cost of individual services to be “unbundled”, thereby promoting economic efficiency and avoiding cross-subsidies from lighter to heavier users and uses. As new, intensive applications are introduced, the economic costs and benefits can then be weighed more accurately by prospective customers.
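The cross-subsidy at issue is simple arithmetic, and a minimal sketch makes it visible. The per-gigabyte cost and usage figures below are hypothetical, chosen only to mirror the skewed usage distribution described above: under uniform pricing, light users pay for capacity that heavy users tie up.

```python
# Illustrative sketch (hypothetical cost and usage figures): uniform pricing
# versus usage-based pricing of network transport.

COST_PER_GB = 0.05  # assumed network cost of transporting one gigabyte
monthly_usage_gb = {"light": 10, "median": 50, "heavy": 940}  # per user

total_cost = sum(gb * COST_PER_GB for gb in monthly_usage_gb.values())
uniform_bill = total_cost / len(monthly_usage_gb)  # everyone pays the same

for user, gb in monthly_usage_gb.items():
    cost_caused = gb * COST_PER_GB
    net_subsidy = cost_caused - uniform_bill  # positive: paid for by others
    print(f"{user}: causes ${cost_caused:.2f} of cost, "
          f"pays ${uniform_bill:.2f}, net subsidy ${net_subsidy:.2f}")
```

With these numbers, the heavy user's transport cost exceeds the uniform bill while the light user's falls well short of it, so the light user funds the heavy user's consumption. Unbundled, usage-based pricing sets each bill equal to the cost caused and the subsidy disappears, which is the efficiency argument made in the paragraph above.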

Department of Homeland Skepticism

Almost any reference to the U.S. as the “homeland” makes me cringe. It has such a jingoistic ring to my ear that I am immediately suspicious of the speaker’s motives: “Propaganda Alert!” I suppose anyone who makes their home in this land has a right to call it their homeland, if they must. The term seems uniquely appropriate for Native Americans. For others residing in this “nation of immigrants”, the homeland always strikes me as a reference to the country of origin of someone’s ancestors.

It’s all the worse when a government super-agency engaged in a variety of controversial activities uses “homeland” as its middle name. That very name, the Department of Homeland Security (DHS), suggests that whatever it is they do there must be in the interests of our homeland, and therefore beyond reproach.

To me it’s creepy and Orwellian, but the name of the agency is of little import relative to its activities. Now, as the fight over DHS funding — and President Obama’s executive order on immigration — reaches a fever pitch in Congress, Nick Gillespie asks, “Why do we even have a Department of Homeland Security in the first place?”

“Created in 2002 in the mad crush of panic, paranoia, and patriotic pants-wetting after the 9/11 attacks, DHS has always been a stupid idea. Even at the time, creating a new cabinet-level department responsible for 22 different agencies and services was suspect. Exactly how was adding a new layer of bureaucracy supposed to make us safer (and that’s leaving aside the question of just what the hell “homeland security” actually means)? DHS leaders answer to no fewer than 90 congressional committees and subcommittees that oversee the department’s various functions. Good luck with all that.”

Gillespie expounds on the profligacy and mismanagement at DHS. It has a voracious appetite for resources and taxpayer funds and is notorious for waste, to say nothing of its less than full-throated enthusiasm for civil liberties:

“The Government Accountability Office (GAO) routinely lists DHS on its ‘high risk’ list of badly run outfits and surveys of federal workers have concluded ‘that DHS is the worst department to work for in the government,’” writes Chris Edwards of the Cato Institute.

That shouldn’t make anyone feel much safer. Gillespie advocates dismantling the entire agency. A high-level org chart for DHS is shown here. Certainly its constituent sub-agencies were able to function before the DHS concept was hatched. There might have been some interagency rivalry, but there was also cooperation. Would a DHS have been better able to anticipate and prevent the 9/11 attacks? That’s doubtful. Given its track record, it’s difficult to see how the DHS bureaucratic umbrella improves security, and it is not a model of cost efficiency despite expectations of reduced duplication of overhead.

Threats by a faction of the GOP to defund DHS enforcement of Obama’s immigration order are creating another deadlock in Congress. The order would grant amnesty to over 5 million illegal immigrants, but a Federal judge has ruled in favor of 26 states that sued to stop enforcement based on the imposition of enforcement costs on the states. GOP leadership would rather approve funding and let the courts do the heavy lifting to stop the order, but the administration has asked the judge to stay his injunction pending appeal. If a stay is granted, and that is unlikely, or if an appeals court overturns the ruling, implementation of the order would go forward before the conclusion of what would likely be a protracted legal process. The de-funders are unwilling to take that chance.

Democrats claim that the effort to defund DHS enforcement of the executive order will shut down the agency, which is nonsense. GOP leadership fears that Republicans will be blamed if there is even a perception of negative consequences. I suspect Obama will do his best to create those perceptions, but the funding gap won’t have much real impact. In any case, I’m with Nick Gillespie: to hell with the DHS administrative umbrella! Releasing the individual security agencies from DHS’s grip would be more likely to reduce costs with no loss of security, and just might promote individual liberty.

Put Consumers In Charge

The interests of consumers should always be placed first. That’s what happens in a free market economy, with the consent of competitive producers, and that is how public policy should be crafted. Too often, however, regulations and the laws on which they are based are written primarily with producer interests in mind. Don’t be cowed by the appealing names given to pieces of legislation or their ostensible purposes. These may be couched in terms of consumer protections, but more often than not they create barriers to entry, stifle innovation and confer advantages on big players, thus restricting competition. A case in point is occupational licensing, which inflates prices by preventing the entry of innovative and less costly competitors. In this political exchange, consumers gain “protections” that are often of questionable value, especially when incentives for improved service are blunted by the licensing rules.

Consumer primacy is of value in a general sense, as Richard Ebeling explains in “Consumers’ Sovereignty and Natural Vs. Contrived Scarcities“. When consumers are sovereign in their ability to decide for themselves among competing alternatives, including their own personal comparison of value to price, they essentially take charge of the flow of resources into and out of various uses. And they capture a positive gap between value and price as a personal gain in any transaction to which they are (by definition) a voluntary party. At the same time, producers must reckon with real costs, which reflect natural scarcities. But, by virtue of competition, it is in the interests of producers to deliver the best values to consumers at the lowest prices compatible with costs. Here is part of Ebeling’s introduction:

One of the great myths about the capitalist system is the presumption that businessmen make profits at the expense of the consumers and workers in society. Nothing could be further from the truth. … In the free market, consumers are the sovereign rulers who determine what gets produced, and with what qualities and features. … The ‘captains of industry’ are not the businessmen, but the buying public who steer the directions into which production is taken.

Ebeling gives a number of good examples demonstrating the ways in which this efficient market process is compromised by the hand of government. Regulations, mandates, licensure, price floors and ceilings, taxes and subsidies all act to distort the normal workings of the market, creating direct and indirect scarcities. The perverse effect is to generate a flow of economic rents to producer interests at the expense of consumers (and taxpayers). And that is why those same producer interests are often inclined to seek market interventions. The successful rent-seeking effort ends in legally-sanctioned restraint of trade.

An example of contrived scarcity given by Ebeling results from protectionist trade policy, which ostensibly “protects” domestic producers and workers from “cutthroat” foreign competition. The plight of workers seems to be an easy sell to the public, though historically protectionism has inured to the benefit of relatively highly-paid workers, often unionized, who have an interest in restricting competition. Consistent with Ebeling’s point of view, Matt Ridley writes that trade policy should be driven by the benefits to domestic consumers, rather than producers. Ridley focuses on the UK’s interests in negotiating a free trade agreement between the United States and the European Union: the Transatlantic Trade and Investment Partnership. The following thoughts from Ridley should be taken to heart by anyone with an interest in trade policy, and especially those who fancy themselves liberal:

The argument for free trade is paradoxical and much misunderstood. Free trade benefits consumers because it is the scourge of expensive or monopolistic national suppliers. It benefits both sides: yet it works unilaterally. Your citizens benefit if you let them buy cheap goods from abroad, while foreigners are punished if their government does not reciprocate. This creates more demand for local services and hence more growth and jobs in the importing country. 

Contrary to what most people think, therefore, it is imports that bring the greatest benefit, not exports — which are the price we have to pay to get the imports. At the centre of the debate lies David Ricardo’s beautiful yet counterintuitive idea of comparative advantage — that it will always pay a country (or a person) to import some goods from another, even if the first country or person is better at making everything. Truly free trade cannot be a predatory phenomenon.

May No Window Be Unbroken

The misallocation of resources precipitated by regulation is sometimes so thorough that proponents are apt to describe it as a feature, and not a bug! Apparently, that is how some think of new business startups and venture capital funding stimulated by Obamacare. Warren Meyer describes the situation in his post, “Worst Argument For Regulation Ever“. Providers confronting a thicket of new regulations, including a mandate for a massive reconfiguration of medical records, necessarily require services that were heretofore unnecessary. As Meyer says:

All this investment and activity is going into trying to get back to even from productivity losses imposed by the government, or is being spent addressing government mandates for new services that the market did not want or value. This is a diversion of resources from new value-creation to fixing things, and as such is just the broken windows fallacy re-written in a new form.

The fallacy to which Meyer refers has a deep tradition in economic thinking, with a lineage tracing to Frédéric Bastiat. A simple telling is that a broken window leads to more work for the glazier, more spending, and an apparent lift in income. Of course, someone must pay, and the broken window itself represents a loss of physical capital. But there are other consequences, since the glazier receives a payment that could have, and would have, purchased other goods and services that would have been preferred to window repairs. There are many broken windows in the case of Obamacare, including direct hits to providers, medical device manufacturers, and many of the previously insured. It was not enough for proponents to simply extend coverage to the uninsured. That simpler approach would have created plenty of challenges. But instead, Obamacare became a legal and regulatory behemoth in the hope that it would transform the health care industry… into what?

Noble intentions frequently motivate destructive actions out of sheer economic ignorance. That encompasses almost every effort to use government as an active manager of economic or social affairs. That’s the cogent message from Sheldon Richman in “The Economic Way of Thinking About Health Care“. Richman agrees that “health insurance for all” is an outcome to be hoped for, but he derides the notion that activist government can achieve it effectively. First, the redistributive element in many government intrusions is a questionable economic strategy:

When government provides health insurance through subsidies or Medicare or Medicaid, it presides over the disposal of the fruits of other people’s labor. Government personnel decide who gets what, even though they had no hand in producing the resources they “redistribute.” In other words, they traffic in pilfered property. Hence H.L. Mencken’s immortal insight: ‘Every election is a sort of advance auction sale of stolen goods.’

The central planners decide who gets what in ways that are more destructive than simple redistribution. By way of demonstrating this phenomenon, Richman goes on to discuss the health insurance third-party payment system encouraged by government policy. Employer-paid coverage started as an unintended consequence of WW II wage controls. It also has tax-favored status as a popular fringe benefit. Unfortunately, this led to the bastardization of the concept of insurance itself:

That [tax-favored status] gives employer-provided insurance an appeal it would never have in a free society, where taxation would not distort decision-making. Moreover, the system creates an incentive to extend “insurance” to include noninsurable events simply to take advantage of the tax preference for noncash compensation. Today pseudo-insurance covers screening services and contraception, which of course are elective. (This does not mean they are trivial, only that they are chosen and are not happenings.)

Excess demand, owing to a marginal cost of routine care and elective services that appears to the consumer to be zero, sets off a series of unintended consequences:

… the real prices of medical inputs to rise … the price of insurance goes up; the government’s health care budget rises, requiring higher taxes now or later (because of the debt); and resources and labor flow into the stimulated health care industry and away from other valued purposes, raising the prices of other goods and services. Higher insurance premiums in turn prompt demand for more government subsidies, higher taxes, and more debt.

May that circle be broken. Richman mentions several steps at the link to promote more competitive, comprehensive and affordable health care.

Do Not Rot Productive Capital

Dividend and capital gains income are taxed at lower rates than regular wage and salary income. That such income is taxed lightly strikes progressives as offensive, but the intent and effects of these lower rates are not to redistribute income to rentiers. Rather, relatively low dividend and capital gains tax rates are in place because they limit double-taxation, minimize taxation of inflationary “gains”, and reward successful risk-taking.

Dividends, and ultimately capital gains, derive from corporate earnings. Corporate income in the U.S. is taxed at the highest rate in the OECD, with a top federal rate of 38% (though the rate drops to 35% above a certain level of earnings). Dividends may or may not be paid to shareholders from corporate income, but if so, they are subsequently taxed again as personal income. If dividends were taxed as regular income to individuals, the combined federal taxes (corporate and individual) on that marginal income in upper brackets would be in excess of 60%. With state corporate and personal income taxes added on, the after-tax dividend received by an individual shareholder from each dollar of pre-tax corporate income could then be as little as roughly 25 cents in some high-tax states.
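The arithmetic of the double-taxation layers can be sketched in a few lines. The federal rates below are the top rates cited above (35% corporate, 39.6% individual); the state rates are purely illustrative assumptions, not actual statutes:

```python
# Sketch of double taxation of $1.00 of pre-tax corporate earnings.
# Federal rates are the top marginal rates discussed above; state
# rates (10%/10%) are hypothetical, for illustration only.

def after_tax_dividend(corp_rate, div_rate):
    """Amount a shareholder keeps from $1.00 of pre-tax corporate income."""
    after_corp = 1.0 - corp_rate          # earnings left after corporate tax
    return after_corp * (1.0 - div_rate)  # dividend left after personal tax

# Dividends taxed as ordinary income, federal taxes only:
kept = after_tax_dividend(0.35, 0.396)
print(f"Combined federal tax: {1.0 - kept:.1%}")      # ~60.7%

# Layer on hypothetical high state rates (10% corporate, 10% personal):
kept_state = after_tax_dividend(0.35 + 0.10, 0.396 + 0.10)
print(f"Shareholder keeps: {kept_state:.2f} per dollar of pre-tax earnings")
```

The compounding of the two layers is the key point: a 35% tax followed by a 39.6% tax takes well over 60% of the original dollar, far more than either rate alone suggests.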

The top federal tax rate on dividends is 20%, versus 39.6% for regular income. One reason that dividend income is taxed at lower rates than wage and salary income is recognition of the confiscatory nature of double taxation, as illustrated above. Realized capital gains are taxed at the same rate as dividends for the same reason. A capital gain is the increase in the value of an asset over time. Such gains are taxed only when an asset is sold, when the gain is realized. The low tax rate on gains from the sale of corporate stock also limits double taxation (and even triple taxation).

Stock prices tend to rise along with the expected stream of future after-tax corporate earnings and dividends. A prospective buyer of shares knows they will incur taxes on future dividends, which limits the price they are willing to pay for the shares. So, higher future earnings will be taxed to the corporation when they occur, higher future dividends will be taxed to the buyer of shares when dividends are eventually paid, and the resulting gain in the share price received by the seller today is taxed as a capital gain to the seller. Triple (and anticipatory) taxation! A relatively low tax rate on capital gains at least helps to limit the damage from the awful incentives created by multiple taxation of the same income.
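The triple-taxation chain described above can be made concrete with a stylized one-period example. All rates here are the statutory rates mentioned in this post, but the single-period framing and the dollar of "new" earnings are illustrative assumptions:

```python
# Stylized sketch of the triple-taxation chain: corporate tax on the
# earnings, dividend tax on the buyer, capital gains tax on the seller.
# The one-period horizon and $1.00 of extra earnings are assumptions.

CORP, DIV, CG = 0.35, 0.20, 0.20

extra_earnings = 1.00                       # new pre-tax corporate income
after_corp = extra_earnings * (1 - CORP)    # layer 1: corporate tax
after_div = after_corp * (1 - DIV)          # layer 2: buyer's dividend tax

# The buyer bids the share price up only by what she expects to keep
# after the first two layers; that price rise is the seller's capital
# gain, which is taxed yet again:
seller_keeps = after_div * (1 - CG)         # layer 3: seller's gains tax
print(f"Kept from $1.00 of future earnings: ${seller_keeps:.2f}")  # ~$0.42
```

Even with the preferential 20% rates on dividends and gains, the three layers together claim more than half of the marginal dollar of corporate earnings.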

Another important reason for taxing capital gains more lightly than wages and salaries is that, in the presence of inflation, the tax falls on purely nominal gains, diminishing the real value of an asset. As an example, compare the following situations in which the price level increases by 20% over five years: Worker Joe earns $10 an hour to start with and $12 an hour at the end of year 5; Saver Dev earns $1 in dividends per share of the Prophet Corp (which he plans to hold indefinitely) to start with and $1.20 at the end of year 5; Retiree Cap buys one share of Gaines Corp worth $100 at the start and sells it for $120 at the end of year 5. On a pre-tax basis, these three individuals all keep pace with inflation. The real value of their pre-tax earnings, or the share value in Cap’s case, is unchanged after five years. Cap keeps pace by virtue of a $20 capital gain, so the real value of his share is unchanged.

If all three types of income are taxed at the same rate, Joe and Dev both keep pace with inflation on an after-tax basis as well. But what about Cap? After taxes, the proceeds of his stock sale are $115. Cap’s after-tax gain is only 15%, less than the inflation that occurred, so the real value of his investment was diminished by the combination of inflation and the capital gains tax. The same would be true for farmland, artwork, or any other kind of asset. It is one matter to tax flows of income that change with inflation. It is another to tax changes in property value that would otherwise keep pace with inflation. This is truly a form of wealth confiscation, and it provides a further rationale for taxing capital gains more lightly than wage and salary income, or not at all.
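The Joe/Dev/Cap example above reduces to a short calculation. The 25% tax rate is the rate implied by Cap's $115 after-tax proceeds; the 20% inflation figure comes straight from the example:

```python
# The Joe/Dev/Cap example: 20% price-level growth over five years and a
# uniform 25% tax on all three types of income (the rate implied by
# Cap's $115 after-tax proceeds above).

INFLATION = 0.20
TAX = 0.25

def real_value(nominal, price_level_growth):
    """Deflate a nominal amount back to starting-year dollars."""
    return nominal / (1.0 + price_level_growth)

# Joe's wage rises with inflation; a flat tax on the flow leaves his
# real after-tax income unchanged (ratio of year 5 to start = 1.00):
joe_ratio = real_value(12.0 * (1 - TAX), INFLATION) / (10.0 * (1 - TAX))
print(f"Joe's real after-tax wage, year 5 vs. start: {joe_ratio:.2f}")

# Cap is taxed on his purely nominal $20 gain:
proceeds = 120.0 - TAX * (120.0 - 100.0)    # $115 after tax
cap_real = real_value(proceeds, INFLATION)  # ~$95.83 in starting dollars
print(f"Cap's real proceeds: ${cap_real:.2f} on a $100 real stake")
```

Cap ends up with less than $96 of real purchasing power on a $100 investment whose pre-tax value merely kept pace with inflation: the tax on the nominal gain is a levy on wealth, not income.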

There are further complexities that influence the results. For one thing, all three individuals would suffer real losses if inflation pushed them into higher tax brackets. This is why bracket thresholds are indexed for inflation. Another wrinkle is the “stepped-up basis at death”, by which heirs incur taxable gains only on increases in value that occur after the death of their benefactor. This aspect of the tax code was recently discussed on Sacred Cow Chips here.

The third rationale for taxing capital gains more lightly than wage and salary income is an attempt to improve the risk-return tradeoff: larger rewards, ex ante and ex post, are typically available only with acceptance of higher risk of loss or complete failure. This is true for private actors and from a societal point of view. It is hoped that lighter taxes on contingent rewards will encourage savings and their deployment into promising ventures that may entail high risk.

This post was prompted by an article in The Freeman entitled “A Loophole For the Wealthy? Demystifying Capital Gains“, by Dr. Kim Henry. I was somewhat surprised to learn that Dr. Henry is a dentist. His theme is of interest from a public finance perspective, and he provides a good discussion of the advantages of maintaining a low tax rate on capital gains. My only complaint is with the first of these two points:

“[The capital gains tax rate] is lower for two important reasons:
1. Although the gain is realized in one year, it actually took place over more than one year. The wine did not increase in value just in the year it was sold. It took 30 years to achieve its higher price.
2. Capital gains are not indexed to inflation. …

To be fair, Dr. Henry’s point about the time required to achieve a gain probably has more to do with the riskiness of an asset or venture’s returns than with the passage of time per se. If an asset’s value increases by 4% per year, the three points raised above (multiple taxation, taxing of inflationary gains, and rewarding successful risk-taking) would be just as valid after year 1 as they are after year 5.

Taxing income from capital is fraught with dangers to healthy investment incentives, which are primary drivers of employment and income growth. Double taxation of corporate income is not helpful. Capital gains taxes suffer from the same defect and others. But capital income is a ripe target for those who wish to score political points by inflaming envy. It’s a dark art.

Precaution Forbids Your Rewards

The precautionary principle (PP) is often used to justify actions that radically infringe on liberty, but it is an unreliable guide to managing risk, both for society and for individuals. Warren Meyer makes this point forcefully in a recent post entitled “A Unified Theory of Poor Risk Management“. The whole post is worth reading, but the PP is the focus of its second section. Meyer offers the following definition of the PP from Wikipedia:

The precautionary principle or precautionary approach to risk management states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action.

He goes on to explain several problems with the PP, the most important of which is its one-sided emphasis on the risks of an activity while dismissing prospective benefits of any kind. Enough said! That shortcoming immediately disqualifies the PP as a guide to action. Rather, it justifies compulsion to not act, which is usually the desired outcome when the PP is invoked. We are told to stop burning fossil fuels because CO2 emissions might lead to catastrophic global warming. Yet burning fossil fuels brings enormous benefits to humanity, including real environmental benefits. We are told to stop the cultivation of GMOs because of perceived risks, yet the potential benefits of GMOs are routinely ignored, such as higher yields, improved nutrition, drought resistance and reduced environmental damage. Meyer notes the irony in ignoring these potential gains, since doing so entails accepting risks of its own. Forced energy shortages would bring widespread economic decline. Less-developed countries face risks of continuing poverty and malnutrition that could otherwise be mitigated.

The terrifying risks cited by PP adherents are generally not well-founded. For example, climate models based on CO2 forcings have extremely poor track records. And whether such hypothetical warming would be costly or beneficial, on balance, is open to debate. The supposed risks of GMOs are largely based on pseudoscience and ignore a vast body of evidence of their safety. As Meyer says:

… the principle is inherently anti-progress. The proposition requires that folks who want to introduce new innovations must prove a negative, and it is very hard to prove a negative — how do I prove there are no invisible aliens in my closet who may come out and eat me someday, and how can I possibly get a scientific consensus to this fact? As a result, by merely expressing that one ‘suspects’ a risk (note there is no need listed for proof or justification of this suspicion), any advance may be stopped cold. Had we followed such a principle consistently, we would still all be subsistence farmers, vassals to our feudal lord.

The PP has obvious appeal to statists and fits comfortably into the philosophy of the regulatory state. But it’s a reasonable conjecture that widespread application of the PP exposes the world to greater natural and economic risks than would prevail without it. Under laissez-faire capitalism, human action is guided by the rational balancing of benefits against costs and risks, which has brought prosperity everywhere it’s been practiced.

Nullifying The Federal Blob

When must a state acquiesce to the demands of the federal government? The question is not as straightforward as many believe. The U.S. Constitution is fairly explicit in “enumerating” the federal government’s powers, which at least tells us that the answer must be “sometimes,” not simply always or never. Powers not specifically granted to the federal government are generally reserved by the states. This is the principle of federalism, but in practice it leaves plenty of room for disagreement. The federal government has grown enormously in size and in the scope of its activities. It seems inevitable that tensions will arise over specific questions about the limits of federal authority. And over time, in response to challenges, the courts have interpreted some of the enumerated powers more expansively. There is an ongoing debate over what avenues, in addition to the courts, states may follow in challenging federal power. Some have framed it as a debate over state “nullification” of specific federal laws versus a constitutional convention to establish clearer limits on the reach of federal power.

Recently, nullification has been all the rage, as this article in The Hill makes clear. So-called “mandates” often require states to enforce federal laws, which is likely to provoke some objections. And major pieces of federal legislation have become so complex that details must be sorted out by the administrative agencies in charge of implementation. This involves lots of rule-making and delegation of authority that has frequently imposed burdens on state governments. States are increasingly refusing to cooperate. From The Hill:

The legislative onslaught, which includes bills targeting federal restrictions on firearms, experimental treatments and hemp, reflects growing discord between the states and Washington, state officials say. …

Friction between the states and the federal government dates back to the nation’s earliest days. But there has been an explosion of bills in the last year, according to the Los Angeles-based Tenth Amendment Center, which advocates for the state use of nullification to tamp down on overzealous regulation.

Later in the same article, the author discusses an effort to organize a constitutional convention:

… conservatives are pushing for states to invoke Article 5 of the Constitution and hold a ‘convention of states’ to restrict the power and jurisdiction of the federal government. The group Citizens for Self-Government is leading the charge, and three states — Alaska, Georgia and Florida — have already passed resolutions calling for the convention. Another 26 states are considering legislation this year, according to the group’s president, Mark Meckler. It would take 34 states to call a convention. At the convention, Meckler said the states would work to pass amendments that impose fiscal restraints, regulatory restrictions and term limits on federal officials, including members of the Supreme Court. ‘We’ll have [Article 5] applications pending in 41 states within the next few weeks,’ he said. ‘The goal is to hold a convention in 2016.’

Libertarians are split on the issues of nullification and a constitutional convention. The latter is addressed by A. Barton Hinkle in Reason, who questions the necessity of a convention and sees certain risks in the effort, such as new provisions that could “backfire”, the possibility of a “runaway convention”, and efforts to riddle the Constitution with “primary laws,” rather than merely improving it as a framework for governing how we are governed.

As for nullification, Robert Levy, board chairman of the Cato Institute, distinguishes between situations in which a state is asked to enforce a federal law and those involving federal enforcement of a law deemed to be unconstitutional by a state. He asserts that states cannot resolve the latter type of dispute via nullification:

Fans of nullification count on the states to check federal tyranny. But sometimes it cuts the other way; states are also tyrannical. Indeed, if state and local governments could invalidate federal law, Virginia would have continued its ban on inter-racial marriages; Texas might still be jailing gay people for consensual sex; and constructive gun bans would remain in effect in Chicago and elsewhere.

… If a state deems a federal law to be unconstitutional, what’s the proper remedy? The answer is straightforward. Because the Supreme Court is the ultimate authority, the remedy is a lawsuit challenging the constitutionality of the suspect federal regulation or statute.

Not surprisingly, the Tenth Amendment Center strongly disagrees with the limits on nullification described by Levy:

Levy’s entire argument rests on the idea that the federal courts possess the sole and final authority to determine the constitutionality of an act. … Levy never addresses the fundamental question facing those who oppose nullification: how does one reconcile the undeniable fact that the state ratifying conventions adopted the Constitution with the understanding that it was creating a general government with specific, limited powers and the idea that a branch of that very same federal government has the final say on the extent of its own powers? Quite simply, you can’t.

These recent efforts to rein in the federal government are exciting. I am watching the progress of the Article 5 convention effort with great interest. I am not sure I buy into Levy’s arguments against nullification because checks on power should cut both ways: the Constitution allows states to retain powers not specifically granted to the federal government, so the states should guard those powers jealously. It matters not whether the question involves state enforcement of a federal law or a federal law that violates states’ rights. Likewise, powers specifically granted to the federal government should serve as a check on “state-level tyranny”. Again, that leaves plenty of room for disagreement before the courts.