Taking The Air Out Of The Deflation Scare

Deflation is not the evil so many journalists have been taught to believe. The historical evidence does not support the contention that deflation is always a consequence of “underconsumption”, that it leads to a self-reinforcing spiral, or that it is destructive in and of itself. A new academic paper on the costs of deflation is reviewed here by John Cochrane, who reproduces some of the interesting evidence from the paper showing that deflation is not correlated with output growth historically. Cochrane quotes the paper’s authors:

‘The almost reflexive association of deflation with economic weakness is easily explained. It is rooted in the view that deflation signals an aggregate demand shortfall, which simultaneously pushes down prices, incomes and output. But deflation may also result from increased supply. Examples include improvements in productivity, greater competition in the goods market, or cheaper and more abundant inputs, such as labour or intermediate goods like oil. Supply-driven deflations depress prices while raising incomes and output.’

The Science Times has a succinct review of the same paper:

After analyzing figures going back to 1870 from 38 countries, Borio [one of the co-authors] concludes that declines in consumer prices are not actually the problem. He argues that the negative effects associated with deflation are in reality caused by huge declines in real estate prices and equity values. All this time, he posits, economists have been deceived by the fact that prices for goods and services have at times decreased at the same time that asset prices have gone down, especially during the Great Depression.

An earlier op-ed on deflation by Cochrane was the subject of this Sacred Cow Chips post a few months ago, which noted an unfortunate tendency among traditional Keynesian economists related to the statist agenda they often support:

Quick to blame insufficient private demand for economic ills, they propose to ratchet government to higher levels to make up for the supposed shortfall. That diagnosis is often debatable; the prescription may be a palliative at best and destructive at worst.

Deflation is usually a symptom of other, more primary economic phenomena. Whether it can be taken as a sign of economic malaise depends on the underlying cause. Certainly, as noted above, deflation is quite welcome when it results from supply-driven growth of output, especially if wages are supported by advances in labor productivity.

On the other hand, deflation may be a demand-side symptom of weakness engendered by restrictive monetary policy, fragile confidence among consumers or employers, trade restrictions, excessive taxation, over-regulation, or adjustments to a binge of malinvested capital. It does not follow, however, that a resulting deflation is unhealthy. Quite the opposite: Downward price adjustments help to clear the economy of excesses and pave the way for renewal, as excess goods, capital and other resources are repriced to levels at which purchases become gainful. This may involve more severe declines in some relative prices due to specific excesses, such as real estate. Some recent examples of deflation and reversals of economic weakness are discussed in this post at The Mises Daily.

One consequence of expected deflation is that market interest rates are driven below “real” interest rates, or the rates at which economic agents are indifferent between present and future consumption (abstracting from risk and liquidity premia). The latter is sometimes called the rate of time preference, the natural interest rate, or the originary interest rate. Recently, some short-term market interest rates in Europe have been negative, prompting some to offer arguments that the natural rate may have turned negative. This post by Thorsten Polleit reveals these arguments as nonsense:

If the originary interest rate was near-zero [let alone negative], it means that you prefer two apples available in, say, 1,000 years over one apple available today. A truly zero originary interest rate implies that the actor’s planning horizon or “period of provision” is infinitely long, which is another way of saying that he would never act at all but would continually push the attainment of his goals into the future.

Polleit discusses the fact that market real interest rates may be negative, but that is a consequence of central bank manipulation of nominal market rates, including the Federal Reserve’s so-called ZIRP, or zero interest-rate policy. Polleit has this to say about the destructive consequences of this kind of behavior, albeit in extreme form:

Should a central bank really succeed in making all market interest rates negative in real terms, savings and investment would come to a shrieking halt: as time preference and the originary interest rate are always positive, “capitalistic saving” — the accumulation of goods designed for improving the production process — would come to an end.
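The rate relationships at work here can be summarized with the standard Fisher relation, a textbook identity rather than anything taken from Polleit’s piece:

```latex
i \approx r + \pi^{e}
```

where $i$ is the nominal market rate, $r$ is the real (or natural) rate, and $\pi^{e}$ is expected inflation. With a natural rate of, say, $r = 2\%$ and expected deflation of $\pi^{e} = -3\%$, the market rate is $i = -1\%$. A negative nominal rate is thus fully consistent with a positive rate of time preference, so negative market rates in Europe are no evidence that the natural rate has turned negative.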

While Keynesians imagine that expansive government policy can rescue the economy from the ravages of weak private demand, they also know that accumulation of public debt is an unavoidable by-product. That reveals an underlying motive for policies such as ZIRP, as Polleit explains:

It is an actually perfidious policy for debasing the real value of outstanding debt; and it is a recipe for wreaking havoc on the economy.

An otherwise innocuous supply-side deflation, or a deflation corrective of demand-side forces, may well be accompanied by intervention by an activist central bank. The ostensible purpose would be to stimulate the demand for goods, but a more direct consequence is a reduction in the government’s interest costs. If the policy succeeds in pushing real market interest rates to zero or below, the intervention may well undermine capital formation and economic growth.

One Year Of Sacred Cow Chipping

Tomorrow, March 19, is the one-year anniversary of my first blog post on Sacred Cow Chips (SCC). This is my 242nd post, and traffic is increasing. Please forgive this bit of self-congratulation, but I think 242 in a year is okay for a guy with a fairly busy professional and family life, plus a few other hobbies. My earliest posts tended to be brief. More recently, I’ve been unable to keep my posts down to a paragraph or two, but perhaps I’ll try to add some shorter posts to the mix in the future. I also had frequent formatting issues in my first month or two of blogging, some of which remain uncorrected. That’s my fault, but in my defense, the editing software on WordPress at the time was primitive compared to subsequent upgrades.

I confess to some self-inflicted pressure to “feed the monster”, to post something after any absence of a few days. Fortunately, finding inspiration for posts is simple. There is always a surfeit of tempting memes on social media, and traditional media provide a steady stream of questionable commentary. And then there are the politicians. Of course, I have some favorite blogs and sites from which I draw ideas on a regular basis.

I have been using the most basic WordPress account plan, which imposes some limits on features and formats. Among other shortcomings, the text on SCC always looks too large to me. I plan on upgrading to a premium plan in the next few days. That might interfere with regular posting activity as I arrange a new format, but we’ll see how it goes.

If you are following my blog, thank you, and I hope you’ll keep visiting. It’s a lot of fun for me! In the meantime, Happy Anniversary, SCC!

Causal Confusion In The Gun Debate

As a follow-up to my recent post on defensive gun uses (DGUs), I think it’s appropriate to discuss international comparisons sometimes cited in support of the anti-gun rights agenda. This was prompted by correspondence from a fellow blogger, to whom I’ll refer as HH, who followed up with a post featuring some international data. I respect HH’s effort to collect the data and to present it with some eloquence, and with a little less rancor than the original correspondence. Nevertheless, the international comparisons are not as straightforward as HH would like to believe.

Let me state at the outset that I am not a big “gun guy”. I support individual liberty and a minimal state apparatus in general, along with gun rights, but I am not affiliated in any way with the NRA or any other pro-gun organization. As I told my well-armed older brother, he would not be impressed with my weaponry. I still keep a nasty, old fireplace iron under my bed. And I have a few rocks in my backyard.

HH believes that the high U.S. homicide rate relative to the handful of other developed countries he mentions (along with India) proves that “gun control works”. I differ for several reasons discussed below.

Causality and Gun Control: HH’s conclusion brings into focus two different aspects of the gun control question. The first is whether a change to more restrictive gun control leads to a reduction in homicides. That is not as obvious an outcome as HH thinks. For example, a gun ban cannot eliminate all guns, especially within limited jurisdictions. (Perhaps the federalist approach is partly why HH considers our gun laws “a mess”, but federalism is a feature of our system, not a bug, not least if it discourages local politicians from enacting ineffective rules.) Black market traffic in guns is likely to be sufficiently profitable to justify the legal risks in the presence of a ban. And the empirical evidence as to whether more stringent gun control reduces homicides is mixed at best (see here, here, here and here).

The empirical evidence presented by HH is not related to changes in gun laws (except for one or two suspect assertions about mass shootings). Instead, cross-country comparisons of homicide rates are given along with a single correlate: “gun laws”. The one data point driving the presumed direction of causality is the U.S., which has lenient gun laws and a high homicide rate relative to the four other countries (five if we include the U.K., whence HH hails). The comparisons are made with no controls for the history of gun rights and ownership, demographics, other prohibitions, or any other confounding influences. For HH, it’s all because of guns.

Mass Shootings: HH spends some of the post discussing this phenomenon, which is rare albeit horrifying. Mass shootings account for very few of U.S. homicides, and there has been no discernible upward trend in the U.S. (see here, here and here). Moreover, multiple victim shootings are just as common in Europe as they are in the U.S. They usually prompt calls for bans on arbitrarily-defined “assault weapons”, but the bans do little to prevent such tragedies.

Historical Background: Guns owned by private individuals played an important role in the American revolution. In fact, early British attempts to confiscate weapons led to an increase in the hostilities leading up to the war. The Second Amendment of the U.S. Constitution was intended to protect individual gun rights and to protect the nation from future tyrants.

The homicide rate has declined steadily in the U.S. over the past three hundred years, from estimates of more than 30 per 100,000 people in the early 1700s to less than five today. A similar pattern occurred in other parts of the world, but after 1850, the decline in the U.S. failed to keep pace with declines in Europe.

Private guns were integral to westward expansion in the U.S. Leaving aside the tragic consequences for Native Americans, the scramble for resources and the under-developed legal system in the west undoubtedly contributed to homicides. At the same time, the need of settlers to defend life and property in an insecure environment made gun ownership (and DGUs) a necessity. This history and the generally high value placed by Americans on individual rights set the tone for today’s generally permissive attitude toward gun ownership in the U.S.

Alcohol, Drug Prohibition and Homicide: The temporary lows in the homicide rate prior to the 1910s “may have been illusory”, according to this abstract, because many homicides were reported as accidents in that time frame. More accurate reporting created the impression of a rising homicide rate during the 1910s. Alcohol prohibition began in 1920 and contributed to an increase in U.S. homicides until after repeal. Likewise, later in the twentieth century, the drug war, together with a bulge in the youth population, contributed to an even larger increase in the homicide rate. It is interesting that this increase was accompanied by an apparent decrease in the rate of spousal homicide. (A curious aside: one analyst has noted the strong correlation between homicide rates in the U.S. and fluctuations in the use of lead-based paints and leaded gasoline.)

Illegal drugs are just one area of black market activity in which the U.S. is a world leader. The connection between heavier underworld and gang activity and prevalent restrictions on victimless, individual behavior, on the one hand, and homicide rates on the other, helps explain the elevated U.S. homicide rate. The existence of this link is supported by an extremely strong concentration of homicides within specific social networks.

Demographics: The interaction of legal restrictions on behavior and weak economic circumstances is undoubtedly a factor contributing to high homicide rates. It is striking that U.S. homicides are so heavily concentrated within the African American community. The relative lack of legal economic opportunities within the African American community may be connected to greater illegal trade and homicides. Homicide rates are also somewhat elevated among U.S. Hispanics and Native Americans. Among the White and Asian segments of the U.S. population, homicide rates are comparable to those of Europe (and well under India’s rate).

Suicides: My antipathy for anti-gun arguments is probably softest with respect to gun suicides. Guns are certainly “weapons of convenience”, easily transported, fast and highly effective. Within the U.S., there is some evidence that gun ownership and total suicides are positively correlated, despite a negative correlation with non-gun suicides. However, total suicide rates in the U.S. and U.K. are similar. The rates in France and especially Japan are higher, while the rates in Denmark and India are lower. Moreover, suicide is symptomatic of larger social problems that have little to do with gun rights. Our inability as a society to deal effectively with mental health issues probably has much more to do with suicide and homicide rates than gun ownership.

Summary: There are many reasons to discount international comparisons of homicide rates and regulation of firearms. The comparisons often neglect measurement issues, but more importantly, strong conclusions about the efficacy of gun control from such top-line comparisons are often drawn without carefully addressing the question of causality between changes in gun laws and changes in homicide rates. The comparisons also fail to consider variations in the larger historical and legal context within which gun ownership occurs. For a large society like the U.S., there are vast differences in sub-groups that usually reflect other social problems, some of which are created by intrusive government itself.

I close below with some thoughts on HH’s criticism of my original post on DGUs.

DGU Denialism: HH’s objections to my post on DGUs were based on a belief that I: 1) quoted misleading statistics on gun violence in the U.S.; 2) engaged in scaremongering (apparently by quoting a wide range of estimates of DGUs); and 3) used a headline (“When Government Prohibits Self-Defense”) demonstrating a wildly paranoid view of the intent of the U.S. government.

The statistics on gun violence I cited in that post came from the U.S. Department of Justice and The Law Center To Prevent Gun Violence, which are hardly representative of the gun lobby. By providing information on gun homicides, suicides, accidents and nonfatal wounds presented in emergency rooms, I was seeking to provide a fairly comprehensive list of the “downsides” of guns in the U.S. I thought that was only fair as a way to lend perspective on estimates of DGUs. The statistics on gun violence vary from year to year, of course, and even the homicide numbers vary across different “official” sources for a given year (the example given at the link is total homicides). For these reasons, my initial intent was to quote ranges. However, not all of the data were available over multiple years from my original sources. Some of the figures were simply DOJ “estimates”. And apparently, my searches did not turn up the most recent data available (most of the figures I quoted were either from 2010 or from 2005–2010). Well, mea culpa, mea culpa. My range for gun homicides of 10-12 thousand per annum was off, according to HH: it was actually 9 thousand! So, my range should have been broader in view of the continuing decline in gun homicides in the U.S., but I’m heartened to know that they were lower than I thought.

As for DGU’s, it is undeniable that they are a real phenomenon, though HH seems apoplectic that anyone would dare to discuss them. They obviously happen, though no one claims “there is always a good guy with a gun“. In fact, homicide statistics often exclude deaths from DGU’s and police shootings. (In the U.K., apparently one has to be found guilty of a murder for it to be counted as a homicide.)

Since any proposal to limit firearms would be more successful in disarming the law-abiding population than miscreants, it is reasonable to ask whether DGUs would decline more than non-justifiable homicides. Moreover, the low end of the range of DGU estimates I quoted came from DGU skeptics. In any case, I don’t think the following statements qualify me as a “scaremonger”:

Estimates range from under 100 thousand per year to more than 2.5 million. There are reasons to doubt both of the extremes. … Given this range of estimates, it would be conservative to hedge toward the lower end.

Finally, the headline: Now, I like a punchy headline, and I’ll bet HH does too. I also believe that the ultimate goal of the statist anti-gun lobby is to outlaw private firearms. Again, such a policy would have the largest impact on gun possession among the law-abiding population; the headline was meant to convey the consequences of doing so.

When Government Prohibits Self Defense

The Obama Administration is dropping a proposed ban on a certain kind of AR-15 ammunition after the ATF was deluged with negative comments. Gun rights supporters asserted that the ban, to be accomplished by administrative fiat, would have constituted a form of “back-door” gun control. There is no doubt that the “right to keep and bear arms” would be compromised by piecemeal bans on various types of ammo. In this case, the rationale for the proposal was that the “green-tip” ammo in question was said to be armor-piercing and therefore a greater threat to law enforcement. A spokesman for the Fraternal Order of Police says that the ammo in question “has historically not posed a law enforcement problem“. Moreover, the Law Enforcement Officers Protection Act of 1986, which banned armor-piercing bullets, specifically exempted the green-tip ammo and other types of rifle ammo because they did not meet “either part of the two-part definition of ‘armor-piercing’“.

Gun control advocates have little sympathy for broad interpretations of Second Amendment rights granted by the U.S. Constitution. The amendment reads:

A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.

A statist interpretation of this sentence puts “the people”, and more specifically individuals, in a subservient position to the “militia” and ultimately the government. However, we know that the Constitution was intended as a device to limit the power of the federal government and protect individual rights. This is what Glenn Reynolds means by “ordinary constitutional law“. As he notes, “… individual citizens’ lives and autonomy are themselves, in some important aspects, beyond the power of the state to sacrifice.” The right of self-defense, and to bear arms, was part of English common law, was certainly an important issue in the founders’ time, and remains important today.

Beyond the legal interpretations, an empirical and philosophical debate rages over whether gun violence, including homicides, accidents and suicides, and gun crimes in general, can be weighed against crimes prevented by so-called defensive gun uses (DGUs). Not that DGUs are the end of the pro-gun rights story: private gun ownership in society carries with it an enormous deterrent value against criminality, but that is obviously difficult to quantify.

As a baseline, the annual number of gun deaths in the U.S. is known with a fairly high degree of accuracy. The number of non-justifiable gun homicides each year is roughly 10-12 thousand (see p. 27 of this publication from the DOJ). The number of accidental gun deaths is typically less than 1 thousand per year (see here for this and the following statistics). About 18-20 thousand gun suicides occur each year, though some of these would have occurred by other means if a gun had not been available. Together, roughly 29-33 thousand gun deaths occur annually in the U.S. Again, some of these deaths would have occurred with or without guns. In addition, in 2010, there were 73,505 non-fatal gunshot wounds treated in emergency rooms. And crime victimization with firearms should be defined more broadly. While the following would double count the deaths cited above, the DOJ reports an annual average of about 250 thousand victimizations involving strangers with guns, and roughly 170 thousand involving known individuals with guns. Also, the DOJ estimates that each year, there are an average of about 180 thousand unreported incidents of victimization involving guns.
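For what it’s worth, the arithmetic behind the annual total can be checked in a few lines (a back-of-the-envelope sketch of the figures quoted above, with accidental deaths rounded to roughly 1 thousand):

```python
# Rough tally of the annual U.S. gun-death figures quoted above
# (DOJ-era estimates; ranges reflect year-to-year variation).
homicide_range = (10_000, 12_000)   # non-justifiable gun homicides
suicide_range = (18_000, 20_000)    # gun suicides
accidents = 1_000                   # accidental deaths, "less than 1 thousand"

low = homicide_range[0] + suicide_range[0] + accidents
high = homicide_range[1] + suicide_range[1] + accidents
print(f"{low:,} to {high:,} gun deaths per year")  # 29,000 to 33,000
```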

These are daunting numbers, but again, some of these incidents would have occurred in the absence of guns. Note as well that violent crime rates have been in decline over most of the past 25 years, including gun crime.

DGUs are phenomena that occur with greater frequency than gun opponents care to admit. DGUs include the actual discharge of a gun in self-defense or merely brandishing or threatening the use of a gun. Estimates range from under 100 thousand per year to more than 2.5 million. There are reasons to doubt both of the extremes. This article by Brian Doherty in Reason and this paper from The CATO Institute do a good job of explaining some of the controversies surrounding measurement of DGUs. The high-end estimates and some of the low-end estimates come from survey data, but the reliability of both can be called into question. Police reports and media coverage have been used as well, but these are certain to undercount the actual number of DGU incidents, especially for cases in which no shots are fired.

Given this range of estimates, it would be conservative to hedge toward the lower end. One researcher attempted to reconcile the gap in 1997, but he did so with the use of some very rough discounting and gross-up factors that brought the range of annual DGUs up to 256-373 thousand at the low end, and down to 1.2 million at the high end. And while it would be simplistic to assert that these estimates, in any absolute sense, outweigh those given above for gun violence, the DGU estimates are certainly nontrivial by comparison. Again, there is no way to estimate the value of the general deterrent against violent crime provided by legal gun ownership, but it must be counted as reinforcing the DGU side of the ledger.

Case studies cover a variety of crimes prevented by DGUs. But even if you subscribe to the low-end estimates of DGUs, Brian Doherty points out that the statistics are irrelevant to those who have had to defend themselves with guns:

Those people who lived out the stories in any case study collection of newspaper or police reports of DGUs would doubtless find it curious to hear they shouldn’t have had the right to defend themselves, because an insufficiently impressive number of other citizens had done the same. But underestimating the significance of what’s at stake in Second Amendment rights—even though it can clearly be life itself, not to mention dignity—is a favorite pastime of gun controllers and their ideological soldiers.

Finally, to pretend that any form of prohibition can be successful in stamping out objectionable activity is foolhardy. That lesson is offered by the drug war, alcohol prohibition, prostitution laws, and many other misguided attempts to control behavior. The same is even true of laws upon which there is broad consensus. However, there is a difference when government attempts to prohibit victimless behavior. And the difference is more pernicious when government prohibits tools with which citizens can defend themselves against victimhood.

While outright prohibition exceeds the extent of most serious gun control proposals, prohibition is the ultimate goal of anti-gun activists. Laws against gun ownership do not eliminate guns, but they do hinder the possession of guns and self-defense by law-abiding citizens.

Netflix: Oops… No, Let’s Not Regulate The Internet

Netflix was heralded only recently as a strong supporter of net neutrality, but the company has changed its position in the wake of the FCC’s decision to reclassify broadband ISPs as common carriers. The link goes to a Google search page. The top article listed there should be ungated, from L. Gordon Crovitz in the Wall Street Journal. I have posted a number of times on the misguided policy of net neutrality (see here, here, here, and here). While I hesitate to post on the topic again, I think a short description of the Netflix flip-flop, or should I say its “evolving position“, is worthwhile, especially with a few quotes from the Crovitz article.

Crovitz notes that Netflix videos “take up one-third of broadband nationwide at peak times.” The company’s support for so-called neutrality seemed grounded in its frustration at the prospect of having to negotiate for massive use of resources controlled and sometimes owned by the ISPs. Here’s Crovitz:

Today Netflix is a poster child for crony capitalism. When CEO Reed Hastings lobbied for Internet regulations, all he apparently really wanted was for regulators to tilt the scales in his direction with service providers. Or as Geoffrey Manne of the International Center for Law and Economics put it in Wired: ‘Did we really just enact 300 pages of legally questionable, enormously costly, transformative rules just to help Netflix in a trivial commercial spat?’

Indeed! But the powers at Netflix have had a revelation:

Net-neutrality advocates oppose ‘fast lanes’ on the Internet, arguing they put startups at a disadvantage. Netflix could not operate without fast lanes and even built its own content-delivery network to reduce costs and improve quality. This approach will now be subject to the ‘just and reasonable’ test. The FCC could force Netflix to open its proprietary delivery network to competitors and pay broadband providers a ‘fair’ price for its share of usage.

There’s no need for the FCC to override the free-market agreements that make the Internet work so well. Fast lanes like Netflix’s saved the Internet from being overwhelmed, and there is nothing wrong with the ‘zero cap’ approach Netflix is using in Australia. Consumers benefit from lower-priced services.

I will leave you with my favorite part of the Crovitz piece:

Last week John Perry Barlow, the Grateful Dead lyricist-turned-Internet-evangelist, participated in a conference call of Internet pioneers opposed to the FCC treating the Internet as a utility. He called the regulatory step ‘singular arrogance.’

In 1996 Mr. Barlow’s ‘Declaration of the Independence of Cyberspace’ helped inspire a bipartisan consensus for the open Internet: ‘Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.’

Scarcity Scarcity Everywhere, And Water Pricing Stinks

What weird irrationality compels water authorities to price “Adam’s Ale” so cheaply, then mercilessly harangue consumers to conserve? The enforcement of sometimes crazy rationing schemes, like watering lawns only on dates ending with the last digit of one’s street address, is but a symptom of this dysfunction. If water is scarce, then it should be priced accordingly. Only then will users voluntarily limit their use to quantities they value at no less than its real resource cost. This might involve changes in agricultural and industrial practices, landscaping and lifestyles. Perhaps there would be fewer lawns and swimming pools installed where water is most scarce. But these actions should be taken voluntarily in response to market incentives.

Water prices are generally regulated and administered, and only rarely established in an actual market. Pricing is usually based on the infrastructure costs of delivering water, as well as the costs of processing required to meet various standards. Again, these prices seldom reflect the real scarcity of water. This is partly due to populist distortions of the idea that water is basic to life, the perception that water is a public good, and the related political appeal of notions like “the water belongs to everyone”. There is also the admirable objective of keeping water affordable for the poor. But unit water prices faced by different users are not uniform: agricultural users sometimes pay up to 90% less per unit than the generally cheap prices faced by urban consumers. Industrial users are also accorded favorable rates. Needless to say, incentives are way out of line!

When a resource is priced at levels that do not reflect its scarcity, something has to give. The resource will be overused, and overuse of water inflicts severe environmental damage. With water, that can mean draining lakes and killing springs and riverbeds along with the habitat they support, not to mention lower water quality. The waste doesn’t stop there: authorities are sometimes prone to propose costly infrastructure boondoggles to address water needs, such as dams and reservoirs in arid climates from which large quantities of stored water evaporate.

This episode of EconTalk features a discussion of water mis-pricing and its consequences. (A hat tip on this to the estimable John Crawford.) It covers issues in the management of water systems in the U.S. and under-developed countries. It is a very informative discussion, but it neglects one of the most promising methods of pricing, managing and conserving water supplies: marketable permits, or a secondary market in water rights.

Marketable permits involve the assignment of base usage rights using criteria such as estimates of total supplies and the customer’s past usage levels. This base allocation of rights can be dynamic, changing over time with drought conditions or improvements in conservation technology. Usage up to the permitted quantity is priced administratively, as usual, which keeps water affordable to individuals in lower economic strata. Beyond that base level, however, users must acquire additional permits from a willing seller at a mutually agreed-upon price. Trades can take place on a centralized water “exchange” so that prices are observable to all market participants. And trades may take various forms, such as short-term or long-term contracts which may involve prices that differ from “spot”.

How does this help solve the problem of scarcity? The price of water on the secondary market will rise to the point at which users no longer perceive a marginal benefit from additional water above its cost. A higher price encourages voluntary conservation in two ways: it is a direct cash cost of use above one’s base water rights, and it is an opportunity cost of forgoing the sale of permits on water use up to the base assignment. Those best prepared to conserve can sell excess rights to those least prepared to conserve. The price established by the trade of permits will bear a strong relationship to the actual degree of scarcity.

A hallmark of allocative efficiency is that the marginal value of the resource is equalized across different uses. This condition implies that no gains from trade are left unexploited. But in the case of water, this means that gains in efficiency will be limited unless all users face the same “spot” price at the margin. To fully exploit the market’s potential for efficient allocation, the base prices faced by large agricultural and industrial users should differ from those faced by residential customers only by differences in infrastructure costs. Granted, voluntary trades between users can take place under specialized contracts as long as the terms are publicly available. This allows intensive users to hedge risks to assure that their needs can be met in the future. However, those users will still have to weigh the marginal benefits of certain crops or industrial processes against prices that more accurately reflect scarcity.
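
The marginal-value condition can be made concrete with a minimal sketch. The linear marginal-value curves and every number below are hypothetical, chosen only to illustrate how a clearing price on a permit exchange equalizes marginal values across users:

```python
# Sketch: a permit market clearing price equalizes marginal values.
# Marginal-value curves MV(q) = a - b*q are hypothetical illustrations.

def quantity_demanded(a, b, price):
    """Quantity a user wants at a given permit price, from MV(q) = a - b*q."""
    return max(0.0, (a - price) / b)

def clearing_price(users, supply, lo=0.0, hi=1000.0):
    """Bisect for the price at which total demand equals the permit supply."""
    for _ in range(200):
        mid = (lo + hi) / 2
        total = sum(quantity_demanded(a, b, mid) for a, b in users)
        if total > supply:
            lo = mid       # excess demand: the price must rise
        else:
            hi = mid
    return (lo + hi) / 2

# Two hypothetical users: (a, b) parameters of MV(q) = a - b*q
users = [(100.0, 2.0),    # farm: flat curve, heavy intended usage
         (80.0, 8.0)]     # household: values the first units highly

supply = 40.0              # total permits available
p = clearing_price(users, supply)

# At the clearing price, every active user's marginal value equals p,
# so no further mutually beneficial trade exists.
for a, b in users:
    q = quantity_demanded(a, b, p)
    print(f"q = {q:.2f}, MV = {a - b*q:.2f}, price = {p:.2f}")
```

With these made-up curves the market clears at a price of 32: the farm takes 34 units and the household 6, and both face a marginal value of exactly 32. Any further trade would need a buyer valuing water above 32 and a seller valuing it below, and no such pair exists — the textbook condition for allocative efficiency.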

This discussion has ignored certain complexities. For example, assigning rights is complicated by the fact that there are almost always multiple sources of water, such as rivers, public and private wells, lakes and runoff capture. There are sometimes different classes of rights-holders on specific sources. Rights on some sources might not be subject to base pricing by a water authority, but water permits could still be sold by these rights-holders on the secondary market, providing an incentive for them to conserve.

There have been political and legal impediments to the development of water markets in the U.S., some of which are discussed here. A recent effort to promote a water market in the western U.S. has arisen in response to drought conditions. Here is a good article from the last link above, a lengthy abstract of a research paper proposing development of a water market in the American West. Of course, there are many academic papers on this topic, but they are mostly gated.

I lived in San Antonio in the 1990s when a controversial proposal to build a large reservoir was under debate. This was intended to relieve demands on the Edwards Aquifer, upon which a large area of Texas depended for water. It was voted down by a coalition that included many libertarians and environmentalists. At about that time, I met a natural resource economist from the University of Texas system who proposed the establishment of a water market in south Texas. He had trouble getting local support for the idea; it was politically taboo due to superstitions about an effort to allocate rights (marketable permits) on what is often perceived as a “public good” (despite the exclusivity of its benefits to customers). Later, in 1998, the San Antonio branch of the Federal Reserve Bank of Dallas published this interesting article on the development of a water market in south Texas. To my knowledge, there is still no water market there, but battles over water use and conservation continue.

Will SCOTUS Grant Executive License To Rewrite Laws?

Can a piece of legislation say any old thing, leaving the executive branch as the arbiter over what the law “should” say? Can the executive decide a law means one thing ex ante and another ex post? That would be bizarre under the U.S. Constitution, but the Obama Administration has arrogated to itself the role of legislator-in-chief in its implementation of the Affordable Care Act (ACA), aka Obamacare, effectively rewriting the law by repeatedly granting waivers and delaying key provisions. And the apparent legal doctrine of “executive license” to rewrite laws would be affirmed if the Supreme Court rules for the government in King v. Burwell.

The case, which was argued before the Court this week, revolves around whether the ACA allows subsidies to be paid on health insurance purchased by qualified consumers on federal exchanges. The plaintiffs say no because, in the “plain language of the statute”, subsidies can be paid only for health insurance purchased on exchanges “established by the state”. A ruling is expected in June.

The provision in question was intended to incent state governments to establish their own exchanges. Most states chose not to do so, however, instead opting to allow their citizens to purchase insurance on a federal exchange. Subsequently, the IRS overrode the provision in question by granting subsidies for purchases on any exchange. The case will be historic if the federal exchange subsidies are overturned, but if not, the ruling will still be historic in setting a precedent that the executive branch can enforce a view of Congressional intent so divergent from written law.

The most interesting aspect of the SCOTUS hearing was Justice Kennedy’s expressed concern that a ruling for the plaintiffs would create a situation in which the federal government coerced states into establishing exchanges, posing a conflict with principles of federalism. The Wall Street Journal was fairly quick to point out that the subsidies were intended as an incentive for states, not unlike many other incentives for state participation incorporated into a wide variety of federal programs:

If Governors decline to establish an exchange, their citizens are not entitled to benefits, but that is not coercion. That is the very trade-off that is supposed to encourage states to participate. If the subsidies will flow no matter what, few if any states would become the partners the Administration wanted.

More to the point, federalism is supposed to protect political accountability. Two-thirds of the states made an informed decision to rebuff ObamaCare, but if voters prefer otherwise, they can elect new Governors who won’t. If federal subsidies flow no matter what, then states aren’t presented with a real choice. That isn’t how federalism works in the American system. As Justice Kennedy rightly noted, the exchange decision was partly ‘a mechanism for states to show they had concerns about the wisdom and workability of the act in the form that it was passed.’

Jonathan Adler has some thoughts on the same issues here and here. At the second link, Adler gives a more detailed explanation of Kennedy’s concern, which involves additional regulatory implications for the states. Adler also covers some court precedents for the kind of “coercion” at issue in King. Of one case, New York v. United States, Adler says:

In the very case that established the current anti-commandeering doctrine, the Court said there was no problem with Congress using its regulatory authority to encourage state cooperation.

The Court would be reluctant to rule for the plaintiffs based on a principle contrary to so many of its own previous rulings. Such a justification would appear to undermine the existing extent of federal direction of state activity — a possible silver lining to a ruling for the government. But Adler also notes that what is so unique about the ACA relative to earlier precedents is that so many states decided to opt out, and there is plenty of evidence that they did so with their eyes wide open. The loss of the federal subsidies was not the only consideration in those decisions:

“… while states that choose to forego subsidies are exposing their citizens to an increase in one regulatory burden, they are relieving their citizens of others, and at least some states are perfectly happy to make that choice.”

An amusing analogy to the distinction between federal exchanges and state-established exchanges is made by Jonathan Cohn in the Huffington Post. He contends that federal and state exchanges are comparable to the choice between butter and oil in a pancake recipe from The Joy of Cooking. You get pancakes either way, says Cohn. Therefore, he asserts that the case against the government in King is based on a specious distinction. Sean Trende at Real Clear Politics points out that the two kinds of pancakes are not the same. If Congress wishes to reward the use of butter, then one should expect the government to preserve that distinction in distributing rewards.

Trende points to another distinction missed by Cohn: suppose Congress also said that the batter must be whipped by a blender at 300 rpm. In the case of Obamacare, Congress stated that an exchange must be established by a state to qualify buyers for subsidies, and it did so with the full intent of gaining cooperation from states in shouldering the administrative burdens of the law. Of course, different pancakes might be close enough, but in the end, specific language was used by Congress to create incentives for the use of certain ingredients and a particular mixing technique. The meaning of the pancake law is clear enough and is independent of whether administration officials can dream up substitutes, even if they are right out of The Joy of Cooking.

The four statist justices (some claim they are liberal) emphasized the dire consequences that a ruling for the plaintiffs would have on the insurance market and on individual buyers in states using the federal exchange. While the impact could be mitigated by the Court in various ways, the impact itself has been exaggerated by Obamacare supporters. This piece at Zero Hedge examines the likely impact in detail, but it fails to discuss a few significant benefits to residents of states without their own exchanges: relief from the employer and individual mandates.

Justice Kennedy is unlikely to side with the government in this case, despite his concerns about coercive federal policy. Chief Justice Roberts was silent for almost the entire hearing, and it is not clear whether he will side with the consequentialists, find another avenue for upholding the subsidies, or defer to the plain language of the law. The Court might engage in a form of avoidance, finding a way to dismiss the case on unexpected grounds such as a lack of standing (though few consider the plaintiffs’ standing to be an issue). That, too, would effectively grant the administration carte blanche in rewriting legislation.

In Praise of Ticket Scalpers

I have been a fan of The Grateful Dead since I was a teenager and have seen the band perform somewhere around 35 times prior to Jerry Garcia’s death in 1995 … I actually lost count. This summer, the four surviving original band members, along with some prominent guest musicians, will perform three reunion shows over the July 4th weekend at Chicago’s Soldier Field. They have said that this will be their last performance together.

Demand for tickets was so high that it surprised the band and the promoter. In January, an initial mail order tallied about 65,000 orders for more than 350,000 tickets, far more than the mail-order allotment and the stadium capacity for three days. On-line requests went mostly unfilled as the system was swamped when tickets went on sale. Chicago Bears season ticket holders had the right of first refusal on a large number of tickets, which is unfortunate given the probably limited overlap between Bears season ticket holders and Deadheads. And so there is a problem of scarcity and excess demand, a common occurrence for big concerts and sporting events.

Naturally, a secondary market has arisen to allocate the limited supply of tickets available from brokers and other willing sellers. However, as noted at the links above, asking prices on outlets like StubHub, often well above $1,000 per ticket, have shocked observers. Few transactions will actually take place at those prices. Repricing will occur until enough willing buyers are found. Nevertheless, many “Deadheads” are outraged. There are complaints on Facebook from self-righteous Deadheads, boasting of their honor as music fans and condemning the “greed” of resellers. Needless to say, some of the resellers are, in fact, lucky Deadheads who, having landed tickets, now find the prospect of a pecuniary gain from a resale just too good to pass up!

I am very much in favor of a free secondary market and so-called “ticket scalping.” First and foremost, these transactions are voluntary. There is no coercion involved, just a willing buyer and seller who reach a mutually beneficial deal. A buyer will agree to pay a certain price only if that price is less than the subjective value they assign to the ticket. Of course, a potential secondary buyer would rather have been lucky in what amounted to a lottery for tickets. But if not, they are not shut out altogether. A little patience on the secondary market might bring prices well within reach.

Second, the allocative mechanism in play on the secondary market is little appreciated, but it contributes to social gains. Tickets will be allocated to those who value them most highly. In fact, individuals who value their own time most highly might avoid the time and aggravation of participating in the mail order or joining the on-line sales queue. Instead, these individuals know they can fall back on the secondary market to obtain seats, thereby conserving a valuable resource: their time. Some will contend that all tickets should be made available and allocated via some other, non-price mechanism, such as a lottery or a queue, whereby willingness to pay cash is rendered moot. Unfortunately, such mechanisms have severe drawbacks in the presence of excess demand: they tend to waste time for both the lucky and unlucky participants, they may allocate tickets to buyers who value them less highly, they infringe on personal liberty by preventing individuals from taking part in mutually beneficial exchanges, and they waste scarce law enforcement resources.
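
A tiny simulation makes the allocative point. The fan valuations below are hypothetical numbers drawn at random; the mechanism, not the figures, is what matters:

```python
# Sketch: total value captured by ticket holders, lottery vs. free resale.
# Fan valuations are hypothetical, drawn at random for illustration only.
import random

random.seed(42)
fans = [random.uniform(50, 1500) for _ in range(1000)]  # subjective values ($)
tickets = 100                                            # seats available

# Lottery: winners are drawn at random, ignoring willingness to pay.
lottery_value = sum(random.sample(fans, tickets))

# Free resale: each trade requires a buyer who values the ticket more than
# the seller does, so tickets migrate toward the highest-value fans.
resale_value = sum(sorted(fans, reverse=True)[:tickets])

print(f"value with lottery alone: {lottery_value:>10,.0f}")
print(f"value after free resale:  {resale_value:>10,.0f}")
```

Resale can only weakly raise the total: a trade occurs only when both parties gain, so the post-resale allocation always captures at least as much value as the lottery draw it started from.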

Another advantage of the allocative mechanism embodied in the secondary market is its ability to create value in the presence of risk. Performers and promoters are loath to price tickets optimally, partly because there is risk in doing so: damage to goodwill with their fan base and the risk that they will over-price tickets and possibly fail to fill the house. Secondary sellers will gladly accept pricing risk, and the frenzy surrounding an active secondary market can serve as a promotional device for performers. Moreover, by allowing tickets to be allocated to buyers who value them most highly, the venue and the community benefit by bringing in the most appreciative crowd, adding to the success and vibrancy of the local entertainment market. A prohibition on scalping closes off a convenient channel through which some of the most valuable customers can obtain seats to events. Here’s what one ticket market scholar states:

… a curtailment of scalping markets would not only prevent allocation according to maximization of utility, it would also have the dynamic effect of reducing in the long term the supply of cultural events! This is very rarely mentioned, but following the adoption of an anti-scalping law in Quebec, industry experts have indicated that cultural centers like the Bell Centre in Montreal have reduced events and potential audiences by some 6% to 11%.

Finally, the fact that prices are high on the secondary market implies great scarcity. The Grateful Dead may have aggravated the situation by stating unequivocally that these would be their last shows. They could have remained silent or vague on that point. But scarcity can be addressed in other ways by performers and promoters: they can agree to price the tickets more highly; they can arrange to perform more shows and appear at more venues; and they can create imperfect substitutes for the actual concert experience, such as providing live-feeds of the show to other venues, including live streaming.

In this case, the band has taken steps to alleviate the shortage. First, they have reconfigured the plan for the floor of the stadium to allow a larger crowd in a “GA Pit” (presumably standing room), and they are opening up the set and directing sound to accommodate seating behind the band. Second, they are discussing the possibility of providing high-quality, live feeds to other venues. This should help to take some of the pressure off prices in the secondary market.

My wish is that the band would also announce additional performances, either in Chicago or a few other cities. My mail order went out on the first day with an early postmark and it is still unanswered. My hopes remain high, but if I don’t get into the show, I’m sure to attend a viewing party!

The FCC’s Net Brutality Order

Supporters of so-called net neutrality do not understand the contradiction it represents: the promotion of implicit subsidies to heavy users of scarce internet capacity. Nor do they understand the role of incentives in allocating scarce resources. Last week the FCC voted 3-2 to classify internet service providers (ISPs) as common carriers under Title II of the Communications Act of 1934, henceforth subjecting them to regulatory rules applied to telephone voice traffic since the 1930s. With this change, which won’t take place until at least this summer, the FCC will be empowered to impose net neutrality rules, which proponents claim will protect web users with a guarantee of equal treatment of all traffic. ISPs would be prohibited from creating “fast lanes” for certain kinds of traffic and pricing them accordingly. The presumption is that under these rules, small users would not be shut out by those with a greater ability to pay.

Like almost every progressive policy prescription, this regulatory initiative insists on biting the hand that feeds. It reflects a failure to properly identify parties standing to gain from such regulation. The distribution of internet usage is highly unequal: less than 10% of all users account for half of all traffic, and half of users account for 95% of traffic. Data origination on the web is also highly unequal: “Two companies (Netflix and Google) use half the total downstream US bandwidth”.

The neutrality rules will assure that those dominating traffic today can continue to absorb a large share of capacity at subsidized prices. Price regulation may require that high-speed streaming of films and events be priced the same as lower-speed downloads of less data-intensive content. Without always-open, dedicated data lanes, so-called “smart” technologies and the “internet of things” will be degraded or fail to reach their potential, and safety may be compromised; the same is true of medical applications, which would receive priority in a sane world. Without price incentives:

  1. conservation of existing capacity will not take place in the short-run;
  2. growth in capacity will languish in the short- and long-run;
  3. development of new applications and technologies will be stunted; and
  4. rationing via slowdowns, outages and imposition of usage caps may be necessary. Will these rationing decisions be “neutral”?

The unregulated development of the internet is an incredible success story. FCC commissioner Ajit Pai, who is a critic of net neutrality, makes this point forcefully. In a strong sense, internet development is still in its infancy. New and as yet unimagined web-enabled functionalities will continue to be embedded into everyday objects all around us. This process can only be impeded by government regulation, particularly of a form intended to control one-dimensional services offered by monopolists (i.e., public utilities). Competition in broadband access is growing, and it is enhanced by the ability of providers to co-mingle applications with the so-called “dumb pipe.”

The growth in uses and usage must be enabled by growth in network infrastructure. For that, incentives must be preserved through pricing flexibility and the ability of ISPs to negotiate freely with content providers and application developers. On this point, Pai says:

The record is replete with evidence that Title II regulations will slow investment and innovation in broadband networks. Remember: Broadband networks don’t have to be built. Capital doesn’t have to be invested here. Risks don’t have to be taken. The more difficult the FCC makes the business case for deployment, the less likely it is that broadband providers big and small will connect Americans with digital opportunities.

Pai also asserts that horror stories about greedy ISPs restricting the ability of small users to access the Web are largely a fiction:

The evidence of these … threats? There is none; it’s all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced Facetime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren’t enough to tell a coherent story about net neutrality. The bogeyman never had it so easy.

Then there is the small matter of potential content regulation (see the first link on the list), which some fear could be enabled by the FCC’s action. This would be an obvious threat to an open and free society, and the advent of such rules would discourage growth in internet applications by giving would-be prohibitionists a new way to tie and gag those of whom they disapprove.

Net neutrality and the FCC’s “Open Internet Order” serve the interests of large content providers who would rather not have to pay the long-run marginal cost of the network capacity tied up by their end-users. It represents a distinct form of rent-seeking in data transport services. Allowing ISPs to negotiate with significant content providers allows the transport cost of individual services to be “unbundled”, thereby promoting economic efficiency and avoiding cross-subsidies from lighter to heavier users and uses. As new, intensive applications are introduced, the economic costs and benefits can then be weighed more accurately by prospective customers.
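
The cross-subsidy is easy to quantify. Here is a minimal sketch with hypothetical dollar figures; only the usage shares come from the statistics cited earlier (10% of users generating half of all traffic):

```python
# Sketch: implicit subsidy from light to heavy users under uniform pricing.
# All dollar figures are hypothetical; the usage shares mirror the
# statistics cited above: 10% of users generate half of the traffic.

users = 100
heavy, light = 10, 90                  # 10% of users...
heavy_share, light_share = 0.5, 0.5    # ...generate half of the traffic
network_cost = 10_000.0                # monthly capacity cost (made up)

uniform_bill = network_cost / users                 # flat, "neutral" price
heavy_cost = network_cost * heavy_share / heavy     # capacity a heavy user ties up
light_cost = network_cost * light_share / light     # capacity a light user ties up

# Each heavy user underpays and each light user overpays; the totals match.
print(f"uniform bill:             ${uniform_bill:7.2f}")
print(f"heavy user's actual cost: ${heavy_cost:7.2f}")
print(f"light user's actual cost: ${light_cost:7.2f}")
print(f"subsidy per heavy user:   ${heavy_cost - uniform_bill:7.2f}")
```

With these made-up numbers, a flat $100 bill leaves each heavy user $400 short of the $500 of capacity he ties up, financed by a $44 overpayment from each of the 90 light users. Unbundled, usage-based pricing eliminates the transfer entirely.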

Department of Homeland Skepticism

Almost any reference to the U.S. as the “homeland” makes me cringe. It has such a jingoistic ring to my ear that I am immediately suspicious of the speaker’s motives: “Propaganda Alert!” I suppose anyone who makes their home in this land has a right to call it their homeland, if they must. The term seems uniquely appropriate for Native Americans. For others residing in this “nation of immigrants”, the homeland always strikes me as a reference to the country of origin of someone’s ancestors.

It’s all the worse when a government super-agency engaged in a variety of controversial activities uses “homeland” as its middle name. That very name, the Department of Homeland Security (DHS), suggests that whatever it is they do there must be in the interests of our homeland, and therefore beyond reproach.

To me it’s creepy and Orwellian, but the name of the agency is of little import relative to its activities. Now, as the fight over DHS funding — and President Obama’s executive order on immigration — reaches a fever pitch in Congress, Nick Gillespie asks, “Why do we even have a Department of Homeland Security in the first place?”

“Created in 2002 in the mad crush of panic, paranoia, and patriotic pants-wetting after the 9/11 attacks, DHS has always been a stupid idea. Even at the time, creating a new cabinet-level department responsible for 22 different agencies and services was suspect. Exactly how was adding a new layer of bureaucracy supposed to make us safer (and that’s leaving aside the question of just what the hell “homeland security” actually means)? DHS leaders answer to no fewer than 90 congressional committees and subcommittees that oversee the department’s various functions. Good luck with all that.”

Gillespie expounds on the profligacy and mismanagement at DHS. It has a voracious appetite for resources and taxpayer funds and is notorious for waste, to say nothing of its less than full-throated enthusiasm for civil liberties:

“The Government Accountability Office (GAO) routinely lists DHS on its ‘high risk’ list of badly run outfits and surveys of federal workers have concluded ‘that DHS is the worst department to work for in the government,’” writes Chris Edwards of the Cato Institute.

That shouldn’t make anyone feel much safer. Gillespie advocates dismantling the entire agency. A high-level org chart for DHS is shown here. Certainly its constituent sub-agencies were able to function before the DHS concept was hatched. There might have been some interagency rivalry, but there was also cooperation. Would a DHS have been better able to anticipate and prevent the 9/11 attacks? That’s doubtful. Given its track record, it’s difficult to see how the DHS bureaucratic umbrella improves security, and it is not a model of cost efficiency despite expectations of reduced duplication of overhead.

Threats by a faction of the GOP to defund DHS enforcement of Obama’s immigration order are creating another deadlock in Congress. The order would grant amnesty to over 5 million illegal immigrants, but a Federal judge has ruled in favor of 26 states that sued to stop enforcement based on the imposition of enforcement costs on the states. GOP leadership would rather approve funding and let the courts do the heavy lifting to stop the order, but the administration has asked the judge to stay his injunction pending appeal. If a stay is granted, and that is unlikely, or if an appeals court overturns the ruling, implementation of the order would go forward before the conclusion of what would likely be a protracted legal process. The de-funders are unwilling to take that chance.

Democrats claim that the effort to defund DHS enforcement of the executive order will shut down the agency, which is nonsense. GOP leadership fears that Republicans will be blamed if there is even a perception of negative consequences. I suspect Obama will do his best to create those perceptions, but the funding gap won’t have much real impact. In any case, I’m with Nick Gillespie: to hell with the DHS administrative umbrella! Releasing the individual security agencies from DHS’s grip would be more likely to reduce costs with no loss of security, and just might promote individual liberty.