Relieving the U.S. Public Toilet Shortage: User Fees



The musical comedy Urinetown opened in 2001 and ran for 965 performances, not a bad run by Broadway standards. The show, which is still performed in theaters around the country, is a melodramatic farce: a town tries to deal with a water shortage by mandating that all townspeople use pay toilets controlled by a malevolent private utility. Despite the play’s premise, pay toilets are a solution to the very real problem of finding decent facilities, or any facility, in which to relieve oneself in public places. Anyone who has ever strolled the streets of a city has encountered this problem from time to time. But in the U.S., where local budgets are typically strapped, the choice is often between scarce and decrepit free toilets or no toilets at all. Otherwise, those seeking relief must rely on the kindness of business owners, unless cities pass laws allowing non-patrons to commandeer businesses’ bathrooms at will. Toilets with user fees, however, are an alternative that should get more emphasis.

In part, the theme of Urinetown reflects a longstanding notion among anti-capitalists that pay toilets are a disgustingly unfair solution to these urgent needs. One can imagine the logic: everyone has a need and a right to make waste, so we should all have access to sparkling public toilets for free! There is also the presumed misogyny of charging at stalls but not urinals (which are cheaper to maintain, after all), but overcoming that problem should not present a great technical hurdle. And surely pay toilets could be made to accept EBT cards, or locally-issued pee-for-free cards for the homeless.

Yes, we all make waste. However, most of us are so modest and fastidious that we quite literally “internalize the externality” we’d otherwise impose on others were we to seek relief in the street or behind trees in the park. We hold it and sometimes incur high costs in search of a restroom. Those are costs many of us would willingly pay to avoid.

As Alex Tabarrok says in “Legalize Pay Toilets”, outrage over pay toilets, very much like the kind expressed in Urinetown, is what led to outright bans on pay toilets in America during the 1970s (also see Sophie House’s discussion of the need for pay toilets at Citylab). According to Tabarrok, “In 1970 there were some 50,000 pay toilets in America and by 1980 there were almost none.” Many travelers know, however, that pay toilets are fairly commonplace in Europe.

In the wake of pay-toilet bans in America, and without the flow of revenue, those one-time pay toilets were neither well maintained nor replaced. In that sense, hostility to the concept of pay toilets is responsible for the paucity and abysmal condition of most public restrooms today. Public restrooms are often plagued by a tragedy of the commons. And when you do see a “free” public restroom in relatively good condition (in an airport, on a turnpike, or elsewhere), it is usually because its costs are cross-subsidized by payments for other goods and services offered in those facilities. It’s not as if you don’t pay for the bathrooms.

There is no question of a willingness to pay, but legal obstacles to pay toilets remain. Pay toilets are still very uncommon. New York City actually decriminalized public urination a few years ago, an odd way to deal with the shortage of restrooms. Some cities, such as Philadelphia, have initiated efforts to bring back pay toilets, but they have made little headway. Just last year, the toilet paper producer Charmin ran a successful publicity campaign in New York City by testing a mobile toilet-sharing service (à la Uber ride-sharing) called Charmin Van-GO. The company described the test as a big success in terms of publicity, but apparently the service has not been offered on a continuing basis.

The economic problem posed by full bladders and bowels on the public square can be solved with relative efficiency using the price mechanism: pay toilets. The flow of revenue can defray the costs of restrooms and their maintenance, easing the strain on public budgets and covering the cost of keeping them clean. Pay toilets can be provided publicly or built and operated by private providers. Pricing the use of toilets, whether offered publicly or privately, helps focus resources at the point of need. Free public toilets, in contrast, are scarce and typically unsanitary. Funding public restrooms through taxation, rather than user fees, involves a loss of efficiency because taxpayers are often distinct from actual users. Forcing purveyors of food and drink (or anything of value) to offer bathroom access to “free riders” creates another obvious source of inefficiency. Allowing the use of EBT cards at pay toilets, while overcoming certain objections, would also involve inefficiencies, but at least they’d be limited to subsidies for a small proportion of the bathroom-going public. Given the alternatives under the status quo, our cities would be far more pleasant if they were flush with pay toilets.

Certainty Laundering and Fake Science News



Intriguing theories regarding all kinds of natural and social phenomena abound, but few if any of those theories can be proven with certainty or even validated at a high level of statistical significance. Yet we constantly see reports in the media about scientific studies purporting to prove one thing or another. Naturally, journalists pounce on interesting stories, and they can hardly be blamed when scientists themselves peddle “findings” that are essentially worthless. Unfortunately, the scientific community is doing little to police this kind of malpractice. And incredible as it seems, even principled scientists can be so taken with their devices that they promote uncertain results with few caveats.

Warren Meyer coined the term “certainty laundering” to describe a common form of scientific malpractice. Observational data is often uncontrolled and/or too thin to test theories with any degree of confidence. What’s a researcher to do in the presence of such great uncertainties? Start with a theoretical model in which X is true by assumption and choose parameter values that seem plausible. In all likelihood, the sparse data that exist cannot be used to reject the model on statistical grounds. The data are therefore “consistent with a model in which X is true”. Dramatic headlines are then within reach. Bingo!
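How weak a claim “consistent with a model in which X is true” can be is easy to see with a toy numerical sketch (all numbers below are invented for illustration): a short, noisy series generated with a true trend of exactly zero still fails to reject a whole interval of assumed trends, so any one of them can be announced as “consistent with the data”.

```python
import random
import statistics

random.seed(1)

# Hypothetical toy series: 10 annual observations, true trend = 0,
# measurement noise with standard deviation 2 (all values invented).
n, noise_sd = 10, 2.0
t = list(range(n))
y = [random.gauss(0, noise_sd) for _ in t]  # true trend is zero

# OLS slope estimate and its standard error (noise level taken as known).
tbar = statistics.mean(t)
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * yi for ti, yi in zip(t, y)) / sxx
se = noise_sd / sxx ** 0.5

# Any assumed trend within roughly two standard errors of the estimate
# survives a conventional test: the sparse data "cannot reject" the
# entire interval, not just the researcher's preferred value.
ci = (slope - 2 * se, slope + 2 * se)
print(f"estimated trend: {slope:.2f}, "
      f"not-rejected interval: {ci[0]:.2f} to {ci[1]:.2f}")
```

The point of the sketch is that with ten noisy observations, the interval of trends the data cannot rule out is wide; picking one value inside it and headlining it as validated is exactly the laundering step.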

The parallel drawn by Meyer between “certainty laundering” and the concept of money laundering is quite suggestive. The latter is a process by which economic gains from illegal activities are funneled through legal entities in order to conceal their subterranean origins. Certainty laundering is a process that may encompass the design of the research exercise, its documentation, and its promotion in the media. It conceals from attention the noise inherent in the data upon which the theory of X presumably bears.

Another tempting exercise that facilitates certainty laundering is to ask how much a certain outcome would have changed under some counterfactual circumstance, call it Z. For example, while atmospheric CO2 concentration increased by roughly one part per 10,000 of the atmosphere (0.01 percentage points) over the past 60 years, Z might posit that the change did not take place. Then, given a model that embodies a “plausible” degree of global temperature sensitivity to CO2, one can calculate how different global temperatures would be today under that counterfactual. This creates a juicy but often misleading form of attribution. Meyer refers to this process as a way of “writing history”:

Most of us are familiar with using computer models to predict the future, but this use of complex models to write history is relatively new. Researchers have begun to use computer models for this sort of retrospective analysis because they struggle to isolate the effect of a single variable … in their observational data.”

These “what-if-instead” exercises generally apply ceteris paribus assumptions inappropriately, presuming the dominant influence of a single variable while ignoring other empirical correlations which might have countervailing effects. The exercise usually culminates in a point estimate of the change “implied” by X, without any mention of possible errors in the estimated sensitivity nor any mention of the possible range of outcomes implied by model uncertainty. In many such cases, the actual model and its parameters have not been validated under strict statistical criteria.
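The counterfactual exercise just described can be made concrete with a minimal sketch, using entirely made-up parameter values: a “plausible” central sensitivity yields one headline point estimate, but propagating even modest uncertainty in that sensitivity spreads the implied outcomes over a range the headline conceals.

```python
import random

random.seed(7)

# Hypothetical attribution model (every number here is invented):
# implied effect = sensitivity * forcing_change. The researcher picks a
# central sensitivity; the honest exercise also samples its uncertainty.
forcing_change = 1.0          # arbitrary units
central_sensitivity = 0.8
sensitivity_sd = 0.4          # wide parameter uncertainty

point_estimate = central_sensitivity * forcing_change

# Propagate the sensitivity uncertainty through the model by simulation.
draws = sorted(random.gauss(central_sensitivity, sensitivity_sd) * forcing_change
               for _ in range(100_000))
lo, hi = draws[2_500], draws[97_500]   # central 95% of implied outcomes

print(f"headline point estimate: {point_estimate:.2f}")
print(f"95% range of outcomes:   {lo:.2f} to {hi:.2f}")
```

Reporting only the first line of output is the certainty-laundering move; the second line is what a point estimate without error bars suppresses.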

Meyer goes on to describe a climate study from 2011 that was quite blatant about its certainty laundering approach. He provides the following quote from the study:

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

At the time, Meyer wrote the following critique:

[Note the first and last sentences of this paragraph] First, that there is not sufficiently extensive and accurate observational data to test a hypothesis. BUT, then we will create a model, and this model is validated against this same observational data. Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen. If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?”

In “Imprecision and Unsettled Science”, I wrote about the process of calculating global surface temperatures. That process is plagued by poor data quality and measurement uncertainty, yet many climate scientists and the media seem completely unaware of these problems. They view global and regional temperature data as infallible, but in reality these aggregated readings should be recognized as point estimates with wide error bands. Those bands imply that the conclusions of any research utilizing aggregate temperature data are subject to tremendous uncertainty. Unfortunately, that fact doesn’t get much play.

As Ashe Schow explains, junk science is nothing new. Successful replication rates of study results in most fields are low, and the increasing domination of funding sources by government tends to promote research efforts supporting the preferred narratives of government bureaucrats.

But perhaps we’re not being fair to the scientists, or most scientists at any rate. One hopes that the vast majority theorize with the legitimate intention of explaining phenomena. The unfortunate truth is that adequate data for testing theories is hard to come by in many fields. Fair enough, but Meyer puts his finger on a bigger problem: One simply cannot count on the media to apply appropriate statistical standards in vetting such reports. Here’s his diagnosis of the problem in the context of the Fourth National Climate Assessment and its estimate of the impact of climate change on wildfires:

The problem comes further down the food chain:

  1. When the media, and in this case the US government, uses this analysis completely uncritically and without any error bars to pretend at certainty — in this case that half of the recent wildfire damage is due to climate change — that simply does not exist
  2. And when anything that supports the general theory that man-made climate change is catastrophic immediately becomes — without challenge or further analysis — part of the ‘consensus’ and therefore immune from criticism.”

That is a big problem for science and society. A striking point estimate is often presented without adequate emphasis on the degree of noise that surrounds it. Indeed, even given a range of estimates, the top number is almost certain to be stressed more heavily. Unfortunately, the incentives facing researchers and journalists are skewed toward this sort of misplaced emphasis. Scientists and other researchers are not immune to the lure of publicity and the promise of policy influence. Sensational point estimates have additional value if they support an agenda that is of interest to those making decisions about research funding. And journalists, who generally are not qualified to make judgements about the quality of scientific research, are always eager for a good story. Today, the spread of bad science, and bad science journalism, is all the more virulent as it is propagated by social media.

The degree of uncertainty underlying a research result just doesn’t sell, but it is every bit as crucial to policy debate as a point estimate of the effect. Policy decisions have expected costs and benefits, but the costs are often front-loaded and more certain than the hoped-for benefits. Any valid cost-benefit analysis must account for uncertainties, but once a narrative gains steam, this sort of rationality is too often cast to the wind. Cascades in public opinion and political momentum are all too vulnerable to the guiles of certainty laundering. Trends of this kind are difficult to reverse and are especially costly if the laundered conclusions are wrong.
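The asymmetry between front-loaded, near-certain costs and distant, uncertain benefits can be made concrete with a simple expected-present-value calculation. All figures below are hypothetical, chosen only to show how quickly a project’s apparent value changes once benefit uncertainty is priced in.

```python
# Hypothetical policy: pay a certain cost now for a benefit 20 years out
# that materializes only with some probability. All numbers are invented.
cost_now = 100.0
benefit_future = 300.0
years = 20
discount_rate = 0.05

def expected_npv(p_benefit):
    """Expected net present value given the probability the benefit is real."""
    pv_benefit = p_benefit * benefit_future / (1 + discount_rate) ** years
    return pv_benefit - cost_now

# The same project looks very different as confidence in the benefit falls.
for p in (1.0, 0.7, 0.4):
    print(f"P(benefit) = {p:.1f}: expected NPV = {expected_npv(p):+.1f}")
```

A narrative built on the certain-benefit case alone is exactly the kind of laundered conclusion that can lock in a costly policy before the uncertainty is ever weighed.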

Human Potential Exceeds the Human Burden



Are human beings a burden, and in what way? Between two camps of opinion on this question are many shades of thought, and some inconsistencies. But whether the discussion is centered on the macro-societal level or the family level, the view of people and population growth as burdensome promotes centralized social control and authoritarian rule, with an attendant imposition of burdens on human freedom and productive effort.

Naysaying Greens

The environmental Left views people as a net burden on resources while failing to recognize their resource value, without which our world would yield little in the way of food and other comforts. It is mankind’s ability to process and transform raw materials that makes the planet so hospitable.

The world’s human population has increased by a factor of 18 times in the last 400 years, but food supplies have grown even faster. Each person has potential as a resource capable of a net positive contribution to societal and global well being. If we wrongly conclude that people are burdensome, however, it offers a rationale to statists for regulating the lives of individuals, preventing them from producing and consuming as they would otherwise choose.

Sirens of Dependency

There will always be individuals who cannot provide for themselves, sometimes due to temporary circumstances and sometimes as a permanent condition. If the latter, these individuals find themselves in the lower tail of the distribution of human productive capacity. The undeniable burden of this lower tail for humanity can be dealt with through various social support structures, including family, religious organizations, private social organizations, and the public safety net.

People of true compassion have always helped to fill this need privately and voluntarily, but “compassionate” motives can be false and corrupting when the public sector becomes the tool of choice. Actual and potential beneficiaries of public largess can vote for their alms at the expense of others, along with those well-meaning partisans who confuse forced redistribution with compassion. And benefits and taxes often create disincentives that undermine a society’s productive dynamic. Under such circumstances, the lower tail and its burden of dependency grows larger than necessary, and society’s ability to carry that burden is diminished.

Burdensome Children

Children are unable to provide for themselves up to varying ages, so they do create an economic burden for their parents. That burden might loom large in the event of an unexpected pregnancy, but most parents find the burden well worth bearing, whether planned or unplanned, ex ante and ex post, and for reasons that often have little to do with material concerns. But many individuals and families in the lower tail simply cannot bear the economic burden on their own; others not in the lower tail might simply find the prospective burden of an unexpected pregnancy a bit too heavy or inconvenient for non-economic reasons.

Solutions are available, of course. They range from sexual abstinence and prophylactics to adoption services, as well as hard sacrifice by new parents. And then there is abortion. The pro-choice Left makes the argument that children are so burdensome as to justify the termination of pregnancies at almost any stage. The ease with which they make that argument and traffic in the imposition of that burden upon the innocent is horrific. Furthermore, regimes dominated by the Left have often instituted formal population control measures, and Western leftists such as the late Margaret Sanger, founder of Planned Parenthood, have advocated strenuously for eugenics.

Burdens at the Border

Are migrants a burden or a blessing? In general, the latter, because mobility allows individuals to exploit economic opportunities, with consequent gains to themselves and to those who demand their services. This is generally true from the perspective of nations; it is the basis of the traditional economic argument in favor of liberalized, legal immigration to which I subscribe. But some partisans on both sides of the immigration debate accept the idea that immigrants impose a burden. That may be correct under some circumstances.

Opponents of immigration reform certainly identify immigrants as a burden to productive citizens and taxpayers. Critics of border control, on the other hand, are motivated by compassion for political refugees or economically disadvantaged immigrants, whether employment opportunities exist for them or not. In fact, would-be immigrants are often attracted by generous public benefits in the receiving country, and so they are likely to add to a country’s lower-tail burden, as I’ve described it. But the no-borders crowd insists that society must shoulder any burden created by the combined effect of an open border with generous public benefits, and even immediate voting rights.

The Burdens of Overbearance

The Left imagines that people create many burdens, but the Left is happy to impose many burdens in pursuit of their “ideal” society: planned by experts, egalitarian, highly regulated, profit-free, and green. They wish to “save the planet” by imposing burdens, regulating and restricting economic growth and sparing no expense to minimize the human “footprint”. They wish to fund redistributive social programs by burdening productive resources with taxes, while crowding-out private efforts to provide charitable relief. They wish to prevent the perceived burden of children by offering, and even funding publicly, the “choice” to impose an ultimate burden on those too weak to register a protest. And they wish to burden taxpayers by availing all potential migrants, without question, of generous public benefits.

Burdens are a fact of life, but people with the freedom to exploit their own effort and ingenuity for gain have increasingly shouldered their own burdens and much more. Over the last few centuries, human ingenuity has expanded the effective quantity of all resources by many orders of magnitude. In so doing, the scale and scope of real poverty have been reduced dramatically. But those who would deign to manage our burdens for us, under the authority of the state, are more threatening to our well being than beneficent.

The Disastrous Boomerang Effect of Fire Suppression



We can lament the tragic forest fires burning in California, but a discussion of contributing hazards and causes is urgent if we are to minimize future conflagrations. The Left points the finger at climate change. Donald Trump, along with many forestry experts, points at forest mismanagement. Whether you believe in climate change or not, Trump is correct on this point. However, he blames the state of California when in fact a good deal of the responsibility falls on the federal government. And as usual, Trump has inflamed passions with unnecessarily aggressive rhetoric and threats:

There is no reason for these massive, deadly and costly forest fires in California except that forest management is so poor. Billions of dollars are given each year, with so many lives lost, all because of gross mismanagement of the forests. Remedy now or no more Fed payments.”

Trump was condemned for his tone, of course, but also for the mere temerity to discuss the relationship between policy and fire hazards at such a tragic moment. Apparently, it’s a fine time to allege causes that conform to the accepted wisdom of the environmental Left, but misguided forest management strategy is off-limits.

The image at the top of this post is from the cover of a book by wildlife biologist George E. Gruell, published in 2001. The author includes hundreds of historical photos of forests in the Sierra Nevada range from as early as 1849. He pairs them with photos of the same views in the late 20th century, such as the photo inset on the cover shown above. The remarkable thing is that the old forests were quite thin by comparison. The following quote is from a review of the book on Amazon:

Even the famed floor of Yosemite is now mostly forested with conifers. I myself love conifers but George makes an interesting point that these forests are “man made” and in many ways are unhealthy from the standpoint that they lead to canopy firestorms that normally don’t exsist when fires are allowed to naturally burn themselves out. Fire ecology is important and our fear of forest fires has led to an ever worsening situation in the Sierra Nevada.”

I posted this piece on forest fires and climate change three months ago. There is ample reason to attribute the recent magnitude of wildfires to conditions influenced by forest management policy. The contribution of a relatively modest change in average temperatures over the past several decades (but primarily during the 1990s) is rather doubtful. And the evidence that warming-induced drought is the real problem is weakened considerably by the fact that the 20th century was wetter than normal in California. In other words, recent dry conditions represent something of a return to normal, making today’s policy-induced overgrowth untenable.

Wildfires are a natural phenomenon and have occurred historically from various causes such as lightning strikes and even spontaneous combustion of dry biomass. They are also caused by human activity, both accidental and intentional. In centuries past, Native Americans used so-called controlled or prescribed burns to preserve and restore grazing areas used by game. In the late 19th and early 20th centuries, fire suppression became official U.S. policy, leading to an unhealthy accumulation of overgrowth and debris in American forests over several decades. This trend, combined with a hot, dry spell in the 1930s, led to sprawling wildfires. However, Warren Meyer says the data on burnt acreage during that era was exaggerated because the U.S. Forest Service insisted on counting acres burned by prescribed burns in states that did not follow its guidance against the practice.

The total acreage burned by wildfires in the U.S. was minimal from the late 1950s to the end of the century, when a modest uptrend began. In California, while the number of fires continued to decline over the past 30 years, the trend in burnt acreage has been slightly positive. Certainly this year’s mega-fires will reinforce that trend. So the state is experiencing fewer but larger fires.

The prior success in containing fires was due in part to active logging and other good forest management policies, including prescribed burns. However, the timber harvest declined through most of this period under federal fire suppression policies, California state policies that increased harvesting fees, and pressure from environmentalists. The last link shows that the annual “fuel removed” from forests in the state has declined by 80% since the 1950s. But attitudes could be changing, as both the state government and environmentalists (WSJ, link could be gated) are beginning to praise biomass harvesting as a way to reduce wildfire risk. Well, yes!

The reason wildfire control ever became a priority is the presence of people in forest lands, and human infrastructure as well. Otherwise, the fires would burn as they always have. Needless to say, homes or communities surrounded by overgrown forests are at great risk. In fact, it’s been reported that the massive Camp Fire in Northern California was caused by a PG&E power line. If so, it’s possible that the existing right-of-way was not properly maintained by PG&E, but it may also be that rights-of-way are of insufficient width to prevent electrical sparks from blowing into adjacent forests, and that’s an especially dangerous situation if those forests are overgrown.

Apparently Donald Trump is under the impression that state policies are largely responsible for overgrown and debris-choked forests. In fact, both federal and state environmental regulations have played a major role in discouraging timber harvesting and prescribed burns. After all, the federal government owns about 57% of the forested land in California. Much of the rest is owned privately or is tribal land. Trump’s threat to withhold federal dollars was his way of attempting to influence state policy, but the vast bulk of federal funds devoted to forest management is dedicated to national forests. A relatively small share subsidizes state and community efforts. Disaster-related funding is and should be a separate matter, but Trump made the unfortunate suggestion that those funds are at issue. Nevertheless, he was correct to identify the tremendous fire hazard posed by overgrown forests and excessive debris on the forest floor. Changes to both federal and state policy must address these conditions.

For additional reading, I found this article to give a balanced treatment of the issues.

If You’re Already Eligible, Your Benefits Are Safe



I’m always hearing fearful whines from several left-of-center retirees in my circle of acquaintances: they say the GOP wants to cut their Social Security and Medicare benefits. That expression of angst was reprised as a talking point just before the midterm election, and some of these people actually believe it. Now, I’m as big a critic of these entitlement programs as anyone. They are in very poor financial shape and in dire need of reform. However, I know of no proposal for broad reductions in Social Security and Medicare benefits for now-eligible retirees. In fact, thus far President Trump has refused to consider substantive changes to these programs. And let’s not forget: it was President Obama who signed into law the budget agreement that ended spousal benefits for “file and suspend” Social Security claimants.

Both Social Security (SS) and Medicare are technically insolvent and reform of some kind should happen sooner rather than later. It does not matter that their respective trust funds still have positive balances — balances that the federal government owes to these programs. The trust fund balances are declining, and every dollar of decline is a dollar the government pays back to the programs with new borrowing! So the trust funds should give no comfort to anyone concerned with the health of either of these programs or federal finances.

Members of both houses of Congress have proposed steps to shore up SS and Medicare. A number of the bills are summarized and linked here. The range of policy changes put forward can be divided into several categories: tax hikes, deferred benefit cuts, and other, more creative reforms. Future retirees will face lower benefits under many of these plans, but benefit cuts for current retirees are not on the table, except perhaps for expedient victims at high income levels.

There is some overlap in the kinds of proposals put forward by the two parties. One bipartisan proposal in 2016 called for reduced benefits for newly-eligible retired workers starting in 2022, among a number of other steps. Republicans have proposed other types of deferred benefit cuts. These include increasing the age of full eligibility for individuals reaching initial (and partial) eligibility in some future year. Generally, if these kinds of changes were to become law now, they would have their first effects on workers now in their mid-to-late fifties.

Another provision would switch the basis of the cost-of-living adjustment (COLA) to an index that more accurately reflects how consumers shift their purchases in response to price changes (see the last link). The COLA change would cause a small reduction in the annual adjustment for a typical retiree, but that is not a future benefit reduction: it is a reduction in the size of an annual benefit increase. However, one Republican proposal would eliminate the COLA entirely for high-income beneficiaries (see the last link) beginning in several years. A few other proposals, including the bipartisan one linked above, would switch to an index that would yield slightly more generous COLAs.
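The cumulative effect of a slightly smaller annual COLA is easy to understate, and a quick compounding calculation makes it visible. The growth rates below are illustrative assumptions, not actual CPI statistics.

```python
# Illustrative only: compare a benefit indexed at 2.00% per year with one
# indexed at 1.75% (e.g., an alternative index growing ~0.25 points slower).
# Both rates are invented for this sketch, not real COLA figures.
initial_benefit = 1500.0   # hypothetical monthly benefit
years = 20

standard = initial_benefit * (1 + 0.0200) ** years
chained = initial_benefit * (1 + 0.0175) ** years

gap_pct = (standard - chained) / standard * 100
print(f"after {years} years: {standard:.0f} vs {chained:.0f} "
      f"({gap_pct:.1f}% lower)")
```

Note that the benefit still rises every year under either index; as the paragraph above says, the switch shrinks the size of annual increases rather than cutting the benefit itself, though the compounded gap is material over a long retirement.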

Democrats have favored increased payroll taxes on current high earners and higher taxes on the benefits of wealthy retirees. Republicans, on the other hand, seem more willing to entertain creative reforms. For example, one recent bill would have allowed eligible new parents to take benefits during a period of leave after childbirth, with a corresponding reduction in their retirement benefits (in present value terms) via increases in their retirement eligibility ages. That would have almost no impact on long-term solvency, however. Another proposal would have allowed retirees a choice to take a portion of any deferred retirement credits (for declining immediate benefits) as a lump sum. According to government actuaries, the structure of that plan had little impact on the system’s insolvency, but there are ways to present workers with attractive tradeoffs between immediate cash balances and future benefits that would reduce insolvency.

The important point is that enhanced choice can be in the best interests of both future retirees and long-term solvency. That might include private account balances with self-directed investment of contributions or a voluntary conversion to a defined contribution system, rather than the defined benefits we have now. The change to defined contributions appears to have worked well in Sweden, for example. And thus far, Republicans seem more amenable to these creative alternatives than Democrats.

As for Medicare, the only truth to the contention that the GOP, or anyone else, has designs on reducing the benefits of current retirees is confined to the possibility of trimming benefits for the wealthy. The thrust of every proposal of which I am aware is for programmatic changes for future beneficiaries. This snippet from the Administration’s 2018 budget proposal is indicative:

Traditional fee-for-service Medicare would always be an option available to current seniors, those near retirement, and future generations of beneficiaries. Fee-for-service Medicare, along with private plans providing the same level of health coverage, would compete for seniors’ business, just as Medicare Advantage does today. The new program, however, would also adopt the competitive structure of Medicare Part D, the prescription drug benefit program, to deliver savings for seniors in the form of lower monthly premium costs.”

There was a bogus claim last year that pay-as-you-go (Paygo) rules would force large reductions in Medicare spending, but Medicare is subject to cuts affecting only 4% of the budgeted amounts under the Paygo rules, and Congress waived the rules in any case. Privatization of Medicare has provoked shrieks from certain quarters, but that is merely the expansion of Medicare Advantage, which has been wildly popular among retirees.

Both Social Security and Medicare are in desperate need of reform, and while rethinking the fundamental structures of these programs is advisable, the immediate solutions offered tend toward reduced benefits for future retirees, later eligibility ages, and higher payroll taxes from current workers. The benefits of currently eligible retirees are generally “grandfathered” under these proposals, the exception being certain changes related to COLAs and Medicare benefits for high-income retirees. The tendency of politicians to rely on redistributive elements to enhance solvency is unfortunate, but with that qualification, my retiree friends need not worry so much about their benefits. I suspect at least some of them know that already.

Missouri Prop B: the Unintended Consequences of Wishful Thinking



Proposition B sounds really good to many Missouri voters: all we have to do to help low-wage workers is declare that they must be paid a higher wage. That’s the pitch, of course. But voters should hear the cruel truth about the unintended consequences of this well-intentioned and ill-considered proposition on the ballot this week:

  1. Businesses are likely to increase prices to compensate for a higher mandated wage, which hurts all consumers, but especially the poor.
  2. Some low-skilled job losses or lost hours are assured, and they will hit the very least-skilled the hardest. No matter the legal minimum, the real minimum wage is always zero.
  3. Such job losses have long-term consequences: lost job experience that the least-skilled desperately need to get ahead.
  4. The harms will have a disparate impact on minorities.
  5. Large employers can substitute capital for low-skilled labor: automated kiosks to take orders and increasingly sophisticated robots to perform tasks. Again, the real minimum wage is always zero. As I’ve said before on this blog, automate no job before its time. But that’s what Prop B will encourage.
  6. Employers can make other compensatory changes. That includes reduced fringe benefits and break times, increased production quotas, and less desirable shifts for minimum wage workers.
  7. A large share of the presumed beneficiaries of a higher minimum wage are not impoverished. Many are teenagers or young adults living with their parents.
  8. All of the preceding points argue that an increase in the minimum wage is not an effective method of targeting poverty reduction. In fact, the harm it inflicts is targeted at the most needy. 
  9. Small employers have less flexibility than large employers, and Prop B would place them at a competitive disadvantage. To that extent, a higher wage floor is most damaging to “mom & pop”, locally-owned businesses, and their employees. Again, the real minimum wage is always zero.

At least 24 earlier posts appear on this blog covering the topic of minimum wages. You can see most of them here. The points above are explored in more detail in those posts.

William Even and David Macpherson of the Show-Me Institute have estimated the magnitude of the harms that are likely to result if Prop B is approved by voters on November 6, and they are significant. The voters of Missouri should not be seeking ways to make the state’s business environment less competitive.

Voters should keep in mind that wages in an unfettered market reflect the realities of labor demand and labor supply. Wages and other forms of compensation reflect the actual quantity, quality and productivity of available labor supplies. And for unskilled labor, which is often supplied by those who lack experience, a wage that matches their marginal productivity is one that provides that valuable experience. The last thing they need is for tasks requiring little skill to be performed by more experienced employees, or by machines. We cannot wish away these realities, and we cannot declare them suspended by law. Such efforts will have winners and losers, of course, though the former might not ever recognize the ephemeral nature of their gains. And as long as there is freedom of private decision-making, the consequences of such legal efforts will cause harm to those least able to withstand it.

Economic Freedom and Mobility Reduce Poverty; Alms Are Impotent



It’s very difficult to lift people out of poverty via redistribution or philanthropy. Small gains in income can be expected at best, but there are far more powerful ways to improve well-being. These have to do with expanding the fundamental freedoms, rights, and rewards available to private individuals. Harvard’s Lant Pritchett divides these efforts into two broad categories: policies that improve labor mobility, and those that lead to gains in place via economic growth. His working paper, “Alleviating Global Poverty: Labor Mobility, Direct Assistance, and Economic Growth”, is available here.

Economic Benefits of Migration 

Pritchett first explains that the freedom to migrate across borders in pursuit of economic opportunity allows workers from low-productivity countries to contribute much greater output in high-productivity countries. In so doing, the workers gain far more than can be practically accomplished via direct aid, and according to Pritchett, at little or zero cost. So granting this freedom is a much more effective anti-poverty measure than aid payments.

Pritchett seems to imply that this is a persuasive economic argument for open borders. On that question, I take the position that countries are sovereign entities and that their citizens possess the right to determine the extent of immigration flows. And in fact, there are real costs of immigration flows that must be considered. Pritchett’s paper offers a powerful rationale for liberalizing immigration quotas, but here again, he dismisses certain issues that limit even that narrower argument.

The prospective economic gains of the immigrants themselves are important, of course, but the economic needs of the destination country matter too. In the U.S., employers in many markets face a shortage of low-skilled labor, so immigration quotas bind on those markets. Making them less binding would certainly encourage economic growth. A greater influx of younger workers from abroad would also help America weather its demographic crisis, narrowing the shortfall in funding entitlement programs like Social Security and Medicare. Unfortunately, to those who do not already recognize these needs, Pritchett’s contribution is likely to carry little weight.

Still, Pritchett’s assertion that the cost of liberalized immigration is zero needs further examination. First, there are the very real costs of vetting and processing new immigrants. Second, unless all immigrants and employers are matched ex ante, which is virtually impossible, there will be adjustment costs that continue at least until the matching is complete. In the interim, and even post-employment, new immigrants might well require public aid to support themselves and their families. It is also quite likely that new tax revenue generated by immigrants will be insufficient to pay the full incremental costs of public resources consumed in providing marginal infrastructure, education, and other public subsidies.

Pritchett employs static calculations of the net benefits to be gained through greater labor mobility “at the margin”, but as the absorption of new immigrants into the workforce takes place, excess demands for low-skilled workers may turn into excess supplies, creating downward pressure on wages. In the presence of a minimum wage, that implies unemployment and a probable drain on public resources. So the source of the benefits discussed by Pritchett should not be viewed as limitless. He offers some mild rebuttals of this point and references one of his own papers in so doing, but the possibility cannot and should not be dismissed.

Economic Benefits of Economic Freedom

Pritchett’s second major point of emphasis involves the effectiveness of different kinds of private and public direct assistance, or “treatments”, in producing income gains over time. He offers evidence that the gains are relatively weak. He contrasts this with the potential gains from “growth accelerations” stemming from a variety of causes. The upside of a normal business cycle is one form, but that doesn’t really count if the gains are lost on the downside.

The most profound form of growth acceleration occurs upon the advent of a liberalized social order. This may accompany the downfall of an authoritarian government, the stabilization of a formerly unsound monetary regime, or the emergence of more sophisticated market institutions in a formerly primitive economy. The main point is that there are fundamental social underpinnings of growth. These are the many dimensions of economic freedom: secure property rights, freedom of contract, minimal regulatory interference, low taxes, and competitive markets for goods and capital. These conditions are so straightforward that in developed economies we take many of them for granted, though they are threatened even there. But these conditions are sadly lacking in much of the under-developed world.


Allowing workers to migrate freely in search of the best opportunities is undoubtedly more powerful in improving their welfare than any form of direct assistance. That is a fundamental truth put forward by Lant Pritchett. However, in-migration can come with significant costs for the destination country. Therefore, immigration laws should allow sufficient flexibility with respect to flows to enable the capture of economic gains from immigration when they exist. Pritchett also emphasizes that economic freedom and the growth acceleration it makes possible do far more to reduce poverty than massive private and public efforts at direct assistance, however well-intentioned. Several earlier posts on Sacred Cow Chips have highlighted the impotency of redistribution for eliminating poverty. The Left has a tendency to dismiss such views as mere ideological assertion, but it is much more than that: it is the difference between penury and prosperity.

Climbing Up: Economic Mobility In the U.S.



One of the great sacred cows of current economic discourse is that U.S. living standards have been stagnant for decades, coincident with a severe lack of economic mobility (I know, those are goats!). These assertions have been made by people with the training to know better, and by members of the commentariat who certainly would not know better. But Russ Roberts has a great article on the proper measurement of these trends and how poorly that case stacks up. I have made some of the same points in the past (and here), but Roberts’ synthesis is excellent.

Those who insist that income growth has languished or even declined in real terms over the past 40 years have erred in several ways. They usually ignore non-wage benefits (for which workers often receive favorable tax treatment) and other forms of income. Roberts notes that income tax returns leave about 40% of income unreported, and a lot of it goes to individuals in lower income strata. In addition, the studies often use flawed inflation gauges, fail to adjust correctly for various demographic trends in the identification of “households”, and most importantly, fail to follow the same individuals over time. The practice of taking “snapshots” of the income distribution at two different points in time, and then comparing the same percentiles from those snapshots, is inappropriate for addressing the question of income mobility. Instead, the question is how specific individuals or cohorts have migrated across time. Generally incomes grow as people age through their working lives.
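A toy simulation makes the snapshot-versus-panel distinction concrete. The age-earnings profile and every parameter below are invented for illustration; the takeaway is only that a percentile of the snapshot can stand still even while every tracked individual’s income rises:

```python
import random
random.seed(1)

# Toy economy to contrast percentile "snapshots" with tracking individuals.
# The age-earnings profile and all parameters are invented for illustration.

def income(age, skill):
    return 20_000 + 1_000 * (age - 25) + skill  # pay rises with experience

# Snapshot at time 0: workers aged 25-64, each with a random skill premium
workers = [{"age": age, "skill": random.uniform(0, 10_000)}
           for age in range(25, 65) for _ in range(50)]
incomes_t0 = sorted(income(w["age"], w["skill"]) for w in workers)

# Twenty years later: those under 45 at t0 are still working; retirees
# have exited and a new cohort of young workers has entered at the bottom.
tracked = [w for w in workers if w["age"] < 45]
entrants = [{"age": age, "skill": random.uniform(0, 10_000)}
            for age in range(25, 45) for _ in range(50)]
incomes_t20 = sorted([income(w["age"] + 20, w["skill"]) for w in tracked] +
                     [income(w["age"], w["skill"]) for w in entrants])

# The 20th-percentile "snapshot" barely moves between the two years...
p20 = len(incomes_t0) // 5
print("20th percentile, t0 :", round(incomes_t0[p20]))
print("20th percentile, t20:", round(incomes_t20[p20]))

# ...yet every tracked individual earns more than he or she did at t0.
gains = [income(w["age"] + 20, w["skill"]) - income(w["age"], w["skill"])
         for w in tracked]
print("tracked workers with higher income:", sum(g > 0 for g in gains),
      "of", len(gains))
```

Comparing the 20th percentile of the two snapshots suggests stagnation, yet every worker followed across the twenty years gained ground. Real studies of mobility must follow people, not percentiles.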

Roberts discusses some studies that follow individuals over time, rather than percentiles, to see how they have fared:

From a study comparing the 1960s and the early 2000s:

“… 84% earned more than their parents, corrected for inflation. But 93% of the children in the poorest households, the bottom 20% surpassed their parents. Only 70% of those raised in the top quintile exceeded their parent’s income.”

Another study compared children born in 1980:

… 70% of children born in 1980 into the bottom decile exceed their parents’ income in 2014. For those born in the top 10%, only 33% exceed their parents’ income.”

Another study finds:

The children from the poorest families ended up twice as well-off as their parents when they became adults. The children from the poorest families had the largest absolute gains as well. Children raised in the top quintile did no better or worse than their parents once those children became adults.”

The next study cited by Roberts compares adults at two stages of life:

The study looks at people who were 35–40 in 1987 and then looks at how they were doing 20 years later, when they are 55–60. The median income of the people in the top 20% in 1987 ended up 5% lower twenty years later. The people in the middle 20% ended up with median income that was 27% higher. And if you started in the bottom 20%, your income doubled. If you were in the top 1% in 1987, 20 years later, median income was 29% lower.”

And here’s one more:

… when you follow the same people, the biggest gains go to the poorest people. The richest people in 1980 actually ended up poorer, on average, in 2014. Like the top 20%, the top 1% in 1980 were also poorer on average 34 years later in 2014.”

These studies show impressive mobility across the income distribution, but is it still true that overall incomes have been flat? No, for reasons mentioned earlier: growth in benefits and unreported income has been dramatic, and inflation measures used to “deflate” nominal income gains are notoriously poor. When the prices of many goods are expressed in terms of labor hours, there is no doubt that living standards have advanced tremendously. It is all the more impressive in view of the quality improvements that have occurred over the years.
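Here is a minimal sketch of the labor-hours comparison. The prices and wages are hypothetical round numbers chosen for illustration, not actual historical data:

```python
# Sketch: expressing prices in hours of work at the average wage. All
# prices and wages below are hypothetical round numbers for illustration,
# not actual historical data.

goods = {
    # item: (price then, price now), in nominal dollars
    "color TV":        (400, 300),
    "refrigerator":    (350, 700),
    "air conditioner": (300, 250),
}
wage_then, wage_now = 5.00, 25.00  # assumed average hourly wages

for item, (p_then, p_now) in goods.items():
    hours_then = p_then / wage_then
    hours_now = p_now / wage_now
    print(f"{item:16s}: {hours_then:5.1f} hours of work then, "
          f"{hours_now:5.1f} hours now")
```

Note the refrigerator in this hypothetical: its nominal price doubles, yet its cost in hours of work falls by more than half once the higher wage is taken into account.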

The purported income stagnation and lack of mobility are also said to be associated with an increasingly unequal distribution of income. The OECD reports that the distribution of income in the U.S. is relatively unequal compared to other large, developed countries, but the definitions and accuracy of these comparisons are not without controversy. A more accurate accounting for incomes after redistribution via taxes and transfer payments would place the U.S. in the middle of the pack. And while measures of income inequality have trended upward, consumption inequality has not, which suggests that the income comparisons may be distorted.

Contrary to the oft-repeated narrative, U.S. living standards have not stagnated since the 1970s, nor have U.S. households been plagued by a lack of economic mobility. It’s easy to understand the confusion suffered by journalists on these points, but it’s horrifying to realize that such mistaken interpretations of data are actually issued by economists. Even more disappointing is that these misguided narratives are favorite talking points of class warriors and redistributionists, whose policy recommendations would bring on real stagnation and immobility. That’s the subject of a future post, or posts. For now, I’ll let it suffice to say that the best guarantee of mobility is the preservation of economic freedom and opportunity by limiting the size and scope of government, creating a more neutral tax code, and encouraging markets to flourish.

The Non-Trend In Hurricane Activity



People are unaccountably convinced that there is an upward trend in severe weather events due to global warming. But there is no upward trend in the data on either the frequency or severity of those events. Forget, for the moment, the ongoing debate about the true extent of climate warming. In fact, I’ll stipulate that warming has occurred over the past 40 years, though most of it was confined to the jump roughly coincident with two El Ninos in the 1990s; there’s been little if any discernible trend since. But what about the trend in severe weather? I’ve heard people insist that it is true, but a few strong hurricanes do not constitute a trend.

The two charts at the top of this post were created by hurricane expert Ryan N. Maue. I took them from an article by David Middleton, but visit Maue’s web site on tropical cyclone activity for more. The last month plotted is September 2018, so the charts do not account for Hurricane Michael, and the 2018 totals are for a partial year. The first nine months of each year typically account for about 3/4 of annual tropical cyclones, so 2018 will be a fairly strong year. Nevertheless, the charts refute the contention that there has been an upward trend in tropical cyclone activity. In fact, the lower chart shows the years following the 1990s increase in global temperatures to have been a period of lower cyclone energy. Roy Spencer weighs in on the negative trend in major landfalling hurricanes in the U.S. and Florida stretching over many decades.

Warren Meyer blames “media selection bias” for the mistaken impression of dangerous trends that do not exist. That is, the news media are very likely to report extreme events, as they should, but they are very unlikely to report a paucity of extreme events, no matter how lengthy or unusual the dearth:

Does anyone doubt that if we were having a record-heavy tornado season, this would be leading every newscast?  [But] if a record-heavy year is newsworthy, shouldn’t a record-light year be newsworthy as well?  Apparently not.” 

It so happens that 2018, thus far, has seen very close to a record low number of tornadoes in the U.S.

Meyer also highlights the frequent use of misleading statistics on the real value of damage from natural disasters. That aggregate value has almost certainly grown over the years, but the growth has had nothing to do with the number or severity of natural disasters. Meyer explains:

Think about places where there are large natural disasters in the US — two places that come to mind are California fires and coastal hurricanes. Do you really think that the total property value in California or on the US coastline has grown only at inflation? You not only have real estate price increases, but you have the value of new construction. The combination of these two is WAY over the 2-3% inflation rate.”
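Meyer’s arithmetic can be sketched directly. The growth rates below are assumptions chosen for illustration, not measured values:

```python
# Sketch of Meyer's point: if coastal property values compound faster than
# general inflation, nominal disaster damages rise even with no change in
# storm frequency or severity. All growth rates below are assumptions
# chosen for illustration, not measured values.

years = 40
inflation = 0.025       # assumed general price inflation
appreciation = 0.040    # assumed real-estate price appreciation
construction = 0.015    # assumed growth in exposed value from new building

exposed_value = (1 + appreciation + construction) ** years  # vs. year 0
price_level = (1 + inflation) ** years

# A storm destroying a fixed share of exposed property inflicts damages
# that grow with the exposed value, not with storm severity.
real_damage_growth = exposed_value / price_level

print(f"exposed property value: {exposed_value:.1f}x its year-0 level")
print(f"inflation-adjusted damages from an identical storm: "
      f"{real_damage_growth:.1f}x")
```

Under these assumed rates, even after deflating by the general price index, damages from an identical storm grow roughly threefold over 40 years simply because more, and more valuable, property sits in harm’s way.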

Recent experiences are always the most vivid in our minds. The same is true of broad impressions drawn from reports on the most recent natural disasters. The drama and tragedy of these events should never be minimized, and the fact that there is no upward trend in cyclone activity is no consolation to victims of those disasters. Still, the media can’t seem to resist the narrative that the threat of such events is increasing, even if it can’t be proven. Indeed, even if it’s not remotely correct. Reporters are human and generally not good at science, and they are not immune to the tendency to exaggerate the significance of events upon which they report. A dangerous, prospective trend is at once scary, exciting, and possibly career-enhancing. As for the public, sheer repetition is enough to convince most people that such a threat is undeniable… that everybody knows it… that the trend is already underway. The fact is that the upward trend in hurricane activity (and other kinds of severe weather) is speculative, not real.

Injecting Competition Into Health Care



Competitive pressures in U.S. health care delivery are weak to nonexistent, and their absence is among the most important drivers of our country’s high medical costs. Effective competition requires multiple providers and/or substitutes, transparent prices, and budget-conscious buyers, but all three are missing or badly compromised in most markets for health care services. This was exacerbated by Obamacare, but even now there are developments in “retail” health care that show promise for the future of competition in health care markets. The situation is not irreversible, but some basic policy issues must be addressed.

John Cochrane maintains that the question of “who will pay” for health care, while important, has distracted us from the matter of fostering more competition among providers:

The discussion over health policy rages over who will pay — private insurance, companies, “single payer,” Obamacare, VA, Medicare, Medicaid, and so on — as if once that’s decided everything is all right — as if once we figure out who is paying the check, the provision of health care is as straightforward a service as the provision of restaurant food, tax advice, contracting services, airline travel, car repair, or any other reasonably functional market for complex services.”

We face a severe tradeoff in health care: how can we provide for the needs of more patients (e.g., the uninsured, or a growing elderly population) without driving up the cost of care? As a policy matter, provider resources should not be viewed as fixed; their quantity, and the efficiency with which those resources are utilized, are responsive to forces that can be harnessed. Fixing the supply side of the health care market by improving the competitive environment is the one sure way to deliver more care at lower cost.

Fishy Hospital Contracts

Cochrane discusses some anti-competitive arrangements in health care delivery, quoting liberally from an article by Anna Wilde Mathews in The Wall Street Journal, “Behind Your Rising Health-Care Bills: Secret Hospital Deals That Squelch Competition“:

Dominant hospital systems use an array of secret contract terms to protect their turf and block efforts to curb health-care costs. As part of these deals, hospitals can demand insurers include them in every plan and discourage use of less-expensive rivals. Other terms allow hospitals to mask prices from consumers, limit audits of claims, add extra fees and block efforts to exclude health-care providers based on quality or cost.”

Mathews’ article is gated, but Cochrane quotes enough of its content to convey the dysfunction described there. Also of interest is Cochrane’s speculation that the hospital contract arrangements are driven largely by cross subsidies mandated by government:

The government mandates that hospitals cover indigent care, and medicare and medicaid below cost. The government doesn’t want to raise taxes to pay for it. So the government allows hospitals to overcharge insurance (i.e. you and me, eventually). But overcharges can’t withstand competition, so the government allows, encourages, and even requires strong limits on competition.”

The Role of Cross Subsidies

In this connection, Cochrane notes the perverse ways in which Medicare and Medicaid compensate providers, allowing large provider organizations to charge more than small ones for the same services. Again, that helps the hospitals cover the costs of mandated care, regulatory costs, and the high administrative and physical costs of running large facilities. It also creates an obvious incentive to consolidate, reaping higher charges on an expanded flow of services and squelching potential competition. And of course the cross subsidies create incentives for large providers to lock in business from insurers under restrictive contract agreements. Such acts restrain trade, pure and simple.

Cross subsidies, or building subsidies into the prices that buyers must pay, are thus an impediment to competition in health care, beyond the poor incentives they create for subsidized and non-subsidized buyers. So the “who pays” question rears its head after all. When subsidies are necessary to provide for those truly unable to pay for care, it is far better to compensate those individuals directly without distorting prices. That represents a huge policy change, but it would also help restore competition.

Competitive Sprouts

John C. Goodman provides a number of examples of how well competition in health care delivery can work. Most of them are about “retail medicine”, as it’s been called. This includes providers like MinuteClinic (CVS), LASIK and cosmetic surgery, concierge doctors, and “retail” surgical services. Goodman also mentions MediBid, a platform on which doctors bid to provide services for patients, and Ameriflex, which matches employers with concierge doctors. These services, which either bypass third-party payers or connect employer-payers with competitive providers, are having a real impact on the ability of patients to obtain care at a lower cost. Goodman says:

I am often asked if the free market can work in health care. My quick reply is: That is the only thing that works. At least, it is the only thing that works well.”


Some of the most pernicious Obamacare cross subsidies have been dismantled via elimination of the individual mandate and by allowing individuals to purchase short-term insurance. Nonetheless, U.S. health care delivery is still riddled with cross subsidies and excessive regulation of providers, including all the distortions caused by third-party payments and the tax code. Many buyers lack an incentive for price sensitivity. They face restrictions on their choice of providers, they don’t know the prices being charged, and they often don’t care because, at the margin, someone else is paying. Fostering competition in health care delivery does not necessarily require an end to third-party payments, but the cross subsidies must go, employers should actively seek competitive solutions to controlling health care costs, price transparency must improve, and consumers must face incentives that encourage economizing.