
Sacred Cow Chips


Tag Archives: Robin Hanson

The Scary Progress and Hairy Promise of AI

18 Tuesday Apr 2023

Posted by Nuetzel in Artificial Intelligence, Existential Threats, Growth


Tags

Agentic Behavior, AI Bias, AI Capital, AI Risks, Alignment, Artificial Intelligence, Ben Hayum, Bill Gates, Bryan Caplan, ChatGPT, Clearview AI, Dumbing Down, Eliezer Yudkowsky, Encryption, Existential Risk, Extinction, Foom, Fraud, Generative Intelligence, Greta Thunberg, Human capital, Identity Theft, James Pethokoukis, Jim Jones, Kill Switch, Labor Participation Insurance, Learning Language Models, Lesswrong, Longtermism, Luddites, Mercatus Center, Metaculus, Nassim Taleb, Open AI, Over-Employment, Paul Ehrlich, Pause Letter, Precautionary Principle, Privacy, Robert Louis Stevenson, Robin Hanson, Seth Herd, Synthetic Media, TechCrunch, TruthGPT, Tyler Cowen, Universal Basic Income

Artificial intelligence (AI) has become a very hot topic with incredible recent advances in AI performance. It’s very promising technology, and the expectations shown in the chart above illustrate what would be a profound economic impact. Like many new technologies, however, many find it threatening and are reacting with great alarm. There’s a movement within the tech industry itself, partly motivated by competitive self-interest, calling for a “pause”, or a six-month moratorium on certain development activities. Politicians in Washington are beginning to clamor for legislation that would subject AI to regulation. However, neither a voluntary pause nor regulatory action is likely to be successful. In fact, either would likely do more harm than good.

Leaps and Bounds

The pace of advance in AI has been breathtaking. From ChatGPT 3.5 to ChatGPT 4, in a matter of just a few months, the tool went from relatively poor performance on tests like professional and graduate entrance exams (e.g., bar exams, LSAT, GRE) to very high scores. Using these tools can be a rather startling experience, as I learned for myself recently when I allowed one to write the first draft of a post. (Despite my initial surprise, my experience with ChatGPT 3.5 was somewhat underwhelming after careful review, but I’ve seen more impressive results with ChatGPT 4). They seem to know so much and produce it almost instantly, though it’s true they sometimes “hallucinate”, reflect bias, or invent sources, so thorough review is a must.

Nevertheless, AIs can write essays and computer code, solve complex problems, create or interpret images, sounds and music, simulate speech, diagnose illnesses, render investment advice, and many other things. They can create subroutines to help themselves solve problems. And they can replicate!

As a gauge of the effectiveness of models like ChatGPT, consider that today AI is helping promote “over-employment”. That is, there are a number of ambitious individuals who, working from home, are holding down several different jobs with the help of AI models. In fact, some of these folks say AIs are doing 80% of their work. They are the best “assistants” one could possibly hire, according to a man who has four different jobs.

Economist Bryan Caplan is an inveterate skeptic of almost all claims that smack of hyperbole, and he’s won a series of bets he’s solicited against others willing to take sides in support of such claims. However, Caplan thinks he’s probably lost his bet on the speed of progress on AI development. Needless to say, it has far exceeded his expectations.

Naturally, the rapid progress has rattled lots of people, including many experts in the AI field. Already, we’re witnessing the emergence of “agency” on the part of AI large language models (LLMs), or so-called “agentic” behavior. Here’s an interesting thread on agentic AI behavior. Certain models are capable of teaching themselves in pursuit of a specified goal, gathering new information and recursively optimizing their performance toward that goal. Continued gains may lead to an AI model having artificial general intelligence (AGI), a superhuman level of intelligence that would go beyond acting upon an initial set of instructions. Some believe this will occur suddenly, an event often described as “foom”.

Team Uh-Oh

Concern about where this will lead runs so deep that a letter was recently signed by thousands of tech industry employees, AI experts, and other interested parties calling for a six-month worldwide pause in AI development activity so that safety protocols can be developed. One prominent researcher in machine intelligence, Eliezer Yudkowsky, goes much further: he believes that avoiding human extinction requires immediate worldwide limits on resources dedicated to AI development. Is this a severely overwrought application of the precautionary principle? That’s a matter I’ll consider at greater length below, but like Caplan, I’m congenitally skeptical of claims of impending doom, whether from the mouth of Yudkowsky, Greta Thunberg, Paul Ehrlich, or Nassim Taleb.

As I mentioned at the top, I suspect competition among AI developers played a role in motivating some of the signatories of the “AI pause” letter, and some of the non-signatories as well. Robin Hanson points out that Sam Altman, the CEO of OpenAI, did not sign the letter. OpenAI (controlled by a nonprofit foundation) owns ChatGPT and is the current leader in rolling out AI tools to the public. ChatGPT 4 can be used with the Microsoft search engine Bing, and Microsoft’s Bill Gates also did not sign the letter. Meanwhile, Google was caught flat-footed by the ChatGPT rollout, and its CEO signed. Elon Musk (who signed) wants to jump in with his own AI development: TruthGPT. Of course, the pause letter stirred up a number of members of Congress, which I suspect was the real intent. It’s reasonable to view the letter as a means of leveling the competitive landscape. Thus, it looks something like a classic rent-seeking maneuver, buttressed by the inevitable calls for regulation of AIs. However, I certainly don’t doubt that a number of signatories did so out of a sincere belief that the risks of AI must be dealt with before further development takes place.

The vast dimensions of the supposed AI “threat” may have some libertarians questioning their unequivocal opposition to public intervention. If so, they might just as well fear the potential that AI already holds for manipulation and control by central authorities in concert with their tech and media industry proxies. But realistically, broad compliance with any precautionary agreement between countries or institutions, should one ever be reached, is pretty unlikely. On that basis, a “scout’s honor” temporary moratorium or set of permanent restrictions might be comparable to something like the Paris Climate Accord. China and a few other nations are unlikely to honor the agreement, and we really won’t know whether they’re going along with it except for any traceable artifacts their models might leave in their wake. So we’ll have to hope that safeguards can be identified and implemented broadly.

Likewise, efforts to regulate by individual nations are likely to fail, and for similar reasons. One cannot count on other powers to enforce the same kinds of rules, or any rules at all. Putting our faith in that kind of cooperation with countries who are otherwise hostile is a prescription for ceding them an advantage in AI development and deployment. As Robert Louis Stevenson once wrote, “Thus paternal laws are made, thus they are evaded”. And if regulation “succeeds”, it will leave us with a technology that falls short of its potential to benefit consumers and society at large. That, unfortunately, is usually the nature of state intrusion into a process of innovation, especially when devised by a cadre of politicians with little expertise in the area.

Again, according to experts like Yudkowsky, AGI would pose serious risks. He thinks the AI Pause letter falls far short of what’s needed. For this reason, there’s been much discussion of somehow achieving an alignment between the interests of humanity and the objectives of AIs. Here is a good discussion by Seth Herd on the LessWrong blog about the difficulties of alignment issues.

Some experts feel that alignment is an impossibility, and that there are ways to “live and thrive” with unalignment (and see here). Alignment might also be achieved through incentives for AIs. Those are all hopeful opinions. Others insist that these models still have a long way to go before they become a serious threat. More on that below. Of course, the models do have their shortcomings, and current models get easily off-track into indeterminacy when attempting to optimize toward an objective.

But there’s an obvious question that hasn’t been answered in full: what exactly are all these risks? As Tyler Cowen has said, it appears that no one has comprehensively catalogued the risks or specified precise mechanisms through which those risks would present. In fact, AGI is such a conundrum that it might be impossible to know precisely what threats we’ll face. But even now, with deployment of AIs still in its infancy, it’s easy to see a few transition problems on the horizon.

White Collar Wipeout

Job losses seem like a rather mundane outcome relative to extinction. Those losses might come quickly, particularly among white collar workers like programmers, attorneys, accountants, and a variety of administrative staffers. According to a survey of 1,000 businesses conducted in February:

“Forty-eight percent of companies have replaced workers with ChatGPT since it became available in November of last year. … When asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say ‘definitely,’ while 26% say ‘probably.’ … Within 5 years, 63% of business leaders say ChatGPT will ‘definitely’ (32%) or ‘probably’ (31%) lead to workers being laid off.”

A rapid rate of adoption could well lead to widespread unemployment and even social upheaval. For perspective, that implies a much more rapid rate of technological diffusion than we’ve ever witnessed, so this outcome is viewed with skepticism in some quarters. But in fact, the early adoption phase of AI models is proceeding rather quickly. You can use ChatGPT 4 easily enough on the Bing platform right now!

Contrary to the doomsayers, AI will not just enhance human productivity. Like all new technologies, it will lead to opportunities for human actors that are as yet unforeseen. AI is likely to identify better ways for humans to do many things, or do wonderful things that are now unimagined. At a minimum, however, the transition will be disruptive for a large number of workers, and it will take some time for new opportunities and roles for humans to come to fruition.

Robin Hanson has a unique proposal for meeting the kind of challenge faced by white collar workers vulnerable to displacement by AI, or for blue collar workers who are vulnerable to displacement by robots (the deployment of which has been hastened by minimum wage and living wage activism). My treatment of Hanson’s idea here will be brief, but he suggests a kind of insurance or contract sold to both workers and investors by owners of assets likely to be insensitive to AI risks. The underlying assets are paid out to workers if automation causes some defined aggregate level of job loss. Otherwise, the assets are paid out to investors taking the other side of the bet. Workers could buy these contracts themselves, or employers could do so on their workers’ behalf. The prices of the contracts would be determined by a market assessment of the probability of the defined job loss “event”. Governmental units could buy the assets for their citizens, for that matter. The “worker contracts” would be cheap if the probability of the job-loss event is low. Sounds far-fetched, but perhaps the idea is itself an entrepreneurial opportunity for creative players in the financial industry.
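To make the mechanics concrete, here is a minimal sketch of how such paired contracts might be priced, assuming simple risk-neutral pricing with no discounting. The function name and numbers are purely illustrative, not part of Hanson’s proposal:

```python
# Illustrative sketch of a Hanson-style displacement-insurance contract.
# An AI-insensitive asset is split into two claims: a "worker contract"
# that pays out if a defined aggregate job-loss event occurs, and an
# "investor contract" that pays out otherwise. Under risk-neutral pricing
# with no discounting, each claim's price is its probability-weighted payout.

def price_contracts(asset_value: float, p_job_loss: float) -> tuple[float, float]:
    """Return (worker_contract_price, investor_contract_price)."""
    worker_price = asset_value * p_job_loss
    investor_price = asset_value * (1.0 - p_job_loss)
    return worker_price, investor_price

# If the market judges a 5% chance of the defined job-loss event,
# the worker-side claim on a $1,000 asset is cheap:
w, i = price_contracts(1000.0, 0.05)
print(w, i)  # 50.0 950.0
```

As the text notes, the worker contract is inexpensive when the market considers the job-loss event unlikely, which is what would make broad adoption plausible.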

The threat of job losses to AI has also given new energy to advocates of widespread adoption of universal basic income payments by government. Hanson’s solution is far preferable to government dependence, but perhaps the state could serve as an enabler or conduit through which workers could acquire AI and non-AI capital.

Human Capital

Current incarnations of AI are not just a threat to employment. One might add the prospect that heavy reliance on AI could undermine the future education and critical thinking skills of the general population. Allowing machines to do essentially all of our thinking, research, and planning won’t do much for the cognitive strength of the human race, especially over several generations. Already people suffer from an inability to perform what were once considered basic life skills, to say nothing of tasks that were fundamental to survival in the not-too-distant past. In other words, AI could accelerate a process of “dumbing down” the populace, a rather undesirable prospect.

Fraud and Privacy

AI is responsible for still more disruptions already taking place, in particular violations of privacy, security, and trust. For example, a company called Clearview AI has scraped 30 billion photos from social media and used them to create what its CEO proudly calls a “perpetual police lineup”, which it has provided for the convenience of law enforcement and security agencies.

AI is also a threat to encryption in securing data and systems. Conceivably, AI could be of value in perpetrating identity theft and other kinds of fraud, but it can also be of value in preventing them. AI is also a potential source of misleading information. It is often biased, reflecting specific portions of the on-line terrain upon which it is trained, including skewed model weights applied to information reflecting particular points of view. Furthermore, misinformation can be spread by AIs via “synthetic media” and the propagation of “fake news”. These are fairly clear and present threats of social, economic, and political manipulation. They are all foreseeable dangers posed by AI in the hands of bad actors, and I would include certain nudge-happy and politically-motivated players in that last category.

The Sky-Already-Fell Crowd

Certain ethicists with extensive experience in AI have condemned the signatories of the “Pause Letter” for a focus on “longtermism”, or risks as yet hypothetical, rather than the dangers and wrongs attributable to AIs that are already extant. TechCrunch quotes a rebuke penned by some of these dissenting ethicists:

“‘Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,’ they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.”

So these ethicists bemoan AI’s presumed contribution to the strength and concentration of “existing power structures”. In that, I detect just a whiff of distaste for private initiative and private rewards, or perhaps against the sovereign power of states to allow a laissez faire approach to AI development (or to actively sponsor it). I have trouble taking this “rebuke” too seriously, but it will be fruitless in any case. Some form of cooperation between AI developers on safety protocols might be well advised, but competing interests also serve as a check on bad actors, and it could bring us better solutions as other dilemmas posed by AI reveal themselves.

Imagining AI Catastrophes

What are the more consequential (and completely hypothetical) risks feared by the “pausers” and “stoppers”? Some might have to do with the possibility of widespread social upheaval and ultimately mayhem caused by some of the “mundane” risks described above. But the most noteworthy warnings are existential: the end of the human race! How might this occur when AGI is something confined to computers? Just how does the supposed destructive power of AGIs get “outside the box”? It must do so either by tricking us into doing something stupid, hacking into dangerous systems (including AI weapons systems or other robotics), and/or through the direction and assistance of bad human actors. Perhaps all three!

The first question is this: why would an AGI do anything so destructive? No matter how much we might like to anthropomorphize an “intelligent” machine, it would still be a machine. It really wouldn’t like or dislike humanity. What it would do, however, is act on its objectives. It would seek to optimize a series of objective functions toward achieving a goal or a set of goals it is given. Hence the role for bad actors. Let’s face it, there are suicidal people who might like nothing more than to take the whole world with them.

Otherwise, if humanity happens to be an obstruction to solving an AGI’s objective, then we’d have a very big problem. Humanity could also be an aid to solving an AGI’s optimization problem in ways that are dangerous. As Yudkowsky says, we might represent mere “atoms it could use somewhere else.” And if an autonomous AGI were capable of setting its own objectives, without alignment, the danger would be greatly magnified. An example might be the goal of reducing carbon emissions to pre-industrial levels. How aggressively would an AGI act in pursuit of that goal? Would killing most humans contribute to the achievement of that goal?

Here’s one that might seem far-fetched, but the imagination runs wild: some individuals might be so taken with the power of vastly intelligent AGI as to make it an object of worship. Such an “AGI God” might be able to convert a sufficient number of human disciples to perpetrate deadly mischief on its behalf. Metaphorically speaking, the disciples might be persuaded to deliver poison kool-aid worldwide before gulping it down themselves in a Jim Jones style mass suicide. Or perhaps the devoted will survive to live in a new world mono-theocracy. Of course, these human disciples would be able to assist the “AGI God” in any number of destructive ways. And when brain-wave translation comes to fruition, they better watch out. Only the truly devoted will survive.

An AGI would be able to create the illusion of emergency, such as a nuclear launch by an adversary nation. In fact, two or many adversary nations might each be fooled into taking actions that would assure mutual destruction and a nuclear winter. If safeguards such as human intermediaries were required to authorize strikes, it might still be possible for an AGI to fool those humans. And there is no guarantee that all parties to such a manufactured conflict could be counted upon to have adequate safeguards, even if some did.

Yudkowsky offers at least one fairly concrete example of existential AGI risk:

“A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

There are many types of physical infrastructure or systems that an AGI could conceivably compromise, especially with the aid of machinery like robots or drones to which it could pass instructions. Safeguards at nuclear power plants could be disabled before taking steps to trigger meltdowns. Water systems, rivers, and bodies of water could be poisoned. The same is true of food sources, or even the air we breathe. In any case, complete social disarray might lead to a situation in which food supply chains become completely dysfunctional. So, a super-intelligence could probably devise plenty of “imaginative” ways to rid the earth of human beings.

Back To Earth

Is all this concern overblown? Many think so. Bryan Caplan now has a $500 bet with Eliezer Yudkowsky that AI will not exterminate the human race by 2030. He’s already paid Yudkowsky, who will pay him $1,000 if we survive. Robin Hanson says “Most AI Fear Is Future Fear”, and I’m inclined to agree with that assessment. In a way, I’m inclined to view the AI doomsters as highly sophisticated, change-fearing Luddites, but Luddites nevertheless.

Ben Hayum is very concerned about the dangers of AI, but writing at LessWrong, he recognizes some real technical barriers that must be overcome for recursive optimization to be successful. He also notes that the big AI developers are all highly focused on safety. Nevertheless, he says it might not take long before independent users are able to bootstrap their own plug-ins or modules on top of AI models to successfully optimize without running off the rails. Depending on the specified goals, he thinks that will be a scary development.

James Pethokoukis raises a point that hasn’t had enough recognition: successful innovations are usually dependent on other enablers, such as appropriate infrastructure and process adaptations. What this means is that AI, while making spectacular progress thus far, won’t have a tremendous impact on productivity for at least several years, nor will it pose a truly existential threat. The lag in the response of productivity growth would also limit the destructive potential of AGI in the near term, since installation of the “social plant” that a destructive AGI would require will take time. This also buys time for attempting to solve the AI alignment problem.

In another Robin Hanson piece, he expresses the view that the large institutions developing AI have a reputational stake and are liable for damages their AIs might cause. He notes that they are monitoring and testing AIs in great detail, so he thinks the dangers are overblown:

“So, the most likely AI scenario looks like lawful capitalism…. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”

In the longer term, the chief focus of the AI doomsters, Hanson is truly an AI optimist. He thinks AGIs will be “designed and evolved to think and act roughly like humans, in order to fit smoothly into our many roughly-human-shaped social roles.” Furthermore, he notes that AI owners will have strong incentives to monitor and “delimit” AI behavior that runs contrary to its intended purpose. Thus, a form of alignment is achieved by virtue of economic and legal incentives. In fact, Hanson believes the “foom” scenario is implausible because:

“… it stacks up too many unlikely assumptions in terms of our prior experiences with related systems. Very lumpy tech advances, techs that broadly improve abilities, and powerful techs that are long kept secret within one project are each quite rare. Making techs that meet all three criteria even more rare. In addition, it isn’t at all obvious that capable AIs naturally turn into agents, or that their values typically change radically as they grow. Finally, it seems quite unlikely that owners who heavily test and monitor their very profitable but powerful AIs would not even notice such radical changes.”

As smart as AGIs would be, Hanson asserts that the problem of AGI coordination with other AIs, robots, and systems would present insurmountable obstacles to a bloody “AI revolution”. This is broadly similar to Pethokoukis’ theme. Other AIs or AGIs are likely to have competing goals and “interests”. Conflicting objectives and competition of this kind will do much to keep AGIs honest and foil malign AGI behavior.

The kill switch is a favorite response of those who think AGI fears are exaggerated. Just shut down an AI if its behavior is at all aberrant, or if a user attempts to pair an AI model with instructions or code that might lead to a radical alteration in an AI’s level of agency. Kill switches would indeed be effective at heading off disaster if monitoring and control are incorruptible. This is the sort of idea that begs for a general solution, and one hopes that any advance of that nature will be shared broadly.

One final point about AI agency is whether autonomous AGIs might ever be treated as independent factors of production. Could they be imbued with self-ownership? Tyler Cowen asks whether an AGI created by a “parent” AGI could legitimately be considered an independent entity in law, economics, and society. And how should income “earned” by such an AGI be treated for tax purposes? I suspect it will be some time before AIs, including AIs in a lineage, are treated separately from their “controlling” human or corporate entities. Nevertheless, as Cowen says, the design of incentives and tax treatment of AIs might hold some promise for achieving a form of alignment.

Letting It Roll

There’s plenty of time for solutions to the AGI threat to be worked out. As I write this, the consensus forecast for the advent of real AGI on the Metaculus online prediction platform is July 27, 2031. Granted, that’s more than a year sooner than it was 11 days ago, but it still allows plenty of time for advances in controlling and bounding agentic AI behavior. In the meantime, AI is presenting opportunities to enhance well being through areas like medicine, nutrition, farming practices, industrial practices, and productivity enhancement across a range of processes. Let’s not forego these opportunities. AI technology is far too promising to hamstring with pauses, moratoria, or ill-devised regulations. It’s also simply impossible to stop development work on a global scale.

Nevertheless, AI issues are complex for all private and public institutions. Without doubt, it will change our world. This AI Policy Guide from Mercatus is a helpful effort to lay out issues at a high-level.

New Theory: Great Woke Filter Conceals Life In the Cosmos

03 Friday Jun 2022

Posted by Nuetzel in Central Planning, Extraterrestrial Life, Space Travel


Tags

Asymptotic Burnout, Baumol's Disease, Club of Rome, Equilibrating Process, Fermi Paradox, Grabby Aliens, Hard-Step Model, Homeostatic Awakening, Innovation, Interstellar Travel, Limits to Growth, Market Incentives, Michael L. Wong, Robin Hanson, Selection Bias, Singularity, Stuart Bartlett, Superlinearity, Thomas Malthus, Unbounded Growth, Unidentified Aerial Phenomena, William Baumol

A recent academic paper seeks to explain the Fermi Paradox by asserting that all civilizations must either collapse or reach a point of homeostasis. The paper cites tensions between population growth, resource scarcity, limits to technical innovation, and ultimately political resistance to growth. The Fermi Paradox (FP) is the observation that by now, we should have detected or heard from an alien civilization if the universe has so much potential for intelligent life. But if those civilizations fail to advance beyond a certain level, they don’t develop the technical prowess to explore outside their own stellar neighborhoods or even become detectable from great distances.

The new paper, by Michael L. Wong and Stuart Bartlett (WB), says these outcomes might be the result of “asymptotic burnout” — followed by either civilizational collapse or a “homeostatic awakening”. Never has “get woke, go broke” been so palpable! Certain sections of the WB paper read like an encyclopedia of leftist apocalyptic speculation, dressed up in mathematics and assumed to generalize to any civilization of intelligent beings in the universe. The incredible vastness of outer space suggests that it might never be possible for us to detect these kinds of homebound, low-tech civilizations, whether constrained by scarcities and moribund technologies or hamstrung by their own politics. Similarly, they might not be able to detect us.

Great Filters

There are other, similar explanations of FP. All of those fall under the heading of “Great Filters”, and I’m not sure WB have come up with anything new in that regard except for the “woke” spin. Great filters can be extinction events, such as intra-planetary hostilities culminating in the reckless use of weapons of mass destruction. Or unfortunate collisions with massive asteroids, which are a matter of time. Malthusian outcomes have been discussed in the context of great filters as well. In the past, I’ve discussed the limitations imposed by collectivist social structures on a civilization’s potential to achieve interstellar travel. I’m not the only one. The kind of “awakening” posited by WB would certainly demand the centralization of economic decision-making, though they envision conditions under which the “awakening” is a rational and enlightened decision.

Grabby Civilizations

A bit of a digression here: one of the most interesting explanations for FP that I’ve heard is from economist Robin Hanson and several co-authors. Hanson, by the way, wrote the original paper on great filters. His more recent insight is the likelihood of an earth-bound selection bias: there must be reasons why we haven’t seen alien activity in earth’s backward light cone, assuming they exist. The light cone defines an area of space-time we have observed, or could have observed had we been looking. To have been within our light cone, an event coordinate’s distance from us in space must have been less than or equal to the time it takes for its light to arrive here. For example, we can see what happened on the surface of the Sun fifteen minutes ago because at the Sun’s distance, it takes only about eight minutes for its light to reach us. However, an event on the Sun that occurred five minutes ago is still outside our backward light cone. Likewise, if a star is 100,000 light years away, we cannot see events that occurred there within the past 99,999 years.
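The light cone test described above reduces to a single inequality. Here is a minimal sketch in units where the speed of light is 1 (distances in light-years, times in years); the function name is mine, and the Sun’s light-travel time of roughly 8.3 minutes is the only physical input:

```python
# An event is inside our backward light cone if its light has had time
# to reach us: distance <= c * (time elapsed). With c = 1 light-year
# per year, that is simply distance_ly <= years_ago.

def in_backward_light_cone(distance_ly: float, years_ago: float) -> bool:
    return distance_ly <= years_ago

MIN_PER_YEAR = 60 * 24 * 365.25          # minutes in a year
SUN_LY = 8.3 / MIN_PER_YEAR              # Sun's distance, ~8.3 light-minutes

print(in_backward_light_cone(SUN_LY, 15 / MIN_PER_YEAR))  # True: visible
print(in_backward_light_cone(SUN_LY, 5 / MIN_PER_YEAR))   # False: light en route
print(in_backward_light_cone(100_000, 99_999))            # False: the distant star
```

The same inequality drives the distant-star example: an event 100,000 light-years away that occurred 99,999 years ago is still just outside the cone.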

Hanson and his co-authors focus on the timescales and “hard steps”, or critical evolutionary transitions, necessary for intelligent life to develop in a solar system. They construct a probability model suggesting that the birth of human civilization was likely on the early end of the time distribution of civilizational beginnings in the universe. That means there probably aren’t many distant civilizations we could possibly have seen in our light cone. We’d be more likely to detect them if they are sufficiently advanced to be so-called “grabby” civilizations, but that kind of technological development takes a long time. “Grabby” civilizations (or their machines) are capable of expanding their reach across the stars at high speed, some significant fraction of the speed of light. They can be expected to visibly alter the volume of space they control by settling, mining, building large structures, etc…. An interesting (and perhaps counterintuitive) result is that the faster such a civilization expands, the less likely we’d have seen them in our backward light cone. And we haven’t, which argues for a higher speed of alien conquest, all else equal.

In another post, Hanson estimates that the time until we meet another grabby civilization centers on about 1 billion years if we expand. So grabby civilizations are quite rare if they exist. That doesn’t rule out the possibility that we might detect or encounter a much less technically advanced civilization. Nevertheless, Hanson strongly believes in the reality of Great Filters and believes that human civilization is likely to encounter certain filters that we cannot even anticipate.

The explanation for FP offered by Hanson, et al is nuanced, and it is my favorite, given my fascination with the possibility of extraterrestrial life. Even if the development of human civilization is not especially “early”, the number of interstellar civilizations, grabby or not, is probably still quite small at this juncture. And no doubt space travel is tough! These civilizations and their interstellar pioneers might not endure long enough to cover the distances necessary to reach us. Even more pertinent is that we’ve really only been “looking” in earnest for maybe ten decades at the most, and without complete coverage or much precision. Alien origins or spatial conquests within the last 100 years at distances exceeding 100 light years would not yet be visible to us. And again, it’s remotely possible that there is a grabby civilization whose expansion will intersect with us sometime in the near future, but it is still too distant to be within our backward light cone. If closing on us fast enough, it could have been within a single light year six months ago and we would not yet know it!

Do Civilizations Scale Like Cities?

Now let’s return to the kind of great filter put forward by WB. They first appeal to the observation that cities scale superlinearly. That is, in cross-sectional data, the relationships between city population and various measures of income or output (and other metrics) are linear in logs with a coefficient greater than 1. That means a city with twice the population of another would generate more than twice as much income.
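A quick sketch of what a log-linear coefficient greater than 1 implies. The value β ≈ 1.15 below is a commonly cited rough estimate for urban output scaling, used here purely as an illustrative assumption:

```python
# Cross-sectional scaling law: Y = Y0 * N**beta, with beta > 1 (superlinear).
# The log form is: log Y = log Y0 + beta * log N, a straight line in logs.
def city_output(population, y0=1.0, beta=1.15):
    """Output implied by a fitted log-linear scaling relationship."""
    return y0 * population ** beta

small = city_output(1_000_000)
large = city_output(2_000_000)   # twice the population

# Doubling population multiplies output by 2**beta > 2.
print(f"output ratio for 2x population: {large / small:.3f}")   # 2.219
```

With β = 1.15, doubling population raises output by a factor of 2^1.15 ≈ 2.22, i.e., about 11% more than proportional.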

There are reasons why we’d expect city size to be associated with greater productivity, such as an abundance of collaborative opportunities and economies of agglomeration. However, WB assert that it is impossible for a city to sustain a superlinear growth relationship over time, which would require “unbounded growth”, without periodic bursts of innovation. Otherwise, a city encounters a growth “singularity”. WB maintain that when innovation fails to sustain unbounded growth, the result is a cascade of failure in such a city, or at least homeostasis.
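The “singularity” here can be illustrated with the standard toy model: if growth is driven by output that scales superlinearly, dN/dt = g·N^β with β > 1, the solution diverges at a finite time unless innovation resets the parameters. A sketch with invented parameter values:

```python
# Toy finite-time singularity: dN/dt = g * N**beta, beta > 1.
# Separating variables gives N(t) = (N0**(1-beta) - g*(beta-1)*t)**(1/(1-beta)),
# which diverges at t* = N0**(1-beta) / (g * (beta - 1)).
# All parameter values below are invented for illustration.
N0, g, beta = 1.0, 0.1, 1.15

t_star = N0 ** (1 - beta) / (g * (beta - 1))
print(f"blow-up time: {t_star:.1f}")   # 66.7 with these parameters

def N(t):
    # Closed-form solution, valid only for t < t_star.
    return (N0 ** (1 - beta) - g * (beta - 1) * t) ** (1 / (1 - beta))

for t in (0, 30, 60, 66):
    print(f"t = {t:2d}: N = {N(t):.3g}")   # growth accelerates toward t*
```

In the WB story, each innovation burst effectively restarts this clock; the filter bites when the bursts stop coming.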

WB go on from there to claim that a civilization, as it advances, will become so interconnected via technology that it can be treated analytically like a single super-city. This assumption, that whole worlds scale like cities, offers WB an analytical convenience. They assume that population growth outstrips the supply of finite resources with an inadequate pace of innovation. WB further propose that civilizations confronting these barriers might undergo “awakenings” under which zero growth is accepted as a goal.

Of course, the growth of a city will stagnate when its size overwhelms its ability to meet demands. A city might be under severe resource constraints. There are external phenomena that can cause a city to languish. All this depends upon the unique vulnerabilities of individual cities. Certainly a widespread dearth of innovation could do the trick. A planetary civilization might be subject to similar constraints or limiting events. Some planets might be resource poor or have especially hostile natural environments. Aliens unfortunate enough to be there will not and cannot become “grabby”. But WB’s hypothesis amounts to the assertion that no civilization can hope to achieve “grabbiness”.

Faults In the Clouds of Delusion

The WB argument is misguided on several levels. First, there is only limited evidence that the scaling of cities is time invariant, i.e., that the relationships hold up as cities grow over time (no singularity required!). After all, the super-linear relationship referenced by WB is based almost entirely on cross-sectional data. Moreover, the scaling assertion is atheoretic. Rationales are offered based on human social connections and presumed, fixed technical relationships between city population and such things as energy use and infrastructure requirements. However, the discussion is completely devoid of the equilibrating processes found in market economies and the guidance of the price mechanism. Instead, growth simply rages on until the pace of innovation and limited resources can no longer support it.

WB appear to assume that a planet’s finite pool of resources places a hard limit on the advancement of civilization. This is more than a bit reminiscent of the Club of Rome and its “Limits to Growth”, or the popular understanding of Thomas Malthus’ writings. That understanding is based on a purely biological model of human needs, which was spectacularly wrong in its prediction of worldwide famine. But that was only a starting point for Malthus, who believed in the power of markets. And even in primitive markets, the very scarcity with which biological needs conflict is what incentivizes greater efficiencies and substitutes. When something gets especially scarce, the market signals to users that they must conserve, on one hand, and it also incentivizes those able to commandeer resources. The latter act to fill the need with greater supplies, close substitutes, or inventive alternatives. Again, these kinds of equilibrating tendencies don’t seem to be of any consequence to WB.

The focus on super-linearity and the relationship between population and economic and other metrics obscures another reality: global fertility rates have been declining for decades and are now below replacement levels in many parts of the world. In addition, we know that birth rates tend to decline as income rises, which directly undermines WB’s concern about super-linearity. The unsustainable population growth envisioned by WB is unlikely to occur, much less overwhelm the ability of resources and innovation to provide for growth in human well-being. WB also ignore the fact that in-migration to cities is a primary contributor to their population growth, whereas in-migration has not been observed at the global level… at least that we’re aware!

What is never in short supply is human ingenuity, if we allow it to work. It enables us to identify and extract new reserves of resources previously hidden to us, and every new efficiency increases the effective reserves of resources already available. Mankind is now on the cusp of an era in which mining of scarce materials from the moon, asteroids, and other planets will be possible.

WB are correct that there are obstacles to urban growth, but they seem only dimly aware of the underlying reasons. Cities must provide myriad services to their residents. Many of those services will experience meager productivity gains relative to goods production, and consequently increased costs of services over time. This is an old problem known among economists as Baumol’s disease, after William Baumol. While it is not limited to cities, it can be especially acute in urban areas. The cost escalation may be severe for services such as education, health care, law enforcement, and the judicial system, which are certainly critical to the economic viability of cities. However, there will be future innovations and even automation of some of these services that boost productivity. Still, they are bound to mostly rise in cost relative to sectors with high average growth in productivity, such as manufacturing. Baumol’s disease is unlikely to tank the world economy. It is simply a fact of economic evolution: relative prices change, and low productivity sectors will suffer cost escalation.
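Baumol’s logic can be shown with a minimal two-sector example. The growth rates below are assumptions chosen only to illustrate the mechanism, not estimates:

```python
# Two-sector sketch of Baumol's cost disease (illustrative numbers only).
# Suppose manufacturing productivity grows 2% per year while a service
# sector (say, education) sees no productivity growth. Competition for
# labor pulls wages in both sectors up with the economy-wide 2% gain, so
# the unit cost of the stagnant service rises relative to goods.
years = 30
wage_growth = 0.02           # wages track the progressive sector
goods_prod_growth = 0.02     # manufacturing productivity growth
service_prod_growth = 0.00   # stagnant service productivity

# Unit cost = wage / productivity, each indexed to 1.0 at year 0.
goods_cost = (1 + wage_growth) ** years / (1 + goods_prod_growth) ** years
service_cost = (1 + wage_growth) ** years / (1 + service_prod_growth) ** years

print(f"goods unit cost after {years} yrs:   {goods_cost:.2f}")    # 1.00
print(f"service unit cost after {years} yrs: {service_cost:.2f}")  # 1.81
```

Nothing has gone “wrong” in this economy; the service has simply become relatively more expensive, which is exactly the relative-price change described above.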

The kind of “awakening” WB anticipate would only occur if individuals are willing to sacrifice their liberties en masse, or if elites coerce them to do so. Perhaps there are beings who never imagine the kinds of liberties humans expect, or at least wish for. If so, I’d wager their average intelligence is too low to accomplish space travel anyway. We’ve learned from theory and history that socialism imposes severe constraints on growth. That’s why I once proposed that civilizations capable of interstellar travel will have avoided those chains.

Conclusion

Wong and Bartlett attempt to explain the Fermi Paradox based on the “asymptotic burnout” of civilizations. That is, they believe it’s extremely unlikely that any civilization can ever advance to interstellar travel, or as Hanson would put it, to become “grabby”. WB rely on an analogy between the so-called super-linearity of city scales and the scales of planetary civilizations. They generalize super-linearity to the time domain. In other words, WB make the heroic assumption that the economic aggregates of planetary civilizations scale over time as cities scale cross-sectionally.

WB then claim that civilizations will confront limits to advancement based on their inability to sustain their pace of innovation. This amounts to Malthusian pessimism writ large. Today, human civilization, while not without its problems, is nowhere near the limits of its growth, and we are nearly ready to reach out beyond the confines of our planet for access to new stocks of resources. There are vast stores of unexploited energy even here on earth, and there are a number of relatively new energy technologies that are either available now or still in development. And there will be much more. Like the Club of Rome, WB lack an adequate appreciation for the power of markets and incentives to solve economic problems, which includes spurring innovation.

Finally, WB make the wholly unsupported conjecture that some civilizations will undergo “awakenings”, choosing to adopt homeostasis rather than growth. WB might or might not realize it, but this implies an abandonment of market institutions in favor of centrally-planned stagnation, and not a little coercion. Perhaps we should view WB’s hypothesis as a cautionary tale: get woke, go broke! Certainly, a homeostatic civilization that relies upon the ignorance of central planners will never develop the capacity for interstellar travel. It simply cannot generate the wealth or expertise necessary to do so. In fact, such a civilization is more likely to suffer bouts of mass starvation than any sort of middling prosperity. We probably haven’t seen other civilizations yet, and maybe we’re “early” on the development time-scale for civilizations, but when and if aliens arrive, it won’t be thanks to socialist “awakenings”. WB are at least correct in that regard.

Tax Returns, Politics and Privacy

12 Sunday May 2019

Posted by Nuetzel in Privacy, Taxes

≈ Leave a comment

Tags

Adam Grewal, Appraisal Techniques, Donald Trump, Impeachment, IRS, Jeffrey Carter, Legislative Purpose, Loss Carry Forward, Richard Neal, Robert Mueller, Robin Hanson, Steve Mnuchin, Tax Minimization, Transparency, Tyler Cowen, Universal Tax Disclosure

It’s a constitutional crisis! Or so claim congressional Democrats, but at this point it looks more like a one-party panic attack. They keep sniffing the trailing fumes of the Mueller investigation, which turned up nothing on the President, or at least nothing worth prosecuting. There is also an ongoing dispute over the President’s tax returns, which he has chosen not to make public. Last week, House Ways and Means Committee Chairman Richard Neal subpoenaed the IRS for six years of Trump’s tax returns, but that is likely to be ignored. There is no law or requirement that Trump release the returns, and the IRS would be under no obligation to comply with the subpoena if it has “no legislative purpose”, as Treasury Secretary Steve Mnuchin said of an earlier request by Neal. For his part, Trump has falsely claimed to the public that an ongoing audit prevents him from releasing his tax documents, but he is fully within his legal rights to withhold his returns, at least for now. His decision is, no doubt, political and it may be wise to that extent. Nevertheless, the suspicion that Trump is a tax cheat is fueled by his very reluctance to make the returns public.

Constitutional Protection

The legality of Trump’s refusals to make the returns public is established in the Constitution, according to law professor Adam Grewal of the University of Iowa:

“Though a federal statute seemingly compels the IRS to furnish, on request, anyone’s tax returns to some congressional committees, a statute cannot transcend the constitutional limits on Congress’s investigative authority. Congress enjoys a near-automatic right to review a President’s tax returns only in the impeachment context.”

If explicit action is taken to impeach the President, justifiably or not, then presumably he or the IRS would be forced to turn over his tax returns to Congress. Even then, however, it would probably become the subject of a protracted court fight.

Partisan Charges

It’s not surprising that Trump has engaged expensive tax experts for the Trump organization and his personal taxes. Of course he has! Anyone in his position would be crazy not to. Minimizing taxes is a complex undertaking even for those having far less wealth and business complexity than a Donald Trump. There is no reason why he should have foregone any tax advantages for which he or his business was entitled. And in fact, he was entitled to use losses on a number of failed enterprises over the years to offset other income for tax purposes. Under these circumstances, a tax liability of zero is not terribly surprising.

Specific claims that Trump is a tax cheat are as yet unfounded. As Jeffrey Carter explains, there is an array of tax provisions intended to provide incentives to businesses precisely because tax law has been crafted to encourage business activity; real estate development is no exception. The idea is that businesses encourage employment, income, incremental tax revenue, and eventually more development. While I generally oppose tax provisions that impinge on specific kinds of human activity, there is nothing illegal or even immoral about taking advantage of tax rules that exist. In fact, there are legal tax maneuvers that can allow a successful real estate development business to generate continuing tax losses.

There are allegations that the Trump organization used fraudulent appraisals to understate values of buildings as a means of minimizing taxes. A variety of appraisal techniques are used in commercial real estate, each involving a series of assumptions and possible adjustments. Appraisals might be especially difficult for complex properties such as large, high-end gambling developments. Perhaps reviews of appraisals are part of the ongoing IRS audit to which Trump referred. There’s little doubt that Trump’s tax advisors would have sought to use the most advantageous techniques and assumptions that would pass scrutiny by the IRS and other tax authorities. However, it is unlikely that he was intimately involved in the appraisal process himself. The audit should determine whether their methods were excessive, not a swarm of politicians and leftist journalists. The penalties for any past understatement of taxes might be financially significant, but his presidency would almost certainly survive such a finding.

Again, Trump may be wise to withhold his tax returns. In today’s political environment, every deduction, credit, and loss carry-forward would be characterized by Democrats and the media as an affront to the American people. In fact, most American taxpayers attempt to minimize their taxes, as well they should. In a world with a simple, sane tax code, a simple definition of taxable income, and a competent IRS, there would be little reason for the clamor over public disclosure of tax data by public officials or candidates for office.

Universal Tax Disclosure? No

That brings me to the subject of a rather striking proposal: Robin Hanson believes that all tax returns should be made publicly available: yours, mine and Donald Trump’s. That change was made in the U.S. in 1924, but soon reversed, according to Hanson. It is done today in Norway, though the identity of anyone seeking that information on a taxpayer is made available to the taxpayer. Without the latter condition, the idea seems like an invitation to voyeurism, or worse. The several rationales offered by Hanson all tend to fall under the rubric that “transparency is good”. He includes critical remarks from Tyler Cowen on the proposal, dismissing them all on various grounds. But I happen to agree with Cowen that not all transparency is good. In fact, my first reaction is that the proposal would be an unnecessary extension of the intrusion into private affairs made by government taxation of income.

Universal tax disclosure might have some value in discouraging tax evasion, and perhaps the IRS could create a schedule of buy-off rates by income level at which tax information would be kept private. However, I’m skeptical of the other benefits cited by Hanson. For one thing, if the identity of the inquirer is revealed, many of the purported benefits would be nullified by discouraging the queries. To the extent that transparency has value, many credit transactions or credit payment mechanisms already require verification of income. Insurance underwriting is also sometimes dependent on proof of income. I am skeptical that the ability of workers to collect information from the tax returns of other individuals would greatly improve the efficiency of labor markets. The value of income data to counter-parties in other kinds of relationships, such as prospective marriage, would seem to be balanced by the value of privacy. Hanson says that people don’t place a high value on privacy, but it clearly has value, and I’m not sure his Twitter poll with a single price point is a valid test of the proposition. And again, with the simple tax code we should have, the benefits of acquiring the tax returns of politicians would boil down to an opportunity for shaming the rich and “tax pinchfists” (successful tax minimizers), which is what some of this is about anyway.

Conclusion

Donald Trump’s tax returns are a prize that his detractors hope will reveal an abundance of classist political fodder and perhaps even evidence of misdeeds. They can only hope. Unless Articles of Impeachment are drafted in the House of Representatives, the Constitution protects President Trump’s tax returns from congressional scrutiny. Trump is probably wise to resist disclosure of his taxes, since the returns would be picked over by the Left and criticized for any whiff of tax management, legal or otherwise. Trump’s businesses hired experts to aggressively minimize tax liabilities, but there is no evidence that they engineered any illegal maneuvers.

Finally, to suggest that all tax returns be made publicly accessible is to support a massive invasion of privacy. Then again, the very imposition of our complex income tax code is a massive invasion of privacy, and one that creates a substantial compliance burden on all income earners.

A Voluntary Redistribution of Sex

11 Friday May 2018

Posted by Nuetzel in Free markets, Prohibition, Redistribution, Uncategorized

≈ 1 Comment

Tags

Abigail Hall, Alex Tabarrok, Incel, Involuntary Celibate, Lux Alptraum, Prohibition, Prostitution, Redistribution of Sex, Robin Hanson, Ross Douthat, Sex Robots

“Incels” have received plenty of bad publicity since the horrifying van attack in Toronto two weeks ago. It was preceded in 2014 by a killing rampage in California perpetrated by an individual with a similar profile. In case you haven’t heard, an incel is an involuntary celibate, either male or female, though male incels have garnered nearly all of the recent attention. Whatever their other characteristics, incels share a loneliness and an unmet desire for intimacy with other human beings.

Lux Alptraum shares her views about the differences between male and female incels. She blames “angry, straight men” and “toxic masculinity” for both the violence that’s recently come to be associated with incels and the relative inattention paid to the plight of female incels. I value her perspective on the issue of female incels. There are obviously extreme misogynists among males in the incel “community”. Some are so enraged by their plight that they engage in on-line bullying, and a plainly deranged segment of incels, including the perpetrators of the crimes mentioned above, have advocated violent retribution against those they deem responsible for their low sexual status. That means just about anyone who can find a partner.

Alptraum paints male incels with a very broad brush, however. Similarly, various leftist writers have categorized incels as predominantly “right wing” and even racist, but involuntary celibacy and misogyny do not lie conveniently along a two-dimensional political spectrum. Incels are present in many groups, crossing racial, religious, and political lines. There are incels among the transgendered and undoubtedly in the gay community. Gay individuals can exist in relative isolation in towns across America. Physical disabilities may condemn individuals to involuntary celibacy. And not all incels are “ugly”; instead, they may suffer from severe social awkwardness. But there are bound to be incels who live quiet lives, unhappy, but adjusted to their circumstances, more or less.

The recent focus on incels has prompted some interesting questions. Ross Douthat’s opinion piece in The New York Times asks whether anyone has a “right to sex”, as some incels have asserted. Robin Hanson discusses the idea of a “redistribution of sex”, noting in a follow-up post that governments throughout history have influenced the distribution of sex through policies enforcing monogamy, for example, or banning prostitution. Voluntary agreements to exchange sex for remuneration are one way to alter the distribution. In fact, to demonstrate the lengths to which a government could go to redistribute sex and intervene against “sex inequality”, Hanson mentions policies of cash redistribution, funded by taxpayers, to compensate incels for the services of prostitutes. There are examples of such benefits for the disabled. Here is Alex Tabarrok on that subject:

“In the UK charities exist to help match sex workers with the disabled. Similar services are available in Denmark and in the Netherlands and in those countries (limited) taxpayer funds can be used to pay for sexual disability services.”

Subsidies and charity aside, it’s easy to understand why prohibition of sexual services for hire would be seen as an injustice by those unable to find partners willing to grant sexual benefits. From a libertarian perspective, trade in sex should be regarded as a natural right, like the freedom to engage in any other mutually beneficial transaction, so long as it does no harm to third parties. One’s body is one’s own property, and it should not be for government — or others — to decide how it will be used.

Laws against prostitution do great harm to society and to the individuals involved in the sex business. Forget about ending prostitution. That will never happen. According to Abigail Hall, there are about 1 million prostitutes working in the U.S. They almost all work underground, with the exception of those operating in legal brothels in Nevada. Prohibition keeps the price up, but the workers capture a low share of those returns. Their bosses are harsh masters relative to those in legal businesses. These workers cannot report crimes against them, so they are often subject to the worst kinds of abuse. Illegality usually means they don’t have access to good health care, which places customers at greater risk. Legalizing (or decriminalizing) prostitution would reduce or eliminate these problems. From Hall:

“By legalizing the sex trade, we would allow those involved in the sex trade to come out from the shadows, use legitimate business practices and legal channels, and decrease the likelihood that women will be trafficked by violent groups of criminals. … As prostitution becomes a legitimate profession, it allows for prostitutes to be more open with their doctors about their sexual history and seek treatment for STIs and other problems.”

Many object that prostitution exploits women, legal or not, and that it exploits low-income women disproportionately. But there will be voluntary sellers as long as there is a market, again, legal or not. And there will be a market. As for a disparate impact on the poor, Hall says:

“The fact that those who select prostitution as a profession may be poor is inconsequential…. It may be true that some women who work as prostitutes would strongly prefer another profession. Even if this is the case, women who voluntarily choose prostitution as a means of income should be allowed to practice their profession in the safest environment possible.”

The ongoing development of “sex robots” offers an avenue through which incels might enjoy activity that approximates sex with a human being. These robots are becoming increasingly realistic, and their costs are likely to decline dramatically in coming years. For incels with a congenital inability to interact with other human beings, this option might be far preferable to hiring the services of a prostitute. And the introduction of both male and female sex robots into senior care facilities might reduce the likelihood that sexually aggressive residents will abuse others. It happens.

Free markets are amazing in their ability to maximize the well being of both consumers and producers of a good or service. Trades are mutually beneficial and therefore are voluntary, and price signals redirect resources to their most valued uses. The prohibition on prostitution, however, has made it a very dangerous business for practitioners and customers alike. Prohibition has led to dominance by organized crime interests and local strong-men and -women. It has also thickened the intersection of prostitution with other prohibited activities, such as the drug trade. This creates a toxic criminal environment within which women are trapped and abused. Legalizing prostitution would liberate these individuals and create safer conditions for them and their customers. Private solutions would still be available to those who wish to keep prostitution out of their buildings or neighborhoods. And legalization is one way that sex could be made safely and voluntarily accessible to incels. Perhaps, one day soon, the availability of sex robots will help incels satisfy their desires as well. Some incels will still harbor strong resentment toward those for whom sex is not out of reach. Nevertheless, it is reasonable to ask whether such a “voluntary redistribution of sex” would not produce unambiguous social benefits. To deny these benefits to groups like the disabled, or really to anyone with a physical or emotional inability to find a willing partner, and to insist that sex workers be exposed to danger and abuse, is not just priggish, but cruel.

Would You Tax Coastal Development?

14 Monday Dec 2015

Posted by Nuetzel in Central Planning, Global Warming

≈ Leave a comment

Tags

Carbon forcing, central planning, Climate Alarmism, Climate Change, Coastal development, Coastal tax, Federal Flood Insurance, FEMA, Glacier Melts, Glenn Reynolds, Pigouvian subsidies, Pigouvian Taxes, Robin Hanson, Sea Ice Extent, Strait of Gibraltar, Subsidies, Taxing development

Sea Level

If sea levels are truly rising due to climate change, then public policy should stop encouraging new development in coastal areas. Stipulating that this threat is real for the moment, serious and damaging encroachment of the seas might be 50 years away or more. By that time, many of today’s coastal buildings will be gone, or at least candidates for replacement, under realistic assumptions about the average lives of structures. A relatively low-cost approach to the threat of rising seas would be to stop building along the most vulnerable coasts right now and move new development inland. Yet no one wants to do that, least of all coastal property owners. But there is little discussion of this alternative even among the true believers of a coming global warming apocalypse. Why not?

This and related questions have been asked recently by several writers, including Glenn Reynolds and economist Robin Hanson. There are alternatives to discouraging new construction along coasts. Other expensive abatement projects can be pursued, now and later, such as sea walls or even adding land mass excavated from the sea floor or inland. In fact, the prospect of damming the Strait of Gibraltar to protect Mediterranean coastlines has been discussed. The expense of such an unprecedented public works project is what prompted Hanson’s post. To the extent that such remedial projects are not funded privately, they represent social costs arising from coastal development.

The federal government still subsidizes flood insurance on many coastal properties, though efforts to phase-out this FEMA program have been underway for a few years. However, governments seem only too willing to undertake the investment in public infrastructure and ongoing maintenance made necessary by new coastal development. And like other development projects, tax abatements and other subsidies are still granted for coastal development. Why do these policies escape notice from coastal green elites?  Public outlays with private beneficiaries along threatened coasts are an immediate drain on resources, relieving private developers and property buyers of shoreline risk.

Reynolds (perhaps tongue-in-cheek) and Hanson suggest that new development should be taxed in coastal areas. That, and ending subsidies for development along coasts, is an economically and ecologically defensible alternative to the public expense of ubiquitous sea walls. However, a coastal tax might not be in the immediate interests of elites who claim that mankind faces an insurmountable global warming problem. Better to put off these sorts of remedial measures, especially while you can tax and regulate fossil fuels, and maybe live on the coast!

The position of the warmist community is that carbon emitters must cease and desist, in the hope that the seas will stop rising. They are willing to destroy entire industries (fossil fuels) in pursuit of their goals, but are unlikely to achieve them without inflicting drastic economic harm. If greens are so amenable to central control of economic activity and individual behavior (so long as they are at the controls), it would be prudent to take precautions now that will help to minimize the damage later. Discouraging coastal development with taxes and denial of subsidies is the sort of classic intervention that any Pigouvian planner should love. There is even evidence that sea levels have been much higher at times in the past. An earnest central planner might say that coastal development should always be discouraged to mitigate the risk of destruction.
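To make the Pigouvian logic concrete, such a tax would be set per unit of development at the expected, discounted external cost the development imposes on the public. Every number below is invented purely for illustration:

```python
# Hypothetical Pigouvian tax on coastal development: charge each unit its
# share of the expected public abatement cost it makes necessary.
# All figures below are invented for illustration only.
p_encroachment = 0.3           # assumed chance of serious sea encroachment
abatement_cost = 50_000_000    # assumed public seawall cost for the tract
units = 500                    # dwelling units in the proposed development
discount = 1.03 ** -50         # cost falls ~50 years out, 3% discount rate

tax_per_unit = p_encroachment * abatement_cost * discount / units
print(f"Pigouvian tax per unit: ${tax_per_unit:,.0f}")   # $6,843
```

The point is not the particular figure but the structure: the tax internalizes the expected social cost up front, instead of leaving taxpayers to fund sea walls after the fact.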

I am skeptical of alarmist claims, including those related to rising sea levels. In fact, the connection between carbon emissions, global temperatures and sea levels is not well established, and whether sea levels are rising due to human activity is a matter of some dispute. Furthermore, global sea ice extent is not declining dramatically, if at all, and the storied glacier melts have been greatly exaggerated. Climate activists pursue their agenda despite the gross inaccuracy of past carbon-forcing forecasts, the gaping uncertainty surrounding model predictions going forward, and the crushing expense of the measures they advocate. The expense, however, is not one that activists expect to compromise their own standard of living. They either assume that it will be borne by others or that their draconian prescriptions will usher in an era of “sustainability”, powered by new, renewable energy sources. Not many of these alarmists would boast that their policies can quickly reverse the sea level rises they’ve told us to fear, but they dare not suggest taxes on coastal development until they see more convincing evidence. At least that much is sensible, if ironic!


Blog at WordPress.com.

Passive Income Kickstart

OnlyFinance.net

TLC Cholesterol

Nintil

To estimate, compare, distinguish, discuss, and trace to its principal sources everything

kendunning.net

The Future is Ours to Create

DCWhispers.com

Hoong-Wai in the UK

A Commonwealth immigrant's perspective on the UK's public arena.

Marginal REVOLUTION

Small Steps Toward A Much Better World

Stlouis

Watts Up With That?

The world's most viewed site on global warming and climate change

Aussie Nationalist Blog

Commentary from a Paleoconservative and Nationalist perspective

American Elephants

Defending Life, Liberty and the Pursuit of Happiness

The View from Alexandria

In advanced civilizations the period loosely called Alexandrian is usually associated with flexible morals, perfunctory religion, populist standards and cosmopolitan tastes, feminism, exotic cults, and the rapid turnover of high and low fads---in short, a falling away (which is all that decadence means) from the strictness of traditional rules, embodied in character and inforced from within. -- Jacques Barzun

The Gymnasium

A place for reason, politics, economics, and faith steeped in the classical liberal tradition

A Force for Good

How economics, morality, and markets combine

Notes On Liberty

Spontaneous thoughts on a humble creed

troymo

SUNDAY BLOG Stephanie Sievers

Escaping the everyday life with photographs from my travels

Miss Lou Acquiring Lore

Gallery of Life...

Your Well Wisher Program

Attempt to solve commonly known problems…

Objectivism In Depth

Exploring Ayn Rand's revolutionary philosophy.

RobotEnomics

(A)n (I)ntelligent Future

Orderstatistic

Economics, chess and anything else on my mind.

Paradigm Library

OODA Looping

Scattered Showers and Quicksand

Musings on science, investing, finance, economics, politics, and probably fly fishing.

  • Subscribe Subscribed
    • Sacred Cow Chips
    • Join 128 other subscribers
    • Already have a WordPress.com account? Log in now.
    • Sacred Cow Chips
    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...