Sacred Cow Chips

Category Archives: Science

Conformity and Suppression: How Science Is Not “Done”

Thursday, January 26, 2023

Posted by Nuetzel in Political Bias, Science


Tags

Breakthrough Findings, Citation Politics, Citation Practices, Climate science, Conformist Science, Covid Lockdowns, Disruptive Science, Mary Wortley Montagu, Matt Ridley, NASA, Nature Magazine, Politicized Science, President Dwight Eisenhower, Public Health, Scientism, Scott Sumner, Steven F. Hayward, Wokeness

I’m not terribly surprised to learn that scientific advancement has slowed over my lifetime. A recent study published in the journal Nature documented a secular decline in the frequency of “disruptive” or “breakthrough” scientific research across a range of fields. Research has become increasingly dominated by “incremental” findings, according to the authors. The graphic below tells a pretty dramatic story:

The index values used in the chart range “from 1 for the most disruptive to -1 for the least disruptive.” The methodology used to assign these values, which summarize academic papers as well as patents, produces a few oddities. Why, for example, does the tech revolution of the last 40 years create barely a blip in the technology index in the chart above? And why have tech research and social science research always been more “disruptive” than other fields of study?
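For readers wondering what such an index actually captures: the Nature study relies on a citation-based “disruption” measure of the CD-index variety, in which a paper scores near +1 when later work cites it while ignoring the references it built upon, and near -1 when later work cites it together with those references. Here is a minimal sketch of that style of calculation in Python, with made-up inputs; it is my paraphrase of the general approach, not the authors’ code:

```python
def cd_index(citers_of_focal, citers_of_refs):
    """Minimal sketch of a CD-style "disruptiveness" index.

    citers_of_focal: set of later papers that cite the focal paper
    citers_of_refs:  set of later papers that cite any of the focal
                     paper's own references

    Returns a value in [-1, 1]: +1 means later work cites the paper
    but ignores its sources (disruptive); -1 means later work cites
    the paper together with its sources (consolidating).
    """
    only_focal = citers_of_focal - citers_of_refs    # cite paper, not its refs
    both       = citers_of_focal & citers_of_refs    # cite paper and its refs
    only_refs  = citers_of_refs - citers_of_focal    # cite only the refs
    total = len(only_focal) + len(both) + len(only_refs)
    if total == 0:
        return 0.0
    return (len(only_focal) - len(both)) / total

# Hypothetical example: 8 later papers cite the focal paper alone,
# 2 cite it alongside its references, and 5 cite only the references.
focal_citers = {f"p{i}" for i in range(10)}     # p0..p9
ref_citers   = {f"p{i}" for i in range(8, 15)}  # p8..p14
print(cd_index(focal_citers, ref_citers))       # 0.4
```

Averaging a measure like this over large corpora of papers and patents is what produces the trend lines in the chart, which is worth keeping in mind when interpreting the oddities noted above.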

Putting those questions aside, the Nature paper finds trends that are basically consistent across all fields. Apparently, systematic forces have led to declines in these measures of breakthrough scientific findings. The authors offer a few explanations for the forces at play: fewer researchers, incrementalism, and a growing role for large-team research that induces conformity. But if research has become more incremental, that’s more accurately described as a manifestation of the disease than as a cause.

Conformity

Steven F. Hayward skewers the authors a bit, perhaps unfairly, in voicing a concern held by many skeptics of current scientific practices. Hayward says the paper:

“… avoids the most significant and obvious explanation with the myopia of Inspector Clouseau, which is the deadly confluence of ideology and the increasingly narrow conformism of academic specialties.”

Conformism in science is nothing new, and it has often interfered with the advancement of knowledge. The earliest cases of suppression of controversial science were motivated by religious doctrine, but challenges to almost any scientific “consensus” seem to be looked upon as heresy. Several early cases of suppression are discussed here. Matt Ridley has described the case of Mary Wortley Montagu, who visited Ottoman Turkey in the early 1700s and witnessed the application of pus from smallpox blisters to small scratches on the skin of healthy subjects. The mild illness this induced led to immunity, but the British medical establishment ridiculed her. A similar fate was suffered by a Boston physician in 1721. Ridley says:

“Conformity is the enemy of scientific progress, which depends on disagreement and challenge. Science is the belief in the ignorance of experts, as [the physicist Richard] Feynman put it.”

When Was the Scientific Boom?

I couldn’t agree more with Hayward and Ridley on the damaging effects of conformity. But what gave rise to our recent slide into scientific conformity, and when did it begin? The Nature study on disruptive science used data on papers and patents starting in 1945. The peak year for disruptive science within the data set was … 1945, but the index values remained relatively high over the first two decades of the sample. Maybe those decades were very special for science, with a variety of applications and high-profile accomplishments that have gone unmatched since. As Scott Sumner says in an otherwise unrelated post, in many ways we’ve failed to live up to our own expectations:

“In retrospect, the 1950s seem like a pivotal decade. The Boeing 707, nuclear power plants, satellites orbiting Earth, glass walled skyscrapers, etc., all seemed radically different from the world of the 1890s. In contrast, airliners of the 2020s look roughly like the 707, we seem even less able to build nuclear power plants than in the 1960s, we seem to have a harder time getting back to the moon than going the first time, and we still build boring glass walled skyscrapers.”

It’s difficult to put the initial levels of the “disruptiveness” indices into historical context. We don’t know whether science was even more disruptive prior to 1945, or how the indices used by the authors of the Nature article would have captured it. And it’s impossible to say whether there is some “normal” level of disruptive research. Is a “normal” index value equal to zero, which we now approach as an asymptote?

Some incredible scientific breakthroughs occurred decades before 1945; Einstein’s theory of relativity is an obvious example. Perhaps the index value for the physical sciences would have been much higher at that time, had it been measured. Whether the immediate post-World War II era represented an all-time high in scientific disruption is anyone’s guess. Presumably, the world is always coming from a more primitive base of knowledge. Discoveries, however, usually lead to new and deeper questions. The authors of the Nature article acknowledge and attempt to test for the “burden” of a growing knowledge base on the productivity of subsequent research, and they find no effect. Nevertheless, it’s possible that the declining pattern after 1945 represents a natural decay following major “paradigm shifts” in the early twentieth century.

The Psychosis Now Known As “Wokeness”

The Nature study used papers and patents only through 2010. Therefore, the decline in disruptive science predates the revolution in “wokeness” we’ve seen over the past decade. But “wokeness” amounts to a radicalization of various doctrines that have been knocking around for years. The rise of social justice activism, critical theory, and anthropogenic global warming theology all began long before the turn of the century and had far-reaching effects that extended to the sciences. The recency of “wokeness” certainly doesn’t invalidate Hayward and Ridley when they note that ideology has a negative impact on research productivity. It’s likely, however, that some fields of study are relatively immune to the effects of politicization, such as the physical sciences. Surely other fields are more vulnerable, like the social sciences.

Citations: Not What They Used To Be?

There are other possible causes of the decline in disruptive science as measured by the Nature study, though the authors believe they’ve tested and found these explanations lacking. It’s possible that an increase in collaborative work led to a change in citation practices. For example, this study found that while self-citation has remained stable, citation of those within an author’s “collaboration network” has declined over time. Another paper identified a trend toward citing review articles in ecology journals rather than the research upon which those reviews were based, resulting in incorrect attribution of ideas and findings. That would directly reduce the measured “disruptiveness” of a given paper, but it’s not clear whether that trend extends to other fields.

Believe it or not, “citation politics” is a thing! It reflects the extent to which a researcher is expected to suck up to prominent authors in a field of study, or to anyone else who might be deemed potentially helpful or harmful. In a development that speaks volumes about trends in research productivity, authors are now urged to append a “Citation Diversity Statement” to their papers. Here’s an academic piece addressing the subject of “gendered citation practices” in contemporary physics. The 11 authors of that paper would do well to spend more time thinking about problems in physics than obsessing over whether their world is “unfair”.

Science and the State

None of those alternative explanations dispels my strong feeling that science has been politicized, and that politicization is harming our progress toward a better world. In fact, it usually leads us astray. Perhaps the most egregious example of politicized conformism today is climate science, though the health sciences went headlong toward a distinctly unhealthy conformism during the pandemic (and see this for a dark laugh).

Politicized science leads to both conformism and suppression. Here are several channels through which politicization might create these perverse tendencies and reduce research productivity or disruptiveness:

  • Political or agenda-driven research is guided by subjective criteria rather than objective inquiry and even-handed empiricism
  • Research funding via private or public grants is often contingent upon whether the research can be expected to support the objectives of the funding NGOs, agencies, or regulators. The gravy train is reserved for those who support the “correct” scientific narrative
  • Promotion or tenure decisions may be sensitive to the political implications of research
  • Government agencies have been known to block access to databases funded by taxpayers when a scientist wishes to investigate the “wrong questions”
  • Journals and referees have political biases that may influence the acceptance of research submissions, which in turn influences the research itself
  • The favorability of coverage by a politicized media influences researchers, who are sensitive to the damage the media can do to one’s reputation
  • The influence of government agencies on media treatment of scientific discussion has proven to be a potent force
  • The chance that one’s research might have a public policy impact is heavily influenced by politics
  • The talent sought and/or attracted to various fields may be diminished by the primacy of political considerations. Indoctrinated young activists generally aren’t the material from which objective scientists are made

Conclusion

In fairness, there is a great deal of wonderful science being conducted these days, despite the claims appearing in the Nature piece and the politicized corruption undermining good science in certain fields. Tremendous breakthroughs are taking place in areas of medical research such as cancer immunotherapy and diabetes treatment. Fusion energy is inching closer to reality. Space research is moving forward at a tremendous pace in both the public and private spheres, despite NASA’s clumsiness.

I’m sure there are several causes for the 70-year decline in scientific “disruptiveness” measured in the article in Nature. Part of that decline might have been a natural consequence of coming off an early twentieth-century burst of scientific breakthroughs. There might be other clues related to changes in citation practices. However, politicization has become a huge burden on scientific progress over the past decade. The most awful consequences of this trend include a huge misallocation of resources from industrial planning predicated on politicized science, and a meaningful loss of lives owing to the blind acceptance of draconian health policies during the Covid pandemic. When guided by the state or politics, what passes for science is often no better than scientism. There are, however, even in climate science and public health disciplines, many great scientists who continue to test and challenge the orthodoxy. We need more of them!

I leave you with a few words from President Dwight Eisenhower’s Farewell Address in 1961, in which he foresaw issues related to the federal funding of scientific research:

“Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

Myth Makers in Lab Coats

Friday, April 2, 2021

Posted by Nuetzel in Climate science, Research Bias, Science


Tags

Cambridge, Canonization Effect, Citation Bias, Climate Change, Climatology, Lee Jussim, Medical Science, Model Calibration, National Oceanic and Atmospheric Administration, Pandemic, Political Bias, Psychology Today, Publication Bias, Replication Crisis, Reporting Bias, Spin

The prestige of some elements of the science community has taken a beating during the pandemic due to hugely erroneous predictions, contradictory pronouncements, and misplaced confidence in interventions that have proven futile. We know that medical science has suffered from a replication crisis, and other areas of inquiry like climate science have been compromised by politicization. So it seemed timely when a friend sent me this brief exposition of how “scientific myths” are sometimes created, authored by Lee Jussim in Psychology Today. It’s a real indictment of the publication process in scientific journals, and one can well imagine the impact these biases have on journalists, who themselves are prone to exaggeration in their efforts to produce “hot” stories.

The graphic above appears in Jussim’s article, taken from a Cambridge study of reporting and citation biases in research on treatments for depression. But as Jussim asserts, the biases at play here are not “remotely restricted to antidepressant research”.

The first column of dots represents trial results submitted to journals for publication. A green dot signifies a positive result: the treatment or intervention was associated with significantly improved patient outcomes. The red dots are trials in which the results were either inconclusive or the treatment was associated with detrimental outcomes. The trials were split about equally between positive and non-positive findings, but far fewer of the trials with non-positive findings were published. From the study:

“While all but one of the positive trials (98%) were published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative.”

The third column shows that even within the set of published trials, certain negative results were NOT reported or secondary outcomes were elevated to primary emphasis:

“Ten negative trials, however, became ‘positive’ in the published literature, by omitting unfavorable outcomes or switching the status of the primary and secondary outcomes.”

The authors went further by classifying whether the published narrative put a “positive spin” on inconclusive or negative results (yellow dots):

“… only four (5%) of 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial.”

Finally, the last column represents citations of the published trials in subsequent research, where the size of the dots corresponds to different levels of citation:

“Compounding the problem, positive trials were cited three times as frequently as negative trials (92 v. 32 citations). … Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.”
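To see how these filters compound, here is a small back-of-the-envelope calculation in Python using only the figures quoted above; the counts at the first stage are inferred from the stated percentages (roughly 53 positive and 52 negative trials), so treat them as approximate:

```python
# Share of non-positive findings still visible at each stage of the
# pipeline described in the Cambridge study quoted above. Totals in
# the first row are inferred from the quoted percentages: the 52
# published positive trials were "all but one" (~53 positive), and
# the 25 published negative trials were 48% of the negatives (~52).
stages = [
    ("all trials (column 1)",       105, 52),
    ("published trials (column 2)",  77, 25),
    ("published without 'spin'",     77,  4),
]
for label, total, negative in stages:
    share = 100 * negative / total
    print(f"{label:30s} {negative:3d}/{total:3d} non-positive ({share:.0f}%)")

# Citation bias then piles on: per the quote above, positive trials
# drew 92 citations versus 32 for the negative ones.
```

Roughly half of the evidence base thus shrinks to about five percent of what a casual reader of the published literature would actually encounter.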

As Jussim concludes, it’s safe to say these biases are not confined to antidepressant research. He also writes of the “canonization effect”, which occurs when certain conclusions become widely accepted by scientists:

“It is not that [the] underlying research is ‘invalid.’ It is that [the] full scope of findings is mixed, but that the mixed nature of those findings does not make it into what gets canonized.”

I would say canonization applies more broadly across areas of research. For example, in climate research, empirics often take a back seat to theoretical models “calibrated” over short historical records. The theoretical models often incorporate “canonized” climate change doctrine which, on climatological timescales, can only be classified as speculative. Of course, the media and the public have difficulty distinguishing this practice from real empirics.

All this is compounded by the institutional biases introduced by the grant-making process, the politicization of certain areas of science (another source of publication bias), and mission creep within government bureaucracies. In fact, some of these agencies control the very data upon which much research is based (the National Oceanic and Atmospheric Administration, for example), and there is credible evidence that this information has been systematically distorted over time.

The authors of the Cambridge study discuss efforts to mitigate the biases in published research. Unfortunately, reforms have met with mixed success at best. The anti-depressant research reflects tendencies that are all too human and perhaps financially motivated. Add to that the political motivation underlying the conduct of broad areas of research and the dimensions of the problem seem almost insurmountable without a fundamental revolution of ethics within the scientific community. For now, the biases have made “follow the science” into something of a joke.

Certainty Laundering and Fake Science News

Wednesday, December 5, 2018

Posted by Nuetzel in Global Warming, Risk, Science


Tags

Ashe Schow, Certainty Laundering, Ceteris Paribus, Fake News, Fake Science, Fourth National Climate Assessment, Money Laundering, Point Estimates, Statistical Significance, Warren Meyer, Wildfires

Intriguing theories regarding all kinds of natural and social phenomena abound, but few if any of those theories can be proven with certainty or even validated at a high level of statistical significance. Yet we constantly see reports in the media about scientific studies purporting to prove one thing or another. Naturally, journalists pounce on interesting stories, and they can hardly be blamed when scientists themselves peddle “findings” that are essentially worthless. Unfortunately, the scientific community is doing little to police this kind of malpractice. And incredible as it seems, even principled scientists can be so taken with their devices that they promote uncertain results with few caveats.

Warren Meyer coined the term “certainty laundering” to describe a common form of scientific malpractice. Observational data is often uncontrolled and/or too thin to test theories with any degree of confidence. What’s a researcher to do in the presence of such great uncertainties? Start with a theoretical model in which X is true by assumption and choose parameter values that seem plausible. In all likelihood, the sparse data that exist cannot be used to reject the model on statistical grounds. The data are therefore “consistent with a model in which X is true”. Dramatic headlines are then within reach. Bingo!
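To make the statistical point concrete, here is a minimal back-of-the-envelope sketch in Python, using hypothetical numbers rather than anything from Meyer, of why a thin, noisy sample cannot reject a model with a sizable assumed effect any more than it can reject a model with no effect at all:

```python
import math
from scipy import stats

# Hypothetical setup: n noisy observations with spread sigma.
sigma = 2.0                       # assumed noise level in the data
n = 8                             # number of observations
se = sigma / math.sqrt(n)         # standard error of the sample mean
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * se          # half-width of a 95% confidence interval

print(f"95% CI half-width: +/- {half_width:.2f}")
# Any assumed effect within about +/- 1.7 of the sample mean cannot be
# rejected at the 5% level, and if the sample mean is small, neither
# can an effect of zero. "Consistent with a model in which X is true"
# is therefore a very weak claim when the data are this thin.
```

The same thin sample is equally “consistent with” models assuming half the effect, twice the effect, or none at all, which is exactly what makes the headline claim so hollow.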

The parallel drawn by Meyer between “certainty laundering” and the concept of money laundering is quite suggestive. The latter is a process by which economic gains from illegal activities are funneled through legal entities in order to conceal their subterranean origins. Certainty laundering is a process that may encompass the design of the research exercise, its documentation, and its promotion in the media. It conceals from attention the noise inherent in the data upon which the theory of X presumably bears.

Another tempting exercise that facilitates certainty laundering is to ask how much a certain outcome would have changed under some counterfactual circumstance, call it Z. For example, while atmospheric CO2 concentration increased by roughly one part per 10,000 (0.01%) over the past 60 years, Z might posit that the change did not take place. Then, given a model that embodies a “plausible” degree of global temperature sensitivity to CO2, one can calculate how different global temperatures would be today under that counterfactual. This creates a juicy but often misleading form of attribution. Meyer refers to this process as a way of “writing history”:

“Most of us are familiar with using computer models to predict the future, but this use of complex models to write history is relatively new. Researchers have begun to use computer models for this sort of retrospective analysis because they struggle to isolate the effect of a single variable … in their observational data.”

These “what-if-instead” exercises generally apply ceteris paribus assumptions inappropriately, presuming the dominant influence of a single variable while ignoring other empirical correlations that might have countervailing effects. The exercise usually culminates in a point estimate of the change “implied” by X, with no mention of possible errors in the estimated sensitivity or of the range of outcomes implied by model uncertainty. In many such cases, the actual model and its parameters have not been validated under strict statistical criteria.
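To illustrate, with made-up numbers rather than anything drawn from the studies Meyer critiques, here is a short Python sketch of how uncertainty in a single “sensitivity” parameter translates into a wide range of counterfactual outcomes, even though only the central number tends to make the headline:

```python
import numpy as np

# Toy "what-if-instead" attribution. All numbers are hypothetical: a
# sensitivity parameter with a central estimate of 0.8 and a wide
# uncertainty (std. dev. 0.4), applied to an assumed change in X.
rng = np.random.default_rng(1)
delta_x = 1.0                                         # assumed change in X
sensitivity = rng.normal(loc=0.8, scale=0.4, size=100_000)

attributed = sensitivity * delta_x                    # counterfactual change
print(f"headline point estimate: {0.8 * delta_x:.2f}")
print(f"5th-95th percentile:     {np.percentile(attributed, 5):.2f} "
      f"to {np.percentile(attributed, 95):.2f}")
# The headline reports 0.80; the 90% range runs from roughly 0.14 to
# 1.46, and that is before accounting for any uncertainty in the
# structure of the model itself.
```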

Meyer goes on to describe a climate study from 2011 that was quite blatant about its certainty laundering approach. He provides the following quote from the study:

“These question[s] cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

At the time, Meyer wrote the following critique:

“[Note the first and last sentences of this paragraph] First, that there is not sufficiently extensive and accurate observational data to test a hypothesis. BUT, then we will create a model, and this model is validated against this same observational data. Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen. If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?”

In “Imprecision and Unsettled Science”, I wrote about the process of calculating global surface temperatures. That process is plagued by poor-quality data and pervasive uncertainty, yet many climate scientists and the media seem completely unaware of these problems. They view global and regional temperature data as infallible, but in reality these aggregated readings should be recognized as point estimates with wide error bands. Those bands imply that the conclusions of any research utilizing aggregate temperature data are subject to tremendous uncertainty. Unfortunately, that fact doesn’t get much play.

As Ashe Schow explains, junk science is nothing new. Successful replication rates of study results in most fields are low, and the increasing domination of funding sources by government tends to promote research efforts supporting the preferred narratives of government bureaucrats.

But perhaps we’re not being fair to the scientists, or most scientists at any rate. One hopes that the vast majority theorize with the legitimate intention of explaining phenomena. The unfortunate truth is that adequate data for testing theories is hard to come by in many fields. Fair enough, but Meyer puts his finger on a bigger problem: One simply cannot count on the media to apply appropriate statistical standards in vetting such reports. Here’s his diagnosis of the problem in the context of the Fourth National Climate Assessment and its estimate of the impact of climate change on wildfires:

“The problem comes further down the food chain:

  1. When the media, and in this case the US government, uses this analysis completely uncritically and without any error bars to pretend at certainty — in this case that half of the recent wildfire damage is due to climate change — that simply does not exist
  2. And when anything that supports the general theory that man-made climate change is catastrophic immediately becomes — without challenge or further analysis — part of the ‘consensus’ and therefore immune from criticism.”

That is a big problem for science and society. A striking point estimate is often presented without adequate emphasis on the degree of noise that surrounds it. Indeed, even given a range of estimates, the top number is almost certain to be stressed more heavily. Unfortunately, the incentives facing researchers and journalists are skewed toward this sort of misplaced emphasis. Scientists and other researchers are not immune to the lure of publicity and the promise of policy influence. Sensational point estimates have additional value if they support an agenda that is of interest to those making decisions about research funding. And journalists, who generally are not qualified to make judgements about the quality of scientific research, are always eager for a good story. Today, the spread of bad science, and bad science journalism, is all the more virulent as it is propagated by social media.

The degree of uncertainty underlying a research result just doesn’t sell, but it is every bit as crucial to policy debate as a point estimate of the effect. Policy decisions have expected costs and benefits, but the costs are often front-loaded and more certain than the hoped-for benefits. Any valid cost-benefit analysis must account for uncertainties, but once a narrative gains steam, this sort of rationality is too often cast to the wind. Cascades in public opinion and political momentum are all too vulnerable to the guiles of certainty laundering. Trends of this kind are difficult to reverse and are especially costly if the laundered conclusions are wrong.
