Table of Contents
- The Blind Spot: When “Evidence-Based” Forgets the “Based”
- Priors 101: “Maybe” Is Not Automatically 50%
- Why Alternative Medicine Attracts Low Priors Like a Magnet Attracts… Well, Iron
- How “Positive” Trials Happen When the Treatment Doesn’t
- What a Prior-Savvy Evidence Standard Looks Like
- A Practical Guide for Patients (and Busy Humans Who Don’t Want a PhD in Statistics)
- Conclusion: The “Dirty Little Secret” Isn’t Dirty; It’s Just Math
- Experiences in the Wild: How Prior Probability Shows Up in Real Life (and Usually at the Worst Time)
“Evidence-based alternative medicine” sounds like the best of both worlds: the warm fuzzies of “natural” plus the cold hard
receipts of science. Who wouldn’t want that? The catch is that evidence doesn’t float in a vacuum. It lives in a
world where some claims start out more likely than others. And that awkward, mathy, party-pooping reality has a name:
prior probability.
Prior probability is the quiet bouncer at the club of medical knowledge. You can show up with a flashy “statistically significant”
p-value and a confident grin, but the bouncer still asks: “Yeah… but how likely was this to be true before you ran the study?”
If the answer is “about as likely as a goldfish driving a forklift,” then your evidence needs to be extraordinary, not
just “p < 0.05 and vibes.”
This matters most in the places where alternative medicine tends to camp out: claims with weak mechanisms, stretchy definitions
of “works,” and a long tradition of explaining away negative results. If you’ve ever wondered how a treatment can “have studies”
yet still not be convincing, welcome to the part where we talk about the base rates, the biases, and the brutal honesty of Bayes.
The Blind Spot: When “Evidence-Based” Forgets the “Based”
Traditional evidence-based medicine (EBM) brought a huge upgrade: randomized controlled trials, systematic reviews, and an insistence
on measuring outcomes rather than merely admiring theories. But EBM can develop a blind spot when it treats every hypothesis like it
starts at 50/50, as if the universe flips a coin every time you propose a treatment.
Science-based medicine (SBM) pushes back on that idea. It says: evidence is not just a spreadsheet of results; it’s
evidence interpreted through scientific knowledge: biology, chemistry, physiology, and what we already know about how the
world behaves. A claim that fits well with established science begins with a higher prior probability than a claim that requires
rewriting physics on a napkin.
In “evidence-based alternative medicine,” the blind spot becomes a lifestyle. Implausible treatments (homeopathy, “energy healing,”
distant prayer as a measurable medical intervention) are often presented as if they’re merely underfunded underdogs. But
the real issue isn’t funding. It’s that the claims start with extremely low plausibility.
Priors 101: “Maybe” Is Not Automatically 50%
Prior probability is simply your best estimate of how likely a claim is before new data arrives. In Bayesian reasoning, you
combine that prior with the study’s results to get a posterior probability: how confident you should be after seeing the
evidence.
Here’s the punchline: when prior probability is low, modest evidence won’t move the needle much. And in fields where
lots of hypotheses are wrong (or are “wrong-ish,” meaning any effect is tiny or inconsistent), “positive” studies can be
mathematically expected even when nothing real is happening.
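That update can be sketched in a few lines. This is a minimal illustration, assuming a study is summarized only as “positive” with a given power and false-positive rate; the function name and numbers are mine, not from any particular trial:

```python
def posterior_given_positive(prior, power=0.80, alpha=0.05):
    """P(claim is true | one positive study), via Bayes' rule.

    power = P(positive study | claim true), alpha = P(positive study | claim false).
    """
    true_pos = prior * power          # joint prob: claim true AND study positive
    false_pos = (1 - prior) * alpha   # joint prob: claim false AND study positive
    return true_pos / (true_pos + false_pos)

# A low prior barely moves; a 50/50 prior moves a lot.
print(round(posterior_given_positive(0.01), 3))  # 0.139
print(round(posterior_given_positive(0.50), 3))  # 0.941
```

The same “positive” result takes a 50% prior to 94%, but a 1% prior only to about 14%. That asymmetry is the whole argument in miniature.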
A simple numeric gut-check
Imagine 1,000 different “alternative” claims being tested: herbs, magnets, detox foot baths, crystals with leadership qualities.
Let’s be generous and say only 10 of those claims are actually effective in a meaningful way (a 1% prior that a randomly
chosen claim is true). Now run studies with:
- 80% power (pretty good) to detect a real effect
- 5% false-positive rate (the classic p < 0.05 threshold)
Out of 10 true treatments, about 8 will show up “positive.” Great. But out of 990 false treatments, about 50 will also show up
“positive” just by chance. That’s roughly 58 positive results, and only 8 of them are real. In this scenario,
a “statistically significant” finding is a mirage about 86% of the time.
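A quick Monte Carlo sketch of that scenario confirms the arithmetic; the seed and trial counts are arbitrary choices of mine:

```python
import random

random.seed(42)
POWER, ALPHA = 0.80, 0.05            # detection power and false-positive rate
TRUE_CLAIMS, FALSE_CLAIMS = 10, 990  # 1% of 1,000 claims are actually real

def one_world():
    """Run one study per claim; count how many come out 'positive'."""
    tp = sum(random.random() < POWER for _ in range(TRUE_CLAIMS))
    fp = sum(random.random() < ALPHA for _ in range(FALSE_CLAIMS))
    return tp, fp

# Average over many simulated worlds to smooth out the noise.
worlds = [one_world() for _ in range(2000)]
avg_tp = sum(tp for tp, _ in worlds) / len(worlds)
avg_fp = sum(fp for _, fp in worlds) / len(worlds)
print(avg_tp, avg_fp)  # roughly 8 true positives vs roughly 49.5 false positives
```

Most of the “significant” results in this world are false, exactly as the back-of-the-envelope numbers predict.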
This is one reason why replication, preregistration, transparent methods, and realistic priors matter. It’s also why
“There are studies!” is not the same thing as “There is good evidence.”
Why Alternative Medicine Attracts Low Priors Like a Magnet Attracts… Well, Iron
Not everything labeled “alternative” is equally implausible. Some approaches (like exercise, mindfulness, and certain nutrition
strategies) have mechanisms that make sense and measurable effects that pass the sniff test. But many marquee alternative claims are
famous precisely because they promise a lot while asking science for very little, especially when it comes to mechanism.
Homeopathy: Dilution That Dilutes Believability
Homeopathy claims that ultra-diluted substances can treat illness, often diluted beyond the point where any molecules of the original
substance remain. The proposed explanations (“water memory,” “succussion” as magical activation) clash with basic chemistry and physics.
That doesn’t automatically make it false, but it does make the prior probability extremely low.
And then the evidence arrives. When higher-quality trials and systematic reviews are considered, homeopathy generally fails to show
effects beyond placebo. That pattern (strong claims, weak mechanism, and outcomes that fade as trial quality improves) is exactly what
low priors predict.
Energy Healing & Reiki: The Mechanism Is Mostly “Trust Me, Bro”
Reiki and other “biofield” therapies often propose an undetectable energy that can be manipulated to heal. The problem isn’t that
science is “closed-minded.” It’s that extraordinary claims require either extraordinary measurements (show the energy) or extraordinary
outcomes (show large, consistent clinical effects under rigorous blinding). In practice, the research base is mixed and often limited
by small sample sizes, subjective outcomes, and difficulty maintaining true blinding.
That doesn’t mean people never feel better after a Reiki session. Relaxation, attention, caring touch (or near-touch), and expectation
can be powerful. But those forces are better explained by psychology and context than by a new form of physics that only appears when
appointment slots are available.
Acupuncture: A Mixed Case With Modest Effects
Acupuncture is complicated: some proposed mechanisms (like neuromodulation, endorphin release, or counter-irritation) are more plausible
than “meridians” as literal anatomical channels. Large analyses have found that acupuncture can outperform usual care and sometimes sham,
but the differences between “true” and “sham” are often modest, suggesting that nonspecific effects (ritual, expectation,
provider interaction) contribute substantially.
In other words: acupuncture may offer benefit for certain pain conditions, but the story is rarely “ancient energy maps proven correct.”
It’s more “a complex intervention with some measurable effects and a lot of context doing heavy lifting.” That’s not an insult; it’s
simply an accurate description of why priors (and precise claims) matter.
How “Positive” Trials Happen When the Treatment Doesn’t
If you only take one thing from this article, take this: a positive study is not a verdict. It’s a clue. And the quality
of that clue depends on how the study was designed, analyzed, and reported.
P-values are not truth meters
A p-value does not tell you the probability a treatment works. It tells you how surprising your data would be if a specified
model (often “no effect”) were true. That difference is not pedantic; it’s the whole game. Even major statistical organizations have
warned against treating “p < 0.05” like a magic stamp of reality.
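You can see the distinction by simulating trials where the treatment does nothing at all. This sketch assumes both arms are drawn from the same distribution with a known standard deviation, so a simple z-test suffices; all numbers are illustrative:

```python
import math
import random

random.seed(1)

def null_trial(n=50):
    """Two arms drawn from the SAME distribution: the 'treatment' does nothing."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2 / n)                    # sd is known to be 1 here
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value for the z-test

pvals = [null_trial() for _ in range(4000)]
frac_sig = sum(p < 0.05 for p in pvals) / len(pvals)
print(f"{frac_sig:.1%} of null trials came out 'significant'")  # close to 5%
```

About 1 in 20 of these trials is “significant” even though every single one tests a treatment with zero effect. The p-value behaved exactly as designed; it just never promised to tell you whether the treatment works.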
Common ways weak studies manufacture “evidence”
- Small sample sizes: Underpowered trials bounce around wildly, making dramatic-looking results more likely by chance.
- Flexible outcomes: Measure 20 things, celebrate the 1 that “worked,” and quietly forget the other 19.
- P-hacking: Try multiple analyses until the numbers behave. (Numbers are people-pleasers if you pressure them enough.)
- Publication bias: Positive results get published; negative ones vanish into the file drawer like socks in a dryer.
- Blinding failure: If participants can guess their group, expectation can masquerade as efficacy.
- Subjective endpoints: Pain and mood are real, but they’re also highly sensitive to placebo effects and context.
None of this is exclusive to alternative medicine. But alternative medicine is where these problems can become
structurally rewarded, because a low-prior claim needs a lot of help to look impressive. A fragile “signal” can be kept
alive through a cycle of small, positive trials and enthusiastic interpretations, especially when marketing arrives faster than replication.
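The “flexible outcomes” item is worth quantifying. Assuming 20 independent outcome measures (a simplification; correlated outcomes change the exact number), the chance that at least one is a false positive at the 0.05 threshold is:

```python
outcomes, alpha = 20, 0.05
# Chance that at least one of 20 independent outcomes is a false positive,
# even when the treatment does nothing for any of them.
p_at_least_one = 1 - (1 - alpha) ** outcomes
print(f"{p_at_least_one:.0%}")  # 64%
```

Measure enough things and “something worked” becomes the expected result, not a discovery.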
What a Prior-Savvy Evidence Standard Looks Like
A smarter approach doesn’t “ban” alternative ideas. It simply demands that claims earn belief in proportion to how wild they are.
The more a claim conflicts with established science, the more stringent the evidence must be.
1) Start with plausibility, not prejudice
Plausibility isn’t about vibes or tradition. It’s about whether a claim fits with what we know about physiology and chemistry.
If a proposed mechanism requires new forces, new particles, or water with a secret diary, your prior probability is low.
2) Use stage-gated research
Some research frameworks emphasize moving from basic mechanisms and feasibility to larger confirmatory trials only when earlier stages
justify it. That prevents the common “skip straight to clinical trials and hope the stats do magic” strategy.
3) Demand outcomes that matter
A treatment should show improvements that are clinically meaningful, not just statistically detectable. “Two points better on a
100-point scale” might be real, and still not worth your time, money, or risk.
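A quick sketch of how that happens, assuming a known standard deviation of about 20 points on the scale (a z-test simplification; the sample size and numbers are hypothetical):

```python
import math

def two_sided_p(effect, sd, n_per_arm):
    """Two-sided p-value for a difference in means; sd treated as known."""
    se = sd * math.sqrt(2 / n_per_arm)
    return math.erfc((effect / se) / math.sqrt(2))

# 2 points on a 100-point scale with sd ~ 20: standardized effect d = 0.1
p = two_sided_p(effect=2, sd=20, n_per_arm=2000)
print(f"p = {p:.4f}")  # well under 0.05, yet the benefit is still just 2 points
```

With a big enough sample, almost any nonzero effect becomes “statistically significant.” Significance answers “is it probably not zero?”, not “is it worth anything?”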
4) Make replication the price of admission
One study is a first date. Replication is moving in together and realizing you still like each other when the dishwasher breaks.
For low-prior claims, consistent replication across independent teams is not “unfair.” It’s essential.
A Practical Guide for Patients (and Busy Humans Who Don’t Want a PhD in Statistics)
You don’t need to calculate Bayes factors at the pharmacy aisle. You just need a few grounded questions that quietly encode prior
probability.
- What’s the claim, exactly? “Helps immunity” is vague; “reduces migraine days by 2 per month” is testable.
- Does the mechanism make sense? If it contradicts basic science, demand stronger evidence.
- How good are the studies? Look for preregistration, adequate sample size, blinding, and replication.
- Is the benefit meaningful? Small effects can matter, but don’t confuse “detectable” with “life-changing.”
- What are the risks and opportunity costs? Money, delays in effective care, interactions (especially with supplements), false reassurance.
- What do credible medical institutions say? Not influencers. Institutions with reputations to lose.
The goal isn’t cynicism; it’s calibration. You can be open-minded without being so open-minded that your brain falls out and rolls
under the couch.
Conclusion: The “Dirty Little Secret” Isn’t Dirty; It’s Just Math
Prior probability feels rude because it refuses to flatter our hopes. It doesn’t care that a treatment is “natural,” “ancient,” or
sold in a calming shade of green. It cares about how reality works and how often bold claims have historically been wrong.
“Evidence-based alternative medicine” often tries to win by technicality: a p-value here, a pilot study there, a headline everywhere.
But when priors are low, the bar must be higher. That isn’t bias against alternative ideas. It’s respect for the difference between
finding patterns and finding truth.
The good news is that this standard protects everyone (patients, clinicians, and even researchers) from getting emotionally attached to
noise. Prior probability isn’t a killjoy. It’s a safety feature.
Experiences in the Wild: How Prior Probability Shows Up in Real Life (and Usually at the Worst Time)
Let’s take this out of the math classroom and drop it into the messy world where people have back pain, deadlines, and a suspicious
rash that appeared right before vacation. Prior probability isn’t just a philosophical stance; it’s a practical survival skill.
The “My Friend Swears By It” Moment
You’ve seen it: someone says a remedy “worked instantly,” and your brain starts building a tiny shrine to hope. But personal experience
is a chaotic dataset. Symptoms fluctuate. Many conditions improve naturally. People try three things at once and credit the last one.
And placebo effects are especially strong for pain, stress, sleep, and nausea: symptoms your brain can modulate.
A prior-savvy response isn’t to dismiss your friend. It’s to translate the story into a better question: “Is there consistent evidence
this works beyond expectation and natural recovery?” If the claim is highly implausible, you quietly raise your standards. If the
claim is plausible and low-risk, you might be more willing to experiment, while still tracking results honestly.
The “Well, It Can’t Hurt” Trap
This phrase has launched a thousand regrettable purchases. Even when a therapy is physically safe, it can still “hurt” through
opportunity cost: delaying effective treatment, draining money, or creating false reassurance. Supplements can interact with medications.
Unregulated products can vary in quality. And some “natural” approaches can be surprisingly potent, because “natural” includes hemlock.
Prior probability sharpens this. If a claim has a low prior and weak evidence, “can’t hurt” is not a free pass. It’s a request to
calculate the non-obvious harms: missed diagnoses, delayed care, or the slow creep of medical misinformation into future decisions.
The Clinician’s Inbox Experience: New Study, Big Headline, Small Effect
Imagine you’re a clinician skimming a study: “Integrative therapy X improves outcomes.” The abstract looks exciting. The sample size is
40. The main outcome is a self-reported scale. The effect is statistically significant by a hair. The authors sound thrilled. Your
calendar is full. Your patient wants an answer now.
This is where priors act like a mental triage tool. If the therapy is biologically plausible, you might say: “Promising, but preliminary;
let’s watch for replication.” If it’s wildly implausible, you might say: “Interesting result, but likely noise; show me bigger, better,
replicated trials with strong blinding.” That’s not cynicism. That’s Bayesian hygiene, like washing your hands after touching a doorknob
in flu season.
The Integrative Clinic Experience: The Ritual Is Doing Something Real
Many people describe feeling cared for in settings that offer extended appointments, soothing environments, and a practitioner who
listens without rushing. That experience can be genuinely therapeutic. It can reduce stress, improve adherence, and help people feel
more in control. Those benefits are real, and they don’t require mystical explanations.
Prior probability helps you separate the value of the care context from the claim of a specific mechanism.
You can appreciate the ritual without pretending the ritual proves water has memory. You can seek supportive care and still insist that
disease-modifying claims meet high evidentiary standards.
The “I Just Want to Feel Better” Experience (which is valid, by the way)
Sometimes you’re not chasing a miracle cure. You’re chasing relief. If a low-risk intervention helps you relax, sleep better, or feel
supported, that can be worthwhile, especially when you treat it as complementary and keep your expectations honest. The key is not to
let comfort morph into certainty.
The most grounded approach often looks like this: prioritize treatments with strong evidence for disease outcomes, use low-risk supportive
practices for symptom relief, track what changes, and remain willing to update your beliefs when better evidence arrives. That is the
grown-up version of being open-minded.
