Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”, Continued
Everyday Software, Everyday Joy — Wed, 18 Mar 2026

“Evidence-based alternative medicine” sounds reassuring, until you remember that evidence has to be interpreted in context. This deep dive unpacks prior probability (the Bayesian “base rate” that a claim is likely before a study is run) and explains why low-plausibility therapies can generate plenty of “positive” trials without proving much. Using clear examples (homeopathy, energy healing, and acupuncture), we explore how p-values get overhyped, how bias and flexible analyses create false positives, and why replication is the real test. You’ll also get practical, patient-friendly questions for evaluating any claim: plausibility, study quality, effect size, risk, and opportunity cost. If you want to stay open-minded without falling for statistical illusions, this is your roadmap.

The post Prior Probability: the Dirty Little Secret of “Evidence-Based Alternative Medicine”, Continued appeared first on Everyday Software, Everyday Joy.


“Evidence-based alternative medicine” sounds like the best of both worlds: the warm fuzzies of “natural” plus the cold hard
receipts of science. Who wouldn’t want that? The catch is that evidence doesn’t float in a vacuum. It lives in a
world where some claims start out more likely than others. And that awkward, mathy, party-pooping reality has a name:
prior probability.

Prior probability is the quiet bouncer at the club of medical knowledge. You can show up with a flashy “statistically significant”
p-value and a confident grin, but the bouncer still asks: “Yeah… but how likely was this to be true before you ran the study?”
If the answer is “about as likely as a goldfish driving a forklift,” then your evidence needs to be extraordinary, not
just “p < 0.05 and vibes.”

This matters most in the places where alternative medicine tends to camp out: claims with weak mechanisms, stretchy definitions
of “works,” and a long tradition of explaining away negative results. If you’ve ever wondered how a treatment can “have studies”
yet still not be convincing, welcome to the part where we talk about the base rates, the biases, and the brutal honesty of Bayes.

The Blind Spot: When “Evidence-Based” Forgets the “Based”

Traditional evidence-based medicine (EBM) brought a huge upgrade: randomized controlled trials, systematic reviews, and an insistence
on measuring outcomes rather than merely admiring theories. But EBM can develop a blind spot when it treats every hypothesis like it
starts at 50/50, as if the universe flips a coin every time you propose a treatment.

Science-based medicine (SBM) pushes back on that idea. It says: evidence is not just a spreadsheet of results; it’s
evidence interpreted through scientific knowledge: biology, chemistry, physiology, and what we already know about how the
world behaves. A claim that fits well with established science begins with a higher prior probability than a claim that requires
rewriting physics on a napkin.

In “evidence-based alternative medicine,” the blind spot becomes a lifestyle. Implausible treatments (homeopathy, “energy healing,”
distant prayer as a measurable medical intervention) are often presented as if they’re merely underfunded underdogs. But
the real issue isn’t funding. It’s that the claims start with extremely low plausibility.

Priors 101: “Maybe” Is Not Automatically 50%

Prior probability is simply your best estimate of how likely a claim is before new data arrives. In Bayesian reasoning, you
combine that prior with the study’s results to get a posterior probability: how confident you should be after seeing the
evidence.

Here’s the punchline: when prior probability is low, modest evidence won’t move the needle much. And in fields where
lots of hypotheses are wrong (or are “wrong-ish,” meaning any effect is tiny or inconsistent), “positive” studies can be
mathematically expected even when nothing real is happening.

A simple numeric gut-check

Imagine 1,000 different “alternative” claims being tested: herbs, magnets, detox foot baths, crystals with leadership qualities.
Let’s be generous and say only 10 of those claims are actually effective in a meaningful way (a 1% prior that a randomly
chosen claim is true). Now run studies with:

  • 80% power (pretty good) to detect a real effect
  • 5% false-positive rate (the classic p < 0.05 threshold)

Out of 10 true treatments, about 8 will show up “positive.” Great. But out of the 990 false treatments, about 50 will also show up
“positive” just by chance. That’s 58 positive results, and only 8 of them are real. In this scenario, roughly six out of every seven
“statistically significant” findings are mirages.
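The arithmetic above is just Bayes’ rule, and you can check it in a few lines of Python. This is a minimal sketch; the 1% prior, 80% power, and 5% false-positive rate are the illustrative numbers from this thought experiment, not measured values:

```python
# Chance that a "positive" study reflects a real effect, given a prior,
# the study's power, and its false-positive rate (Bayes' rule).
def positive_predictive_value(prior, power, alpha):
    true_positives = prior * power          # real effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects flagged by chance
    return true_positives / (true_positives + false_positives)

# The thought experiment's numbers: 1% prior, 80% power, p < 0.05.
ppv = positive_predictive_value(prior=0.01, power=0.80, alpha=0.05)
print(f"Chance a positive result is real: {ppv:.0%}")  # about 14%
```

Raising the prior changes everything: with the same power and threshold, a 50% prior gives a posterior above 90%. That is why plausibility, not just the p-value, decides how far a positive trial should move you.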

This is one reason why replication, preregistration, transparent methods, and realistic priors matter. It’s also why
“There are studies!” is not the same thing as “There is good evidence.”

Why Alternative Medicine Attracts Low Priors Like a Magnet Attracts… Well, Iron

Not everything labeled “alternative” is equally implausible. Some approaches (like exercise, mindfulness, and certain nutrition
strategies) have mechanisms that make sense and measurable effects that pass the sniff test. But many marquee alternative claims are
famous precisely because they promise a lot while asking science for very little, especially when it comes to mechanism.

Homeopathy: Dilution That Dilutes Believability

Homeopathy claims that ultra-diluted substances can treat illness, often diluted beyond the point where any molecules of the original
substance remain. The proposed explanations (“water memory,” “succussion” as magical activation) clash with basic chemistry and physics.
That doesn’t automatically make it false, but it does make the prior probability extremely low.

And then the evidence arrives. When higher-quality trials and systematic reviews are considered, homeopathy generally fails to show
effects beyond placebo. That pattern (strong claims, weak mechanism, and outcomes that fade as trial quality improves) is exactly what
low priors predict.

Energy Healing & Reiki: The Mechanism Is Mostly “Trust Me, Bro”

Reiki and other “biofield” therapies often propose an undetectable energy that can be manipulated to heal. The problem isn’t that
science is “closed-minded.” It’s that extraordinary claims require either extraordinary measurements (show the energy) or extraordinary
outcomes (show large, consistent clinical effects under rigorous blinding). In practice, the research base is mixed and often limited
by small sample sizes, subjective outcomes, and difficulty maintaining true blinding.

That doesn’t mean people never feel better after a Reiki session. Relaxation, attention, caring touch (or near-touch), and expectation
can be powerful. But those forces are better explained by psychology and context than by a new form of physics that only appears when
appointment slots are available.

Acupuncture: A Mixed Case With Modest Effects

Acupuncture is complicated: some proposed mechanisms (like neuromodulation, endorphin release, or counter-irritation) are more plausible
than “meridians” as literal anatomical channels. Large analyses have found that acupuncture can outperform usual care and sometimes sham,
but the differences between “true” and “sham” are often modest, suggesting that nonspecific effects (ritual, expectation,
provider interaction) contribute substantially.

In other words: acupuncture may offer benefit for certain pain conditions, but the story is rarely “ancient energy maps proven correct.”
It’s more “a complex intervention with some measurable effects and a lot of context doing heavy lifting.” That’s not an insult; it’s
simply an accurate description of why priors (and precise claims) matter.

How “Positive” Trials Happen When the Treatment Doesn’t

If you only take one thing from this article, take this: a positive study is not a verdict. It’s a clue. And the quality
of that clue depends on how the study was designed, analyzed, and reported.

P-values are not truth meters

A p-value does not tell you the probability a treatment works. It tells you how surprising your data would be if a specified
model (often “no effect”) were true. That difference is not pedantic; it’s the whole game. Even major statistical organizations have
warned against treating “p < 0.05” like a magic stamp of reality.
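That 5% false-positive machinery is easy to watch in action. The standard-library simulation below is a rough sketch, not a rigorous test: it compares two groups drawn from the same distribution with a crude z-test, and trials of a treatment with zero real effect still cross the p < 0.05 line about 5% of the time:

```python
import random
import statistics

random.seed(42)  # reproducible run

def null_trial(n=30):
    """One fake trial where both groups come from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Crude two-sample z-test on the difference in means.
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # roughly p < 0.05, two-sided

trials = 2000
hits = sum(null_trial() for _ in range(trials))
print(f"'Significant' results with zero real effect: {hits / trials:.1%}")
```

Run enough null trials and “significant” results appear right on schedule; nothing real is required.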

Common ways weak studies manufacture “evidence”

  • Small sample sizes: Underpowered trials bounce around wildly, making dramatic-looking results more likely by chance.
  • Flexible outcomes: Measure 20 things, celebrate the 1 that “worked,” and quietly forget the other 19.
  • P-hacking: Try multiple analyses until the numbers behave. (Numbers are people-pleasers if you pressure them enough.)
  • Publication bias: Positive results get published; negative ones vanish into the file drawer like socks in a dryer.
  • Blinding failure: If participants can guess their group, expectation can masquerade as efficacy.
  • Subjective endpoints: Pain and mood are real, but they’re also highly sensitive to placebo effects and context.
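The “flexible outcomes” bullet above can be quantified directly. Assuming the outcomes are independent (a simplification), the chance that at least one clears p < 0.05 purely by chance grows fast with the number of things measured:

```python
# Probability that at least one of n independent null outcomes
# comes out "significant" at the given threshold.
def chance_of_any_false_positive(n_outcomes, alpha=0.05):
    return 1 - (1 - alpha) ** n_outcomes

for k in (1, 5, 20):
    pct = chance_of_any_false_positive(k)
    print(f"{k:>2} outcomes -> {pct:.0%} chance of a false 'win'")
```

With 20 outcomes, the odds of a spurious “win” are roughly 64%, which is why a paper celebrating 1 hit out of 20 measures is telling you less than it seems.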

None of this is exclusive to alternative medicine. But alternative medicine is where these problems can become
structurally rewarded, because a low-prior claim needs a lot of help to look impressive. A fragile “signal” can be kept
alive through a cycle of small, positive trials and enthusiastic interpretations, especially when marketing arrives faster than replication.

What a Prior-Savvy Evidence Standard Looks Like

A smarter approach doesn’t “ban” alternative ideas. It simply demands that claims earn belief in proportion to how wild they are.
The more a claim conflicts with established science, the more stringent the evidence must be.

1) Start with plausibility, not prejudice

Plausibility isn’t about vibes or tradition. It’s about whether a claim fits with what we know about physiology and chemistry.
If a proposed mechanism requires new forces, new particles, or water with a secret diary, your prior probability is low.

2) Use stage-gated research

Some research frameworks emphasize moving from basic mechanisms and feasibility to larger confirmatory trials only when earlier stages
justify it. That prevents the common “skip straight to clinical trials and hope the stats do magic” strategy.

3) Demand outcomes that matter

A treatment should show improvements that are clinically meaningful, not just statistically detectable. “Two points better on a
100-point scale” might be real, and still not worth your time, money, or risk.

4) Make replication the price of admission

One study is a first date. Replication is moving in together and realizing you still like each other when the dishwasher breaks.
For low-prior claims, consistent replication across independent teams is not “unfair.” It’s essential.

A Practical Guide for Patients (and Busy Humans Who Don’t Want a PhD in Statistics)

You don’t need to calculate Bayes factors at the pharmacy aisle. You just need a few grounded questions that quietly encode prior
probability.

  • What’s the claim, exactly? “Helps immunity” is vague; “reduces migraine days by 2 per month” is testable.
  • Does the mechanism make sense? If it contradicts basic science, demand stronger evidence.
  • How good are the studies? Look for preregistration, adequate sample size, blinding, and replication.
  • Is the benefit meaningful? Small effects can matter, but don’t confuse “detectable” with “life-changing.”
  • What are the risks and opportunity costs? Money, delays in effective care, interactions (especially with supplements), false reassurance.
  • What do credible medical institutions say? Not influencers. Institutions with reputations to lose.

The goal isn’t cynicism; it’s calibration. You can be open-minded without being so open-minded that your brain falls out and rolls
under the couch.

Conclusion: The “Dirty Little Secret” Isn’t Dirty; It’s Just Math

Prior probability feels rude because it refuses to flatter our hopes. It doesn’t care that a treatment is “natural,” “ancient,” or
sold in a calming shade of green. It cares about how reality works and how often bold claims have historically been wrong.

“Evidence-based alternative medicine” often tries to win by technicality: a p-value here, a pilot study there, a headline everywhere.
But when priors are low, the bar must be higher. That isn’t bias against alternative ideas. It’s respect for the difference between
finding patterns and finding truth.

The good news is that this standard protects everyone (patients, clinicians, and even researchers) from getting emotionally attached to
noise. Prior probability isn’t a killjoy. It’s a safety feature.


Experiences in the Wild: How Prior Probability Shows Up in Real Life (and Usually at the Worst Time)

Let’s take this out of the math classroom and drop it into the messy world where people have back pain, deadlines, and a suspicious
rash that appeared right before vacation. Prior probability isn’t just a philosophical stance; it’s a practical survival skill.

The “My Friend Swears By It” Moment

You’ve seen it: someone says a remedy “worked instantly,” and your brain starts building a tiny shrine to hope. But personal experience
is a chaotic dataset. Symptoms fluctuate. Many conditions improve naturally. People try three things at once and credit the last one.
And placebo effects are especially strong for pain, stress, sleep, and nausea: symptoms your brain can modulate.

A prior-savvy response isn’t to dismiss your friend. It’s to translate the story into a better question: “Is there consistent evidence
this works beyond expectation and natural recovery?”
If the claim is highly implausible, you quietly raise your standards. If the
claim is plausible and low-risk, you might be more willing to experiment, while still tracking results honestly.

The “Well, It Can’t Hurt” Trap

This phrase has launched a thousand regrettable purchases. Even when a therapy is physically safe, it can still “hurt” through
opportunity cost: delaying effective treatment, draining money, or creating false reassurance. Supplements can interact with medications.
Unregulated products can vary in quality. And some “natural” approaches can be surprisingly potent, because “natural” includes hemlock.

Prior probability sharpens this. If a claim has a low prior and weak evidence, “can’t hurt” is not a free pass. It’s a request to
calculate the non-obvious harms: missed diagnoses, delayed care, or the slow creep of medical misinformation into future decisions.

The Clinician’s Inbox Experience: New Study, Big Headline, Small Effect

Imagine you’re a clinician skimming a study: “Integrative therapy X improves outcomes.” The abstract looks exciting. The sample size is
40. The main outcome is a self-reported scale. The effect is statistically significant by a hair. The authors sound thrilled. Your
calendar is full. Your patient wants an answer now.

This is where priors act like a mental triage tool. If the therapy is biologically plausible, you might say: “Promising, but preliminary;
let’s watch for replication.” If it’s wildly implausible, you might say: “Interesting result, but likely noise; show me bigger, better,
replicated trials with strong blinding.” That’s not cynicism. That’s Bayesian hygiene, like washing your hands after touching a doorknob
in flu season.

The Integrative Clinic Experience: The Ritual Is Doing Something Real

Many people describe feeling cared for in settings that offer extended appointments, soothing environments, and a practitioner who
listens without rushing. That experience can be genuinely therapeutic. It can reduce stress, improve adherence, and help people feel
more in control. Those benefits are real, and they don’t require mystical explanations.

Prior probability helps you separate the value of the care context from the claim of a specific mechanism.
You can appreciate the ritual without pretending the ritual proves water has memory. You can seek supportive care and still insist that
disease-modifying claims meet high evidentiary standards.

The “I Just Want to Feel Better” Experience (which is valid, by the way)

Sometimes you’re not chasing a miracle cure. You’re chasing relief. If a low-risk intervention helps you relax, sleep better, or feel
supported, that can be worthwhile, especially when you treat it as complementary and keep your expectations honest. The key is not to
let comfort morph into certainty.

The most grounded approach often looks like this: prioritize treatments with strong evidence for disease outcomes, use low-risk supportive
practices for symptom relief, track what changes, and remain willing to update your beliefs when better evidence arrives. That is the
grown-up version of being open-minded.


Bee Venom is Snake Oil
Everyday Software, Everyday Joy — Tue, 03 Mar 2026

Bee venom therapy is everywhere: in spa menus, wellness clinics, and splashy social media posts promising relief from pain, autoimmune disease, and even aging. But when you trade marketing hype for hard data, a very different picture emerges. This in-depth, science-based guide unpacks what bee venom actually is, how apitherapy is supposed to work, what human clinical trials really show, and why the risks (from severe allergic reactions to life-threatening anaphylaxis) far outweigh any unproven benefits. Along the way, we separate venom immunotherapy (a legitimate allergy treatment) from bee venom snake oil, share real-world lessons from patients and clinicians, and offer practical, evidence-based alternatives to explore with your doctor instead of banking your health on stings.

The post Bee Venom is Snake Oil appeared first on Everyday Software, Everyday Joy.


Bee venom has had an impressive glow-up. Once just the unpleasant reason you
couldn’t enjoy a summer picnic in peace, it now shows up in “detox”
injections, anti-wrinkle creams, spa treatments, and something charmingly
called “live bee acupuncture.” To hear the marketing, a sting a day keeps
arthritis, multiple sclerosis, Lyme disease, and even aging itself away.

There’s only one small problem: when you look at the actual evidence,
bee venom therapy behaves less like a miracle cure and more like classic
snake oil with a stinger. In the spirit of science-based medicine, let’s
unpack what bee venom is, what the research really shows, why the risks are
far from “natural and harmless,” and how to protect yourself from buzzworthy
but empty promises.

What Exactly Is Bee Venom Therapy?

Bee venom therapy (often bundled under the term apitherapy)
uses the venom of honeybees for supposed health benefits. Venom is a complex
mixture of compounds like melittin, apamin, and phospholipase A2, which can
trigger powerful effects in the body, from inflammation and pain to changes
in immune signaling.

Practitioners deliver bee venom in a few different ways:

  • Live bee stings: Yes, this is exactly what it sounds like.
    A bee is placed on your skin and encouraged to sting you.
  • Injections: Purified or diluted bee venom is injected
    under the skin, sometimes at or near acupuncture points.
  • Topical products: Creams, masks, and serums with small
    amounts of bee venom marketed for skin “plumping” or “anti-aging.”
  • “Bee venom acupuncture” or “bee venom pharmacopuncture”:
    A mash-up of acupuncture theory with bee venom injections at selected
    points.

The list of claims is long: reduced pain, better joint function, fewer MS
relapses, improved immunity, faster healing, younger skin, more energy.
When one substance is advertised as doing everything for everyone, your
inner skeptic should start buzzing.

Why Bee Venom Sounds So Tempting

If you live with chronic pain or a serious illness, conventional treatments
can feel slow, imperfect, or frustratingly full of side effects. Into that
very real suffering steps a narrative that feels comforting and hopeful:

  • It’s “natural”, so it must be safer than “chemicals.”
  • It has a long history in traditional medicine and folk
    remedies.
  • There are compelling personal testimonials online about
    “getting my life back” after bee stings.
  • Wellness influencers and some clinics promote it as a
    “holistic” or “root cause” treatment.

That story is emotionally powerful, but medicine has to run on data, not
vibes. So what does the research actually say about bee venom therapy for
real human beings with real diseases?

What the Science Actually Says (Spoiler: Not Much)

Promising lab data is not the same as proven treatment

In test tubes and animal models, bee venom looks interesting. Components of
venom have shown anti-inflammatory, antioxidant, and even anti-tumor
effects in cells and in rodents. Researchers have explored them for
arthritis, skin diseases, and central nervous system conditions.

But here’s the crucial point: mice are not tiny humans, and
petri dishes are not people. Thousands of compounds that look great in
early lab work never become safe, effective treatments once they are tested
in rigorous human trials. Bee venom is not special in that regard.

Multiple sclerosis: a high-quality trial with a clear “no”

Multiple sclerosis (MS) is one of the conditions where bee sting therapy
has been heavily promoted. Enthusiasts claim it can reduce relapses and
disability by “resetting the immune system.”

A well-designed randomized crossover trial put those claims to the test.
People with relapsing MS received a course of regular bee stings and, at
another time, a placebo phase. Researchers measured disease activity,
disability, fatigue, and quality of life. The result? No meaningful benefit from bee stings
compared with placebo on any of the key outcomes.

In other words, when you control for expectations and placebo effects,
carefully delivered bee stings do not improve MS. That’s not the story you
see on social media, but it’s the story told by controlled data.

Arthritis and pain conditions: limited and weak evidence

Some small studies and case series have looked at bee venom injections or
bee venom acupuncture for conditions like rheumatoid arthritis or
osteoarthritis. A few report improvements in pain or stiffness, but they
tend to share common problems:

  • Small sample sizes.
  • Lack of true placebo controls.
  • Poor blinding, so patients and practitioners know what they’re getting.
  • Short follow-up periods.

Systematic reviews examining this research have repeatedly concluded that
the evidence is insufficient and of low quality to support
bee venom therapy as a standard treatment. Some reviews explicitly warn
that the risk of serious side effects may outweigh any modest and uncertain
benefit for arthritis pain.

Cancer, infections, and “immune boosting”: mostly hype

If you’ve seen headlines claiming that bee venom “kills cancer cells” or
“stops viruses,” remember that destroying cells in a lab dish is the easy
part. The hard part is delivering a compound into the human body in a way
that:

  • Targets the right cells.
  • Spares healthy tissues.
  • Maintains a safe dose.
  • Actually improves survival or quality of life.

Bee venom components are being studied as leads for future drugs, but that
is not the same as saying, “Go get stung a bunch of times and your cancer
will get better.” Translational research is a marathon, not a bee sprint.

The Very Real Risks of Bee Venom Therapy

Marketing for bee venom therapy often emphasizes that it’s “natural” and
“gentle.” The immune system strongly disagrees.

Anaphylaxis: the life-threatening allergic reaction

Bee venom is one of the classic triggers of
anaphylaxis, a rapid, severe allergic reaction that can
cause hives, swelling of the throat, trouble breathing, a dangerous drop in
blood pressure, and, if not treated quickly, death.

You do not have to be “very allergic” ahead of time to wind up in trouble.
Sensitization can build with repeated stings or injections. Reviews of bee
venom therapy report a range of adverse reactions, including serious
anaphylaxis requiring emergency treatment and, in rare but real cases,
fatal outcomes after “live bee acupuncture” sessions.

Any treatment that can land you in the emergency department or the
intensive care unit needs rock-solid evidence of benefit to justify that
risk. Bee venom therapy doesn’t have it.

Other side effects: it’s not just “a little sting”

Even when people do not experience full-blown anaphylaxis, bee venom
therapy can cause:

  • Severe local pain and swelling.
  • Large local allergic reactions that can last days.
  • Headache, nausea, or flu-like symptoms.
  • Flare-ups of underlying conditions.

Patients sometimes pay significant money and endure months of repeated
stings or injections, only to end up with no improvement in their disease
and a new fear of bees plus an EpiPen prescription.

But Wait, Don’t Allergists Use Venom Therapy?

Yes, and this distinction really matters.

Venom immunotherapy is an evidence-based allergy treatment
offered by board-certified allergists to people with a documented
life-threatening allergy to stings by bees or related insects. In this
setting:

  • The venom is standardized and carefully dosed.
  • Treatment happens in a medical setting with emergency care available.
  • The goal is precise: reduce the risk of severe reactions to future stings.
  • Benefit has been confirmed in high-quality trials and long-term follow-up.

That is very different from using bee venom (or live bee stings) as a
catch-all therapy for arthritis, MS, or “immune boosting” at wellness
clinics. The existence of venom immunotherapy does not validate apitherapy
for unrelated conditions any more than insulin for diabetes justifies
injecting random hormones for weight loss.

How to Recognize Bee Venom Snake Oil

Bee venom therapy is a case study in modern snake oil. Many of the classic
warning signs are there:

  • Cure-all claims: Any therapy advertised as fixing pain,
    cancer, autoimmune disease, infections, aging, and “detox” all at once is
    waving a big red flag.
  • Cherry-picked science: Lots of references to lab studies
    and animal research, very little mention of randomized controlled trials
    or systematic reviews in humans.
  • Testimonial overload: Heartwarming stories, before-and-after
    photos, and celebrity endorsements instead of consistent clinical data.
  • Anti-medicine rhetoric: Lines like “doctors don’t want
    you to know this” or “Big Pharma is hiding nature’s cure.”
  • Minimized risks: Serious reactions are brushed off as
    rare or “no big deal” compared to the “healing crisis.”

Good medicine is usually boring. It comes with detailed informed consent,
data from peer-reviewed trials, clear risk-benefit discussions, and
realistic expectations. When a treatment is sold with more drama than
details, be cautious.

Safer, Evidence-Based Paths for People in Pain

If you’re considering bee venom therapy, it’s probably because you’re
hurting, exhausted by your condition, or frustrated with standard options.
That deserves empathy, not judgment. It also deserves honest information.

For inflammatory arthritis and autoimmune diseases, rheumatology guidelines
emphasize disease-modifying medications, biologics, physical therapy, and
lifestyle approaches tailored to each person. For MS, neurologists rely on
proven disease-modifying therapies to reduce relapses and slow progression.

None of these options are perfect, but they have something bee venom does
not: large, controlled studies measuring real outcomes like disability,
relapse rate, joint damage, and survival. If you’re curious about
complementary approaches, talk with your healthcare team about options with
better evidence and lower risk, such as supervised exercise programs,
cognitive behavioral therapy for coping, or specific mind-body techniques.

And if you know or suspect that you have a sting allergy, the path is
clear: see an allergist, discuss venom immunotherapy, and ask whether you
should carry an epinephrine auto-injector. Random, repeated stings at a spa
or clinic are not a safe experiment.

Lived Experiences and Hard Lessons from the Bee Venom Hype

To understand how bee venom became the new snake oil, it helps to look at
the human stories behind the headlines. These experiences are not data in
the scientific sense, but they show how hope and marketing can collide in
the real world.

The patient who “tried everything”

Imagine someone with long-standing rheumatoid arthritis. They’ve cycled
through medications, physical therapy, and diet changes. They’re tired of
blood tests and waiting rooms. One night, they stumble onto an article
online: “Doctor Said I’d Need a Wheelchair, Bee Stings Proved Her Wrong.”

The story is dramatic, full of photos and emotional quotes. The treatment
clinic is only a few hours away. The price is high, but not impossible.
Compared with feeling hopeless in the face of chronic pain, a new “natural”
solution sounds worth the risk.

Months later, after dozens of stings, they might notice some temporary
relief after sessions, maybe from endorphins, distraction, or placebo
effects. But the underlying disease doesn’t change, and the flares keep
coming. Eventually, the reality sinks in: a lot of money, a lot of pain,
and no lasting improvement.

The close call in the clinic

In another scenario, a clinic offers live bee acupuncture as a luxurious
spa add-on. The practitioner is enthusiastic, the room smells like essential
oils, and there’s relaxing music in the background. The first few stings
hurt, but it’s framed as a “healing sensation.”

Then things change. The client suddenly feels dizzy. Their throat feels
tight. Hives spread across their skin. Instead of a relaxing wellness
experience, they are now in the middle of a medical emergency. If the
practitioner is unprepared, with no epinephrine and no emergency plan, the outcome
can quickly go from scary to tragic.

Case reports of fatal reactions after bee venom apitherapy are rare but
very real. For the families involved, “rare” is no comfort at all.

The doctor stuck cleaning up the mess

Healthcare providers also have stories. Rheumatologists and neurologists
see patients who stopped effective medications to try bee venom, only to
return with worsened disease. Allergists see patients who have become
sensitized after multiple stings and now live with a much higher risk of
severe reactions to accidental exposure.

These clinicians are often left in the awkward position of trying to repair
trust. They must acknowledge the patient’s suffering and understandable
desire for alternatives while gently explaining that the glamorous
treatment they found online is, in fact, not supported by evidence and may
have made things worse.

What we can learn

The bee venom story teaches a few important lessons:

  • Hope is powerful and deserves respect. People turn to
    unproven treatments because they are desperate for relief, not because
    they are foolish.
  • Good science is slower and less flashy than marketing.
    Waiting for solid data can feel frustrating when you’re in pain, but
    shortcuts often end badly.
  • Skepticism and compassion belong together. It is
    possible to care deeply about patients’ experiences while still insisting
    on rigorous evidence before endorsing a treatment.

Bee venom will likely continue to be studied in labs and carefully designed
clinical trials. That’s fine. What is not fine is selling repeated stings
and injections as a proven, low-risk therapy today when the best available
evidence says otherwise.

Conclusion: Don’t Trade Your Health for a Sting

Bee venom therapy has all the hallmarks of modern snake oil: sweeping
promises, dramatic testimonials, selective use of early-stage research, and
a striking mismatch between hype and reality. For conditions like MS,
arthritis, or cancer, it simply does not have the kind of strong,
reproducible clinical evidence needed to justify the very real risk of
severe allergic reactions and other harms.

If you are tempted by bee venom because you’re running out of options,
pause and breathe. Talk with your healthcare team about what the data
actually show, what safer alternatives exist, and how to evaluate new
treatments without getting stung by the latest health fad. Your body
deserves better than snake oil with stripes.

The post Bee Venom is Snake Oil appeared first on Everyday Software, Everyday Joy.

]]>
https://business-service.2software.net/bee-venom-is-snake-oil/feed/0
“(Un)Well:” Netflix’s Documentary Series Is Poor Journalism That Neglects Sciencehttps://business-service.2software.net/unwell-netflixs-documentary-series-is-poor-journalism-that-neglects-science/https://business-service.2software.net/unwell-netflixs-documentary-series-is-poor-journalism-that-neglects-science/#respondMon, 02 Mar 2026 09:32:12 +0000https://business-service.2software.net/?p=8879Netflix’s (Un)Well looks like investigative health journalism, until you notice the show treats scientific evidence like just another opinion. This deep-dive breaks down how the series leans on testimonials, glamorizes questionable claims, and repeatedly falls into false balance across episodes on essential oils, tantra, breast milk, fasting, ayahuasca, and bee venom therapy. You’ll get concrete examples of where the framing goes wrong, what reputable science says about key risks (from electrolyte imbalance to contamination and allergic reactions), and a practical checklist for watching wellness content without getting pulled into a vibe-based sales funnel. If you’ve ever finished a documentary thinking, “Maybe I should try that,” this article is your friendly, funny, science-first reality check, and your guide to separating comfort, placebo, and proof.

The post “(Un)Well:” Netflix’s Documentary Series Is Poor Journalism That Neglects Science appeared first on Everyday Software, Everyday Joy.

]]>

Because “I saw it on Netflix” should not be a substitute for “I asked my doctor.”

There’s a special kind of confidence you get after watching a slick documentary. Not “I can rebuild a carburetor” confidence. More like “I have strong opinions about mitochondria now” confidence. And that’s exactly why (Un)Well is such a problem: it borrows the visual language of investigative journalism (serious music, moody lighting, concerned faces), then uses that credibility to treat science like it’s just one opinion in a group chat.

Netflix’s six-part series aims to explore wellness trends and ask the big question: are these practices helpful, harmless, or harmful? That’s a worthwhile mission. But instead of delivering answers grounded in evidence, (Un)Well often delivers something closer to a buffet: a little bit of science, a lot of testimonials, and a side of “you decide!”, as if viewers are choosing toppings at a frozen yogurt shop, not deciding whether to drink black-market breast milk or get stung by bees for “chronic Lyme.”

The result is poor journalism: it neglects how scientific evidence actually works, amplifies emotionally compelling claims without adequate fact-checking, and falls into the trap of “false balance,” the idea that giving equal screen time to experts and non-experts automatically creates fairness. Sometimes “both sides” is journalism. Sometimes it’s just giving misinformation a camera angle.

What (Un)Well Tries to Do, and Why It Misses

(Un)Well positions itself as a tour through a booming wellness marketplace. Each episode centers on a different trend: essential oils, tantric sex, adults drinking breast milk, fasting, ayahuasca, and bee venom therapy. The show’s format is familiar: heartfelt personal stories, charismatic practitioners, and a few scientists and clinicians offering caution, usually in smaller portions and with less narrative momentum.

This structure isn’t automatically bad. Human stories matter. But in health reporting, anecdotes are the appetizer, not the nutrition label. When a show puts a dramatic testimonial (“This cured me!”) beside a careful expert explanation (“There’s no good clinical evidence”), and then shrugs like, “Gosh, who’s to say?”, it quietly teaches the viewer that evidence and vibes are equally valid currencies.

Even reviewers who appreciated the intent noted how the series often feels indecisive and “wishy-washy,” leaving audiences with few clear takeaways. That’s not a quirky aesthetic choice. In medicine, ambiguity without context can be dangerous.

The Big Journalism Failure: False Balance Wearing a Lab Coat

Here’s the core issue: science is not a debate club where the winner is whoever tells the most moving story. Science is a method for reducing self-deception through controls, replication, peer review, and an obsessive commitment to being wrong in public until proven otherwise.

(Un)Well repeatedly frames the conflict as “believers vs. skeptics,” rather than “claims vs. evidence.” That framing matters. It turns medical questions into identity questions: “Are you open-minded?” “Are you a hater?” “Do you trust Big Pharma?” And once the story is about identity, facts become optional accessories, like crystals but with better lighting.

Real investigative health journalism does three things relentlessly: it quantifies evidence, it foregrounds harm, and it shows its work. (Un)Well does some of this occasionally, but too often it slips into an entertainment-first rhythm: the most compelling character gets the emotional close-up, while the scientist gets the role of “party pooper who ruins everyone’s montage.”

Episode-by-Episode: Where the Science Gets Left Behind

1) Essential Oils: Aromatherapy, MLMs, and the “FDA Won’t Let Me Say It” Routine

The essential oils episode is a master class in how to make weak claims feel strong. The series highlights people who swear oils helped them with everything from sleep to serious disease. And sure, smell can affect mood. A calming scent during a massage may reduce stress in the same way that a warm blanket and a quiet room reduce stress: because your nervous system likes comfort.

But there’s a huge difference between “this can be relaxing” and “this treats medical conditions.” Federal health agencies describe aromatherapy as a complementary approach, sometimes useful for symptoms like stress or nausea, while emphasizing safe use and the limits of evidence. Swallowing large amounts of essential oils isn’t recommended, and “natural” doesn’t mean “risk-free.” Oils can irritate skin, trigger allergic reactions, and interact with medications depending on how they’re used.

The show also wades into the multilevel marketing ecosystem around essential oils, where income claims, product evangelism, and pseudo-medical language often travel together. That’s a legitimate story, except it needs harder edges. If a documentary lets miracle-sounding statements stand without firm, repeated correction, it stops being a critique and starts being a promotional trailer with a concerned eyebrow.

2) Tantra: When “Sexual Wellness” Becomes a Vibe-Based Fact Pattern

The tantra episode is one of the clearest examples of the series’ “observe but don’t judge” problem. Tantra is treated as a shapeshifting term: spiritual practice, sex therapy, intimacy coaching, personal empowerment, sometimes all at once. That ambiguity is part of the appeal, and part of the risk.

Without clear definitions and safeguards, “tantric healing” can become a license for manipulation. The show acknowledges that abuse can occur in guru-driven environments, but it doesn’t dig deeply into what consent, professional ethics, and evidence-based sex therapy actually require. The audience is left with an aesthetic conclusion: tantra is complicated, powerful, maybe great, maybe dangerous. Which is true, yet not very helpful if you’re trying to decide whether a “workshop” is legitimate care or just expensive boundary confusion in fancy linen pants.

3) Adult Breast Milk: The Ethics and Safety Story That Deserved the Spotlight

The breast milk episode is shocking, yes, but it’s also one of the most straightforward to report responsibly, because there are clear public-health warnings available. U.S. regulators explicitly recommend against acquiring human milk directly from individuals or through the internet for infant feeding, citing contamination and adulteration risks. Researchers have also documented bacterial contamination concerns in milk purchased online.

Now, adults drinking breast milk for bodybuilding is not the same as feeding infants, ethically or medically, but the public-health lesson still applies: unregulated bodily fluids bought online come with risks. The series gestures at this, but it also lingers on the “biohacker” framing, as if the central question is whether this is edgy and innovative rather than whether it’s medically meaningful (there’s no solid evidence it is), or socially costly when donor milk can be scarce for babies who truly need it.

4) Fasting: A Legit Topic, Framed Like a Survival Challenge

Fasting is the episode where (Un)Well almost does the job, and then edits itself into confusion. There’s serious research on intermittent fasting and time-restricted eating. Some people find these approaches helpful for weight management or metabolic markers. But the science is nuanced, and outcomes vary based on the person, the method, and the medical context.

The show leans hard into extreme fasting, where risks climb fast: electrolyte imbalances, dizziness, fainting, worsening of certain conditions, and danger for people on particular medications. Major health sources warn that prolonged fasting and “cleanse” behaviors can be harmful, especially when they involve not eating for days and consuming large amounts of water or herbal teas.

A responsible documentary would separate “common intermittent fasting patterns under medical guidance” from “prolonged water fasts marketed as spiritual purification.” (Un)Well blends them for drama, then tries to mop up the mess with a disclaimer. That’s like tossing a smoke bomb into a room and then whispering, “Please breathe responsibly.”

5) Ayahuasca: Clinical Promise Is Not a Weekend Retreat

Ayahuasca is one of those topics that demands careful reporting, because it sits at the intersection of mental health, spirituality, pharmacology, and law. The psychoactive component most often discussed is DMT, which U.S. authorities classify as a Schedule I substance. There are also narrow religious exemptions recognized in U.S. law under specific circumstances, details that matter if a documentary is going to show ceremonies on American soil without context.

Medically, ayahuasca is not just “a plant medicine.” It has pharmacological effects, can cause intense vomiting and psychological distress, and can be dangerous for people with certain psychiatric or cardiovascular conditions. Because the brew involves MAOI activity, interactions with medications (including some antidepressants) are a real concern. Poison control data and research literature describe a range of adverse effects, including agitation, tachycardia, and hypertension in reported exposures.

The series flirts with a “psychedelics are promising” storyline, which may be true in tightly controlled clinical research settings for certain substances, but it doesn’t consistently distinguish between medical trials and DIY ceremonies run by charismatic facilitators. When you blur that line, you create the impression that “clinical potential” automatically translates into “safe weekend retreat.” It does not.

6) Bee Venom Therapy: “Natural” Doesn’t Mean “Safe”

Bee venom therapy is the kind of wellness trend that sounds like a dare. It’s also the kind that can send you to the emergency room. Bee venom can trigger severe allergic reactions, including anaphylaxis. Research reviews describe adverse events associated with bee venom therapy, and the existence of risk isn’t controversial; it’s the baseline.

The evidence for bee venom as a treatment for chronic conditions is limited and not strong enough to justify the casual tone the show sometimes adopts. Yet the documentary’s storytelling pattern repeats: vivid testimonials, hopeful claims, a quick skeptical note, then back to the believer’s narrative arc. That is not balance. That is narrative gravity pulling toward the most emotionally satisfying conclusion.

Why This Matters: Wellness Misinformation Scales Faster Than Ever

A single bad health claim used to spread slowly: through a friend-of-a-friend, a niche book, a late-night infomercial with suspiciously wet hair. Now it spreads with cinematic B-roll and autoplay.

The modern wellness economy thrives on two things: distrust and desperation. Distrust in institutions (“doctors don’t listen”), and desperation for relief (“nothing else worked”). That emotional reality is worth documenting. But when a show fails to clearly label what is unsupported, what is plausible, what is disproven, and what is dangerous, it can become part of the pipeline that funnels viewers from curiosity to purchase.

The irony is that (Un)Well occasionally hints at the bigger, truer story: many people turn to wellness trends because healthcare is expensive, rushed, fragmented, and sometimes dismissive. That’s the investigative thread that deserved six episodes of real reporting. Instead, we got a sampler platter of bizarre practices, served with a wink and a shrug.

What Responsible Health Journalism Would Look Like (No Lab Coat Required)

Start With the Evidence, Not the Anecdote

Human stories are powerful, but they’re also statistically illiterate. A responsible documentary uses testimonials as prompts for investigation, not as proof. It explains what counts as strong evidence (randomized trials, systematic reviews), what counts as weak evidence (case reports), and what counts as marketing (anything with the phrase “detoxify your cells”).

Quantify Harm, Clearly and Repeatedly

If a practice can trigger anaphylaxis, dangerous electrolyte imbalances, or severe interactions with medications, that shouldn’t be a brief disclaimer. That should be the headline in plain English, repeated enough that it sticks.

Disclose Incentives Like You Mean It

Many wellness markets are built on financial incentives: affiliate links, MLM structures, “certifications,” retreats, courses, supplements. Journalism isn’t just asking, “Does it work?” It’s asking, “Who profits if you believe it works?”

Give Viewers a Map Out of the Maze

A science-forward series would end each episode with practical guidance: which credible sources to consult, what to ask a clinician, and what red flags indicate a scam (miracle claims, conspiracy talk, “one weird trick,” pressure to buy immediately, refusal to cite evidence).

How to Watch (Un)Well Without Getting Played

  • Translate testimonials into testable questions. “It cured me” becomes “Has this been tested against placebo, and what were the outcomes?”
  • Separate symptom relief from disease treatment. Relaxation is real. Curing cancer is a claim that demands extraordinary evidence.
  • Watch for “science-y” language without definitions. If someone says “toxins” but can’t name any, you’re in marketing territory.
  • Assume ‘natural’ can still bite. Bee venom literally bites. So do unregulated supplements. So does fasting without supervision.
  • Check a federal or academic source after each episode. FDA, NIH, major hospitals, and medical associations exist for a reason.
  • If a claim implies you don’t need real medical care, exit the chat. That’s not empowerment. That’s a sales funnel.

Conclusion: Entertaining? Sure. Responsible? Not Even Close.

(Un)Well wants to be a consumer-protection series about the wellness industry. But its storytelling choices often do the opposite: they normalize pseudoscientific claims, blur the difference between evidence and emotion, and leave viewers with “maybe it works” ambiguity precisely where clarity is most needed.

If you watch it as a cultural documentaryan exploration of what people will try when they feel unheard, unwell, or unluckythere’s something to learn. But if you watch it as health journalism, it’s a cautionary tale about how slick production can outrun scientific rigor.

In other words: it’s a show about misinformation that sometimes behaves like misinformation. Which is impressively meta, but not in a way anyone should celebrate.

Real-World Viewing Experiences (and What They Teach)

If you’ve ever watched (Un)Well with a friend group, you’ve probably seen the same three reactions unfold, sometimes in the same person, in the same episode, within the same five minutes. First comes curiosity: “Wait, people really do this?” Then comes temptation: “Okay but… what if it does help?” And then, depending on your tolerance for woo-woo, you land somewhere between skeptical laughter and late-night Googling that starts with “essential oil migraine” and ends with “Is oregano oil supposed to burn?”

One common experience is the “Netflix credibility halo.” Viewers know, intellectually, that a streaming platform isn’t a medical journal. But the brain doesn’t fully separate “this looks official” from “this is official.” When a practitioner speaks confidently on camera, backed by slow-motion shots of nature and a soundtrack that sounds like an awards-season trailer, the claim can feel more legitimate than it is. You’ll hear people say, “I’m not saying it’s true, but…”, which is how misinformation politely asks to move in and use your Wi-Fi.

Another experience: the whiplash between “helpful self-care” and “please do not do that.” Many viewers walk away thinking the show is warning against extremes, and that’s partly right. But the series often fails to label the middle ground clearly. So a viewer might leave the fasting episode thinking all fasting is dangerous, while another leaves thinking extreme fasting is just misunderstood. The same ambiguity pops up with ayahuasca: some viewers interpret the episode as an invitation to explore psychedelics, while others only see the risks. Without strong scientific framing, what you “learn” can depend less on evidence and more on your prior beliefs.

People with healthcare burnout often have the most complicated reaction. If you’ve been dismissed by doctors, stuck in insurance limbo, or told “it’s probably stress” for the tenth time, a confident wellness figure can feel like relief. Viewers in that mindset sometimes report that (Un)Well feels validating: finally someone is talking about the gaps in care. The danger is that validation can slide into vulnerability. A documentary can acknowledge those gaps while still insisting on evidence. (Un)Well too often chooses empathy without rigor, as if you can’t have both. You absolutely can, and you should.

There’s also the “group chat effect” after watching: links get shared, debates start, and someone inevitably says, “My cousin’s friend did this and it worked.” That’s not malicious; it’s human. We’re pattern-seeking creatures who love stories more than spreadsheets. The best outcome of watching (Un)Well is when it sparks a healthier habit than any featured trend: checking claims against reliable sources, asking better questions, and recognizing when a narrative is engineered to make you feel something rather than understand something.

The takeaway from these viewing experiences isn’t “never watch wellness documentaries.” It’s “watch them like you’d watch a magic show.” Enjoy the performance, admire the production, and keep your wallet in your pocket until you figure out how the trick works. The moment a show makes you feel like evidence is optional, that’s your cue to pause, breathe, and remember: your health deserves more than a cliffhanger edit.

The post “(Un)Well:” Netflix’s Documentary Series Is Poor Journalism That Neglects Science appeared first on Everyday Software, Everyday Joy.

]]>
https://business-service.2software.net/unwell-netflixs-documentary-series-is-poor-journalism-that-neglects-science/feed/0
It Will Take More Than “Courage” to Restore Public Trust in Medicinehttps://business-service.2software.net/it-will-take-more-than-courage-to-restore-public-trust-in-medicine/https://business-service.2software.net/it-will-take-more-than-courage-to-restore-public-trust-in-medicine/#respondTue, 17 Feb 2026 19:32:08 +0000https://business-service.2software.net/?p=7116Public trust in medicine has taken a beating, from pandemic confusion and social media misinformation to real systemic failures and historic injustices. This in-depth guide explains why dramatic calls for “courage” are not enough, and what it truly takes to restore confidence in doctors, hospitals, and public health. Through clear analysis, real-world examples, and practical steps for patients, clinicians, and institutions, it shows how transparency, humility, equity, and science-based communication can slowly rebuild trust where it matters most: in everyday medical decisions that shape people’s lives.

The post It Will Take More Than “Courage” to Restore Public Trust in Medicine appeared first on Everyday Software, Everyday Joy.

]]>

For a brief moment in 2020, doctors and nurses were superheroes. People banged pots, sent pizzas to hospitals, and taped “Thank you, healthcare heroes” signs to every available surface. Fast-forward a few years and the vibe has changed. Now, many people side-eye public health guidance, argue with their doctor’s recommendations, or look to influencers instead of infectious-disease experts.

So what happened? And more importantly, what will actually restore public trust in medicine? Hint: it’s not just “courage” or one dramatic whistleblower speech. Real trust is built much more slowlyand it can be broken with a single bad experience, a confusing message, or a viral meme that feels more believable than a CDC fact sheet.

This article looks at why trust in medicine has taken such a hit, why vague calls for “courage” are not enough, and what concrete steps science-based medicine can take to earn trust back, step by step, conversation by conversation.

Why Public Trust in Medicine Is So Shaky Right Now

Trust in medicine didn’t suddenly collapse out of nowhere. It’s the result of multiple long-term trends colliding with a once-in-a-century pandemic and a firehose of online misinformation. To fix it, we have to be honest about what went wrong.

From “Healthcare Heroes” to Hesitancy and Suspicion

Early in the COVID-19 pandemic, surveys showed high levels of confidence in doctors, hospitals, and medical scientists. People desperately wanted guidance and, for a while, they largely listened. Over the next several years, however, trust slipped, sometimes sharply, as recommendations changed, policies felt inconsistent, and political battles spilled into exam rooms and pharmacy lines.

Many people didn’t see nuance; they saw “flip-flopping.” Masks were first downplayed, then strongly recommended. Boosters went from “maybe” to “please, now.” Some communities were hit with strict mandates while others barely saw restrictions. Even when the science behind these shifts was solid, the messaging often wasn’t. The result? A lot of people started feeling like medicine and public health were just another partisan team sport.

The Misinformation Multiplier

Into that messy environment walked social media, ready to pour gasoline on every spark of frustration. Complex topics like vaccine safety, myocarditis risk, or long COVID were reduced to shareable images, emotional anecdotes, and threads that traveled faster than any correction ever could.

Bad information has several unfair advantages over good information:

  • It’s simple and emotionally charged (“They lied to you!” feels more exciting than “The evidence has evolved.”).
  • It comes with a built-in villain: “Big Pharma,” “the government,” “the establishment.”
  • It flatters the reader (“You’re one of the few who knows the truth.”).

Meanwhile, evidence-based voices often responded with jargon, cautious uncertainty, or dry press releases. In a fight between a spicy conspiracy thread and a 40-page PDF of risk estimates, guess which one wins most newsfeeds.

Real Harms, Not Just Hurt Feelings

Distrust isn’t only about vibes; it shows up in health outcomes. People who don’t trust their doctors are less likely to follow treatment plans, get recommended vaccines, or seek help early when something feels off. That means more preventable disease, more needless suffering, and higher costs for everyone.

It also doesn’t fall evenly. Communities with a long history of discrimination or neglect in healthcare, especially Black, Latino, Indigenous, and low-income groups, have plenty of lived experience telling them that the system doesn’t always act in their best interests. For them, “just trust the experts” is not a compelling argument; it’s a reminder of past harm.

Why “Courage” Isn’t a Magic Fix

In some corners of the medical world, a popular narrative has emerged: what we really need is courageous truth-telling doctors who “speak out” against the system. You’ll see this framed as brave warriors exposing hidden risks of vaccines, calling out public health agencies, or rejecting “groupthink.”

There’s a grain of truth here: courage matters. Whistleblowers who expose real wrongdoing are absolutely essential. Patients benefit when doctors push back on unsafe policies, greedy corporate interests, or poor-quality care.

The problem is that “courage” has become a kind of universal self-justification. Any controversial opinion can be branded as “speaking truth to power,” even when it is built on weak evidence, cherry-picked data, or outright misinformation. Courage without accuracy is just loudness in a lab coat.

When “Brave” Messaging Backfires

Take vaccine safety as an example. During the pandemic, some self-styled contrarian voices loudly exaggerated rare risks, like myocarditis after mRNA vaccination, while barely mentioning the much higher risk from the infection itself. That framing can feel honest and bold to scared patients. But if the numbers are skewed, the timeline is cherry-picked, or the trade-offs are hidden, trust erodes rather than grows.

Patients remember when a doctor made a spectacular claim that didn’t line up with reality. They also remember when an institution insisted that there were “no problems at all” and later quietly updated the fine print. Both extremes, overreaction and denial, can be framed as “courage” by their supporters, and both ultimately damage trust.

Courage Plus Humility, Not Courage Alone

Real trust in medicine won’t be rebuilt by more dramatic monologues. It will be rebuilt by people and institutions willing to be:

  • Brave enough to admit uncertainty, error, and limitations.
  • Disciplined enough to stick to the best available evidence even when a hot take would get more clicks.
  • Humble enough to listen seriously when patients say, “This feels wrong,” or “This doesn’t match my experience.”

That combination of courage, humility, and discipline is much rarer than a fiery post on social media. But it’s exactly what people are quietly looking for in their clinicians and health institutions.

Five Pillars for Rebuilding Public Trust in Medicine

Trust rebuilds slowly and locally. There’s no national rebrand or slogan that will fix everything. But there are concrete changes that can make a real difference, especially when they’re grounded in science-based medicine.

1. Radical Transparency (Including the Messy Parts)

People don’t lose trust because they hear “we don’t know yet.” They lose trust when they’re told “we’re completely sure,” and then watch reality prove otherwise.

Radical transparency means:

  • Explaining what is known, what is uncertain, and what is being studied.
  • Sharing risks and benefits in plain language, with actual numbers, not vague reassurances.
  • Openly acknowledging when guidance changes and why it changes: new data, new variants, better trials, or recognition that an earlier assumption was wrong.

When institutions behave like they must never admit error, they look more like PR machines than scientific organizations. Ironically, trying to appear infallible makes them less trustworthy, not more.

2. Evidence-Based Communication, Not Just Evidence-Based Care

Doctors and scientists are trained to read studies, not TikTok comments. But in the real world, communication is as important as the content itself. You can have the best evidence in the world and still lose the argument if you deliver it like a robot reading a fax from 1997.

Improving trust means investing in:

  • Plain-language explanations that respect people’s intelligence without assuming they’ve taken a statistics course.
  • Storytelling that connects data to real lives: protecting a grandparent, keeping a chronic disease under control, avoiding a preventable hospitalization.
  • Proactive myth-busting that names common misconceptions and explains how we know they’re wrong, instead of just saying “that’s misinformation.”

Science-based medicine doesn’t just mean the treatment itself is evidence-driven; the way we talk about it has to be evidence-informed too.

3. Treating Patients as Partners, Not Problems

Nothing destroys trust faster than feeling dismissed. A patient who brings in a screenshot from social media doesn’t need an eye roll; they need a real conversation.

Partnership looks like:

  • Listening to fears and doubts without sarcasm.
  • Validating real past harms, like rushed visits, surprise bills, or earlier experiences of bias and disrespect.
  • Making room for shared decision-making when multiple reasonable options exist.

When patients feel like they must choose between their own instincts and their doctor’s advice, trust fractures. When they feel truly heard, they’re far more willing to consider recommendations, even uncomfortable ones.

4. Tackling Structural Problems and Conflicts of Interest

No amount of warm bedside manner can fully compensate for systems that are opaque, financially confusing, or visibly influenced by industry money. People reasonably wonder: “Is this recommendation really about my health, or someone’s revenue target?”

Rebuilding trust requires visible efforts to:

  • Disclose financial relationships clearly and accessibly.
  • Separate direct marketing from clinical decision-making as much as possible.
  • Support payment models that reward long-term health, not just procedures and volume.

Patients don’t expect perfection, but they do expect that their well-being is at least in the top three priorities, preferably number one.

5. Committing to Equity, Fairness, and Repair

Medical mistrust in many communities is not paranoia; it’s memory. From unethical experiments to ongoing disparities in pain management, maternal mortality, and access to care, trust has been earned, just in the wrong direction.

Repair looks like:

  • Investing in community health workers and local partnerships, not just parachute campaigns.
  • Collecting data on disparities and acting on it, not filing it away.
  • Publicly naming past wrongs and explaining what is being done differently now.

Without equity, calls for “trust the system” ring hollow. With it, trust becomes possible: not guaranteed, but possible.

What Patients, Clinicians, and Institutions Can Do Today

If You’re a Patient

Patients don’t have to simply accept whatever the healthcare system dishes out. You can strengthen your own relationship with medicine by:

  • Bringing written questions to appointments so you don’t forget them under pressure.
  • Asking, “What are the pros, cons, and alternatives?” whenever a major treatment is proposed.
  • Requesting numbers: “Roughly how many people benefit? How many are harmed?”
  • Seeking second opinions when something doesn’t feel right; good doctors don’t fear them.

Trust doesn’t mean blind obedience; it means feeling confident that your clinician is on your side and willing to explain their thinking.
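The “requesting numbers” habit rests on two small formulas worth knowing: absolute risk reduction (ARR, the difference in event rates with and without treatment) and the number needed to treat (NNT = 1/ARR). Here is a minimal sketch; the 8% vs. 5% event rates are entirely hypothetical placeholders, not figures from any real trial:

```python
def absolute_risk_reduction(risk_untreated: float, risk_treated: float) -> float:
    """Difference in event risk between untreated and treated groups."""
    return risk_untreated - risk_treated

def number_needed_to_treat(risk_untreated: float, risk_treated: float) -> float:
    """How many people must be treated for one additional person to benefit (1 / ARR)."""
    arr = absolute_risk_reduction(risk_untreated, risk_treated)
    if arr <= 0:
        raise ValueError("No absolute benefit in these numbers; NNT is undefined.")
    return 1.0 / arr

# Hypothetical illustration only (NOT real clinical data):
# 8% of untreated patients have the bad outcome vs. 5% of treated patients.
arr = absolute_risk_reduction(0.08, 0.05)   # ~0.03, i.e. about 3 per 100 people
nnt = number_needed_to_treat(0.08, 0.05)    # ~33 people treated for 1 helped
print(f"ARR: {arr:.2%}, NNT: about {nnt:.0f}")
```

Framing benefit as “roughly one person helped for every 33 treated” is usually far clearer to a patient than a relative-risk headline such as “cuts risk by 37%,” even though both describe the same hypothetical numbers.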

If You’re a Clinician

Clinicians often feel squeezed between time limits, insurance hassles, and constant information updates. Even so, small shifts can pay huge trust dividends:

  • Lead with empathy: “That sounds scary. Let’s unpack it together.”
  • Translate evidence into real-world language and focus on what matters most to this person’s life.
  • Be honest when you’re not sure, and show how you’ll get a better answer.
  • Say out loud when your recommendation is shaped by strong evidence versus expert opinion or habit.

Many patients don’t need perfection; they need a guide who feels human, not scripted.

If You’re a Health Institution or Public Agency

Systems have the most power to change the rules of the game. Institutions can:

  • Publish clear explanations of major recommendations in everyday language.
  • Show their work: share how decisions were made, who was at the table, and what data mattered.
  • Invest in communication training for clinicians, not just new hardware and software.
  • Bring community leaders into the process before decisions are finalized, not just for damage control afterwards.

Trust grows when decisions feel legible, participatory, and grounded in real science rather than political winds.

Real-World Experiences: What Trust (and Distrust) Look Like in Practice

It’s easy to talk about “public trust” like it’s a bar graph on a slide deck. In reality, trust is personal. It happens in exam rooms, pharmacies, and kitchen-table conversations. Here are a few composite experiences, blending many real-world stories, that show how trust is lost and how it can be slowly rebuilt.

Case 1: The Vaccine Conversation That Almost Went Off the Rails

Maria is in her thirties, works two jobs, and takes care of her grandmother. She missed earlier COVID vaccine campaigns, partly because of scheduling, partly because she wasn’t sure who to believe. Her social feeds are a mix of family photos, recipes, and posts warning that “people are dropping dead from shots.”

At a routine visit, her doctor brings up vaccination. Maria tenses and says, “I’ve heard it can cause heart problems. My cousin knows a guy whose friend ended up in the hospital.” In some clinics, this is where the conversation dies: either with a rushed “That’s not true, don’t worry about it,” or a quiet note in the chart: “vaccine hesitant.”

But this doctor does something different. She leans in and says, “I’m glad you told me that. Let’s go through what we know about that risk and how it compares to the infection itself.” She pulls up a simple chart showing how rare vaccine-related myocarditis is, who is most affected, and how outcomes compare to heart complications after COVID infection.

They talk about Maria’s specific health risks, her grandmother’s vulnerability, and what matters most to her: “I can’t afford to be out sick for weeks,” Maria says. The doctor acknowledges the uncertainty (“Nothing in medicine is zero-risk”), shares actual numbers, and invites questions: “What’s still worrying you?”

Maria doesn’t magically become a huge public health cheerleader. But she leaves feeling respected, better informed, and more in control. A month later, after talking it over with her family, she comes back for the shot. Trust didn’t arrive in one conversation; it started there.
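Charts like the one Maria’s doctor used mostly do one thing: translate small probabilities into counts per 100,000 that people can actually picture. A minimal sketch of that conversion follows; the two rates below are hypothetical placeholders chosen for illustration, not real clinical figures:

```python
def per_100k(risk: float) -> str:
    """Express a probability as 'about N per 100,000 people' for plain-language counseling."""
    return f"about {risk * 100_000:,.0f} per 100,000 people"

# Hypothetical placeholder rates for illustration only -- NOT real clinical data.
risks = {
    "rare side effect after intervention": 0.00004,
    "same complication after the disease itself": 0.0011,
}
for label, risk in risks.items():
    print(f"{label}: {per_100k(risk)}")
```

The point of the side-by-side framing is comparative: “4 per 100,000 versus 110 per 100,000” (with whatever the true numbers are) is something a worried patient can weigh, whereas “very rare” invites them to fill in the blank with whatever their feed told them.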

Case 2: When an Honest “I Don’t Know” Beats a Confident Guess

Jared, who lives with a complex autoimmune disease, has seen multiple specialists. He’s used to being told different things by different people. At one visit, a new doctor confidently insists that a certain treatment “definitely won’t interact” with his current medication. Jared later discovers that the combination is not recommended and feels betrayed: “If they could be that wrong about this, what else are they wrong about?”

Months later, he meets another clinician. When he asks about a new therapy he’s read about online, she says, “I’m not completely sure how that interacts with your current meds. Give me 30 seconds; I want to check the most recent guidelines.” She swivels her monitor, looks it up, and talks through what she finds, including the limits of the data.

To an outsider, this might look like indecision. To Jared, it feels like safety. “I trust you more because you didn’t fake it,” he tells her. That small, honest pause does more to rebuild his faith in medicine than any glossy brochure could.

Case 3: An Institution That Finally Says, “We Were Wrong”

In one city, a hospital system rolled out an algorithm that was supposed to prioritize patients at highest risk for complications. It later turned out that the tool systematically under-prioritized patients from certain racial and socioeconomic groups. When the story surfaced, people were furious, and rightly so.

The institution had a choice: quietly tweak the algorithm and issue a vague “we are committed to equity” statement, or do the uncomfortable thing. Leadership chose discomfort. They publicly explained what went wrong, released an independent review, met with community organizations, and involved patient advocates in designing the replacement system.

Trust didn’t bounce back overnight. But over the next few years, people in that community pointed to this moment as a turning point: “They actually told us what they messed up and what they changed,” one advocate said. “That doesn’t erase the harm, but it makes future promises more believable.”

Conclusion: Trust Is Earned the Slow, Uncomfortable Way

Public trust in medicine won’t be restored by a single apology, one charismatic doctor, or a new slogan about “courage.” It will be restored by countless acts of clarity, humility, and accountability: a physician who admits uncertainty instead of bluffing; an institution that publicly corrects itself; a public health agency that explains why guidance changed instead of pretending it never did.

Science-based medicine has one major advantage in this long rebuild: reality is ultimately on its side. Treatments that genuinely work save lives, prevent suffering, and keep families together. But for people to say “yes” to those treatments, they need to believe that the system offering them is worthy of trust.

That belief can’t be commanded. It has to be earned, patient by patient, community by community, decision by decision.


The post It Will Take More Than “Courage” to Restore Public Trust in Medicine appeared first on Everyday Software, Everyday Joy.

The regulation of nonsense
https://business-service.2software.net/the-regulation-of-nonsense/
Sat, 14 Feb 2026 09:32:08 +0000

Licensing crystal healers and miracle supplements might sound like consumer protection, but when the law blesses unproven treatments, patients struggle to tell the difference between science-based care and beautifully packaged nonsense. This deep dive unpacks how U.S. rules for supplements, homeopathy, and other alternative therapies actually work, why regulators struggle to keep up, and what a truly evidence-based approach to protecting patients would look like.



Every few months, a well-meaning lawmaker somewhere in the United States wakes up and says,
“We really ought to regulate this alternative medicine stuff.” Task forces are formed,
hearings are held, and suddenly there’s a brand-new bill to license crystal whisperers,
detox foot-bath technicians, and people who sell supplements that promise to “optimize”
anything that sits still long enough.

On the surface, this sounds reassuring. Regulation feels like order. Licenses and official
boards sound like protection. But there’s a problem that science-based clinicians have been
pointing out for years: when you formally regulate nonsense, what you mostly succeed in doing
is blessing nonsense. The public sees something wrapped in legal language and assumes
it has been vetted by science, not just by lobbyists.

In other words, the regulation of nonsense tends to create regulated nonsense. And that is
not what evidence-based healthcare is supposed to look like.

What do we mean by “nonsense” in medicine?

In everyday life, “nonsense” might mean a weird family superstition or your coworker’s
conspiracy theory about printer paper. In medicine, the word is narrower and much more
serious. Here, we’re talking about health interventions that:

  • Are not backed by credible, reproducible evidence
  • Often contradict well-established principles of biology, chemistry, or physics
  • Are marketed using emotional stories instead of solid data
  • Sometimes directly conflict with effective, life-saving care

That includes a wide range of practices and products often lumped under the “complementary
and alternative medicine” (CAM) umbrella: homeopathy that dilutes ingredients past the point
where any molecules remain, “energy medicine” that adjusts unmeasurable, undefined fields,
supplements that claim to cure multiple unrelated diseases, and detox regimens that never
quite specify the toxin.

The key point isn’t whether a therapy feels traditional, natural, or comforting. The key
point is whether it works better than placebo and is reasonably safe. When therapies fail
those basic tests, trying to regulate them the same way we regulate genuine medical
interventions creates a tangle of risk, confusion, and unintended consequences.

The strange logic of licensing pseudoscience

One of the central arguments from science-based critics is simple: if you create a licensing
system for a practice that is not grounded in reality, you inevitably end up with rules that
entrench that unreality. You can’t write a law about which herbs “balance energy channels”
without first pretending that such channels exist.

Licensing boards for pseudoscientific professions tend to focus on internal coherence
(“Did you follow the rules of this belief system?”) rather than actual medical outcomes.
A homeopath can be disciplined for using the “wrong remedy” under the doctrine of
homeopathy but not for the fact that high-quality trials consistently fail to show that
homeopathic remedies perform better than placebos.

From the outside, though, the public rarely sees that nuance. They just see:

  • A state board
  • A license hanging on the wall
  • A title that sounds medical-ish

Combine that with white coats, stethoscopes, and a website full of wellness buzzwords, and
it becomes almost impossible for a non-expert to distinguish between science-based medicine
and a regulated system of elaborate placebos.

Why the U.S. regulatory patchwork invites nonsense

To really appreciate how nonsense sneaks into the system, you have to understand that
healthcare in the United States is governed by a patchwork of overlapping rules and
agencies:

  • Federal agencies such as the Food and Drug Administration (FDA) and
    Federal Trade Commission (FTC) oversee products, labeling, and advertising claims.
  • State governments license professionals and define scopes of practice.
  • Legislatures can carve out special exceptions for certain industries
    and traditions.

Decades of lobbying by the supplement and CAM industries have created generous carve-outs
that let many products onto the market with minimal premarket scrutiny. Under the
Dietary Supplement Health and Education Act (DSHEA), for example, most supplements do not
require FDA approval before they are sold, and the burden of proof is often shifted to the
government to act after harm occurs rather than before the product launches.

At the same time, states have created licensure for a range of non-physician practitioners,
sometimes based more on political compromise than on evidence of efficacy. Once those
practice acts are in place, rolling them back is politically painful. No one wants to be
accused of “taking away people’s choices,” even if the choice in question is between
chemotherapy and a smoothie cleanse.

How pseudoscience leverages the language of regulation

Industries that rely on weak evidence have learned to speak fluent regulator-ese. You’ll
see phrases like:

  • “Complies with all applicable FDA regulations”
  • “Manufactured in an FDA-registered facility”
  • “Meets current Good Manufacturing Practices (cGMP)”

None of those statements mean the product has been tested and proven to do what the label
claims. They mostly describe how the product is made and labeled, not whether it
works. A beautifully labeled sugar pill is still a sugar pill.

Even disclaimers can be weaponized. You’ve probably seen the classic supplement disclaimer:
“This statement has not been evaluated by the Food and Drug Administration. This product
is not intended to diagnose, treat, cure, or prevent any disease.” It appears in tiny text
under a giant promise to boost immunity, rejuvenate your brain, or reverse aging.

For science-literate readers, the disclaimer is a warning. For everyone else, it’s just
legal wallpaper behind a glossy lifestyle fantasy.

The FTC, FDA, and the slow battle against health scams

Thankfully, the system is not completely asleep at the wheel. The FTC regularly brings
cases against companies that make blatantly deceptive health claims, from miracle weight
loss pills to devices that promised to prevent COVID-19 with almost no evidence behind
them. The bar, at least in theory, is that health claims must be supported by “competent
and reliable scientific evidence.”

In parallel, the FDA has updated its approach to homeopathic products and certain CAM
therapies, moving away from a hands-off stance toward prioritizing enforcement against
higher-risk products: those that make serious disease treatment claims, target vulnerable
populations, or contain potentially unsafe ingredients.

But enforcement is resource-intensive, and the marketplace of nonsense is endlessly
inventive. For every product the agencies manage to challenge, dozens more pop up with
slightly tweaked wording, fresh branding, or a different “natural” angle.

Why the current system still falls short

From a science-based medicine perspective, the core problem isn’t that regulators never act.
It’s that the default posture of the law is not “prove it works before you sell it,” but
“sell it until someone proves it’s harmful or misleading.” That asymmetry favors nonsense
by design.

Add in the communication challenge (agencies publishing careful, technical guidance while
marketers publish eye-catching TikToks) and you have a playing field that is tilted heavily
toward those willing to stretch the truth.

Medical freedom vs. consumer protection: the rhetorical trap

There is a powerful rhetorical move that defenders of pseudoscientific products use over and
over: they frame any attempt to tighten evidence standards as an attack on “health freedom.”
Smart regulation is recast as a tyrannical attempt to control what people put in their own
bodies.

This is emotionally compelling but logically confused. Science-based regulation does not
dictate which choices individuals make. It sets a reasonable floor for honesty:

  • If you claim to prevent disease, you should actually be able to prevent disease.
  • If you claim to cure a condition, your product should outperform placebo in good trials.
  • If you sell something with real risks, those risks should be clearly disclosed.

Requiring evidence is not an assault on freedom; it is a defense of people’s right not to
be misled when they are scared, sick, or desperate. Pseudoscience thrives precisely when
people are most vulnerable and least able to critically evaluate what they’re being sold.

What a science-based regulatory framework would actually look like

If we took science-based medicine seriously as the guiding principle, regulation would look
very different from the current mix of loopholes, special categories, and euphemistic
disclaimers. A more rational framework might include:

1. One evidence standard for all health claims

Whether a product is labeled as a “drug,” “supplement,” “natural remedy,” or “traditional
formula,” the same core rule would apply: if you make a specific health claim, you need
solid evidence to back it up. Not testimonials, not cherry-picked in vitro studies, but
credible clinical data that would satisfy a skeptical, independent expert.

2. Transparency by default

Labels and ads should clearly state what is known and what is not. If evidence is
weak, preliminary, or contradictory, that should be communicated in plain language. If no
clinical trials exist, that should be explicit rather than hidden behind vague phrases like
“traditionally used for.”

3. No special carve-outs for magical thinking

Laws should not codify concepts with no basis in reality, such as “balancing life energy”
or “correcting quantum vibrations” of organs. People are free to believe in those ideas on
their own time, but they shouldn’t be embedded into professional practice acts or used to
shield purveyors from normal evidence requirements.

4. Proactive protection for high-risk situations

Regulators should prioritize interventions that:

  • Claim to treat serious diseases (cancer, heart disease, diabetes, neurodegenerative conditions)
  • Target children, older adults, or other vulnerable populations
  • Encourage people to delay or abandon proven treatments

In those settings, even “low-risk” nonsense is not benign, because the main harm is not the
sugar pill itself; it’s the opportunity cost of lost time and neglected real care.

What individuals can do while the system catches up

While we wait for lawmakers to grow spines and regulatory frameworks to catch up with
reality, individual patients and families still have to navigate the maze. A few practical
filters can help:

  • Be suspicious of big promises. If something claims to cure everything
    from arthritis to Alzheimer’s, it probably does none of those things.
  • Look for real evidence, not just stories. Stories can be comforting, but
    they are not a substitute for controlled trials.
  • Ask who benefits if you believe this. Is someone selling you an expensive
    product or long-term treatment plan?
  • Talk to a science-literate clinician. A good doctor, pharmacist, or other
    evidence-based professional should be able to help you weigh risks and benefits honestly.

You don’t need to become an expert epidemiologist to protect yourself. You just need a
healthy skepticism, a willingness to ask hard questions, and an understanding that the
phrase “all-natural” has no magical powers.

Experiences from the front lines of regulating nonsense

The debate over the regulation of nonsense is not just an abstract legal puzzle. It plays
out in clinics, pharmacies, board meetings, and kitchen tables every day. Here are some
composite scenarios, drawn from how these issues typically appear in real life, that show
what’s at stake.

The oncologist and the “licensed” alternative clinic

Imagine an oncologist meeting a new patient with advanced cancer. The patient has spent the
last year visiting a licensed alternative clinic that offers vitamin infusions, “immune
boosting” injections, and high-priced detox plans. The clinic is state-licensed, the staff
wear white coats, and the waiting room has a wall of framed certificates.

From the patient’s perspective, this looked legitimate; after all, the practitioners had
official titles and regulatory approval. But the treatments, while emotionally comforting,
had no meaningful impact on the cancer. Valuable time was lost, and by the time evidence-based
care begins, the window for cure has narrowed.

The oncologist’s anger is not directed at the patient, but at a system that licensed a set
of practices whose foundational claims were never held to scientific standards. Regulation
in this case did not protect; it camouflaged the problem.

The pharmacist and the supplement aisle

In many large U.S. pharmacies, the supplement aisle looks almost indistinguishable from the
prescription counter. Bottles are color-coordinated, labels are polished, and health claims
hover right on the line between implication and explicit promise.

Pharmacists routinely field questions like, “This says it supports joint health; does it
actually work?” or “Is this homeopathic cold medicine okay for my toddler?” The awkward
reality is that many of these products are on the shelves not because they cleared a
rigorous evidence bar, but because they fall into regulatory categories that require far
less proof.

Some pharmacists print out summaries of the evidence, others steer people gently back toward
treatments with proven benefit, and some simply shrug under the weight of time pressure.
The mismatch between how “official” these products look and how little we actually know
about their effectiveness is a daily reminder of how partial our regulation really is.

The regulator trying to prioritize real harm

Regulators themselves are often acutely aware of these limitations. Picture a career staffer
at a federal agency reviewing complaints: there are miracle cancer cures advertised online,
weight loss teas linked to liver injury, and a company selling homeopathic remedies for
infants with vague claims about “supporting respiratory wellness.”

With limited staff and legal constraints, the regulator has to choose where to spend their
enforcement capital. They know that going after one particularly egregious scam may send an
important signal, but they also know that thousands of smaller nonsense claims will persist
unchallenged simply because the law wasn’t written to demand proof before such
products flood the market.

It is frustrating work: trying to use a leaky bucket to bail out a boat that someone
keeps drilling new holes into.

The patient caught between hope and skepticism

Finally, there is the person at the center of all this: the patient. Maybe they’ve just
received a new diagnosis. Maybe conventional treatment helped but didn’t fully resolve
their symptoms. Maybe they feel dismissed or rushed by a healthcare system that is
overstretched and under-empathetic.

Then they encounter a regulated alternative practitioner who spends an hour listening,
nodding, and offering a tidy explanation for every symptom based on energy, toxins,
imbalances, or a handful of lab tests of dubious value. It feels human and attentive in a
way that many mainstream encounters do not.

From this vantage point, the fact that the practitioner is licensed, or the supplement is
sold in a chain pharmacy, seals the deal. Regulation has done what good branding alone
could never do: it has conferred a sense of institutional trust. The patient is not being
foolish; they are responding to cues the system itself created.

This is why science-based medicine argues that the only honest way forward is to align our
regulatory signals with reality. Licensure, shelf placement, and advertising rules should
all point in the same direction: toward interventions that are grounded in good evidence
and away from those that are mostly storytelling wrapped in legal disclaimers.

Conclusion: regulating for reality, not rhetoric

The regulation of nonsense is one of the most quietly consequential issues in modern
healthcare. It shapes what patients see in pharmacies, what they hear in clinics, and how
they interpret the difference between a science-based treatment and a promising story with
no data behind it.

If we license pseudoscience as if it were medicine, the public will, quite reasonably,
treat it as medicine. If we give health products a free pass on evidence as long as they
wear the right marketing language, nonsense will continue to flourish in the gaps between
law and science.

A genuinely science-based regulatory system would be more boring, more demanding, and far
less friendly to magical thinking. It would also be more honest. And when people’s health,
savings, and time are on the line, honesty is the least the system can offer.

The post The regulation of nonsense appeared first on Everyday Software, Everyday Joy.

The case of John Lykoudis and peptic ulcer disease revisited: Crank or visionary?
https://business-service.2software.net/the-case-of-john-lykoudis-and-peptic-ulcer-disease-revisited-crank-or-visionary/
Fri, 06 Feb 2026 20:15:08 +0000

Decades before the Nobel Prize for the discovery of Helicobacter pylori, a small-town Greek doctor named John Lykoudis was treating ulcer patients with antibiotics and insisting that their disease was infectious. Was he a crank pushing an unproven remedy, a visionary whose insight arrived too early, or something in between? This in-depth, science-based look at peptic ulcer history explores how Lykoudis’s story intersects with modern evidence, what it teaches us about medical mavericks, and why rigorous trials, not lone heroes, ultimately transformed ulcer care.



In 2005, Barry Marshall and Robin Warren took the stage in Stockholm to accept the Nobel Prize for proving that a spiral-shaped bacterium,
Helicobacter pylori, is a major cause of peptic ulcer disease. For many people, that’s where the story of infection and ulcers begins.
But decades earlier, a small-town Greek general practitioner named John Lykoudis was already treating ulcer-like symptoms with antibiotics,
convinced they were caused by an infection. He had no clinical trial network, no endoscopy, and no Twitter account, just a firm belief,
a homemade drug cocktail, and thousands of patients.

Today, Lykoudis is an irresistible Rorschach test for how we think about mavericks in medicine. Was he a visionary whose insight was tragically ignored,
or a crank who got lucky with an unproven treatment? As usual in science-based medicine, the truth is more complicated, and much more interesting, than either extreme.

Peptic ulcer disease before H. pylori: a tale of acid, stress, and surgery

To understand why Lykoudis struggled to convince anyone, you have to remember what peptic ulcer disease (PUD) looked like in the mid-20th century.
At the time, ulcers were largely blamed on stress, smoking, diet, and “too much stomach acid.”
The stomach was considered too acidic for bacteria to survive, so the idea of an infectious cause seemed almost silly.

Treatment strategies followed this acid-centric worldview. Physicians prescribed antacids, bland diets, and later H2 blockers like cimetidine.
For severe or recurrent ulcers, surgeons stepped in with partial gastrectomies or vagotomy procedures to reduce acid production.
These approaches helped symptoms but often didn’t cure the disease. Relapses were common, and many patients cycled through years of pain,
hospitalizations, and major surgery.

Against that background, the idea that ulcers might be caused by an infection (and thus cured with a course of antibiotics) was
not just unconventional. It was fighting both dogma and the limits of technology, because reliable endoscopy and biopsies
weren’t yet part of routine practice.

Who was John Lykoudis?

John Lykoudis was a general practitioner in the Greek town of Missolonghi, born in 1910 and later elected mayor of the town in the 1950s.
He wasn’t a research professor with a lab; he was a working doctor seeing everyday patients, exactly the kind of clinician who
notices patterns before they’re written up in prestigious journals.

In 1958, plagued by his own ulcer-like symptoms, Lykoudis decided to treat himself with antibiotics. When he improved,
he drew the bold conclusion that peptic ulcer disease and gastritis had an infectious cause.
He then began prescribing antibiotic mixtures to his patients and, over time, reported treating tens of thousands of people
with what he believed were excellent results.

Lykoudis developed a proprietary oral formulation he called Elgaco, a combination of antibiotics and other agents,
which he patented in Greece in the early 1960s.
Patients flocked to him not because of randomized controlled trials, but because word of mouth suggested they actually felt better.

What did Lykoudis actually do?

A homemade antibiotic approach

Most of what we know about Lykoudis’s treatment comes from retrospective descriptions and a few historical analyses
rather than detailed trial data. Articles in The Lancet in 1999 and later historical commentaries describe
his use of antibiotic-based mixtures, sometimes including chlortetracycline, combined with other agents such as bismuth and antacids.

According to these reports, Lykoudis estimated he treated more than 30,000 patients, claiming rapid symptom relief and very low relapse rates.
However, the documentation is sparse by modern standards. There were no controlled, blinded comparisons, no systematic follow-up with endoscopy,
and no microbiologic confirmation, because at the time no one even knew which bacterium, if any, was involved.

The backlash: fines, skepticism, and closed doors

If this sounds like the perfect hero’s arc for an underappreciated genius, reality quickly complicates the story.
Lykoudis faced serious resistance from Greek medical authorities. He was investigated, fined by a disciplinary committee,
and even indicted in court for using a therapy that wasn’t officially approved.

When he tried to publish his findings in major journals, including the Journal of the American Medical Association,
his submissions were rejected. Pharmaceutical companies weren’t interested in developing his formulation.
In the end, he died in 1980, never having convinced the mainstream medical community that his infectious hypothesis was correct.

From the vantage point of 2025, this almost begs for a Netflix miniseries titled
“The Doctor Who Was Right Too Soon”. But science-based medicine has to resist tidy narratives and look at the actual evidence.

Enter Marshall, Warren, and Helicobacter pylori

In the late 1970s and early 1980s, Australian pathologist Robin Warren and gastroenterologist Barry Marshall began documenting
spiral bacteria in gastric biopsies from patients with gastritis and duodenal ulcers.
They cultured the organism, later named Helicobacter pylori, and demonstrated that eradicating it with antibiotics
dramatically reduced ulcer recurrence.

Marshall famously drank a culture of H. pylori, developed gastritis, and then cured himself with antibiotics,
dramatically strengthening the causal case.
Over the next decade, multiple studies worldwide showed that triple therapy with antibiotics and bismuth, or later combinations
of a proton pump inhibitor (PPI) plus two antibiotics, could actually cure many peptic ulcers by eradicating the underlying infection.

By the 1990s, major gastroenterology societies recommended testing for and treating H. pylori in patients with ulcers.
In 2005, Marshall and Warren received the Nobel Prize in Physiology or Medicine for this work, officially ending
the acid-only dogma and cementing infection as the key driver for most peptic ulcers.

Crank or visionary? Reassessing Lykoudis through a science-based lens

The core question raised by Science-Based Medicine and other commentators is deceptively simple:
given what we know now about H. pylori, was John Lykoudis a misunderstood pioneer or just a doctor with a lucky hunch?

The case for “visionary”

  • He recognized an infectious pattern before it was fashionable.
    Lykoudis concluded that ulcers were infectious decades before Warren and Marshall, at a time when “bacteria can’t survive in the stomach”
    was practically medical gospel.
  • He used antibiotics and bismuth, which we now know are effective.
    Modern ulcer therapy often involves two antibiotics plus a PPI or bismuth-based triple therapy, closely echoing the general strategy Lykoudis pursued.
  • His patients seemed to improve.
    Historical accounts describe many patients experiencing rapid symptom relief and fewer recurrences compared with the standard acid-suppressive care of the era,
    although the data are mostly anecdotal.

From this vantage point, it’s tempting to paint him as the “original ulcer infection hero,” unfairly ignored by a stuffy establishment.

The case for “crank” (or at least “not quite there”)

  • No rigorous trials.
    Lykoudis never produced randomized, blinded clinical trials to compare his therapy to standard treatment. Without controls,
    it’s impossible to know how much of his apparent success was due to natural healing, placebo effects, or regression to the mean.
  • No identified pathogen.
    He believed ulcers were infectious but never isolated, visualized, or characterized the causative organism. Marshall and Warren’s work
    wasn’t just about using antibiotics; it was about proving causality through microbiology and carefully designed studies.
  • Opaque formulation and poor documentation.
    His proprietary mixture was not fully characterized or systematically studied, making it hard to evaluate or replicate in modern terms.
  • Regulatory friction wasn’t purely villainous.
    Authorities have a duty to protect patients from untested treatments. Without solid evidence, skepticism was appropriate,
    even if, in hindsight, some decisions look heavy-handed.

In other words, Lykoudis got the big idea partly right (ulcers were often infectious and susceptible to antibiotic therapy), but
he didn’t provide the kind of evidence that modern science-based medicine requires to change practice.

The Science-Based Medicine takeaway: Plausibility is not enough

The Lykoudis story fits neatly into one of Science-Based Medicine’s favorite themes: the need to balance scientific plausibility with solid evidence.
On one hand, the medical community can be slow to accept new ideas, particularly when they challenge established dogma.
On the other hand, history is full of mavericks who were wrong, and whose ideas could have harmed patients if adopted uncritically.

Lykoudis demonstrates that you can:

  • Have a partially correct hypothesis (infection plays a key role),
  • Use treatments that are closer to right than the standard of care,
  • And still not meet the evidentiary bar needed to transform medicine.

He is a useful cautionary tale for both sides. For skeptics, he reminds us not to dismiss new ideas just because they sound odd or come from outside major institutions.
For enthusiasts of “maverick geniuses,” he’s a reminder that anecdotes, personal conviction, and even plausible mechanisms are not enough.
Without rigorous testing, we don’t actually know how good, or how dangerous, a treatment is.

Modern peptic ulcer care: What actually works today

Today, peptic ulcer disease is usually approached with a straightforward science-based strategy:

  • Test for H. pylori.
    Non-invasive breath tests, stool antigen tests, or endoscopic biopsies can identify infection.
  • Eradicate the infection if present.
    Standard regimens use combinations of antibiotics plus a proton pump inhibitor (PPI), sometimes with bismuth.
    The exact regimen depends on local antibiotic resistance patterns and guideline recommendations.
  • Address other causes.
    Not all ulcers are caused by H. pylori. Nonsteroidal anti-inflammatory drugs (NSAIDs), certain medications,
    and rare conditions like Zollinger–Ellison syndrome also play a role. These require different management strategies.
  • Modify risk factors such as smoking and heavy alcohol use, and manage comorbidities.

The result is that many patients who once faced recurrent ulcers and major surgery can now be effectively cured with a short course of
evidence-based therapy. That outcome owes more to the detailed, painstaking work of Marshall, Warren, and many others than to
any single “lone genius” narrative.

As always, anyone with ulcer symptoms (such as persistent upper abdominal pain, black or bloody stools, unexplained weight loss,
or vomiting) should seek prompt medical evaluation rather than experimenting with antibiotics or relying on historical stories.
Only a qualified clinician can diagnose ulcers and recommend appropriate testing and treatment.

So, was John Lykoudis a crank or a visionary?

The most accurate answer may be: he was a little bit of both, and that’s exactly why his story is so valuable.

Lykoudis was visionary in recognizing an infectious component to peptic ulcer disease long before it became accepted,
and in intuitively adopting antibiotic-based therapy. He saw patterns in his patients, took a risk on a new approach,
and probably did deliver real benefit compared with the standard care of his day.

At the same time, judged by modern standards, he behaved at least somewhat like a crank:
holding a strong belief based largely on personal conviction and uncontrolled observations, shielding a proprietary mixture from full scrutiny,
and failing to produce the systematic evidence needed to convince skeptical peers.

The case of John Lykoudis doesn’t justify shipping untested therapies directly to patients in the name of “disrupting medicine.”
Instead, it highlights why we need robust mechanisms to:

  • Encourage reasonable, biologically plausible innovation,
  • Test new ideas rigorously, and
  • Quickly scale treatments that prove safe and effective, no matter where they originated.

Science-based medicine is at its best when it can learn from both the mavericks and the meticulous trialists,
weaving insight and evidence together. Lykoudis’s story reminds us that being early is not enough;
to truly change care, you must also be convincingly right.

Reflections and real-world experiences: What Lykoudis teaches us today

Although few modern clinicians will ever prescribe a mysterious homemade ulcer mixture,
many recognize pieces of the Lykoudis story in their own day-to-day experience.
Medicine is full of “near misses” where someone notices something important but doesn’t quite manage to turn it into accepted practice.

When pattern recognition meets the evidence wall

Ask around in hospital break rooms and you’ll hear informal stories that sound a little bit like Lykoudis.
A primary care doctor notices that patients with a particular symptom cluster seem to respond better to one medication than another.
An infectious disease specialist sees repeated odd culture results that don’t fit the textbook.
A surgeon senses that certain patients bounce back faster with a tweak to post-operative care.

These observations are the raw material of scientific progress, but only if they make it over the “evidence wall.”
To cross that wall, you need protocols, ethics approvals, funding, statisticians, and time.
Lykoudis never made that leap. Many modern clinicians don’t either, not because they’re cranks,
but because the system for turning bedside insight into robust research is still difficult and slow.

Teaching Lykoudis as a case study

In some medical schools and evidence-based medicine courses, Lykoudis is now used as a teaching example.
Students are asked to read the historical accounts and then answer questions like:

  • What kind of study could Lykoudis have realistically run with the tools of his time?
  • How might his local medical community have evaluated his claims more fairly?
  • What safeguards would you insist on before using his therapy widely?

These exercises highlight the tension between “listen to the data” and “don’t be reckless with patients.”
They also underline that ethical skepticism is not the same as closed-mindedness.
Critically appraising a bold claim, even one that turns out to be partly correct, is a feature of good medicine, not a bug.

Modern parallels: separating the Lykoudises from the homeopaths

Every time a new “miracle cure” hits social media (whether it’s an exotic supplement, a detox protocol, or an energy-based device),
someone inevitably says, “They laughed at people who said bacteria cause ulcers too!”
Lykoudis is sometimes invoked in this context, as if his story proves that any ridiculed idea is destined for vindication.

But that logic cuts out the most important detail: Lykoudis’s hypothesis, while under-documented, was at least biologically plausible.
Infectious explanations for ulcers had been floated before; bacteria had been seen in stomach tissue; and antibiotics are known to kill bacteria.
By contrast, many modern fringe therapies, such as homeopathy (which posits therapeutic effects from ultra-dilutions where no molecules remain), would require
overturning large swaths of physics, chemistry, and biology to be true.

Clinicians and skeptics can use the Lykoudis case as a calibration tool:
it encourages a nuanced approach that asks, “Is this idea compatible with what we already know about biology?”
and “What evidence would convince us either way?” rather than automatically cheering every maverick or reflexively rejecting them all.

For patients: what this history means in real life

For people living with ulcer symptoms, the main takeaway is reassuring: modern care is vastly better than it was in Lykoudis’s day.
We don’t have to rely on heroic individual experiments; we have decades of clinical trials, clear diagnostic pathways,
and consensus guidelines built around what actually works.

At the same time, the story is a reminder to bring curiosity, and questions, to your medical care.
If your treatment plan doesn’t make sense to you, it is absolutely appropriate to ask:
“What’s the evidence for this approach? Are there alternatives? How do you decide what to recommend?”
Good clinicians welcome those questions; they’re a sign that patients are engaging with the same evidence-based mindset that ultimately validated
the infectious theory of ulcers.

In the end, John Lykoudis’s legacy is less about a specific pill bottle and more about the messy, human path from hunch to hypothesis to proof.
He reminds us that getting the right answer in medicine usually requires both creative thinking and rigorous testing,
and that leaving out either piece can turn a potential visionary into a footnote instead of a revolution.

The post The case of John Lykoudis and peptic ulcer disease revisited: Crank or visionary? appeared first on Everyday Software, Everyday Joy.

Dummy Medicine, Dummy Doctors, and a Dummy Degree, Part 2.0: Harvard Medical School and the Curious Case of Ted Kaptchuk, OMD
https://business-service.2software.net/dummy-medicine-dummy-doctors-and-a-dummy-degree-part-2-0-harvard-medical-school-and-the-curious-case-of-ted-kaptchuk-omd/
Wed, 04 Feb 2026 19:10:10 +0000

Placebos can change how patients feel but rarely alter disease. We unpack Harvard’s Program in Placebo Studies, Ted Kaptchuk’s “OMD” controversy, and Science-Based Medicine’s critique. See what the evidence actually says about open-label placebos, asthma, IBS, and the ethics of “dummy medicine.”


Introduction: If you’ve followed the long-running saga of “dummy medicine” (placebos), “dummy doctors” (credential confusion), and the occasional “dummy degree,” you know this story is equal parts serious science and academic theater. At the center is Ted J. Kaptchuk, OMD, a professor at Harvard Medical School and director of the Harvard-wide Program in Placebo Studies (PiPS), whose work on placebo effects has shaped how clinicians and skeptics talk about care, context, and patient-reported outcomes. The debate is lively: are placebos clever illusions that help people feel better, or ethical landmines that risk replacing hard evidence with warm fuzzies? Today, we revisit the high points, clear up frequent misconceptions, and examine why the “curious case” still matters for science-based medicine.

Who Is Ted Kaptchukand Why Is He Controversial?

Ted Kaptchuk is a Harvard Medical School professor known internationally for placebo research and for leading PiPS at Beth Israel Deaconess Medical Center. That’s not controversial. What riles critics is the path he took to the ivory tower (and how that journey gets framed): the “OMD” credential (Doctor of Oriental Medicine), a long history in East Asian medicine scholarship, and a prolific record on placebo mechanisms and ethics. Harvard’s official bio lists his professorship and leadership of PiPS, underscoring his mainstream standing within academic medicine.

Science-Based Medicine (SBM), a hub for physician-skeptics, chronicled Kaptchuk’s background in a multipart series beginning in 2011, probing his credentialing and the claims built around it. The core critique: top-tier institutions shouldn’t elevate ambiguous credentials or blur lines between empirically supported medicine and therapies whose effects rarely exceed placebo. The title you’re reading nods to that series’ Part 2.0, which dissected Kaptchuk’s role at Harvard and the optics of his OMD.

Placebos 101: What We Knowand What We Don’t

Placebos aren’t magic pills; they’re context effects. The encounter with a clinician, the ritual of treatment, and patient expectations can change how symptoms are perceived. A landmark Cochrane review led by Hróbjartsson and Gøtzsche found that, in general, placebos don’t produce large effects on objective or binary outcomes, though they can yield small benefits for subjective, continuous outcomes such as pain. Translation: placebos can influence how people feel, but they rarely move hard physiological endpoints.

Modern overviews echo this nuance: placebo effects are real psychobiological events rooted in the therapeutic context, not mere “nothing.” But their clinical utility has limits and ethical boundaries, especially if they displace effective treatment for disease processes that demand active therapy.

The Harvard Experiments: From Deception to “Open-Label” Placebo

Open-Label Placebo in IBS

One of Kaptchuk’s most quoted studies is the 2010 open-label placebo (OLP) randomized trial in irritable bowel syndrome. Participants knowingly took placebo pills and still reported symptom improvement versus a no-treatment control. The finding electrified headlines: if you tell patients “this is a sugar pill,” and they still feel better, what exactly is at workconditioning, expectation, the clinical ritual, or something else? These data helped seed a new research program around “honest placebos” as potential adjuncts for symptom-driven conditions.

Asthma: When Subjective and Objective Diverge

A 2011 NEJM study comparing albuterol, placebo inhalers, sham acupuncture, and no intervention found a classic split: objective lung function (FEV₁) improved with albuterol but not with placebo or sham; yet subjective improvement ratings were similar for albuterol and the two placebo arms, and all three beat “no intervention.” This is the placebo paradox in high resolution: patients can feel better while physiology stays the same, a reminder that relief isn’t always repair.

The Credential Question: What Does “OMD” Mean Here?

Credentials carry weight, especially at Harvard. Kaptchuk’s use of “OMD” has been scrutinized by skeptics who argue it isn’t comparable to an MD or PhD in biomedical science. A frequently cited account (via SBM) points to official correspondence from Macau authorities stating the named institute wasn’t a degree-granting university, highlighting the fog around the credential’s status. Irrespective of titles, Kaptchuk’s Harvard profile reflects that he was appointed and promoted on the strength of his scholarship, not on the basis of a U.S. medical license. The controversy, however, raises important institutional questions: how should elite centers weigh atypical backgrounds when the scholarship itself is influential but sits next to “integrative” narratives that can be oversold?

Media, Mythmaking, and the “Power of Nothing”

High-end journalism has profiled Kaptchuk’s work, sometimes with a romantic sheen. Michael Specter’s “The Power of Nothing” in The New Yorker captured the allure of placebo science: an artful clinical ritual that modulates perception and, occasionally, biomarkers. Letters to the editor and commentary quickly pushed back, stressing that placebos shouldn’t be mistaken for curative therapy for diseases like cancer or atherosclerosis. The lesson for communicators is simple: hold two truths at once. Placebos can meaningfully ease subjective suffering, but they are not substitutes for disease-modifying treatment.

What Science-Based Medicine Gets Right

The SBM critique lands squarely on several points. First, placebo responses shine with subjective outcomes (pain, distress, nausea), but typically don’t budge objective pathology. Second, institutions must be vigilant about credential inflation and the messaging that flows from it; when elite brands platform ambiguous degrees, the public can confuse charisma with credibility. Third, the ethics matter: deception is off the table, and even “honest” placebos must not crowd out proven care. In short, SBM’s caution sign is not anti-compassion; it’s pro-evidence, insisting that warm bedside manner and rigorous therapeutics are complements, not competitors.

What Kaptchuk’s Program Contributed

Even critics concede that the Harvard-wide Program in Placebo Studies helped formalize a research agenda on the “context of care”: how interaction, meaning-making, and ritual shape perceived outcomes. Harvard’s own coverage underscored how Kaptchuk’s group teased apart components of placebo effects and documented nocebo side effects in trials where participants were primed with warnings. These insights are gifts to mainstream clinicians, reminding us that tone, time, trust, and transparency affect patient experience, whether or not the intervention is pharmacologically potent.

Ethics: The Line Between Caring and Misleading

Ethical north star: alleviate suffering without compromising truth or delaying effective care. The “open-label” pathway tries to square that circle: no deception, clear disclosure, and use mainly for symptom relief in conditions where active disease modification isn’t at risk. The literature, including NEJM perspective work, calls for rigorous guardrails: don’t oversell, don’t replace indicated therapies, and keep informed consent central.

Key Takeaways for Clinicians and Skeptics

  • Placebos are context, not cure: expect modest benefits on subjective outcomes; don’t expect changes in objective disease measures.
  • Open-label placebos can help select patients with symptom-dominant conditions like IBS, provided consent is explicit and standard care remains intact.
  • Messaging matters: media can drift from nuance to narrative; keep claims tightly tethered to data.
  • Credentials and credibility are separable: institutions must ensure that public-facing titles don’t mislead about expertise or licensure.
  • Compassion enhances, it doesn’t replace, efficacy: warm, attentive care boosts patient experience alongside evidence-based treatment.

FAQ: The Curious Case, in Plain English

“Do placebos really work?”

They can change how you feel (often a little, sometimes a lot), especially for pain and similar symptoms. They rarely change the underlying disease process.

“Is it ethical to use them?”

Deceptive placebos are ethically fraught. “Honest” (open-label) placebos are being studied as add-ons, not replacements, and require careful consent and boundaries.

“What’s the deal with ‘OMD’?”

It’s a non-MD credential from the world of East Asian medicine. Skeptics argue that it can be misleading when used in mainstream academic settings. The controversy is about optics and standards in elite institutions.

Conclusion

Placebo researchespecially the open-label trackhas enriched medicine’s understanding of the therapeutic encounter, and Ted Kaptchuk’s group deserves credit for making “context” a measurable variable. At the same time, Science-Based Medicine’s scrutiny is healthy: medicine must keep its compass oriented toward outcomes that matter, hierarchies of evidence, and clarity about credentials. The best future is not “dummy medicine” displacing real therapy; it’s real therapy delivered in humane, expectation-sensitive ways that maximize relief without sacrificing truth.



Experiences and Lessons from Covering “Dummy Medicine”

Writing about placebos is like narrating a magician’s act while refusing to use smoke and mirrors. The first lesson is how easily people conflate “feeling better” with “getting better.” Patients (and sometimes journalists) love a tidy narrative: the acupuncture felt soothing, the sugar pill reduced nausea, the sham inhaler calmed breathing. Yet the data keep warning us that the body’s dashboard lights (spirometry, tumor burden, inflammatory markers) often don’t budge. The experience taught me to pair every human story with a hard endpoint. When the two disagree, optimism yields to evidence.

Another lesson is how “ritual” can be rehabilitated without slipping into pseudoscience. In clinics that emphasize time, touch, and explanation, patients often report less pain or anxiety. That’s not proof of energy meridians; it’s proof that empathy has measurable effects. The trick is to deliver warmth without theatrics: no white-coat mysticism, just communication skills and predictable follow-up. When I interview clinicians who ace this, their secret is banal and beautiful: ask, listen, and don’t rush.

Credentials were the third wake-up call. Titles are shortcuts our brains use to decide who’s worth trusting. But shortcuts can mislead. The “OMD” debate showed me how institutions must spell out what a credential does, and doesn’t, mean. Was the degree conferred by an accredited university? Does it imply licensure or clinical authority in biomedicine? Silence on these points lets audiences assume equivalence with MD or PhD when none exists. Exploring this story made me more explicit about degrees in every profile I write.

The fourth lesson: open-label placebos deserve curiosity but also containment. Patients appreciate honesty, and some are willing to try a transparent sugar pill as an add-on for symptoms. But in real clinics, the risk is scope creep. An “honest placebo” for IBS discomfort is one thing; letting a placebo stand in for an antibiotic or a bronchodilator is another. My rule when covering OLP trials is to ask two questions: What would standard of care be without the placebo? and Were objective outcomes tracked? If either answer is fuzzy, the story needs more reporting, or a tighter conclusion.

Finally, I learned to recognize how media frames influence public expectations. A feature titled “The Power of Nothing” is catnip; it suggests we’ve discovered a hack for suffering. But headlines can blur boundary lines that researchers spend entire careers trying to draw. When I talk to trialists, they’re careful: placebos can help patients feel better; they do not “treat” cancer, reverse asthma pathophysiology, or unblock arteries. As a writer, matching that precision is part of the job.

So, what’s the takeaway for readers? Celebrate the parts of care that make you feel heard; they matter. Demand treatments that change outcomes when outcomes can be changed; your health deserves it. And when an authority leans on an obscure credential, ask what it certifies. In the end, the best medicine isn’t dummy or dour; it’s humane, honest, and anchored to evidence.

A Review of “In Covid’s Wake”: According to Laptop Class Professors, the Heroes of the Pandemic Were Laptop Class Professors
https://business-service.2software.net/a-review-of-in-covids-wake-according-to-laptop-class-professors-the-heroes-of-the-pandemic-were-laptop-class-professors/
Wed, 04 Feb 2026 00:40:09 +0000

In In Covid’s Wake: How Our Politics Failed Us, Princeton professors Stephen Macedo and Frances Lee argue that elites overreacted to COVID and that dissenting academics were unfairly silenced. In his Science-Based Medicine review, neurologist Jonathan Howard counters that the book downplays evidence that restrictions saved lives and recasts laptop class professors and Great Barrington Declaration allies as tragic heroes while sidelining frontline workers and patients. This article unpacks that clash, examining what the science actually says about lockdowns and school closures, how the “laptop class” narrative distorts who really carried the risks of the pandemic, and why it matters whose perspective dominates our post-COVID story.


Every big historical crisis eventually gets its bookshelf: sober policy autopsies, emotional memoirs, and at least one volume insisting that the real victims were… the authors’ friends.
In Covid’s Wake: How Our Politics Failed Us by Princeton political scientists Stephen Macedo and Frances Lee aims to explain how institutions bungled the pandemic. But in the Science-Based Medicine review titled “According to Laptop Class Professors, the Heroes of the Pandemic Were Laptop Class Professors,” neurologist Jonathan Howard argues that the book quietly recasts privileged academics and Great Barrington Declaration signatories as misunderstood heroes, while pushing aside the people who actually faced the virus in hospitals, nursing homes, and crowded buses.

This review article takes a closer look at Howard’s critique, the book’s core arguments, and the broader “laptop class” narrative that grew up around COVID-19. We’ll unpack what the authors get right about harms from restrictions, where they drift away from the scientific evidence, and why it matters who we cast as “main characters” in the story of the pandemic.

What In Covid’s Wake Tries to Do

Macedo and Lee’s book sits in a growing genre that treats the pandemic as a political failure above all else. Their central thesis is that American institutions and elites, especially liberal ones, overreacted to COVID with overly stringent restrictions, failed to weigh trade-offs, and shut down legitimate dissent.

The authors frame In Covid’s Wake as a kind of postmortem for liberal governance: why did we get prolonged shutdowns, school closures, and mandates that, in their telling, weren’t clearly justified by data? They focus strongly on dissenting academics, especially proponents of the Great Barrington Declaration (GBD), and argue that these figures raised important questions about harms from lockdowns but were unfairly marginalized.

On its face, a serious examination of policy failures and unintended consequences is absolutely worthwhile. The long shadow of school closures, delayed medical care, and mental health strain is real, and it deserves rigorous scrutiny. The problem, according to Howard’s review, is that Macedo and Lee are far less rigorous with the scientific evidence behind COVID mitigation than they are with the hurt feelings of anti-mitigation intellectuals.

The “Laptop Class” and the Pandemic

First, let’s decode the key phrase: “laptop class.” During COVID, commentators began using it to describe relatively affluent professionals (lawyers, professors, consultants, tech workers) who could keep their income flowing while working safely from home on a laptop.

The term is often used pejoratively, contrasting their safety and comfort with “essential workers”: grocery clerks, bus drivers, factory workers, aides in nursing homes, and hospital staff who faced daily exposure and could not simply move to Zoom.

In Howard’s telling, In Covid’s Wake leans hard into this laptop-class framing, but in a surprising way. Macedo and Lee argue that laptop class critics of lockdowns and school closures, like the GBD authors and aligned doctors, were bravely speaking out on behalf of the working class. Yet, as Howard points out, many of these figures enjoyed intense media visibility, elite institutional backing, and the ability to “log off” from the consequences of their ideas, in stark contrast to the people staffing COVID wards.

What the Science Actually Says About COVID Restrictions

A major flashpoint in the debate is whether “stringent COVID-19 restrictions were associated with substantial decreases in excess deaths.” According to Howard, Macedo and Lee have repeatedly claimed in interviews that such studies “don’t exist.”

That’s simply not accurate. Multiple modeling and observational studies, summarized in journals like JAMA Health Forum and other peer-reviewed venues, have found that combinations of nonpharmaceutical interventions (NPIs) such as masking, limits on gatherings, and temporary closures were associated with lower excess mortality and reduced transmission in many settings.

School closures are a harder case. Systematic reviews suggest that shutting schools may reduce transmission and community deaths, but the benefits are modest and context-dependent, while the harms to learning, mental health, and physical health (including increased anxiety and obesity) are substantial.

In other words, the evidence paints a picture of messy trade-offs: not “restrictions did nothing,” but also not “restrictions were pure net benefit in every form.” Howard’s criticism is that Macedo and Lee downplay or ignore the robust evidence that serious mitigations saved lives, in order to amplify a narrative in which dissenting laptop-class intellectuals were silenced truth-tellers rather than deeply controversial actors whose proposals carried their own risks.

The Great Barrington Declaration Under the Microscope

From “Focused Protection” to Mass Infection

A big chunk of both the book and the SBM review revolves around the Great Barrington Declaration, a 2020 manifesto authored by three academic scientists advocating “focused protection.” In practice, they argued that low-risk people should return to normal life and acquire natural infection, while high-risk people would somehow be shielded.

Howard summarizes the track record of the GBD’s core claims: that society could sharply separate “vulnerable” and “not vulnerable” groups, that herd immunity was just a few months away if we allowed the virus to sweep through, that children didn’t meaningfully spread COVID, and that reinfections were rare. He notes that real-world data and time have decisively falsified these assumptions, yet Macedo and Lee treat the GBD authors as fundamentally right in spirit, even if they “got some things wrong.”

The SBM review does not mince words here. It argues that the GBD wasn’t just an imperfect early document; it was a sustained campaign of misrepresentation that downplayed death, long-term disability, and overwhelmed hospitals. The critique is especially pointed when it comes to the dissonance between lofty rhetoric about protecting the vulnerable and the actual outcomes in places where GBD-style policies or messaging were influential, such as Florida’s high nursing home and staff mortality and repeated school disruptions despite a rhetoric of keeping schools open.

Turning Disinformation into Victimhood

Where Howard seems most astonished is in Macedo and Lee’s moral framing. In their reading, the truly grievous injustices of the pandemic were reputational harms to GBD-aligned doctors (online criticism, removal of social media posts, and being called “fringe”) rather than the very real harm caused by their inaccurate claims about vaccines, natural infection, and the prospects of herd immunity.

Put bluntly, Howard believes the book flips the script: instead of focusing on patients misled by anti-vaccine or “let it rip” messaging, the authors center the feelings and careers of those spreading the messaging. That’s the heart of his subtitle’s punchline: the heroes of the pandemic, in this telling, were laptop class professors and their allies.

Who Actually Showed Up in the Pandemic?

The “laptop class professor as hero” narrative grates particularly hard if you spent any time following reports from COVID wards. Healthcare workers, aides in long-term care facilities, respiratory therapists, and even non-medical essential workers like delivery drivers and grocery clerks bore huge risks with limited protection early on. Many got sick; many died.

Howard’s review repeatedly contrasts this reality with the relative safety of laptop-class pundits who recorded YouTube videos, did media tours, and argued that broad infection of healthy people was not just acceptable but morally preferable. He notes that many of these figures never treated COVID patients but were confident enough to portray clinicians as panicky, cowardly, or “sheep” for supporting masks and vaccines.

In that light, a book that spends pages lamenting that such pundits suffered social media backlash, while barely acknowledging the clinicians and patients who suffered tangible harm, feels profoundly misaligned. The review argues that In Covid’s Wake reflects a kind of elite solipsism: the most important tragedies are those happening in your own inbox or conference invitations.

What the Book Gets Right About Pandemic Harms

To be fair, even a harsh review like Howard’s concedes that Macedo and Lee are not wrong to highlight serious harms from some COVID policies. The educational, psychological, and social costs of prolonged school closures are no longer in serious dispute. Numerous overviews and inquiries, from academic reviews to national COVID investigations, now document lost learning, increased anxiety, worsening obesity, increased exposure to domestic harm, and the fraying of the “fabric of childhood.”

Similarly, disruptions in routine medical care, economic precarity, and the strain on single parents and low-income families are real and lasting. Any honest accounting of the pandemic must grapple with how to better balance infection control with these long-term costs next time.

Where Howard and other critics differ from Macedo and Lee is not in observing those harms, but in how they assign causality and moral weight. For the laptop-class narrative, the primary villains are overcautious public health leaders and censorship-happy platforms; for science-based critics, the story also has villains on the other side: figures who trivialized COVID, discouraged vaccination, and promoted unrealistic strategies based on wishful thinking rather than data.

Where the Laptop Class Narrative Falls Short

The big weakness of the “laptop class professors as heroes” frame is that it treats a small, privileged subset of pandemic commentators as the central moral actors. The story becomes one of brave, embattled contrarians versus rigid institutional elites, instead of a far more complicated clash of imperfect policies, evolving data, human fear, and political polarization.

That simplification matters because it encourages readers to see every future crisis through the same lens: if experts caution against risky behavior, they must be self-interested elites; if an academic claims to speak for “the workers,” that claim must be virtuous. Reality is messier. Many of the loudest laptop-class critics of mitigation never had to walk through an overflowing ICU or talk to a family whose unvaccinated relative regretted their choices as they struggled to breathe.

Howard’s review is a reminder that experience with the virus itself (in hospitals, in nursing homes, in communities that saw repeated waves) is a crucial form of evidence. It does not replace randomized trials or statistical models, but it certainly should not be treated as irrelevant. When books like In Covid’s Wake treat GBD authors and allied pundits as persecuted moral visionaries while barely acknowledging the damage their recommendations might have caused, they risk rewriting history in favor of the people least exposed to that damage.

How to Read Pandemic “Reckonings” Critically

For readers trying to make sense of the growing pile of COVID retrospectives, Howard’s review implicitly offers a checklist:

  • Follow the evidence trail. Does the author fairly represent the body of scientific literature on NPIs, vaccines, and school closures, or cherry-pick studies that support their narrative?
  • Watch who is centered. Are the main characters policymakers, pundits, and professors, or the people whose lives and health were directly on the line?
  • Separate criticism from victimhood. Being criticized on social media is not equivalent to being censored, nor is it comparable to losing a loved one to a disease you were told was “mild.”
  • Beware easy heroes and villains. A pandemic is a systems failure, not a simple morality play.

The Science-Based Medicine review is not the last word on In Covid’s Wake, but it is a necessary counterweight, one that insists we keep real-world consequences, not just laptop-class narratives, at the center of our post-COVID reckoning.

Shared Experiences from the Laptop Class Era

Theory and evidence are important, but part of what makes the “laptop class professors” framing so grating is how it clashes with the lived experience of many people who worked from home during COVID. Most weren’t masterminding global policy from cushioned desk chairs. They were juggling Zoom meetings with first-grade math, worrying about aging parents, and doom-scrolling through hospitalization graphs at midnight.

Picture a mid-career university professor in spring 2020. Overnight, their job turned into a one-person media studio: learning to record lectures, run breakout rooms, and troubleshoot internet outages for students who sometimes sat in parked cars for Wi-Fi. Did they experience physical risk differently from the respiratory therapist intubating patients? Absolutely. But many also felt a gnawing unease that their own safety depended on armies of people still going out (delivery drivers, lab techs, custodial staff, cafeteria workers) whose risk they could not fully see but could not ignore either.

Or consider a K-12 teacher who spent a year teaching through a laptop balanced on a stack of cookbooks. Their “classroom” was a grid of faces, some cameras off, some siblings wandering through, some kids clearly struggling in crowded apartments. They knew remote school was suboptimal and often heartbreaking. They also knew that with shifting variants, no vaccines yet, and poor ventilation in their building, going fully back in person felt terrifying. To cast them as either villains for backing caution or as simplistic heroes for pushing reopening is to miss the real story: they were constantly weighing imperfect options, with no guarantee that anyone would support them if things went wrong.

Many healthcare-adjacent professionals (medical school instructors, public health faculty, epidemiology grad students) occupied an uneasy middle ground. They weren’t on the COVID wards every shift, but they were in close contact with people who were. Some spent their days analyzing data that told them exactly how bad things might get; their nights were filled with texts from friends in ICUs saying “we’re out of beds again.” For them, supporting masks, distancing, and vaccines was not an abstract exercise in control. It was a desperate attempt to keep the graphs from matching the worst-case scenarios they were modeling.

At the same time, many laptop-class workers could see the cracks in policy from their vantage point. They watched children regress academically and emotionally. They listened to friends in hospitality and retail lose jobs while white-collar hiring boomed. They saw how unevenly relief funds were distributed. This dual vision, fear of the virus and frustration with policy clumsiness, was common, even if it rarely appeared in op-eds. In reality, plenty of people could simultaneously believe that COVID was genuinely dangerous and that some restrictions were poorly designed, poorly communicated, or kept in place too long.

That’s what makes the heroes-and-villains framing of In Covid’s Wake feel so off. Most “laptop class” people did not experience themselves as brave dissidents or selfish cowards; they experienced themselves as fallible humans trying to protect their families, support their students, keep their teams afloat, and stay sane while the ground kept shifting. They might have changed their minds as more evidence emerged. They might carry regret about being too cautious or not cautious enough. What they rarely did was imagine that their personal story should crowd out the voices of nurses in overwhelmed ICUs or families whose loved ones died after being told the virus was overblown.

A more honest narrative about the pandemic would start from that messy reality. It would acknowledge that a grad student who never left their tiny apartment, a professor arguing about school policies on email threads, and a grocery clerk who never stopped going to work all lived through the same pandemic in radically different ways. It would resist the temptation to declare that one group, especially the group with the most media access, was the true “hero” or the ultimate victim. Howard’s review pushes us in that direction, inviting readers to be suspicious of any story in which the laptop class just happens to emerge as the protagonists of everyone else’s suffering.

Conclusion: Remembering Who the Story Is Really About

In Covid’s Wake promises a reckoning with how politics failed us during the pandemic. The Science-Based Medicine review suggests that, instead, the book performs a quieter failure: it recenters the story on well-connected commentators, especially Great Barrington Declaration allies, and waves away the consequences of their errors while amplifying their grievances.

A science-based perspective doesn’t deny the harms of lockdowns, school closures, or social isolation. It simply insists that those harms be weighed against the equally real harms of uncontrolled viral spread, and that we be honest about who bore which risks. If we’re going to learn the right lessons for the next pandemic, we’ll need books that grapple deeply with both sides of that ledger, not just with the “reputational injuries” of the laptop class.



Is the ACCME cracking down on quackery in continuing medical education (CME) offerings? Richard Jaffe thinks so.https://business-service.2software.net/is-the-accme-cracking-down-on-quackery-in-continuing-medical-education-cme-offerings-richard-jaffe-thinks-so/https://business-service.2software.net/is-the-accme-cracking-down-on-quackery-in-continuing-medical-education-cme-offerings-richard-jaffe-thinks-so/#respondSun, 01 Feb 2026 02:30:10 +0000https://business-service.2software.net/?p=1528Is the Accreditation Council for Continuing Medical Education (ACCME) finally getting serious about quackery in continuing medical education? Attorney Richard Jaffe says yes, warning that tighter standards are threatening complementary and alternative medicine courses. Science-Based Medicine and other skeptics see something more nuanced: a slow, imperfect shift toward demanding real evidence, stricter conflict-of-interest rules, and clearer separation between education and marketing. This in-depth analysis explains what the updated ACCME standards actually require, how they affect integrative and functional medicine CME, and what real-world experiences from conferences and hospitals reveal about a quiet but meaningful cultural change in medical education.

The post Is the ACCME cracking down on quackery in continuing medical education (CME) offerings? Richard Jaffe thinks so. appeared first on Everyday Software, Everyday Joy.


If you’ve ever sat through a sleepy hotel-ballroom CME lecture while the speaker waxed poetic about detoxing mitochondria with magic water, you’ve probably wondered: “Wait… this gets the same credit as a solid update on sepsis guidelines?”

That tension (between serious, evidence-based continuing medical education and seminars that drift into pseudoscience) is exactly what sparked the debate over whether the Accreditation Council for Continuing Medical Education (ACCME) is finally cracking down on “quackery” in CME. Attorney Richard Jaffe, long-time defender of alternative and fringe practitioners, sounded the alarm years ago that ACCME was targeting complementary and alternative medicine courses. Science-Based Medicine (SBM) took a closer look and asked the obvious follow-up question: is this a genuine cleanup of CME, or just regulatory theater?

Fast-forward to today, with ACCME’s updated Standards for Integrity and Independence and a much sharper focus on content validity, conflicts of interest, and commercial influence. Has the landscape actually changed? Let’s unpack the players, the rules, and some real-world experience to see whether quack-friendly CME is really on the endangered list, or just rebranding itself with fancier slides.

ACCME 101: Why CME accreditation matters so much

The ACCME is the main organization in the United States that sets and enforces standards for accredited CME for physicians. It doesn’t run courses itself; instead, it accredits hospitals, medical schools, specialty societies, and private providers so they can offer CME that counts toward licensure, board maintenance, and hospital credentialing.

In theory, this creates a trusted seal: if it’s ACCME-accredited, the course should be scientifically valid, free from commercial bias, and aligned with improving patient care instead of selling the latest miracle in a syringe or supplement bottle. ACCME’s Standards for Integrity and Independence in Accredited Continuing Education explicitly say that accredited CME must present “accurate, balanced, scientifically justified” recommendations and maintain a “clear, unbridgeable separation” between education and marketing.

That’s the ideal. In practice, the system has had plenty of weak spots, especially when “integrative,” “functional,” or “holistic” medicine enters the chat.

How quackery sneaked into CME credit

Historically, CME was sometimes treated as a mixed bag: you could learn about heart failure management in the morning, then sit through an afternoon session praising unproven chelation for autism or homeopathy for chronic illness, all under the same formal educational umbrella. Science-Based Medicine documented how certain CME providers offered credit for courses promoting outright pseudoscience, like implausible cancer regimens and “detox” protocols with no credible evidence base.

So how did that happen if ACCME is supposedly guarding the gate?

  • Broad provider autonomy: Accredited providers are responsible for validating the clinical content of their activities. If a provider’s leadership is sympathetic to fringe ideas, “validation” can get… creative.
  • Lagging enforcement: ACCME historically relied on periodic reviews, self-study reports, and complaints. If no one complained, a lot could slide under the radar.
  • Confusing labels: Terms like “integrative” or “functional” medicine can blend conventional care with unsupported treatments, making it hard for busy reviewers (and clinicians) to see where evidence ends and speculation begins.

Into this environment stepped Richard Jaffe, a lawyer who has represented figures like Stanislaw Burzynski and other practitioners criticized for promoting dubious cancer and alternative therapies. When Jaffe publicly worried that ACCME was finally going to stop granting CME credit for such content, skeptics took notice, not because they were sad, but because they were intrigued that the “foremost defender of quacks” smelled trouble.

What Richard Jaffe is worried about

Jaffe wrote that ACCME’s evolving standards and emphasis on “validated” clinical content threatened CME courses that promote complementary and alternative medicine. He argued that these changes could marginalize practices outside mainstream medicine by making it harder to get credit for courses about them.

From Jaffe’s perspective, the problem isn’t that these therapies lack evidence. It’s that organized medicine is allegedly biased against them and uses accreditation rules as a weapon. If ACCME requires that CME content be based on evidence “accepted within the profession of medicine,” then anything deemed “alternative” might be squeezed out, regardless of anecdotal enthusiasm.

From a science-based perspective, however, that’s sort of the point. The whole reason CME exists is to keep clinicians aligned with the best available evidence, not to legitimize every interesting hypothesis or long-running anecdotal tradition. When Jaffe says, in effect, “My clients can’t get CME credit for their favorite untested treatments anymore,” many skeptics respond, “Good. That’s how it should work.”

What ACCME’s rules actually say about quackery

ACCME has long had policies that, on paper, are pretty tough on unproven or dangerous treatments. Its content validation documents state that providers are not eligible for accreditation or reaccreditation if they promote recommendations or methods of practice that are outside the definition of CME, known to be ineffective, or carry risks that outweigh any benefits.

With the updated Standards for Integrity and Independence, the language got clearer and more structured. Among the key elements:

  • Content must be valid and evidence-based. Clinical recommendations must be grounded in accepted scientific evidence, with appropriate discussion of risks and benefits.
  • Ineligible companies and commercial influence are fenced off. Companies that produce or market healthcare goods used on patients (pharma, device manufacturers, etc.) cannot control educational content, choose speakers, or dictate who gets invited.
  • Financial relationships must be identified and mitigated. Anyone in control of content has to disclose relevant financial ties, and providers must take steps to mitigate potential bias.

ACCME has even published guidance on dealing with controversial topics, emphasizing that CME can absolutely discuss new, emerging, or disputed therapiesbut the activity must clearly distinguish evidence from speculation and avoid promoting unscientific care recommendations.

On paper, that is a strong framework. The million-dollar question is whether it’s being enforced vigorously enough to change behavior.

Is this a real crackdown, or just better paperwork?

So, is ACCME actually “cracking down,” or did Jaffe simply notice that vague guidelines were becoming more explicit and slightly more enforceable?

There are some signs of tightening:

  • Providers that lean heavily into pseudoscientific content can be ruled ineligible or have their accreditation status downgraded.
  • ACCME’s examples of noncompliance include activities where commercial sponsors or fringe therapies are promoted without balanced evidence or proper disclosure.
  • The new standards require more robust conflict-of-interest mitigation, which makes it harder for a supplement-funded “expert” to turn a CME talk into a sales pitch.

At the same time, there’s still a lot of wiggle room. Integrative or functional medicine courses that stick to relatively mainstream topics, sprinkle in enough references, and avoid overtly recommending egregious nonsense can still pass review. Some activities walk right up to the line, hinting strongly at unproven interventions without explicitly endorsing them.

In other words, the crackdown is real in intent, but its impact depends heavily on how aggressively accreditors interpret and enforce the rules, and how willing providers are to push boundaries.

The skeptics’ view: progress, but not perfection

Science-Based Medicine’s analysis of the so-called crackdown was cautious. The authors agreed that, if ACCME truly enforced its standards, many quack-friendly CME activities could be curtailed. They pointed to prior examples where the council’s rules should have prevented pseudoscientific content from being accredited in the first place.

The skepticism has two main components:

  1. Enforcement has lagged behind policy. The rules have looked good on paper for years; the problem has been inconsistent application and a tendency to give providers the benefit of the doubt.
  2. “Quackery” rebrands itself. When one style of alt-med CME becomes untenable, it can shed the more inflammatory marketing, adopt gentler language (“supportive,” “adjunctive,” “personalized”), and slip back under the radar.

From this viewpoint, Jaffe’s alarm is less about a sudden, draconian crackdown and more about the gradual tightening of a system that used to be remarkably tolerant of nonsense.

Why this matters for physicians, patients, and the public

Treating CME quality like inside baseball is a mistake. What physicians learn in CME courses influences real-world decisions about diagnosis, prescribing, referrals, and how they talk to patients about alternative therapies.

If a doctor hears in an accredited course that high-dose vitamin regimens can “reverse” advanced cancer or that homeopathy is a valid option for serious chronic disease, that endorsement comes with the halo of legitimacy. Patients rarely see the accreditation details, but they feel the effects when clinicians recommend, or fail to push back against, unproven treatments.

On the other side, if ACCME and other accreditors insist on strong evidence and honest communication about uncertainties, CME can help clinicians navigate patient questions about supplements, detox programs, or “natural” cures without either sneering at patients or endorsing false hope.

What a truly science-based CME ecosystem would look like

If ACCME is serious about reducing quackery in CME, several principles need to move from policy documents into everyday practice:

1. Evidence first, anecdotes second

CME activities should highlight systematic reviews, randomized trials, and high-quality observational data. Case reports and clinician testimonials are fine as illustrations, but they can’t be the backbone of an educational activity on treatment effectiveness.

2. Radical transparency about uncertainty

If a topic is genuinely unsettled (say, the best strategy for tapering certain medications or the evolving role of new immunotherapies), CME should say so clearly. Ambiguity is not a license to fill the gap with whatever sounds appealing.

3. Honest handling of “controversial” therapies

CME can and should discuss complementary approaches, but only in proportion to the evidence. That might mean describing some popular alt-med practices primarily in terms of what we don’t know, the risks of delaying proven treatment, and how to counsel patients who are interested in these options.

4. Active monitoring and real consequences

When providers repeatedly push pseudoscientific content, there should be meaningful consequences: probation, loss of accreditation, and clear communication about why. Quiet chats and gentle suggestions aren’t enough when patient safety is on the line.

Experiences and lessons from the front lines of CME “crackdowns”

So what does all this look like in real life, beyond policy PDFs and blog debates? Here are some composite experiences, drawn from how CME providers, physicians, and skeptically minded educators have navigated this shifting landscape, to illustrate how a “crackdown” can feel on the ground.

When the wellness weekend hits a wall

Picture a regional hospital that has long offered an annual “integrative wellness weekend” with CME credit. For years, it featured a mix of reasonable content (mindfulness for stress, exercise counseling) and eyebrow-raising material: detox foot baths, energy balancing, and intravenous vitamin cocktails for vague “immune support.” Attendance was strong; the marketing photos were full of yoga mats and green smoothies.

Then the education office updated its processes to align with ACCME’s newer standards. Speakers now had to submit references for clinical claims, disclose all financial ties, and undergo more rigorous peer review. When the committee examined the slides on detox protocols and IV vitamin drips, it realized there were no credible trials supporting the sweeping claims being made. The presenters were also financially tied to the clinics selling those services.

The result? The course didn’t vanish, but it changed. The detox and vitamin infusion sessions lost their CME designation and were either dropped or moved to a separate, clearly non-accredited “wellness discussion” with toned-down claims. What remained in the accredited portion focused on lifestyle interventions with a reasonable evidence base. Some regular attendees grumbled that the event had become “too mainstream,” but the hospital avoided giving formal educational blessing to practices it couldn’t defend scientifically.

The conference planner’s dilemma

On the provider side, many CME planners now describe their job as equal parts educator and referee. One planner for a large specialty society talks about reviewing lecture proposals on “cutting-edge metabolic therapy” that sound exciting but, once you dig into the references, are built on tiny uncontrolled series, preprints, or speculative mechanistic papers.

Under older, looser expectations, those sessions might have slipped into the program with minimal pushback. Under newer standards, planners feel more pressure to ask tough questions: Can you show high-quality evidence that this improves outcomes? Are you over-claiming benefit? Do you have financial ties that might color your enthusiasm? More than once, a proposed talk has been accepted only after being reframed as “emerging hypotheses and early data,” with explicit disclaimers that the therapy should not replace standard care outside clinical trials.

It’s not as dramatic as banning entire schools of thought, but it is a subtle cultural shift toward intellectual honesty.

The skeptical clinician’s path through the CME jungle

From the clinician’s perspective, the landscape remains mixed. A primary care doctor looking for CME credits can still find courses that lean heavily into glossy branding and soft claims. But a growing number of physicians are learning to “read” CME the way they read scientific literature: Who’s sponsoring this? How strong is the evidence? Is the speaker clearly separating data from opinion?

Some clinicians describe a personal rule: if a course spends more time on branding, testimonials, and vague promises than on study design, effect sizes, and limitations, they skip it, even if it’s technically accredited. Others have started asking their institutions not to pay for certain conferences that seem more like product showcases than education. That kind of bottom-up pressure complements ACCME’s top-down standards.

Patients notice, even if they don’t know ACCME exists

Patients rarely ask, “Is this CME accredited?” But they do notice when their clinicians’ advice feels grounded and consistent versus wildly variable. A patient with cancer who is told by one doctor that IV vitamin C is a miracle and by another that it’s unproven and potentially risky is stuck in a confusing, stressful tug-of-war.

As CME moves, however slowly, toward stricter evidence requirements, patients may indirectly benefit from more consistent messaging. Instead of hearing, “I saw a great talk at a conference; you should try this unregulated cocktail,” they’re more likely to hear, “Some people are experimenting with this, but right now, we don’t have strong evidence it helps, and it may carry risks.” That’s not as thrilling, but it’s a lot more honest.

So… is the ACCME really cracking down on quackery?

The fairest answer is: more than before, but not nearly as much as a hard-core skeptic might wish.

ACCME’s policies have clearly evolved toward stricter, more explicit standards for evidence, independence, and bias mitigation. Providers that blatantly promote pseudoscientific or dangerous care have less room to hide, and some of the most egregious CME offerings have been curtailed or stripped of accreditation. The fact that Richard Jaffe and similar defenders of alternative medicine have publicly worried about these changes suggests that they are not purely symbolic.

At the same time, “quackery” is a moving target. As long as there is financial and ideological incentive to promote unproven therapies, there will be efforts to wrap them in respectable language and shoehorn them into educational offerings. ACCME can raise the floor and push the culture in a more science-based direction, but it cannot police every conference room or every persuasive speaker.

For now, the best path forward is a combination of strong, enforced standards from ACCME and other accreditors; vigilant, evidence-literate CME planners; and clinicians who treat CME not as sacred truth, but as information to be weighed critically, just like any other medical claim.

The post Is the ACCME cracking down on quackery in continuing medical education (CME) offerings? Richard Jaffe thinks so. appeared first on Everyday Software, Everyday Joy.

]]>
https://business-service.2software.net/is-the-accme-cracking-down-on-quackery-in-continuing-medical-education-cme-offerings-richard-jaffe-thinks-so/feed/0
Dismantling NCCAM: A How-To Primer
https://business-service.2software.net/dismantling-nccam-a-how-to-primer/
Sun, 01 Feb 2026 02:15:06 +0000

The National Center for Complementary and Integrative Health (formerly NCCAM) was created to study alternative medicine, but decades later it mainly proves a blunt truth: when implausible therapies are tested with rigorous methods, they mostly fail. This in-depth primer explains how NCCAM came to exist, why its politically protected status frustrates science-based clinicians and researchers, and what “dismantling” it would really mean in practice: from absorbing plausible work into mainstream NIH institutes to cutting off funding for homeopathy, energy healing, and other disproven ideas. If you care about responsible research spending, honest communication about CAM, and holding every therapy to the same evidence standard, this is your guide to turning a costly experiment into a lesson learned.

The post Dismantling NCCAM: A How-To Primer appeared first on Everyday Software, Everyday Joy.

]]>

Once upon a time in Bethesda, Maryland, Congress looked at the growing world of herbs, homeopathy, energy fields, and coffee enemas and thought,
“Sure, let’s study that.” The result was the Office of Alternative Medicine, which later grew up into the National Center for Complementary and
Alternative Medicine (NCCAM) and, after a strategic rebranding in 2014, the National Center for Complementary and Integrative Health (NCCIH).
Same building, same mission, slightly shinier name.

From the start, science-based medicine advocates have asked a simple question: if a treatment works, why does it need a special “alternative”
corner of the National Institutes of Health (NIH)? Why not just test it like everything else and, if it passes, call it medicine?
That question sits at the heart of the original Science-Based Medicine essay “Dismantling NCCAM: A How-To Primer” and still matters today, now
that NCCAM has been rebranded but not really rethought.

In this article, we’ll unpack what NCCAM/NCCIH is, why critics see it as a taxpayer-funded monument to bad incentives, and what “dismantling” it
would actually look like in practical, policy-focused terms. Along the way we’ll keep the tone light, but the standards firmly rooted in
evidence, not wishful thinking or magical energy fields.

What NCCAM (Now NCCIH) Actually Is

NCCIH is one of 27 institutes and centers that make up the NIH. It started in 1991 as the Office of Alternative Medicine with a small budget and a
congressional mandate to explore “unconventional” therapies. By 1998 it had been elevated to a full NIH center as NCCAM, and in 2014 it was
renamed the National Center for Complementary and Integrative Health to focus on “integrative” rather than “alternative” care.

That name change wasn’t just a branding tweak. “Alternative” suggests something outside mainstream medicine; “integrative” suggests something cozy
and compatible with it. Critics argue that this shift makes it easier to market unproven practices as gentle, holistic add-ons rather than
fringe ideas that haven’t passed scientific muster. In other words, the new label softens skepticism without fixing the underlying scientific
problems.

NCCIH divides the world of complementary and alternative medicine (CAM) into three broad buckets:

  • Natural products: herbal supplements, botanicals, vitamins, and various plant-based concoctions.
  • Mind and body practices: yoga, meditation, tai chi, qigong, spinal manipulation, acupuncture, and similar practices.
  • Other approaches: homeopathy, naturopathy, Traditional Chinese Medicine systems, Ayurveda, and energetic or spiritual
    healing practices.

Over the decades, NCCAM/NCCIH has received billions of dollars in cumulative funding to study these modalities. Some of that work has looked at
plausible questions (for example, whether mindfulness training helps chronic pain or anxiety). A lot of it, however, has chased highly implausible
claims (like distant prayer changing hard clinical outcomes, or magnets curing arthritis) that clash with basic biology and have repeatedly produced
negative or inconclusive results.

When Taxpayer-Funded Science Becomes a Parallel Universe

If NCCIH were a small side unit quietly running a few studies, it would probably not attract much attention. But it isn’t. It sits inside the
world’s premier biomedical research agency, with its own budget, leadership, advisory council, and strategic plans. That structure has created a
kind of parallel research universe where certain ideas get protected and funded not because they’re especially promising, but because they fall
under the “CAM” umbrella.

Over the years, NCCAM/NCCIH has funded or co-funded trials on topics such as:

  • Prayer and “distance healing” for serious diseases.
  • Magnet therapy for pain conditions like arthritis and carpal tunnel syndrome.
  • Energy healing for animals and lab models.
  • Coffee enemas and other detox regimens for cancer.
  • Homeopathic preparations, which by design contain little or none of the original substance.

These projects are not fringe YouTube experiments; they are federal grants that go through peer review, consume time and talent, and result in
published papers. The consistent pattern, documented by skeptics who have followed these trials for years, is that high-quality studies largely
fail to confirm the bold claims made by CAM advocates. In other words: the more rigorous the research, the less impressive the results.
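
That pattern is exactly what base-rate arithmetic predicts. As a purely illustrative sketch (the numbers below are assumptions for demonstration, not data from any actual NCCAM portfolio), suppose only a small fraction of tested CAM hypotheses are actually true, trials have typical 80% power, and “positive” means p < 0.05. The positive predictive value of a “positive” trial then collapses as prior plausibility falls:

```python
# Illustrative base-rate arithmetic (assumed numbers, not real NCCAM data):
# when prior plausibility is low, most "positive" trials at p < 0.05
# are false positives, even if each individual trial is well run.

def positive_predictive_value(prior, power=0.80, alpha=0.05):
    """Probability that a 'positive' trial reflects a real effect."""
    true_positives = prior * power          # true hypotheses that test positive
    false_positives = (1 - prior) * alpha   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(prior)
    print(f"prior plausibility {prior:>4.0%} -> PPV of a positive trial {ppv:.0%}")
```

With a 1% prior, roughly six out of seven “positive” results are false alarms, which is why a stack of scattered positive CAM trials is weaker evidence than it looks.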

That wouldn’t be a problem if the center’s mission were to test a wild idea once and move on. But critics argue that NCCIH has often kept
returning to the same implausible wells, even when earlier studies were negative. If the rest of NIH behaved this way, pouring money into
repeatedly disproven hypotheses, we’d call it a scandal.

Why Science-Based Medicine Advocates Call for Dismantling

Dismantling NCCAM wasn’t a phrase invented lightly. When Science-Based Medicine and other skeptical organizations talk about “dismantling,” they
are pointing to several recurring problems that have persisted despite leadership changes and strategic plans.

1. No Unique Scientific Mission

The central objection is simple: there is no scientific reason to carve out a separate center for CAM. If a therapy is plausible enough to
warrant study (say, mindfulness for chronic pain, or yoga for back pain), it can be studied by existing institutes such as the National Institute of
Neurological Disorders and Stroke, the National Institute of Mental Health, or the National Institute of Arthritis and Musculoskeletal and Skin
Diseases. NIH already has the infrastructure, expertise, and peer review systems in place to evaluate behavioral or non-drug interventions.

Creating a separate center implies that CAM is a coherent scientific specialty rather than a marketing category. It also creates pressure to
maintain a pipeline of CAM-specific projects just to justify the center’s existence. That’s backwards: the science should determine what gets
funded, not the survival needs of a politically created office.

2. Extraordinary Claims, Ordinary or Negative Results

Many of the interventions that drew NCCAM’s early attention (like homeopathy, energy healing, and distant prayer) rest on mechanisms that flatly
contradict chemistry, physics, or physiology. When such claims are tested rigorously, they almost always fail. The problem is not simply that they
fail, but that the negative results often do not lead to a clear public message of “this doesn’t work; don’t waste your money.”

Instead, reports may emphasize how “more research is needed” or highlight small, clinically unimportant differences. Meanwhile, marketing for
these same therapies often cherry-picks the most flattering phrases from government documents to lend credibility: “studied by the NIH” can be a
powerful sales tool, even if the underlying trial found nothing clinically meaningful.

3. Politics Over Evidence

NCCAM was born out of political pressure, not scientific demand. Members of Congress sympathetic to alternative medicine advocates pushed for a
dedicated office and later a full center, often over the reservations of mainstream researchers. That political origin still matters. It means
NCCIH is structurally insulated from the normal “survival of the most useful” pressures that shape NIH research priorities.

Critics have noted that the center’s agenda and survival are tied to keeping certain constituencies satisfied (practitioners, industry
stakeholders, and voters who like the idea of “natural” medicine) rather than simply asking, “Where can these dollars do the most good for
patients?” When politics rather than plausibility drives what gets funded, the result is often look-busy science with low impact.

So What Would “Dismantling NCCAM” Actually Look Like?

The phrase can sound dramatic, like a wrecking ball swinging through NIH headquarters. In practice, dismantling NCCAM/NCCIH would be more like a
careful reorganization of responsibilities, with a strong emphasis on scientific standards and patient welfare.

Step 1: Absorb Plausible Research into Existing NIH Institutes

Not everything NCCIH touches is nonsense. Studying physical activity, stress reduction, and cognitive-behavioral techniques for chronic pain,
depression, or insomnia can absolutely be worthwhile. The issue is where that work lives and under what rules.

A science-based dismantling plan would:

  • Move studies of exercise, mindfulness, and other plausible behavioral interventions into appropriate disease-focused institutes.
  • Subject those studies to the same standards of trial design, preregistration, and replication as any other clinical research.
  • Eliminate the artificial requirement that they be branded as “integrative” or “complementary” to be funded.

In other words, if a yoga-based program looks promising for back pain, it should compete directly with other pain treatments for funding. No
special category, no parallel peer-review universe.

Step 2: Stop Funding Implausible and Disproven Modalities

Dismantling also means drawing firm lines. There is no scientific justification for continued federal funding of homeopathy, energy healing,
distant prayer as a medical intervention, or magnet therapy for systemic disease. These ideas either violate basic science or have already been
tested and failed in controlled trials.

A concrete policy step would be to:

  • Explicitly deem certain categories “no longer a research priority” after repeated high-quality null results.
  • Redirect funds previously used for such trials into more promising interventions, including underfunded areas of conventional care.
  • Publish clear summaries in plain language stating that these modalities have not demonstrated meaningful benefit.

This isn’t “close-minded.” It’s how science normally works: hypotheses that repeatedly fail get deprioritized so new ideas can be tested.

Step 3: Raise the Bar for All Non-Drug Therapies

Critics sometimes worry that shutting down NCCIH would mean ignoring non-pharmacologic treatments. It’s actually the opposite. The goal is to hold
all non-drug therapiesacupuncture, chiropractic, meditation, manual therapy, dietary supplementsto the same standards any drug or
device would face.

Whether a trial is run inside NCCIH or another institute, science-based medicine calls for:

  • Biologically plausible mechanisms.
  • Solid preclinical or preliminary data before large, expensive clinical trials.
  • Preregistered protocols, appropriate controls, and meaningful clinical endpoints.
  • Transparent reporting, including null or negative results.

The problem is not that NCCIH studies non-drug interventions; it’s that it has historically funded too many poorly grounded ideas and sent mixed
messages when they failed.

Step 4: Revert NCCIH to a Small Evaluation Office, or Close It

One practical dismantling option is to shrink NCCIH back into a small office within the NIH director’s purview. That office could:

  • Coordinate occasional methodological workshops on studying behavioral interventions.
  • Serve as a clearinghouse summarizing evidence about popular non-drug therapies for other institutes and the public.
  • Have no independent grant-making authority, preventing it from becoming a protected silo.

A more decisive option would be to abolish the center entirely, transferring its staff and ongoing plausible projects to other institutes and
winding down the rest. Either way, the key is that “CAM” stops being a protected funding category.

Step 5: Fix Public Communication

Finally, dismantling isn’t just about budgets; it’s about language. Any government communication about CAM should be brutally clear about what
works, what doesn’t, and where evidence is lacking. That means:

  • No “careful” wording that sounds like an endorsement for therapies that failed trials.
  • Prominent statements that “no benefit was found” when that’s what the data show.
  • Patient-facing materials that actively warn about opportunity costs, financial harm, and the risk of delaying effective treatment.

If a treatment has repeatedly failed in well-designed research, the most integrative thing we can do is integrate that failure into patient
counseling.

Common Counterarguments and Science-Based Replies

“But people love CAM. Shouldn’t we study what they use?”

Yes, popularity matters, but it doesn’t override plausibility or opportunity cost. People also love fad diets and detox cleanses; that doesn’t
justify unlimited federal trials on lemon-juice cleanses. Studying widely used therapies is reasonable, but only within a framework that prioritizes
likelihood of benefit, not marketing buzz.

“NCCIH is improving and focusing on whole-person health.”

NCCIH’s recent strategic language emphasizes “whole-person health” and non-pharmacologic strategies for pain and chronic disease. Some of that is
aligned with mainstream priorities, like reducing opioid reliance and improving self-management. The criticism is that these goals don’t require a
separate CAM-branded center. Every major NIH institute already has to think in “whole-person” terms; slapping a CAM label on it doesn’t add
scientific value.

“Getting rid of NCCIH would prove scientists are biased.”

The opposite is true. Science-based critique is not about protecting the status quo; it’s about matching resources to reality. When
high-quality trials show a therapy helps, science-based physicians adopt it, even if it started life as an “alternative” idea. What skeptics object
to is funding that continues long after evidence has turned against a treatment.

What Clinicians, Researchers, and Citizens Can Do

Dismantling NCCAM/NCCIH in the policy sense would require congressional action and pressure from scientific and medical organizations. But you
don’t need a Senate seat to nudge things in the right direction.

  • Clinicians can prioritize honest conversations about evidence, gently but firmly discouraging patients from abandoning
    proven care in favor of unproven CAM therapies.
  • Researchers can push for higher standards in trial design, resist “tooth-fairy science” (studying detailed mechanisms of
    something that probably doesn’t work), and advocate that plausible non-drug research live in mainstream institutes.
  • Citizens can support organizations that promote science-based health policy, contact their representatives about responsible
    research funding, and vote for leaders who value evidence over anecdotes.

In short: Dismantling NCCAM is less about smashing something and more about cleaning up how we think, study, and talk about medicine, with no
quotation marks needed around the word.

Experiences from the Front Lines of Science-Based Medicine

To understand why people get fired up about NCCAM/NCCIH, it helps to look at what this all feels like on the ground. The stories below are
composites based on recurring experiences reported by clinicians, researchers, and policy watchers.

Imagine you’re a primary care physician in a busy clinic. You see a patient with poorly controlled diabetes who proudly announces they’ve stopped
their medication because they’re “going natural.” They show you a printed packet from a supplement company, complete with quotes about NIH-funded
studies on “ancient botanical remedies” and “integrative approaches” to blood sugar. The company’s marketing has latched onto the fact that
something vaguely related was once studied under an NIH CAM grant. The nuance (that the study was small, negative, or not reproduced) never made it
into the brochure.

You now have to do three jobs at once: manage the diabetes crisis in front of you, dismantle misleading claims without shaming the patient, and
gently explain that “NIH studied this” is not the same as “NIH proved this works.” When you later discover that the study in question was funded
through NCCAM and produced no meaningful benefit, you understandably wonder why such work is still being used as a halo for products that don’t
help your patients.

Now shift to the viewpoint of a young researcher. You’re passionate about pain management and fascinated by how exercise, cognitive-behavioral
strategies, and mindfulness can help people function better. You notice that many grants in your area are routed through NCCIH rather than the
traditional neuroscience or musculoskeletal institutes. That sounds fine at first (money is money), but then you sit on a review panel and realize
the portfolio is a weird mix of solid behavioral science and projects on energy fields and “bio-information transfer” that feel more like
science fiction than science.

You start to worry that your own respectable work will be lumped together with highly implausible projects simply because they share the CAM
label. That can make collaborations awkward and may even affect how seriously some colleagues take your research. You’d rather your trial on
physical activity and pain live in a mainstream pain institute, judged by the same criteria as every other treatment.

Finally, picture a staffer on Capitol Hill tasked with reviewing NIH spending. You’re not a scientist, but you’re reasonably savvy. On your desk
are budget lines showing that one center, NCCIH, has poured substantial resources into studies that have not changed guidelines, improved standard
care, or produced widely adopted therapies. Meanwhile, you’re hearing from cancer and infectious disease researchers who struggle to get highly
promising projects funded.

When you dig into the history, you discover that NCCAM was created and expanded largely due to political pressure, not because the scientific
community desperately needed a CAM silo. You also find critical reports pointing out that many NCCAM-funded trials are of lower priority or
weaker design compared with the rest of NIH’s portfolio. At some point, the question “Should we keep funding this?” stops being edgy and starts
sounding like basic fiscal responsibility.

These kinds of experiences help explain why dismantling NCCAM/NCCIH is not a niche crusade. It’s a reflection of deeper frustrations with how
pseudoscience, politics, and wishful thinking can distort research priorities in even the most respected institutions. For clinicians, it shows up
as confusion at the bedside. For researchers, it shows up as mixed signals about what counts as serious work. For policy staff, it shows up as
a line item that is increasingly hard to justify.

None of this means we should ignore non-drug approaches, dismiss patients’ lived experiences, or cling blindly to the status quo. It means we
should demand that every therapy (herbal, high-tech, ancient, or brand new) play by the same scientific rules. Dismantling NCCAM is ultimately
about dismantling the double standard that has allowed weak ideas to hide under the comforting umbrella of “complementary and integrative health.”

Conclusion: One Standard of Evidence, No Special Islands

NCCAM, rebadged as NCCIH, represents a well-intentioned but deeply flawed experiment: carve out a special island for “alternative” or
“integrative” medicine inside the world’s leading biomedical research agency and hope that good science emerges. Decades later, the main legacy is
a trail of negative or inconclusive trials, a confusing public message, and a persistent double standard about what deserves federal funding.

Dismantling NCCAM doesn’t mean ignoring yoga, meditation, exercise, or nutrition. It means treating them as what they are: potentially useful
interventions that should live in the same ecosystem as everything else, judged by plausibility, evidence, and patient outcomes. When we stop
protecting categories and start protecting patients and scientific integrity instead, everybody wins, except, perhaps, the sellers of magic
magnets.

The post Dismantling NCCAM: A How-To Primer appeared first on Everyday Software, Everyday Joy.

]]>
https://business-service.2software.net/dismantling-nccam-a-how-to-primer/feed/0