3, 2, 1: Health AI Brief
Every Friday
April 10, 2026

AI is reshaping healthcare fast. Below are this week's 3 key AI developments, 2 studies, and 1 takeaway to help you lead with AI. Target read time: 5 minutes.

3 Market Signals

Medicare's WISeR (Wasteful and Inappropriate Service Reduction) model launched January 1 across 6 states, introducing AI-powered prior authorization to traditional Medicare Part B for the first time. Private tech vendors review requests for 13 elective services (nerve stimulators, epidural steroid injections, cervical fusion, among others) and earn up to 20% of the savings from averted care. On March 25, the Electronic Frontier Foundation (EFF) filed a FOIA lawsuit against CMS seeking vendor agreements, bias testing records, and audit documentation. The program potentially affects 6.4 million Medicare beneficiaries.

So what?

Paying vendors a percentage of the care they deny creates an obvious incentive problem. Without penalties for inappropriate denials, the pressure only runs one direction. Right now, the legal pushback is the closest thing to a counterweight.

Read the EFF complaint →  |  WISeR explainer →

Anumana received FDA clearance for its ECG-AI algorithm — the first and only software-as-a-medical-device cleared to detect cardiac amyloidosis from a standard 12-lead electrocardiogram. Validated across a multicenter study of 25,525 patients, the algorithm achieved 78.9% sensitivity and 91.2% specificity. Cardiac amyloidosis is frequently underdiagnosed because its symptoms are subtle and nonspecific. The tool identifies patterns in routine ECG waveforms that are invisible to human interpretation.

So what?

A routine test that most patients already get, now screening for a life-threatening condition that clinicians routinely miss. That's the kind of AI application that changes diagnostic economics — not by adding new tests, but by extracting more from existing ones.
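To see what those published numbers mean in practice, here is a quick sketch converting the reported 78.9% sensitivity and 91.2% specificity into predictive values. The 0.5% prevalence figure is purely illustrative (not from the announcement); cardiac amyloidosis is rare, which is exactly why the base rate dominates the math.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule: turn test characteristics into predictive values."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)  # P(disease | positive result)
    npv = tn / (tn + fn)  # P(no disease | negative result)
    return ppv, npv

# Published ECG-AI figures; 0.5% prevalence is an assumption for illustration.
ppv, npv = predictive_values(0.789, 0.912, 0.005)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # → PPV: 4.3%, NPV: 99.9%
```

At that assumed prevalence, most positives are false positives, but a negative result is highly reassuring — which is why a tool like this works as cheap triage on an existing test, flagging who needs workup, rather than as a standalone diagnosis.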

Read the announcement →

Glenn Steele Jr., who led Geisinger Health System from 2001 to 2015, argues that U.S. health systems employ far more administrative staff than clinicians, a ratio he says has grown "dramatically more disparate" over 2 decades. His prescription: autonomous AI must replace large portions of the back-office workforce within 5 years. "Health care will become autonomous because the survival of health systems depends upon it."

So what?

The administrative bloat problem is real. Whether the answer is mass replacement on a 5-year timeline is a different question — but it's worth noting that this argument is coming from a former health system CEO, not a tech founder.

Read the full piece →

2 Research Studies

Researchers at Northwestern Medicine tested 6 open-source AI models — including Meta's Llama 3.1, Google's Gemma, and DeepSeek-R1 — on 94 de-identified lung cancer pathology reports. Oncologists rated the AI-generated summaries as consistently more comprehensive than physician-written versions, particularly in capturing molecular and genomic findings critical for treatment decisions. DeepSeek and Llama 3.1 showed the strongest performance. The team is now developing a prototype app for clinical validation.

Why it matters

As pathology reports grow denser with genomic data, the cognitive load on oncologists increases. Open-source models (not proprietary, not fine-tuned on clinical data) outperformed physicians on completeness, suggesting the summarization bottleneck in precision oncology may soon ease.

Read the coverage →  |  JCO CCI study →

A national poll of 1,007 U.S. adults (January 16–20, 2026) commissioned by The Ohio State University Wexner Medical Center found that only 42% of Americans are open to AI in their care — down from 52% in 2024. Belief that AI makes health processes more efficient dropped from 64% to 55%. Yet paradoxically, 51% have used AI to make health decisions without consulting a doctor, and 62% have used it to understand symptoms.

Why it matters

People are using healthcare AI more while trusting it less. One poll won't settle it, but the direction is telling.

Read the survey findings →

1 Key Insight
Everyone's Moving. Not in the Same Direction.

Healthcare AI is accelerating on multiple fronts this week — and so is the resistance. A former health system CEO is calling for mass AI replacement of administrative workers within 5 years. The FDA just cleared a new AI diagnostic that finds disease patterns invisible to clinicians. And Medicare is piloting AI-driven prior authorization that pays vendors a cut of the care they deny.

Meanwhile, public trust in healthcare AI dropped 10 points in a single year. The EFF is suing CMS for transparency on WISeR's algorithms. And in the AMA's latest physician survey, 81% of doctors now use AI professionally — but 85% want to be consulted before it's adopted in their practice, and 88% are concerned about skill loss. Adoption and ambivalence are growing in tandem.

Takeaway

I'd argue this tension is actually healthy. The worst version of this moment would be one where everyone's moving in the same direction without friction. Friction is what separates thoughtful deployment from reckless speed. The question is whether the friction is coming from the right places (informed patients, rigorous regulators, clinicians with genuine concerns) rather than from inertia and fear of change.

Know someone who'd find this useful?

Share

Keep Reading