3, 2, 1: Health AI Brief
Every Friday
April 17, 2026

AI is reshaping healthcare fast. Below are 3 market signals, 2 research studies, and 1 key insight from this week to help you lead with AI. Target read time: 5 minutes.

3 Market Signals

A new West Health-Gallup survey of 5,660 U.S. adults found that 25% have used an AI tool or chatbot for health information or advice. Among those users, 14% reported skipping a provider visit based on what the AI told them — an estimated 14 million adults nationally. Most use AI to research before (59%) or after (56%) doctor visits. But the access gap is stark: 32% of adults earning under $24,000 cited cost as the reason they turned to AI, compared to 2% of those earning $180,000 or more. Only 4% of users strongly trust AI accuracy.

So what?

14 million people replacing a doctor visit with a chatbot response is a big number. But the more telling finding is who's doing it and why. For higher-income adults, AI is a convenience. For lower-income adults, it's turning into a workaround for a system they can't afford. That's not an AI problem; it's a healthcare access problem that AI is now absorbing.

Read the full poll →

Chapter, an AI-native Medicare navigation platform, closed a $100 million Series E led by Generation Investment Management. The company's valuation more than doubled in under a year to $3 billion. Revenue grew 3x in 2025, surpassing $100 million in ARR. Chapter pairs AI with licensed human advisors to deliver personalized, unbiased Medicare coverage guidance — a model the company calls "the trust layer between seniors and technology." With the new funding, Chapter plans to expand beyond Medicare into broader retirement financial services.

So what?

A $3 billion valuation for a Medicare navigation company tells us where the market sees AI creating value: in high-stakes decisions for seniors, it's not AI alone that's winning, but AI plus a human advisor.

Read the announcement →

Digital health startups raised $4 billion across 110 deals in Q1 2026 — $1 billion more than Q1 2025 and the strongest first quarter since the pandemic peak. Average deal size climbed to $36.7 million, the highest since Q4 2021. 12 megadeals ($100M+) accounted for 59% of all capital deployed. But the most notable shift: Rock Health retired its "AI deal" tracking category entirely, noting that AI is now embedded in how digital health companies are built.

So what?

When the industry's most cited digital health research firm stops distinguishing AI companies from non-AI companies, that's a market signal in itself. AI is no longer a separate category; it's now part of every category.

Read the full report →

2 Research Studies

An international team of researchers tested 5 popular AI chatbots — ChatGPT, Gemini, Meta AI, Grok, and DeepSeek — by asking each 10 questions across 5 health categories (cancer, vaccines, stem cells, nutrition, and athletic performance). Roughly 50% of all responses were deemed problematic, including nearly 20% rated highly problematic. Grok generated the most highly problematic responses (58%), while Gemini produced the fewest. All 5 chatbots produced hallucinated or fabricated citations, with reference accuracy averaging just 40%. Across 250 total questions, only 2 refusals to answer occurred — both from Meta AI.

Why it matters

This is a head-to-head comparison of the 5 chatbots people are actually using — not a niche model or a lab experiment. The 50% error rate is concerning enough. But the near-zero refusal rate may be worse: these tools almost never say "I don't know," even when they should.

Read the BMJ Open study →

Researchers at the University of Oxford, led by Professor Charalambos Antoniades, developed an AI algorithm that analyzes microscopic changes in cardiac fat texture on routine CT scans — patterns invisible to human radiologists. Trained and validated on more than 70,000 individuals across 9 NHS trusts over a decade, the tool predicted 5-year heart failure risk with 86% accuracy. Patients in the highest-risk group were 20 times more likely to develop heart failure than those in the lowest-risk group. The system is fully automated and requires no human input.

Why it matters

Last week we covered an AI that detects cardiac amyloidosis from a routine ECG. This week, a different AI predicts heart failure from a routine CT scan. The pattern is the same: extracting clinically meaningful signals from tests patients are already getting. The difference is that this one is prognostic: it predicts disease before it appears rather than diagnosing it after.

Read the Oxford release →  |  British Heart Foundation →

1 Key Insight
AI Doesn't Know What It Doesn't Know.

An estimated 14 million Americans skipped a doctor visit in the past month after getting health advice from an AI chatbot. That's from a new Gallup survey of 5,660 adults. Meanwhile, a BMJ Open study published this week found that the same chatbots people are using — ChatGPT, Gemini, Grok, DeepSeek, Meta AI — give problematic medical advice about half the time. Nearly 20% of responses were rated highly problematic. And across 250 questions, these tools refused to answer just twice.

It gets worse. A Nature report this week documented a Swedish researcher who invented a fake eye condition called "bixonimania" and planted it in preprints with obvious red flags (including acknowledgments thanking "The Starfleet Academy" and a statement that "this entire paper is made up"). Copilot called it "an intriguing and relatively rare condition," Gemini advised visiting an ophthalmologist, and Perplexity claimed 90,000 people suffered from it. Three researchers at a foreign medical school then cited the fake papers in a real, peer-reviewed journal article (since retracted).

Takeaway

The problem isn't that people are using AI for health — most are researching before or after doctor visits, not replacing them. The problem is that these tools deliver wrong answers with the same confidence as right ones, and they almost never say "I don't know." For the 32% of low-income adults turning to AI because they can't afford a doctor, that confidence gap carries real clinical risk. I'd argue the most useful thing a health AI chatbot could learn to do is say: "I don't know."

Know someone who'd find this useful?

Share
