3, 2, 1: Health AI Brief
Every Friday
February 20, 2026

AI is reshaping healthcare fast. Below are 3 key AI developments, 2 studies, and 1 takeaway for the week to help you lead with AI. Target read time: 5 minutes.

3 Market Signals

Dr. Mehmet Oz, head of CMS, is promoting AI avatars as part of a $50 billion rural health modernization initiative. The vision: digital avatars conducting medical interviews, robots performing ultrasounds on pregnant women, and drones delivering medications where pharmacies don't exist. Oz claims AI could "multiply the reach of doctors fivefold — or more." Critics, including University of Minnesota researchers, warn this creates a two-tiered system — one for those with resources, and another for those without — while ignoring unreliable broadband, low health literacy, and the trust that in-person care provides. More than 190 rural hospitals have closed since 2005.

So what?

This isn't just a policy proposal; I see it as a signal of where CMS is headed. Meanwhile, HHS is actively seeking public input on how to accelerate AI in clinical care, with comments closing February 23. Health leaders who want a seat at the table have just days to weigh in.

Read the full story →

Microsoft Health VP Joe Petro disclosed at a New York briefing that Dragon Copilot, the company's ambient AI clinical assistant, now has over 100,000 clinician users across more than 600 health systems, all within 18 months of launch. Per Microsoft's data, these tools save up to seven minutes per use, freeing up enough time for providers to see five more patients a day. An internal analysis of nearly 40 million anonymized Copilot conversations found health to be consistently the top use case across Microsoft's entire platform.

So what?

More patients per day is the near-term value proposition: AI augmenting the clinician. But as AI-native primary care models scale (free, 24/7, licensed in all 50 states), the longer-term question becomes which patients the doctor should see and which the doctor's AI should handle. Still early, but at 100,000 clinicians, these models are learning fast.

Read the announcement →

The HHS DOGE team released 10.32 GB of aggregated Medicaid claims data spanning 2018–2024 — covering all U.S. states and territories, fee-for-service, managed care, and CHIP claims. The dataset, available at opendata.hhs.gov, includes provider-level billing data by procedure and month. Within hours, independent analysts with laptops and AI tools were already mining it. One project — OpenMedicaid.org — ran 227 million billing records through 13 statistical fraud tests and a machine learning model, flagging 1,860+ providers billing $229.6 billion, including 40 providers still billing Medicaid despite being on the OIG exclusion list. Another analyst claimed to identify ~$90 billion in likely fraudulent payments from just 0.16% of providers. Medicaid covers roughly 90 million enrollees at ~$849 billion annually.

So what?

This release effectively crowdsources Medicaid oversight to the public. One project flagged 1,860 providers across $229 billion in billing within days, a speed that was unthinkable a year ago. But missing diagnoses, legitimate high-volume clinics, and regional coding differences can all look like fraud to an algorithm not steeped in medical billing. Expect some erroneous finger-pointing in the short term; ultimately, more transparency should be a good thing.
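
For a sense of what those statistical tests look like under the hood, here's a minimal sketch of one: a log-scale outlier screen on total billing per provider. The file name and column names (provider_id, month, paid_amount) are hypothetical stand-ins for the real schema, and this is nothing like OpenMedicaid's full 13-test battery; it's the naive version whose blind spots are described above.

```python
import numpy as np
import pandas as pd

# Hypothetical schema: one row per provider per month.
# These names are illustrative, not the actual HHS field names.
claims = pd.read_csv("medicaid_provider_month.csv")  # provider_id, month, paid_amount

# Total paid per provider across the full period.
totals = claims.groupby("provider_id")["paid_amount"].sum()

# Naive screen: flag providers far above the mean on a log scale
# (billing totals are heavy-tailed, so raw z-scores would flag
# every large organization).
log_totals = np.log1p(totals)
z_scores = (log_totals - log_totals.mean()) / log_totals.std()
flagged = totals[z_scores > 4].sort_values(ascending=False)

print(f"{len(flagged)} providers flagged, ${flagged.sum():,.0f} in billing")
```

A legitimate high-volume dialysis chain and a fraudulent biller look identical to this screen, which is why serious efforts layer multiple tests, adjust for specialty and region, and treat flags as leads rather than findings.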

Explore the dataset → · Read the coverage →

2 Research Studies

A new multi-agent AI system called DeepRare — published this week in Nature — integrates 40+ specialized tools, clinical notes, phenotype data, and genetic results to generate ranked diagnostic hypotheses for rare diseases. Across 6,401 cases spanning 2,919 diseases and 14 specialties, the system's first guess was correct 64.4% of the time compared to 54.6% for rare disease specialists. The correct diagnosis appeared in its top five suggestions 78.5% of the time versus 65.6% for specialists. Expert review confirmed 95.4% agreement with the AI's reasoning chains.

Why it matters

Rare disease patients endure an average diagnostic odyssey exceeding five years — repeated referrals, misdiagnoses, unnecessary treatments. Over 300 million people worldwide are affected. DeepRare doesn't replace the specialist; it narrows the search space. What makes it compelling is the architecture: AI as a diagnostic co-pilot with traceable reasoning linked to verifiable evidence, not a black box.
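
For readers less familiar with the metrics: 64.4% and 78.5% are top-1 and top-5 accuracy, i.e., how often the correct diagnosis lands first, or anywhere in the first five, of the model's ranked list. A minimal sketch (the toy cases below are illustrative, not DeepRare output):

```python
def top_k_accuracy(ranked_predictions, truths, k):
    """Fraction of cases where the true diagnosis appears in the
    first k entries of the model's ranked hypothesis list."""
    hits = sum(truth in ranked[:k]
               for ranked, truth in zip(ranked_predictions, truths))
    return hits / len(truths)

# Toy cases, purely illustrative.
cases = [
    (["Fabry disease", "Pompe disease", "Gaucher disease"], "Fabry disease"),
    (["Marfan syndrome", "Loeys-Dietz syndrome", "Ehlers-Danlos syndrome"],
     "Ehlers-Danlos syndrome"),
]
ranked, truths = zip(*cases)
print(top_k_accuracy(ranked, truths, k=1))  # 0.5: first guess right in 1 of 2
print(top_k_accuracy(ranked, truths, k=3))  # 1.0: truth is in the top 3 for both
```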

Read the study →

The PANORAMA study, an international, multi-center diagnostic accuracy trial, tested an AI system against 68 radiologists from 40 centers across 12 countries on CT scans from 3,440 patients. The AI achieved an AUROC of 0.92 versus 0.88 for radiologists (p = 0.001), confirming statistical superiority. At matched specificity, the AI detected 38% more pancreatic cancers; at matched sensitivity, it reduced false positives by 26%. Pancreatic ductal adenocarcinoma has a five-year survival rate around 12%, largely because most cases are caught late.

Why it matters

Pancreatic cancer is deadly precisely because it's hard to see early. This is the first confirmatory study to show AI superiority — not just non-inferiority — over radiologists in detecting it, across five European tertiary centers and two U.S. datasets. The growth in data demonstrating AI competency is unrelenting.
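
"At matched specificity" has a precise operational meaning: slide the AI's decision threshold along its ROC curve until its false-positive rate equals the radiologists', then compare sensitivities at that point. A hedged sketch on synthetic scores (the data, the 90% operating point, and the score distributions are all invented for illustration, not taken from PANORAMA):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic stand-in: y is ground truth (1 = cancer), and the "AI"
# score separates the classes better than the weaker "reader" score.
y = rng.integers(0, 2, size=2000)
ai_score = 1.2 * y + rng.normal(size=2000)
reader_score = 0.8 * y + rng.normal(size=2000)

print("AI AUROC:    ", round(roc_auc_score(y, ai_score), 3))
print("Reader AUROC:", round(roc_auc_score(y, reader_score), 3))

# Match the AI to an assumed reader operating point (90% specificity),
# then read off the AI's sensitivity at that same false-positive rate.
reader_specificity = 0.90  # assumed, not from the study
fpr, tpr, _ = roc_curve(y, ai_score)
idx = np.searchsorted(fpr, 1 - reader_specificity)
print(f"AI sensitivity at {reader_specificity:.0%} specificity: {tpr[idx]:.2f}")
```

The study's 38% figure is this kind of comparison, run against real reader operating points rather than a synthetic one.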

Read the study →

1 Key Insight
From Open Data to Open Questions

In the span of one week, the federal government made three moves, each shaping the healthcare AI landscape in a different way.

On Thursday, DOGE released the largest Medicaid claims dataset in HHS history: 10 GB of provider-level billing data spanning six years and every U.S. state. The announcement drew 50 million views. On Monday, February 23, HHS's comment period on accelerating AI adoption in clinical care closes; the Request for Information asks how the federal government should use its regulatory, reimbursement, and R&D levers to reshape clinical AI. And throughout the week, CMS head Dr. Oz continued pushing AI avatars as a solution for rural healthcare, proposing that digital stand-ins could "multiply the reach of doctors fivefold."

Meanwhile, the evidence is sharpening. This week's Nature and Lancet Oncology studies show AI outperforming human specialists in rare disease diagnosis and pancreatic cancer detection. Microsoft's ambient scribe now has 100,000 clinicians across 600+ health systems. The capability question seems settled, and, of course, this is the worst it'll ever be.

But many questions persist. Who decides what other data will be publicly shared? Who defines "responsible" AI in rural clinics? Who decides what clinical AI gets reimbursed?

Takeaway

The window to shape healthcare AI policy is measured in days, not years. The HHS comment period closes February 23. Medicaid billing data is now public. The rules are being written right now — and the organizations that engage will have a voice in what comes next. Those that wait will ultimately inherit the framework that someone else builds.

Know someone who'd find this useful?

Forward to a Colleague

Keep Reading