3, 2, 1: Health AI Brief
Every Friday
March 20, 2026

AI is reshaping healthcare fast. Below are 3 key AI developments, 2 research studies, and 1 takeaway to help you lead with AI this week. Target read time: 5 minutes.

3 Market Signals

Perplexity launched "Perplexity Health" this week, connecting users' EHRs (via b.well, covering 1.7 million providers), wearable data from Apple Health and Fitbit, and lab results into a single AI-powered search interface. It's the latest in a rapid-fire wave: OpenAI launched ChatGPT Health on January 7 (230 million users already ask health questions weekly), Anthropic followed five days later with Claude for Healthcare (January 12), Amazon rolled out Health AI on March 10, and Microsoft debuted Copilot Health on March 12.

So what?

In 10 weeks, five major tech companies have entered consumer health AI. Patients are getting their health answers from AI whether or not their health system or health plan endorses it.

Read the full story →

Under CMS's Interoperability and Prior Authorization Final Rule (CMS-0057-F), Medicare Advantage plans, Medicaid managed care plans, CHIP entities, and QHP issuers on federal exchanges must publicly post prior authorization metrics by March 31, 2026. Required disclosures: request volumes, approval and denial rates, appeal outcomes, and average decision turnaround times. This is the first-ever mandatory reporting cycle.

So what?

For the first time, plan-level prior auth performance will be visible to regulators, providers, and members. Plans with high denial rates or slow turnarounds face reputational exposure, and I would expect providers to bring the data to the negotiating table.

Read the full story →

Doximity's 2026 State of AI in Medicine report surveyed 3,151 U.S. physicians across 15 specialties. AI adoption jumped from 47% to 63% in under a year. Neurologists lead at 64%, followed by gastroenterologists (61%) and internists (60%). Top uses: literature search (35%) and AI scribes (29%). 90% say AI can reduce "pajama time"—after-hours charting that drives burnout—and 23% say it already has. But 71% cite accuracy and reliability as their top concern.

So what?

Physicians are adopting AI surprisingly fast, yet the use cases remain narrow: mostly lower-risk support tasks like literature search and documentation, not clinical decision-making.

Read the full report →

2 Research Studies

The AIMS study—the largest AI breast cancer screening study in NHS history—analyzed 175,000 women across five NHS sites in two linked papers published in Nature Cancer. AI as a second reader detected 9.33 cancers per 1,000 women vs. 7.54 for human-only reading. For first-time screens: 8.8% higher cancer detection with 39.3% fewer recalls. AI also caught 25% of interval cancers—those missed between routine scans. Average processing time: 17 minutes (AI) vs. 2 days (human). False positives reduced by up to 69%.

Why it matters

This is strong evidence for AI in population-level cancer screening: peer-reviewed, prospective, and at NHS scale. It has implications for staffing and workload, of course, but also for coverage decisions and screening recommendations.

Read the coverage →  |  Nature Medicine trial →

A randomized controlled trial of 70 clinicians (Stanford, Harvard, Beth Israel, Microsoft) tested two collaborative workflows: AI provides a first opinion before the clinician, or AI provides a second opinion after. Both workflows improved diagnostic accuracy—AI-first reached 85% vs. 75% unassisted, AI-second reached 82%. Neither was statistically different from AI alone (90%). The system used a custom GPT that synthesized both perspectives, highlighting agreement and disagreement.

Why it matters

The question is no longer whether AI improves diagnosis; it's which collaborative workflow works best in deployment. Worth noting: Nature received this article in July 2025, meaning the study used models from well before then. Those "old" models performed this well. It's hard to imagine how today's models would do, and harder still to imagine what's in store next.

Read the study →

1 Key Insight
Providers are arming up with AI to fight prior auth — and payers are about to publish their denial rates

Latent Health just raised $80 million at a $600 million valuation to automate prior authorization for specialty drugs. Its AI "clinical reasoning engine" ingests unstructured doctors' notes and lab results, compiles evidence against insurer criteria, and in some cases even calls payers to check on request status. Ochsner Health cut review times by 75%. Over 45 health systems are already on the platform.

This isn't an isolated bet. Provider-side AI for prior auth is now a funded category—deployed in production and built to optimize submissions against payer criteria. Meanwhile, CMS-0057-F forces payers to publicly post denial rates, approval times, and appeal outcomes by March 31, 2026. For the first time, the data behind utilization management will be visible to everyone: regulators, providers, and members.

Takeaway

On one hand, providers armed with AI will submit cleaner, faster, better-documented requests. On the other, payers with high denial rates will face public scrutiny; the old gatekeeping model becomes untenable once the numbers are on a public website. Ideally, the end point is a compromise grounded in the truth of each case: what's actually needed, what was actually done, who is actually responsible, and what's right clinically and ethically.

Know someone who'd find this useful?

Share

Keep Reading