3, 2, 1: Health AI Brief
Every Friday
March 6, 2026

AI is reshaping healthcare fast. Below are this week's 3 key AI developments, 2 studies, and 1 takeaway to help you lead with AI. Target read time: 5 minutes.

3 Market Signals

California Attorney General Rob Bonta sent a letter to HHS opposing a proposed rule that would eliminate "model cards" — essentially nutrition labels for AI healthcare tools that disclose how models were developed, tested, and where they might carry bias. The rule, titled "Health Data, Technology, and Interoperability: ASTP/ONC Deregulatory Actions To Unleash Prosperity" (HTI-5), would remove certification criteria that currently require these disclosures for AI-enabled health products. Bonta called model cards "one of the most significant guardrails currently in place on a federal level," citing a 2019 study in Science that found racial bias in a widely used hospital algorithm for identifying high-risk patients.

So what?

While states like California push for tighter AI regulation, the federal government is moving in the opposite direction: accelerating clinical AI adoption while pulling back the transparency requirements that help health systems evaluate what they're adopting. That tension is now a legal and political fight. Health leaders should prepare for a patchwork, with federal rules loosening as states tighten their own.

Read the press release →

STAT reviewed early submissions to HHS's December request for information on accelerating clinical AI adoption — and the industry's ask is clear. Epic, Oracle, Abridge, Aidoc, Tempus, and Doctronic each filed proposals centering on three themes: reformed health data privacy rules to accommodate AI training, reliable reimbursement mechanisms for AI-based care, and lighter regulatory requirements. These six filings are a small slice of the roughly 7,300 comments submitted, but they represent the strategic positions of healthcare's biggest platform players. The RFI follows the Trump administration's broader push to reduce FDA oversight of AI-enabled clinical tools.

So what?

Industry wants fewer guardrails. California says the existing ones aren't enough. Health leaders are caught in the middle — needing to adopt AI quickly while building governance frameworks that may outlast whatever regulatory regime wins. The companies writing these proposals are also the ones selling their own tools, which is worth keeping in mind.

Read the STAT story →

On March 5, two of tech's biggest enterprise players launched competing agentic AI platforms for healthcare. AWS debuted Amazon Connect Health with five AI capabilities — patient verification, appointment scheduling, medical history compilation, ambient clinical documentation, and medical coding — all HIPAA-eligible and already deployed at UC San Diego Health. Hours later, Salesforce unveiled six new Agentforce Health agents covering referrals, EHR writeback, claims and coverage, rural health, epidemiology analysis, and hospital operations, with new integrations from Verily, HealthEx, and Viz.ai.

So what?

AWS and Salesforce shipping healthcare AI agents on the same day tells you where the market is headed. Both are targeting admin tasks that eat clinical staff time — scheduling, documentation, coding, claims. For health plans evaluating vendor partnerships, this changes the calculus: your EHR vendor, your CRM vendor, and your cloud provider all now want to own your AI workflow. That's a lot of overlapping bets to manage.

Read the AWS announcement → · Read the Newsweek coverage →

2 Research Studies

Stanford researchers developed Merlin, a vision-language AI model trained on more than 15,000 3D abdominal CT scans paired with radiology reports and nearly one million diagnostic codes — the largest abdominal CT dataset compiled to date. Tested on over 50,000 previously unseen scans from four hospitals, Merlin predicted diagnostic codes with 81% accuracy across 692 different conditions (rising to 90% for a core subset of 102). More striking: it identified patients at higher risk of developing diabetes, osteoporosis, and heart disease within five years at 75% accuracy, compared to 68% for competing models. The researchers evaluated Merlin across more than 750 individual tasks spanning diagnostics, prognostics, and quality assessment. Both the model and the full dataset are publicly available, so other researchers and health systems can build on the work.

Why it matters

Most medical AI models are narrow — trained on one task for one condition. Merlin is a generalist: a single model handling hundreds of diagnostic tasks and predicting disease years before symptoms. The 75% accuracy on five-year disease prediction isn't perfect, but it's a meaningful lead over existing approaches — and it extracts these insights from scans already being taken for other reasons. If the results hold up in prospective trials, routine imaging could double as a screening tool.

Read the NIH news release → · Read the Nature study → · View on GitHub →

Researchers at Michigan State and the University of Michigan surveyed 3,000 U.S. adults, generating 36,000 observations from conjoint experiments where participants evaluated mock AI-assisted medical visits. The strongest driver of trust: AI that performs at or above specialist level, which boosted selection probability by 24.8–32.5%. Having a clinician present in the visit increased selection by 18.4%. Patients also favored formal governance — FDA approval, Mayo Clinic certification, or local hospital validation — over no oversight at all. Training data mattered too: AI systems described as using representative, high-quality data were preferred over those with vague or unrepresentative datasets.

Why it matters

The debate about clinical AI usually focuses on what regulators and vendors want. This study asks what patients want — and the answer is pretty straightforward. They're not anti-AI. They're anti-unaccountable AI. Show them it works, keep a clinician in the loop, and get some form of institutional sign-off, and more patients are on board. For health leaders, the mandate is clear: build for patients, and with patients.

Read the JAMA study →

1 Key Insight
The Pilot Trap: Why 76% of Healthcare Organizations Can't Scale Their AI

76% of healthcare organizations have more AI pilot programs than they can scale. That's the headline from this week's Kyndryl Healthcare Readiness Report — and it points to a structural problem: moving from pilot to enterprise deployment requires organizational readiness that most systems haven't built. The same report finds that 55% of organizations worry about keeping pace with AI regulations, and only 30% feel prepared to adapt.

The clinical evidence base isn't keeping up either. Stanford and Harvard's ARISE network reviewed more than 500 medical AI studies and found nearly half relied on exam-style questions — only 5% used real patient data. When researchers added "none of the above" as an answer option, accuracy dropped by more than a third.

Here's what's missing from most of these conversations: the patient. This week's JAMA study showed that patients aren't opposed to AI — they're opposed to AI without accountability. Performance, clinician presence, and governance aren't nice-to-haves. They're the minimum bar patients expect.

Takeaway

Organizations are flush with pilots but short on scaled solutions. Closing that gap takes internal AI governance frameworks, real-world evaluation criteria, and the organizational muscle to move pilots from "interesting experiment" to "standard workflow." The pressure to move faster isn't lost on anyone — but neither is the risk of scaling tools that haven't been tested in the conditions where they'll actually be used.

Know someone who'd find this useful?

Forward to a Colleague

Keep Reading