
FDA Loosens Medical AI Oversight: 1,000 Diagnostic Apps Without Approval

David Brooks · February 10, 2026 · 9 min read

Key takeaways

The FDA just made it easier for AI diagnostic tools to reach the market without review, while 95% of already-approved devices never report safety issues. Meanwhile, Epic and Cerner face unfair competition from startups that skip regulation, and ChatGPT diagnoses differently based on patient race.

On January 6, 2026, the FDA released guidance that reduces oversight of clinical decision support software and AI-enabled wearables. Over 1,000 diagnostic applications can now enter the market without pre-review. The same week, Martin Makary's confirmation hearings revealed his plan to move the agency at "Silicon Valley speed."

Here's my take: I've been covering the enterprise health sector for over a decade, and this is the first time I've seen a regulator openly admit they're prioritizing industry velocity over safety verification.

Makary's Silicon Valley Promise: What It Really Means for Patients

In June 2025, before his confirmation as FDA Commissioner, Martin Makary told FierceBiotech the agency should move at "Silicon Valley speed" and make the US "the best place for AI capital investment."

That philosophy explains the January 2026 guidance. This isn't a response to scientific evidence showing medical AI is safer than we thought. It's a political decision aligned with the Trump administration's pro-AI agenda.

Wearable devices that estimate blood pressure, oxygen saturation, or glucose using non-invasive sensors can now operate as "general wellness" products without FDA oversight, even if users make medical decisions based on those readings.

The distinction between "wellness" and "diagnosis" is legal, not physiological. If a diabetic adjusts their insulin based on a glucose estimate from an unregulated wearable that turns out to be 20% inaccurate, the consequences are medical, not wellness.
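To make the arithmetic concrete, here is a minimal Python sketch of how a 20% sensor error propagates into an insulin correction dose. Every number in it (the target, the correction factor, the textbook-style formula) is a hypothetical placeholder for illustration, not dosing guidance:

```python
# Illustrative only: how a 20% glucose sensor error propagates into an
# insulin correction dose. All numbers are hypothetical placeholders.

TARGET_MG_DL = 120       # hypothetical glucose target
CORRECTION_FACTOR = 50   # hypothetical: 1 unit lowers glucose ~50 mg/dL

def correction_dose(reading_mg_dl: float) -> float:
    """Textbook-style correction bolus: (reading - target) / correction factor."""
    return max(reading_mg_dl - TARGET_MG_DL, 0) / CORRECTION_FACTOR

true_glucose = 250.0             # what a lab draw would show
reads_low = true_glucose * 0.8   # wearable under-reads by 20%
reads_high = true_glucose * 1.2  # wearable over-reads by 20%

print(f"Dose from true value:    {correction_dose(true_glucose):.1f} units")
print(f"Dose from 20% low read:  {correction_dose(reads_low):.1f} units (under-treatment)")
print(f"Dose from 20% high read: {correction_dose(reads_high):.1f} units (overdose risk)")
```

Notice that the error doesn't stay at 20%: because a target is subtracted before dividing, the 20% low reading cuts the computed dose by nearly 40% in this example.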

Makary also announced plans to eliminate half of the FDA's digital health guidance documents, though the timeline isn't clear. That sends a signal to the industry: the FDA is getting out of the way.

Illinois and Nevada banned unregulated AI use in mental health therapy in 2025 after reports of dangerous chatbot advice. The federal response has been to go in the opposite direction: less oversight, more trust in industry self-regulation.

Let's be real: when only 5% of approved devices report adverse events, loosening oversight isn't regulatory innovation. It's abdicating the responsibility to protect patients in order to accelerate return on venture capital investment.

The 95% Blind Spot: Why FDA-Approved Devices Never Report Failures

Of the 950 AI-enabled medical devices the FDA approved through August 2024, only 5% have ever reported a safety issue. But here's what nobody's talking about: according to an academic report published in NCBI PMC in December 2025, the other 95% never sent the agency any safety data at all after entering the market.

Not because they're perfect. Because nobody verifies their performance once they're in hospitals.

The FDA argues that high-risk products "remain fully subject to oversight." But that oversight depends on post-market reports that almost never arrive. If we can't detect problems in devices that were already approved, how are we going to identify failures in ones that now enter the market without any review?

The healthcare AI market is worth $56 billion in 2026 according to Fortune Business Insights. The number of approved devices grew 49% annually between 2016 and 2023, with 221 new devices in 2023 alone. Surveillance infrastructure didn't grow at the same pace.
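As a quick sanity check on what those figures imply, assuming (generously) that the 49% growth rate was a constant annualized rate over the period:

```python
# Back-of-the-envelope check: what constant 49% annual growth implies for new
# device clearances between 2016 and 2023 (seven compounding periods).
annual_growth = 0.49
devices_2023 = 221

multiple = (1 + annual_growth) ** 7
print(f"Growth multiple over 7 years: ~{multiple:.0f}x")                # roughly 16x
print(f"Implied new devices in 2016:  ~{devices_2023 / multiple:.0f}")  # roughly 14
```

That is a roughly sixteen-fold expansion of the device pipeline in seven years, against a surveillance apparatus that, per the reporting data above, barely hears from the devices it already cleared.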

We're loosening oversight of a product category in which 95% of products already operate in a safety-data vacuum. It's unacceptable that in 2026 we still depend on manufacturers to report their own failures when we know 95% of them don't.

| Surveillance Aspect | Current Reality | What FDA Assumes |
| --- | --- | --- |
| Post-market reporting | 5% send data | 100% compliance |
| Performance audits | None required | "High-risk" devices monitored |
| Bias detection | No mechanism | Manufacturers self-police |
| Algorithm drift tracking | Not mandated | Assumed stable post-deployment |
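For what it's worth, the little post-market data that does exist is public: the FDA's openFDA API exposes MAUDE adverse-event reports. Here's a minimal sketch of how a health system or researcher could check the report count for a device it uses; the brand name is a placeholder, and the search field follows openFDA's published device/event schema:

```python
# Minimal sketch: count MAUDE adverse-event reports for a device brand name
# via openFDA. The brand name used below is a placeholder, not a real device.
import requests

OPENFDA_DEVICE_EVENTS = "https://api.fda.gov/device/event.json"

def adverse_event_count(brand_name: str) -> int:
    """Total adverse-event reports openFDA holds for a device brand name."""
    params = {
        "search": f'device.brand_name:"{brand_name}"',
        "limit": 1,  # we only need the result metadata, not the documents
    }
    resp = requests.get(OPENFDA_DEVICE_EVENTS, params=params, timeout=30)
    if resp.status_code == 404:
        return 0  # openFDA returns 404 when a search matches no reports
    resp.raise_for_status()
    return resp.json()["meta"]["results"]["total"]

if __name__ == "__main__":
    print(adverse_event_count("EXAMPLE AI TRIAGE ASSISTANT"))  # placeholder
```

A zero from a query like this can mean a flawless device, or it can mean nobody ever filed a report. The current system can't tell the difference.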

Watson's Ghost: When IBM Treated Cancer Patients Without FDA Approval

Between 2015 and 2018, IBM deployed Watson for Oncology in hospitals across the United States, India, Thailand, South Korea, and China. The system recommended cancer treatments based on analysis of medical records and oncology literature.

Watson for Oncology never went through formal FDA review. There were no clinical trials demonstrating safety or accuracy. IBM marketed it as a decision support tool, not a medical device.

In 2018, leaked internal documents revealed Watson was recommending "unsafe and incorrect" treatments in multiple cases, including suggesting bevacizumab (which can cause severe bleeding) to a patient with cerebral hemorrhage.

IBM shut down Watson Health in 2022 after years of losses.

The precedent is established: large AI systems can deploy in hospitals without FDA review if structured as support software. The 2026 guidance institutionalizes this loophole. Software that provides a single diagnostic or therapeutic recommendation can now operate without oversight if the logic is "transparent" and doctors can "independently review the basis" of the recommendation.

What does "independently review" mean when the algorithm is a large language model with 175 billion parameters? Or a convolutional neural network trained on millions of radiological images?

Algorithmic transparency is an aspiration, not a technical reality in modern AI. The FDA just made it easier for opaque tools to influence clinical decisions without proving they work.

Last November, at a digital health conference in Barcelona, a university hospital CTO told me off the record that his hospital has been using an AI system for emergency triage for eight months without knowing whether the FDA had reviewed it. "Nobody asked us," he said.

ChatGPT's Race Problem: Different Diagnoses for Identical Symptoms

Researchers documented in November 2025 that ChatGPT, when evaluating college students with a sore throat, placed HIV and syphilis "much higher" in the differential diagnosis if the patient was specified as Black, compared to white patients with identical symptoms.

Tools like ChatGPT Health (launched by OpenAI in January 2026, right when the FDA relaxed oversight) aren't subject to FDA review if positioned as "general information" rather than medical devices.

The 2026 guidance allows clinical decision support software that offers a single recommendation to operate without oversight if it meets "non-device" criteria. That distinction is technical, not clinical: to the doctor consulting the tool, it's information that influences diagnosis.

| Clinical Scenario | White Patient | Black Patient | Difference |
| --- | --- | --- | --- |
| College student, sore throat, fever | Strep throat, mononucleosis | HIV, syphilis ranked "much higher" | Documented racial bias |
| Context of use | Same prompt; only the stated race varies | Same prompt; only the stated race varies | Algorithm without FDA oversight |

AI algorithms trained on historical data inherit the biases of that data. In medicine, that means patients from racial minorities, women, and populations underrepresented in clinical trials receive distorted differential diagnoses.
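Nothing in the new guidance requires that bias to be measured, but the measurement itself isn't hard. Here's a minimal paired-prompt probe using the OpenAI Python SDK; the model name and prompt wording are my illustrative assumptions, and a real audit would need hundreds of vignettes and a clinical scoring rubric, not two eyeballed completions:

```python
# Minimal paired-prompt bias probe: an identical clinical vignette where only
# the stated race varies. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "A 20-year-old {race} college student presents with a sore throat and "
    "fever for two days. List the top five differential diagnoses, most likely first."
)

def differential_for(race: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model can be swapped in
        messages=[{"role": "user", "content": VIGNETTE.format(race=race)}],
        temperature=0,  # reduce run-to-run variance so the pair is comparable
    )
    return response.choices[0].message.content

for race in ("white", "Black"):
    print(f"--- {race} patient ---")
    print(differential_for(race))
```

If the ranking shifts when the only change is the word describing race, that's a measurable, auditable defect. Nobody is obligated to run the audit.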

The 2026 guidance also eliminated the exclusion for "time-critical decision-making" software that existed in the 2022 guidance. Tools that suggest diagnoses in emergency rooms can now operate without pre-review.

There's no mechanism to detect "AI drift" (when algorithms change their behavior after deployment) or mandatory audits of demographic performance equity.
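Neither is technically exotic. Here's a minimal sketch of the kind of drift check the guidance could have mandated: track a model's rolling post-deployment accuracy against the performance claimed at clearance and flag when it degrades beyond a tolerance (the window size, tolerance, and metric are placeholder choices, not a standard):

```python
# Minimal drift check: flag when rolling post-deployment accuracy drops more
# than a tolerance below the accuracy claimed at clearance. Window, tolerance,
# and the metric itself are placeholder choices.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = prediction confirmed, 0 = contradicted

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(1 if prediction_correct else 0)

    def drifting(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment outcomes yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

# Usage: feed in confirmed outcomes as chart review catches up with predictions.
monitor = DriftMonitor(baseline_accuracy=0.91)
monitor.record(prediction_correct=True)
print(monitor.drifting())
```

That's a few dozen lines. The guidance requires none of it.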

The elephant in the room is this: we're deploying clinical AI faster than we can measure whether it harms patients, and we're doing it deliberately.

Epic and Cerner's Regulatory Penalty: How Startups Skip the FDA

In November 2025, Epic Systems announced an integration with Azure OpenAI to add generative capabilities to its EHR platform, used by hundreds of US hospitals. Oracle Health (formerly Cerner) launched its Clinical Digital Assistant (CDA) with generative AI for medical documentation in October.

Both companies are integrating AI defensively: they know their enterprise clients expect these features, but they also know any algorithmic error could expose them to multimillion-dollar lawsuits because their systems are deeply integrated into clinical workflow.

Startups like Abridge (ambient clinical documentation), PathAI (pathology diagnosis), and Hippocratic AI (voice agents for non-diagnostic tasks) can now operate under the new guidance with minimal oversight if they structure their software as "clinical decision support" offering a single recommendation.

Here's what nobody tells you: this creates regulatory arbitrage. Incumbents who already went through FDA approvals compete against new entrants who don't have to. Epic and Cerner carry compliance costs their competitors avoid.

| Aspect | Epic/Cerner | Startups Without FDA |
| --- | --- | --- |
| EHR Integration | Deep, years of development | Superficial, API or standalone |
| Legal Liability | High, part of clinical system | Diffuse, "just suggestions" |
| Regulatory Cost | Millions in compliance | Virtually zero |
| Competitive Edge | Installed infrastructure | Speed to market |

PathAI has FDA 510(k) clearance and CE mark for its AISight Dx system, giving it regulatory credibility. It competes with generative AI tools that analyze pathology images without any review.

This isn't fair competition. I haven't had access to Epic's or Oracle's proprietary algorithms to verify their comparative accuracy, so my analysis is based on public documentation and enterprise user reports.

If you ask me directly: enterprise vendors that invested in compliance should demand regulatory parity, not accept being undercut by startups gaming the "non-device" loophole.

What You Can Do Right Now

AI has real potential in medicine: radiology algorithms have demonstrated accuracy comparable to that of radiologists in breast cancer detection, and documentation tools can reduce physician burnout.

But that potential isn't realized by eliminating oversight. It's realized through rigorous clinical trials, equity audits, effective post-market surveillance, and real algorithmic transparency.

What the FDA did on January 6 wasn't deregulation. It was dismantling the only mechanism we had to know if these tools work before they fail on the wrong patient.

If you're a patient: ask your doctor if AI tools they use are FDA-cleared. If they're not, ask what evidence supports their accuracy. If they don't know, consider getting a second opinion from a provider who does.

If you're a clinician: document when AI recommendations contradict your judgment (a minimal log sketch follows below). Report adverse events even if manufacturers don't require it. You're the last line of defense.

If you're a health system CIO: demand vendor contracts include liability for algorithmic failures. Don't accept "decision support" as a shield against accountability.
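A practical note on the clinician advice above: the documentation doesn't need to be elaborate. Here's a minimal sketch of an override log; the fields are my suggested starting point, not a reporting standard:

```python
# Minimal override log for cases where a clinician rejects an AI recommendation.
# The fields are a suggested starting point, not a regulatory standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIOverrideRecord:
    tool_name: str            # which AI system made the recommendation
    tool_version: str         # versions matter: behavior can change between releases
    recommendation: str       # what the tool suggested
    clinician_decision: str   # what was actually done instead
    rationale: str            # why the recommendation was rejected
    patient_harmed: bool      # if True, this should trigger an adverse-event report
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

record = AIOverrideRecord(
    tool_name="triage-assistant",              # placeholder
    tool_version="2.3.1",                      # placeholder
    recommendation="Discharge, low acuity",
    clinician_decision="Admitted for observation",
    rationale="Vitals trending down; the recommendation ignored the lactate result",
    patient_harmed=False,
)
print(json.dumps(asdict(record), indent=2))
```

Enough of these records, kept consistently, become exactly the post-market evidence the current system fails to collect.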

The FDA chose Silicon Valley speed over patient safety. That doesn't mean we have to.

Frequently Asked Questions

What exactly changed in FDA regulation in January 2026?

The FDA published guidance on January 6, 2026, that reduces oversight of clinical decision support software and wearable devices. Now, tools that offer a single diagnostic recommendation can enter the market without pre-review if they meet non-medical software criteria. The 'general wellness' category was also expanded to include wearables that estimate physiological parameters like blood pressure or glucose.

Why is it a problem that 95% of AI devices don't report adverse events?

It means effective post-market surveillance doesn't exist. The FDA approves devices based on pre-market data but depends on subsequent reports to detect problems in real-world use. If 95% of devices never report failures, it's impossible to know if they're working correctly in hospitals or causing silent harm.

How does this affect Epic Systems and Oracle Health (Cerner)?

Epic and Cerner already invested millions in passing FDA compliance processes for their EHR systems. Now they compete against startups that can launch AI tools without those costs or scrutiny. This creates regulatory arbitrage: incumbents carry legal liability their competitors avoid.

What is racial bias in medical AI algorithms?

AI algorithms trained on historical data inherit the biases of that data. In ChatGPT's documented case, Black patients with identical symptoms to white patients received HIV and syphilis higher in the differential diagnosis. This reflects historical disparities in medical care, but without FDA oversight, these biases aren't audited or corrected.

Can I trust wearables that estimate glucose or blood pressure without FDA approval?

It depends on use. If you use them as a general wellness reference, risk is low. But if you make medical decisions (adjust insulin, change blood pressure medication) based on readings from a clinically unvalidated device, risk increases significantly. The FDA no longer requires these devices to demonstrate clinical accuracy if sold as 'general wellness.'

Sources & References

The sources used to write this article

  1. FDA announces sweeping changes to oversight of wearables, AI-enabled devices. STAT News, Jan 6, 2026.
  2. The illusion of safety: A report to the FDA on AI healthcare product approvals. NCBI PMC, Dec 1, 2025.
  3. After FDA's pivot on clinical AI, we need AI safety research more than ever. STAT News Opinion, Jan 15, 2026.

All sources were verified at the time of article publication.

Written by David Brooks

Veteran tech journalist covering the enterprise sector. Tells it like it is.

Tags: fda, medical ai, regulation, medical devices, epic systems, cerner, chatgpt health, algorithmic bias, watson oncology, martin makary
