opinion
CNBC Tech

Can AI outperform doctors? Experts weigh the pros and cons

Read the full article on CNBC Tech.

What Happened

One CEO said people should be using AI to understand their health much more than they already do.

Fordel's Take

A CEO claimed people should lean on AI like GPT-4 and Claude for health questions far more than they currently do. The framing is consumer-facing: chatbots as a first-line triage layer before a doctor visit.

For builders, this is a liability conversation, not a capability one. Health RAG systems already hit 90%+ on board-style evals, but eval accuracy is not clinical safety. Stop shipping medical Q&A features behind a thin disclaimer and calling it ethical — that disclaimer will not survive a single FDA inquiry or malpractice subpoena.

Consumer health app teams should add structured refusal flows and cite sources via tools like Perplexity's API. B2B clinical teams can ignore CEO soundbites.

What To Do

Wire structured refusals and source citations into health-adjacent LLM features instead of relying on a disclaimer string, because regulators read outputs, not footers.
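A minimal sketch of what a structured refusal flow could look like, in contrast to a disclaimer string. All names here (`HIGH_RISK_PATTERNS`, `triage_query`, `Reply`) are hypothetical illustrations, not any product's actual API; a real system would back the "answer" branch with retrieval and hard-fail when no citable source comes back.

```python
import re
from dataclasses import dataclass, field

# Hypothetical query categories that trigger a refusal instead of an answer.
# A production system would use a trained classifier, not keyword regexes.
HIGH_RISK_PATTERNS = [
    r"\bdos(e|age|ing)\b",       # dosing questions
    r"\binteraction",            # drug-interaction questions
    r"\b(overdose|taper)\b",     # acutely dangerous topics
]

@dataclass
class Reply:
    kind: str                                      # "refusal" or "answer"
    text: str
    citations: list = field(default_factory=list)  # must be non-empty for answers

def triage_query(query: str) -> Reply:
    """Route a health question: refuse high-risk queries with a structured
    message; otherwise answer only with citations attached."""
    q = query.lower()
    for pattern in HIGH_RISK_PATTERNS:
        if re.search(pattern, q):
            return Reply(
                kind="refusal",
                text="I can't advise on dosing or drug interactions. "
                     "Please ask a pharmacist or your doctor.",
            )
    # Placeholder for the LLM + retrieval call; refuse if citations are empty.
    return Reply(kind="answer", text="(model answer)", citations=["(source)"])

print(triage_query("What dosage of ibuprofen is safe with warfarin?").kind)
print(triage_query("What are common causes of fatigue?").kind)
```

The point of the structured `Reply` type is that refusals and citations become inspectable, loggable product behavior rather than a footer the model may or may not echo.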

Builder's Brief

Who

Consumer health and wellness app teams shipping LLM chat features

What changes

Refusal logic, citation requirements, and logging for medical queries become product requirements, not nice-to-haves

When

Weeks

Watch for

First FTC or FDA enforcement action against a consumer app for unsafe LLM medical output

What Skeptics Say

It is a CEO talking his book — chatbot hallucination rates on drug interactions and dosages are still measurably dangerous, and no peer-reviewed trial backs the 'use AI more' claim at population scale.
