There are two big issues with Apple’s reported ‘AI doctor’ plan
What Happened
Bloomberg’s Mark Gurman reported a couple of days ago on what he described as “Apple’s biggest push into health yet with a new AI doctor.” While I completely understand why Apple would want to do this, I think the company will need to tread extremely carefully to avoid the risk of doing more harm than good.
Fordel's Take
An Apple 'AI doctor' makes for slick marketing, but we can't ignore the serious liability and risk that come with letting AI play doctor. The two big issues are accuracy and accountability. If a large language model hallucinates a diagnosis or misses a critical symptom, who's liable? Apple, the developer, or the machine?
Trusting a black box with health decisions is asking for disaster. This isn't a feature to slap on; it demands military-grade safety protocols, transparent error reporting, and possibly mandatory human oversight for high-stakes decisions.
I don't think they'll tread carefully enough if the incentive is just market dominance.
What To Do
Any medical AI deployment must feature mandatory, auditable human oversight and clear liability frameworks.
