The Food and Drug Administration has issued new guidelines on how it will regulate mobile health software and products that use artificial intelligence to help doctors decide how to treat patients.
The guidelines, contained in a pair of documents released Thursday morning, clarify the agency’s intent to focus its oversight powers on AI decision-support products that are meant to guide treatment of serious or critical conditions, but whose rationale cannot be independently evaluated by doctors.
To further define the types of products that will require greater scrutiny, the FDA gave the example of a clinical decision support (CDS) tool that, without explaining its rationale, identifies hospitalized type 1 diabetic patients at high risk of severe heart problems following surgery. If such a product were to give an inappropriate recommendation, the agency said, it could result in serious harm to the patient.