Regulating adaptive medical artificial intelligence: Can less oversight lead to greater compliance?

Bibliographic Details
Main Authors: LAI, Jiayi, XU, Liang, FANG, Xin, DAI, Tinglong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/lkcsb_research/7644
https://ink.library.smu.edu.sg/context/lkcsb_research/article/8643/viewcontent/ssrn_5009572.pdf
Institution: Singapore Management University
Description
Summary: As of June 2024, the U.S. Food and Drug Administration (FDA) has approved 950 medical artificial intelligence (AI) devices. The current regulatory framework freezes AI algorithms after approval, requiring new submissions for updates to ensure compliance with Good Machine Learning Practices (GMLP). This approach imposes a significant administrative burden while hindering the ability of AI algorithms to learn from new data. To address these challenges, the FDA has explored a novel pathway known as Predetermined Change Control Plans (PCCP), which allows developers to outline future changes during the initial submission and exempts approved changes from regulatory review. Yet the impact of this exemption on GMLP compliance remains uncertain. In this paper, we model the strategic interaction between a developer and a regulator as a two-stage game with asymmetric information. The developer chooses whether to follow or deviate from GMLP when developing and retraining the AI algorithm, while the regulator reviews the marketing-clearance application for approval. Our analysis shows that, contrary to intuition, less review can lead to greater compliance. This outcome arises, even before accounting for the administrative burden saved, when (1) auditing capability is moderate and (2) the potential for efficacy improvements through retraining is substantial. Conversely, reclearance is valuable when regulatory review effectively detects noncompliance or when efficacy improvements from retraining are unlikely. We also show that adaptive algorithms offer advantages over frozen algorithms not only in improved device efficacy but also in greater compliance. Interestingly, these advantages are particularly salient when regulatory oversight has limited ability to detect noncompliance.
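
To make the structure of the modeled interaction concrete, the sketch below enumerates the developer's compliance choices at the development and retraining stages under the two regulatory regimes (reclearance of updates versus a PCCP exemption). The payoff function, the parameter names (c, g, f, p), and all numeric values are hypothetical placeholders introduced here for illustration only; they are not the paper's model, and the paper's equilibrium results depend on its actual payoff and information structure.

# Minimal sketch of the two-stage developer-regulator game described above.
# The payoff structure and every numeric value are illustrative placeholders,
# not the specification or the results of the paper.

from dataclasses import dataclass
from itertools import product

@dataclass
class Params:
    c: float = 1.0   # per-stage cost of following GMLP (placeholder)
    g: float = 3.0   # efficacy gain from retraining on new data (placeholder)
    f: float = 6.0   # penalty if noncompliance is detected in a review (placeholder)
    p: float = 0.5   # probability a review detects noncompliance ("auditing capability")

def developer_payoff(comply_dev: bool, comply_retrain: bool,
                     review_update: bool, prm: Params) -> float:
    """Expected developer payoff for one strategy profile.

    comply_dev     -- follow GMLP during initial development
    comply_retrain -- follow GMLP when retraining the adaptive algorithm
    review_update  -- True: the update is re-reviewed (reclearance regime);
                      False: the update is exempt under an approved PCCP
    """
    payoff = prm.g  # value of deploying the retrained algorithm (placeholder)
    # The initial submission is reviewed in both regimes.
    payoff -= prm.c if comply_dev else prm.p * prm.f
    # The update is reviewed only under the reclearance regime.
    if comply_retrain:
        payoff -= prm.c
    elif review_update:
        payoff -= prm.p * prm.f
    return payoff

if __name__ == "__main__":
    prm = Params()
    for review_update in (True, False):
        regime = "reclearance" if review_update else "PCCP exemption"
        print(f"--- {regime} ---")
        for comply_dev, comply_retrain in product((True, False), repeat=2):
            v = developer_payoff(comply_dev, comply_retrain, review_update, prm)
            print(f"  dev comply={comply_dev!s:5}  retrain comply={comply_retrain!s:5}"
                  f"  expected payoff={v: .2f}")

Running the sketch simply tabulates expected payoffs for each strategy profile under the placeholder parameters; the paper's findings about when exemption improves compliance follow from its own model, not from this illustration.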