
Artificial intelligence has become a crucial part of our technological infrastructure and the brains behind many consumer devices. In less than a decade, machine learning algorithms based on deep neural networks have evolved from recognizing cats in videos to enabling your smartphone to perform real-time translation between 27 different languages. This progress has sparked the use of AI in drug discovery and development.

Artificial intelligence can improve efficiency and outcomes in drug development across therapeutic areas. For example, companies are developing AI technologies that hold the promise of preventing serious adverse events in clinical trials by identifying high-risk individuals before they enroll. Clinical trials could be made more efficient by using artificial intelligence to incorporate other data sources, such as historical control arms or real-world data. AI technologies could also be used to magnify therapeutic responses by identifying biomarkers that enable precise targeting of patient subpopulations in complex indications.


Innovation in each of these areas would provide substantial benefits to those who volunteer to take part in trials, not to mention downstream benefits to the ultimate users of new medicines.

Misapplication of these technologies, however, can have unintended and harmful consequences. To see how a good idea can turn bad, just look at what has happened to social media since recommendation algorithms began deciding what we see: misinformation spreads faster than the truth, and our leaders are scrambling to protect our political systems.

Could artificial intelligence and machine learning similarly disrupt our ability to identify safe and effective therapies?


Even well-intentioned researchers can develop machine learning algorithms that exacerbate bias. For example, many datasets used in medicine are derived from mostly white, North American and European populations. If a researcher applies machine learning to one of these datasets and discovers a biomarker that predicts response to a therapy, there is no guarantee the biomarker will work well, if at all, in a more diverse population. If such a biomarker were used to define the approved indication for a drug, that drug could end up having very different effects in different racial groups simply because it was filtered through the biased lens of a poorly constructed algorithm.
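To make that failure mode concrete, here is a minimal sketch in Python, using scikit-learn and purely synthetic data; the two "groups," the effect sizes, and all numbers are invented for illustration. It shows how a biomarker model trained on a skewed cohort can look accurate overall while performing far worse in an underrepresented subgroup, and why its performance needs to be evaluated separately within each subpopulation:

```python
# Hypothetical illustration with synthetic data: a "biomarker" classifier
# trained on a skewed cohort can score well overall while failing an
# underrepresented group. Nothing here reflects a real trial or population.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, effect):
    """Synthetic cohort: one biomarker whose link to treatment response
    has strength `effect`, which differs between subpopulations."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-effect * x[:, 0]))  # response probability
    y = rng.binomial(1, p)                   # observed response (0/1)
    return x, y

# Training data: 95% from group A, where the biomarker is strongly
# predictive, and 5% from group B, where it is only weakly predictive.
xa, ya = make_cohort(950, effect=2.0)
xb, yb = make_cohort(50, effect=0.3)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Pooled accuracy looks fine; stratified evaluation tells another story.
xa_test, ya_test = make_cohort(1000, effect=2.0)
xb_test, yb_test = make_cohort(1000, effect=0.3)
print("AUC, group A:", roc_auc_score(ya_test, model.predict_proba(xa_test)[:, 1]))
print("AUC, group B:", roc_auc_score(yb_test, model.predict_proba(xb_test)[:, 1]))
```

Reporting only the pooled number would hide the gap, and the same logic applies whether the model is a simple logistic regression, as here, or a deep neural network.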

Concerns about bias and generalizability apply to most data-driven decisions, including those obtained using more traditional statistical methods. But the machine learning algorithms that enable innovations in drug development are more complex than traditional statistical models. They need larger datasets, more sophisticated software, and more powerful computers. All of that makes it more difficult, and more important, to thoroughly evaluate the performance of machine learning algorithms.

Companies operating at the intersection of drug development and technology need standards to ensure that artificial intelligence tools function as intended.

The FDA has already issued several proposals around the regulation of AI products, and it now has an opportunity to build on these efforts. The Center for Devices and Radiological Health has reviewed and cleared a number of devices that use AI. The center has also released a proposed framework, “Artificial Intelligence and Machine Learning in Software as a Medical Device.” These proposals, though, don’t necessarily apply to AI-based tools used as part of the drug development process. As a result, biopharmaceutical and technology companies aren’t sure how these tools fit into current regulatory frameworks.

I’m the founder and CEO of a company that uses artificial intelligence to streamline clinical trials and make them more efficient. You might expect me to counsel the FDA to back off on creating hurdles for companies that want to apply artificial intelligence to drug development. Not so. In a presentation to the FDA on Thursday, I’ll argue that the agency should play an important role in ensuring that AI-based drug development tools meet appropriate standards.

The FDA has an opportunity to ease regulatory uncertainty by proposing a framework that guides how sponsors can use AI tools within drug development programs. By engaging with industry to develop a workable regulatory framework, the FDA can balance the significant public health benefits these technologies could provide against its mission to ensure that they are reliable. At the same time, the agency could create a pathway for formal qualification of AI-based drug-development tools, ensuring that they are sufficiently vetted.

In addition, it could encourage exploratory use of AI-based technologies in drug development through new regulatory pathways, such as the Complex Innovative Trial Designs Pilot Program, allowing sponsors and regulators to better understand these technologies' advantages and disadvantages.

These concrete actions would open the door to innovative approaches to clinical trials that would make drug development more efficient and help deliver new treatments as quickly as possible to the patients who need them.

Charles K. Fisher, Ph.D., is the founder and CEO of San Francisco-based Unlearn.AI, Inc.
