Providing context to AI/ML products may address explainability says FDA
Regulatory News | 20 November 2023
FDA's Shawn Forrest (left) and Vertex Pharmaceuticals' Matt Schmucki (right) talked about AI/ML at the 2023 AI Summit. (Source: Ferdous Al-Faruque)
Cincinnati, OH – Context is important when trying to address issues of transparency and explainability in artificial intelligence and machine learning (AI/ML) products, according to a regulatory expert at the US Food and Drug Administration. Since many such products exist in a “black box,” providing context for their outputs may help ensure confidence in their results, he added.
As medical products companies race to adopt AI/ML technology, FDA wants them to provide as much transparency as possible about how their products make decisions and produce answers. The agency also wants the algorithms used by such products to be as explainable as possible to users, so they feel confident in the outputs.
“If you have an explainable algorithm that your device is using, then explain that to the user,” Shawn Forrest, a biomedical engineer at the FDA Center for Devices and Radiological Health (CDRH), said during a panel presentation at the 2023 AI Summit held by the AFDO/RAPS Healthcare Products Collaborative. “Help them understand enough context so that they can second guess it if they need to, and understand what the inputs are that they need to think about as they're interpreting the output.”
AI/ML products often exist in a “black box,” he said, meaning that much of the product’s operation is obscured from the user.
“It may be complex enough that you can't explain it to a person in a way that they can interpret it,” Forrest said. “If that's the case, you need to provide evidence, maybe it's clinical evidence, to build the context around it such that it is explainable at that level.”
Explainability may depend on a range of things, such as the algorithm’s function and method of calculating the output.
Hussein Ezzeldin, senior staff fellow at the FDA Center for Biologics Evaluation and Research (CBER), added that explainability is linked to the need for such products to be transparent about how they make decisions. He noted that there are several transparency models to consider, but the point is to understand why a product decided to prefer a certain output over others.
“Sometimes explainability is not really the goal,” said Ezzeldin. “Sometimes it’s a complement, or it is a confirmation of the transparency here.”
Ultimately, users want to understand and trust the predictions, recommendations, and suggestions that AI/ML products provide. If they can’t trust the product’s algorithm, then they are less likely to adhere to its recommendations, he added.
Panelist Mike Salem, associate director of data science and quality assurance at Gilead Sciences, noted that businesses need to balance the user’s need for transparency while also protecting their proprietary algorithms.
Salem noted that there are tools that companies can use, at least internally, to help them understand how an AI model functions that can then be used to tackle issues such as bias and help better design the product for the intended user. When concerns about potential harm to proprietary information arise, he said companies may look to explain their product using other approaches, such as by conducting testing that describes how the device works in the clinical workflow.
When considering how to address issues of transparency and explainability in product labeling, Forrest said that simplified, interactive interfaces that present information in context-relevant formats may be the way to go.
“They don't need to know the inner workings of many of these devices,” he said. “What they need to know is: What are the limitations from the sense of what can they do and what can't they do with the device, and we can bring key things to their attention at the relevant time.”
Forrest said FDA is working on a guidance that will propose how to address the issue of explainability, drawing on ideas already circulating in the medtech community, such as model cards, which are concise documents that describe the context within which an AI/ML product is meant to be used, and ways to describe the datasets used by the product.
“This is the time to have these discussions as a community and settle on what are those approaches that are most understood and get the key use-based information and are reasonable for proprietary outcomes,” said Forrest.
Forrest also said that FDA is studying the issue from a regulatory science perspective, working with organizations such as the Mayo Clinic to determine how best to regulate AI/ML products. He noted that medical students are being trained in digital health technologies, so that when they use such products they will at least understand the technical lingo.
© 2025 Regulatory Affairs Professionals Society.