FDA, with British and Canadian regulators, identifies guiding principles for AI/ML

The US Food and Drug Administration (FDA), along with Health Canada and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA), has identified ten guiding principles for the development of safe and effective medical devices that use artificial intelligence and machine learning (AI/ML). AI/ML technologies have the potential to improve device performance by deriving insights from data generated through the delivery of health care in real-world use. The ten principles are intended to lay the foundation for Good Machine Learning Practice (GMLP) by helping to identify best practices and consensus standards.

The principles are:

  • Multi-disciplinary expertise is leveraged throughout the total product life cycle.
  • Good software engineering and security practices are implemented. The agencies explain this should include the fundamentals of good software engineering practices, data quality assurance, data management and cybersecurity.
  • Clinical study participants and data sets are representative of the intended patient population. Data collection should ensure that relevant characteristics of the intended population (such as demographics) are represented in a sample of adequate size so that results can be generalized to the population.
  • Training data sets are independent of test sets.
  • Selected reference datasets are based upon best available methods.
  • Model design is tailored to the available data and reflects the intended use of the device. Design should support the active mitigation of known risks, like overfitting, performance degradation, and security risks.
  • Focus is placed on the performance of the human/AI team. Developers need to address human interpretability of the model outputs.
  • Testing demonstrates device performance during clinically relevant conditions. Test plans are developed to generate clinically relevant device performance information independent of the training data set, considering the intended patient population, clinical environment and other factors.
  • Users are provided clear, essential information. Users have access to information appropriate for the intended audience, such as health care providers or patients, and a means to communicate product concerns to the developer.
  • Deployed models are monitored for performance, and re-training risks are managed. Models should be monitored in real-world use with a view to maintaining or improving safety and performance.
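The principle that training data sets be independent of test sets can be illustrated with a sketch. When one patient contributes multiple records, splitting at the record level leaks patient-specific signal into the test set; splitting by patient avoids this. The function and data layout below are hypothetical illustrations, not anything prescribed by the agencies' guidance:

```python
import random

def patient_level_split(records, test_fraction=0.2, seed=0):
    """Split records into train/test sets by patient ID, so that no
    patient contributes data to both sets (avoiding leakage)."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Example: three scans each for ten patients.
records = [{"patient_id": p, "scan": s} for p in range(10) for s in range(3)]
train, test = patient_level_split(records)
assert {r["patient_id"] for r in train}.isdisjoint(
    {r["patient_id"] for r in test})  # no patient appears in both sets
```

A record-level random split of the same data would very likely place some scans of a given patient in both sets, overstating test performance.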

FDA publishes list of authorized AI/ML-enabled devices

The Food and Drug Administration (FDA) has published a list of medical devices incorporating artificial intelligence (AI) and machine learning (ML) that are authorized for marketing in the United States. A key potential benefit of ML for medical devices is product improvement driven by data generated during the delivery of health care. The devices have been authorized through 510(k) clearance, De Novo request, or premarket approval (PMA). This initial list includes 343 entries and is searchable; the FDA states it plans to update the list periodically.

FDA authorizes device that uses machine learning to diagnose autism

On June 2, 2021, the Food and Drug Administration (FDA) authorized marketing of software that assists in diagnosing autism spectrum disorder (ASD) in patients from 18 months through five years of age. The Cognoa ASD Diagnosis Aid is software as a medical device that uses machine learning to receive and process input from parents, video analysts and health care providers. It includes a mobile app for parents to answer questions about the child's behavior and upload videos of the child; a video analysis portal; and a health care provider portal where a provider can enter answers about the child's behavior, track information provided by parents and review a report of the results. The system was assessed in a study of 425 patients at 14 clinical care sites, which compared the assessments made by the device against assessments made by a panel of clinical experts. The device made an accurate determination in 98.4% of patients with ASD and 78.9% of patients without the condition; there were 15 false positives and one false negative. The Cognoa ASD Diagnosis Aid was reviewed through the De Novo premarket review pathway, and the authorization creates a new regulatory classification, so similar devices may go through the 510(k) premarket process.
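The two accuracy figures reported for the study correspond to sensitivity (accuracy among patients with ASD) and specificity (accuracy among patients without it). As a sketch, the counts below are illustrative choices consistent with the reported rates and error counts; the article does not give the full confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts (not reported in full by the article): with
# 1 false negative and 15 false positives, 60 true positives and
# 56 true negatives reproduce the published percentages.
sens, spec = sensitivity_specificity(tp=60, fn=1, tn=56, fp=15)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# prints: sensitivity=98.4%, specificity=78.9%
```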

CMS announces winner of AI challenge

The Centers for Medicare and Medicaid Services (CMS) announced the winner and runner-up in its Artificial Intelligence (AI) Health Outcomes Challenge. The Challenge was conducted by CMS in collaboration with the American Academy of Family Physicians, and aimed to demonstrate how AI tools such as deep learning and neural networks can accelerate solutions for predicting patient health outcomes. The winner was ClosedLoop.ai, located in Austin, Texas, and the runner-up was Geisinger in Danville, Pennsylvania.


Participants were tasked with developing innovative strategies to explain AI-derived predictions to front-line clinicians and patients, and to increase use of AI-enhanced data feedback for quality improvement. In Stage 1, participants were asked to use AI to predict unplanned hospital and skilled nursing facility (SNF) admissions and adverse events. In Stage 2, participants were to use AI to predict 12-month mortality for Medicare patients, as well as unplanned hospital and SNF admissions and adverse events.

FTC cautions that AI can reproduce biases

In a blog post on April 19, 2021, the Federal Trade Commission (FTC) cautioned that efforts to harness the benefits of artificial intelligence (AI) should be coupled with safeguards to avoid introducing bias. As an example, the FTC cited a recent article in the Journal of the American Medical Informatics Association warning that AI models used to guide resource allocation for COVID-19 patients were fraught with bias. For example, using healthcare spending as a proxy for disease burden exacerbated inequalities arising from barriers to care for Black patients.


The FTC advises companies, including healthcare organizations, to examine the data sets used for AI models to determine whether data from some populations is missing. It also suggests testing an algorithm before use, and periodically thereafter, to check for discrimination on the basis of race, gender or other protected class.

FDA issues EUA for first machine learning COVID screening device

On March 19, 2021, the Food and Drug Administration (FDA) issued an emergency use authorization (EUA) for the first machine learning-based COVID-19 screening device. The Tiger Tech COVID Plus Monitor is to be used by trained personnel to screen persons without COVID-19 symptoms or fever; it is a screening tool only, not a diagnostic device. The device is an armband with embedded light sensors and a small computer processor. The sensors obtain pulsatile signals for three to five minutes, and the processor extracts key features of the signals for analysis by a probabilistic machine learning model trained to predict whether the individual is demonstrating certain biomarkers, such as hypercoagulation. The Tiger Tech monitor was evaluated in a hospital study and a school study, and identified COVID-19 positive individuals 98.6% of the time and COVID-19 negative individuals 94.5% of the time.

FDA authorizes marketing of device using AI to highlight potential lesions during colonoscopy

On April 9, 2021, the Food and Drug Administration (FDA) authorized marketing of the first device which uses artificial intelligence based on machine learning (AI/ML) to assist in detecting lesions during colonoscopy. The approval of the GI Genius device relied on results of a multicenter study in Italy that compared identification of lab-confirmed adenomas or carcinomas through colonoscopy using the device, versus standard colonoscopy. The GI Genius device identified suspicious lesions in 55.1% of patients compared to 42% with standard colonoscopy. The device uses AI algorithms to identify regions of interest as the patient is undergoing colonoscopy and generates markers that look like green squares superimposed on the video from the endoscope camera. The device is not intended to provide diagnostic assessments or replace lab sampling, but to alert the clinician so that suspect areas can be examined in real time.
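The green-square markers described above are a standard overlay technique: a detector proposes a bounding box, and the box is drawn onto each video frame. The sketch below is not the GI Genius implementation; the helper name, frame size and box coordinates are hypothetical, and only the general idea of superimposing a hollow green rectangle on an RGB frame is illustrated:

```python
import numpy as np

def draw_green_box(frame, top, left, bottom, right, thickness=2):
    """Superimpose a hollow green rectangle on an RGB frame of shape
    (H, W, 3). Hypothetical helper for illustration only."""
    green = np.array([0, 255, 0], dtype=frame.dtype)
    out = frame.copy()
    out[top:top + thickness, left:right] = green        # top edge
    out[bottom - thickness:bottom, left:right] = green  # bottom edge
    out[top:bottom, left:left + thickness] = green      # left edge
    out[top:bottom, right - thickness:right] = green    # right edge
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)         # dummy video frame
marked = draw_green_box(frame, top=30, left=40, bottom=90, right=120)
```

Drawing only the hollow border, rather than filling the region, leaves the underlying tissue visible so the clinician can examine the flagged area in real time.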
