Validating AI Technologies in Pharma Labs & Manufacturing Facilities

The role of AI (artificial intelligence) as an emerging field of technology arguably carries greater risks in the pharmaceutical industry than in other industries. This is because the use of AI technologies in pharma labs and manufacturing facilities won’t just influence metrics like productivity and efficiency. AI will also influence patient safety and product quality. As a result, guardrails are likely to be needed, and validation protocols established.
This information is starting to emerge from regulators in the form of guidance and consultation documents. We have drawn from those documents in this blog, in addition to using Westbourne’s extensive validation experience, to highlight what is likely to be required to validate AI technologies when they begin to be implemented in pharmaceutical laboratories and manufacturing facilities.
For the purposes of this blog, we are focusing on the potential implementation of AI technologies in GxP critical contexts. In non-critical GxP contexts, there is likely to be more scope for implementing and validating novel AI technologies. However, in critical GxP contexts, significantly greater restrictions will apply.
The Challenges of AI Validation in the Pharmaceutical Industry
There are many challenges that can arise when validating AI technologies in the pharmaceutical industry, some of which are only becoming fully understood. Examples include explainability, data quality, reproducibility, and data drift.
Explainability
Sustaining the explainability of an AI technology’s output can be a challenge, because some AI technologies are opaque in how they operate. Regulators, however, will require a detailed understanding of why an AI technology produces a particular output. AI that can provide this understanding, known as explainable AI (XAI), also has operational benefits for pharmaceutical labs and manufacturing facilities.
Data Quality
The quality, completeness, consistency, accuracy, integrity, and traceability of the data used to train AI models in the pharmaceutical industry are additional challenges to overcome. It is also essential that training data doesn’t introduce bias. Data provenance (where the data came from) and relevance (whether it is appropriate for the intended use) are also important.
Reproducibility
Are outputs of an AI technology consistent when the inputs are the same? AI models that are unpredictable in this context are unlikely to be suitable for GxP critical applications.
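As an illustration, a simple determinism check can be run before release: execute the model repeatedly on identical inputs and confirm the outputs match exactly. The `predict` function below is a hypothetical stand-in for a real model inference call, and the seeded toy logic is purely for demonstration:

```python
import random

def predict(inputs, seed=42):
    """Hypothetical stand-in for an AI model inference call.
    A fixed seed makes any internal sampling deterministic."""
    rng = random.Random(seed)
    noise = rng.gauss(0, 1e-9)  # toy seeded perturbation
    return [round(x * 0.5 + noise, 6) for x in inputs]

def is_deterministic(model, inputs, runs=3):
    """Run the model repeatedly on identical inputs and check
    that every run produces an identical output."""
    outputs = [model(inputs) for _ in range(runs)]
    return all(o == outputs[0] for o in outputs)

sample = [1.0, 2.5, 3.3]
print(is_deterministic(predict, sample))  # True for a deterministic model
```

A model that fails a check like this, for example because it samples without a fixed seed, would be hard to defend in a GxP critical context.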
Data Drift
Data drift is one of the most important and least understood challenges when validating AI systems in the pharmaceutical industry. It occurs when there is a difference in the real-world input data to an AI model compared to the training data. In other words, the data being fed into the AI technology becomes different from the data it was trained on. When this happens, the outputs from the AI technology become less predictable, accurate, and reliable, breaking previously determined validation assumptions.
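One common way to quantify data drift is the Population Stability Index (PSI), which compares the distribution of live input data against the training data. The sketch below is a minimal, standard-library-only implementation; the datasets, bin count, and thresholds are illustrative assumptions:

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between training-time ('expected')
    and live ('actual') input distributions. Higher = more drift.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 major."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(data):
        counts = [0] * bins
        for x in data:
            counts[sum(x > e for e in edges)] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(data), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_inputs = [0.1 * i for i in range(100)]                # training distribution
live_stable = [0.1 * i + 0.01 for i in range(100)]          # near-identical inputs
live_shifted = [0.1 * i + 5.0 for i in range(100)]          # drifted inputs

print(round(psi(train_inputs, live_stable), 3))   # small value: no drift
print(round(psi(train_inputs, live_shifted), 3))  # large value: drift detected
```

In practice a check like this would run continuously on live input data, with an alert threshold agreed as part of the validation plan.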
Key Principles
While the guidance and regulations governing the use and validation of AI technologies in GxP environments are still emerging, there are some key principles that are becoming clearer.
- Static and deterministic – AI technologies should be static and deterministic. Static AI models are trained on a specific dataset, so the knowledge is fixed. Deterministic AI models are predictable as they produce consistent outputs when given the same inputs.
- No difference in standards – AI technologies must adhere to the same standards as other technologies in terms of the accuracy, integrity, and traceability of GxP records. AI technologies must also adhere to those same standards when creating, processing, or influencing data, especially in GxP critical contexts.
- Responsibility – pharmaceutical companies remain solely and fully responsible for the integrity of AI technologies and the data and outputs they produce. In other words, there is no shifting of regulatory responsibility to AI technology vendors.
- Risk-based approach – a risk-based approach builds on modern CSA (computer software assurance) methods of validating technologies in GxP environments. Key steps when AI technologies are involved include defining the intended use and context of use, and then assessing the AI model risk, i.e., the influence the model has on decision-making and the potential consequences of those decisions.
Key Considerations When Validating AI Technologies in the Pharmaceutical Industry
The new Annex 22 – Artificial Intelligence in the EU’s EudraLex regulations is a good starting point for understanding the additional steps that will potentially be involved in validating AI technologies in GxP environments. We are going to explore seven key considerations highlighted in the consultation version of the new AI annex:
- Principles
- Intended use
- Acceptance criteria
- Test data
- Test data independence
- Test execution
- Operation
Principles
The implementation of AI technologies requires close collaboration between all stakeholders during model selection, training, validation, testing, and operation. All team members should have appropriate qualifications, clearly defined responsibilities, and proper access levels.
Documentation of these activities must be maintained regardless of whether the AI model is developed internally or provided externally. Quality risk management practices should be applied according to the level of risk to patient safety, product quality, and data integrity.
Intended Use
Comprehensive documentation should be created that defines and explains the intended use of the AI technology. This includes the tasks the AI technology is designed to complete, as well as the characteristics of the data being processed and the variabilities that exist.
It’s also important to note the relevance of human-in-the-loop situations, such as where an AI technology provides information that informs a decision made by a human operator. The responsibility of the operator in this situation should also be documented, and the operator’s performance should be continuously monitored.
Acceptance Criteria
Suitable test metrics should be applied to check that the AI technology’s performance remains within defined acceptance criteria related to its intended use. Additionally, the performance of the AI technology should be at least as good as the process it is replacing.
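As a sketch, acceptance criteria like these can be encoded as an automated check that compares the AI technology's performance against both an absolute threshold and the baseline performance of the process it replaces. All names, data, and thresholds below are illustrative assumptions:

```python
def meets_acceptance(predictions, actuals, baseline_accuracy, min_accuracy=0.95):
    """Check model output against pre-defined acceptance criteria:
    an absolute accuracy threshold AND at-least-parity with the
    process being replaced (the baseline)."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(actuals)
    passed = accuracy >= min_accuracy and accuracy >= baseline_accuracy
    return passed, accuracy

# Illustrative pass/fail classifications vs ground truth
preds = ["pass", "pass", "fail", "pass", "pass"]
truth = ["pass", "pass", "fail", "pass", "fail"]

ok, acc = meets_acceptance(preds, truth, baseline_accuracy=0.90)
print(ok, acc)  # accuracy 0.8 fails both criteria
```

Real acceptance criteria would use metrics appropriate to the task (e.g., precision and recall for defect detection), but the principle of a pre-agreed, automated threshold check is the same.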
Test Data
Test data should:
- Be representative of the intended use
- Cover the full sample space
- Be stratified, with all subgroups included
- Be of sufficient size
Limitations, complexities, and variations (including rare variations) should be reflected, and test data labelling should be verified. Data pre-processing and exclusions should also be documented and justified.
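Stratification can be enforced when assembling the test set, so that every subgroup keeps its share of the population and no subgroup is left out. This is a minimal sketch; the record structure and subgroup key (production line) are assumptions for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(records, key, fraction, seed=0):
    """Draw a test set that preserves each subgroup's share of the
    population (e.g. batches per production line). 'key' extracts
    the stratum label for each record."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r)
    sample = []
    for label, members in groups.items():
        n = max(1, round(len(members) * fraction))  # every subgroup represented
        sample.extend(rng.sample(members, n))
    return sample

# Illustrative population: an imbalanced mix of production lines
batches = [{"line": "A"}] * 80 + [{"line": "B"}] * 15 + [{"line": "C"}] * 5
test_set = stratified_sample(batches, key=lambda r: r["line"], fraction=0.2)
print(len(test_set))  # 20 records, all three lines represented
```

The `max(1, ...)` floor guarantees that rare subgroups, which mirror the rare variations mentioned above, still appear in the test set.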
Test Data Independence
Test data must be separate from training data, i.e., the data used to test the AI technology must be different from the data used to train it. This helps protect against biased, overly optimistic performance results caused by testing the model on data it has already seen.
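A minimal sketch of enforcing this independence: partition the dataset into disjoint training and test sets, then verify there is no overlap before testing begins. The dataset and split fraction are illustrative:

```python
import random

def split_train_test(records, test_fraction=0.2, seed=7):
    """Partition a dataset into disjoint training and test sets,
    then verify no record appears in both (independence check)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test, train = shuffled[:n_test], shuffled[n_test:]
    assert not set(test) & set(train), "test data leaked into training data"
    return train, test

data = [f"sample_{i}" for i in range(100)]
train, test = split_train_test(data)
print(len(train), len(test))  # 80 20
```

In a regulated setting the split itself, including the seed and any exclusions, would be documented so the test set can be reproduced exactly.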
Test Execution
Testing should follow a well-defined plan, it should be fully documented, and it should ensure the AI technology is suitable for its intended use.
Operation
AI technologies will require continuous monitoring over time to ensure performance is maintained and there are no performance or input data changes that could impact patient safety and product quality. Regular re-validation is likely to be a key requirement for maintaining AI technologies in a safe and validated state.
AI: A New Era of Validation Based on Established Principles
What is highlighted in this blog – and what is emerging from draft guidance that has been published to date – is that AI technologies could bring transformational change, but their implementation and validation in GxP environments will be based on well-established principles. This includes principles already well covered in GxP guidance, CSV processes, and CSA methodologies.
We are going to continue exploring the realities of implementing and validating AI technologies in pharmaceutical manufacturing and laboratory environments in future blogs. But the validation of current technologies is the priority for today. If you have validation queries or need validation support, please get in touch with us at Westbourne to arrange a consultation.