The increasing uptake of single-use technologies (SUTs) in critical current good manufacturing practice (CGMP) processes and applications has made their integrity a critical quality attribute (CQA) for both suppliers and end users of such systems. Current regulations focus on final packaging, however, and do not take into account the unique aspects of assemblies used in bioproduction. Ongoing initiatives include revision of PDA TR 27 (1) and creation of ASTM workstreams (2, 3) to propose good practices for the integrity of single-use systems. In the absence of a regulatory framework or guideline specifically applicable to such systems used in bioproduction, a risk-management approach is recommended. In addition, scientific understanding of the critical defect size that creates a risk of liquid leakage and/or microbial contamination is a prerequisite for integrity control strategies, which must correlate maximum allowable leakage limits (MALLs) with the detection limits of the physical tests applied to ensure microbial integrity. According to USP <1207>, the MALL is the greatest leakage rate (or leak size) tolerable for a given product and package that poses no risk to product safety and no or inconsequential impact on product quality (4).
Risk-Based Approach to Assessing System Integrity
For multiuse systems, end users hold primary responsibility and control over system design, construction, integrity, operation, and maintenance. With single-use systems, by contrast, suppliers/integrators take increased responsibility from the earliest design stage through manufacture, assembly, packaging, sterilization, validation, certification, and shipping of systems to end users. Suppliers also help end users in areas such as application studies, operator training, and procedures to verify and maintain system integrity both before and after use.
That shared responsibility demands collaboration and close partnership between suppliers and end users to achieve the critical objectives. A risk-management plan for single-use systems (Figure 1) can help companies address potential risks both in system manufacturing and in end-user processes. Failure mode and effects analysis (FMEA) typically addresses supplier steps such as assembly, testing, packaging, and shipping of empty components and assemblies; and user steps such as unpacking, handling, installation, and processing.
Quality by design (QbD) provides another risk-management tool for integrity assurance. It demands an understanding of the risks and potential defects associated with each stage of the SUT life cycle (Figure 2), starting with suppliers, from design and development stages through to assembly, manufacture, validation, and packaging. After product irradiation and shipment, end users become responsible for implementation, operator training, assembly installation, use, and disposal.
Practical controls such as visual inspection and integrity testing are important considerations in manufacturing validation for components and single-use systems. These procedures normally should be included in each supplier's manufacturing controls. But when it is technically feasible, in-process implementation of leak testing and integrity testing can be valuable to end users as well. It may be unacceptable, however, if such testing adds undesirable complexity to systems and standard operating procedures (SOPs). User testing even can introduce a risk of false-failure results. For large and/or complex single-use systems, current integrity test methods may provide a valuable check for gross defects but be unable to provide the level of sensitivity required to confirm a total barrier against microbial ingress or liquid loss.
Collaboration Between Suppliers and End Users
Depending on the criticality of the unit operation in which a single-use system actually gets implemented, requirements for validation and testing are likely to differ. For unit operations considered to be high risk for breaches of integrity, additional testing during system manufacturing might be required. Table 1 provides some guidance for supplier evaluations during different stages of SUT development, validation, manufacturing, and transportation. And Table 2 summarizes end-user assurance of integrity during the stages of SUT use in drug manufacturing.
Practical Testing for Assurance of Integrity: Leak detection in single-use systems may be possible through visual inspection for major flaws that generate gross leaks. In most cases, however, the location of such flaws may be indiscernible or the holes too small for detection by the human eye. Conversely, visual inspection often turns up findings that are not actually defects. Practical testing would alleviate such false findings. More sensitive detection methods are essential to mitigate risks such as microbial contamination. Two types of nondestructive tests meet that requirement and thus are suitable for single-use systems: those based on air-pressure measurement and those based on trace-gas measurement.
Pressure-Based Tests: The basic principle of a pressure test is to detect leaks in a system by inflation of its components with air to a defined pressure. The flow of that air through defects can be detected either by pressure decay or direct flow measurement. Both approaches depend on the ideal gas law: PV = nRT, where P = pressure, V = volume, n = the amount of gas (in moles), R = the gas constant, and T = temperature. Higher pressures and lower test volumes enhance sensitivity, and a constant temperature must be maintained throughout the test.
Direct air-flow measurement monitors the gas-flow rate required to maintain system pressure during testing. The main differences between this and the pressure decay method are that
- the air supply is not isolated during the test
- test pressure is held constant
- air flow through defects is measured directly on the upstream side.
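The pressure-decay relationship above can be sketched numerically. The snippet below is a simplified illustration, not a qualified test procedure: it estimates the leak rate implied by an observed pressure drop at constant volume and temperature, using PV = nRT. All numerical values (volume, pressures, hold time) are hypothetical.

```python
# Hypothetical sketch: convert an observed pressure decay into an
# equivalent leak rate using the ideal gas law (PV = nRT) at
# constant volume and temperature. Values are illustrative only.
R = 8.314          # gas constant, J/(mol*K)
T = 293.15         # test temperature, K (held constant)
V = 0.002          # internal test volume, m^3 (a 2-L bag, assumed)
P0 = 130_000.0     # initial test pressure, Pa (absolute, assumed)
P1 = 129_950.0     # pressure at the end of the hold, Pa (assumed)
hold_s = 300.0     # hold time, s (assumed)

# Moles of air lost through defects during the hold: dn = dP*V/(R*T)
dn = (P0 - P1) * V / (R * T)

# Equivalent volumetric leak rate referenced to atmospheric pressure
P_atm = 101_325.0
leak_m3_per_s = dn * R * T / P_atm / hold_s
leak_sccm = leak_m3_per_s * 1e6 * 60  # standard cm^3 per minute

print(f"pressure decay: {P0 - P1:.0f} Pa over {hold_s:.0f} s")
print(f"equivalent leak rate: {leak_sccm:.3f} sccm")
```

Because dn scales with V, the same leak produces a larger pressure drop in a smaller test volume, which is why lower test volumes (and higher pressures) enhance sensitivity, as noted above.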
In small single-use systems (e.g., flat bags), pressure-decay and flow-measurement integrity tests can detect barrier defects ≥10 µm in size. Such a detection limit can be correlated to a liquid leak and/or microbial ingress result under real-use conditions. For large-volume 3D systems, the limit will be ≥100 µm. Because defects of that size can allow ingress of microbial contaminants under some process conditions, the test may be defined better as a gross leak test or postinstallation test.
Trace-Gas–Based Tests: Typically using helium, trace-gas integrity testing offers a suitable method for single-use systems with which the highest possible level of sterility assurance may be required. The principle of this method is to measure the amount of a tracer gas leaking through barrier defect(s). A test container is placed inside a rigid chamber and connected to a helium inlet valve. Air is evacuated from the chamber, and helium is injected into the single-use bag. If barrier defects are present, the vacuum will pull helium from the single-use system into the vacuum chamber. Then helium in the chamber is detected and quantified by a mass spectrometer, and the amount of helium released can be correlated with defect size.
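In practice, the quantified helium signal is compared against an acceptance limit established during validation (derived from the MALL and the method's demonstrated detection capability). The sketch below shows only that pass/fail logic; the limit and background values are assumptions for illustration, not qualified criteria.

```python
# Hypothetical sketch: classify a helium trace-gas measurement
# against a qualified acceptance limit. Both constants below are
# illustrative assumptions, not validated values.
ACCEPT_LIMIT_MBAR_L_S = 1e-6   # assumed qualified helium leak-rate limit
BACKGROUND_MBAR_L_S = 2e-8     # assumed chamber background signal

def classify(measured_mbar_l_s: float) -> str:
    """Return a pass/fail verdict for one helium integrity test."""
    signal = measured_mbar_l_s - BACKGROUND_MBAR_L_S
    if signal <= 0:
        return "PASS (at background)"
    return "PASS" if signal < ACCEPT_LIMIT_MBAR_L_S else "FAIL"

print(classify(5e-8))   # reading near chamber background
print(classify(4e-5))   # reading well above the assumed limit
```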
The helium test is currently the most sensitive integrity test method available for single-use systems (including some 3D systems up to several hundred liters in volume), with a detection limit of ≥2 µm. Such detection limits can be correlated to the most stringent immersion and aerosol bacterial challenge tests, helping to assure the microbial integrity of systems tested. Applicability of the helium test method does have limitations, however, depending on single-use system assembly volume, complexity, materials, and so on.
Critical Defects and Correlation by Microbial Challenge
More sensitive tests will have limited significance if they cannot be correlated to some form of microbial challenge during the validation phase to assess the risk of microbial ingress and demonstrate container closure integrity (CCI). For final drug containers, the MALL is defined as the maximum leak size that poses no risk to product safety (4). For each unit operation, it should be determined based on risk assessment and intended system use.
Currently, two basic test methods are used for microbial challenge. The aerosol challenge test is appropriate for most applications because it corresponds to worst-case conditions for a single-use system. The liquid immersion test is the method most often used for sterile products in their final dosage containers.
Both methods are probabilistic, however. They differ in sensitivity to microbial ingress, which in both methods is highly dependent on test conditions. The choice of test and test conditions should be informed by risk assessment of a given SUT application.
The critical defect size can differ according to the type of challenge test used and its detailed parameters, as well as the size and shape of test organisms, differential pressure across defects, and so on. Other factors related to process and product also affect the critical defect size. For example, a corrosive fluid not susceptible to microbial growth but requiring zero liquid loss for safety reasons might allow use of a bag with a relatively large critical defect size. Sterile products, however, present the most demanding requirements for system integrity. Single-use systems for such applications must be validated using appropriate microbial challenge tests to determine the smallest critical defect size for prevention of microbial ingress, either under worst-case conditions or under conditions simulating typical process applications. An ASTM workstream on microbial ingress testing of SUTs has been initiated and should provide more guidance on this specific topic (3).
Implementation of Integrity Tests
Control of integrity assurance can be achieved only through combining a risk-based QbD approach with, when technically practicable, a selective application of integrity tests and gross leak tests together with visual inspection. Testing may be considered at three stages of the SUT life cycle: component testing by the system manufacturer, assembly testing by suppliers/integrators, and assembly testing by end users at the point of use. Figure 3 lists options available for implementation of such testing.
A quality risk management strategy for integrity assurance of single-use systems demands an understanding of their life cycle and strong working relationships between suppliers and end users (Figure 4). Suppliers should use QbD principles during system design and development and thoroughly validate their assembly, manufacturing, and packaging processes. Leak and integrity testing during SUT manufacturing is part of the risk-mitigation strategy. Through operator training and implementation of detailed SOPs, end users can ensure that integrity will be maintained during installation and operation. Based on SUT application risk assessment, further assurance of integrity could be required and achieved by performing optional tests at the point of use. Work initiated in ASTM workstreams will continue to frame existing recommendations from the Bio-Process Systems Alliance (BPSA) for integrity assurance of single-use systems.
1 PDA Technical Report No. 27: Pharmaceutical Package Integrity. PDA J. Pharm. Sci. Technol. 52(S2) 1998.
2 ASTM E55.04-WK64337: Best Practices — Integrity Assurance and Testing of Single-Use Systems. American Society for Testing and Materials: West Conshohocken, PA, 13 July 2018.
3 ASTM E55.04-WK64975: Testing Method — Microbial Ingress Testing on Single-Use Systems. American Society for Testing and Materials: West Conshohocken, PA, 30 August 2018.
4 USP<1207> Package Integrity Evaluation: Sterile Products. United States Pharmacopeial Convention, Inc. Rockville, MD, 1 May 2018.
5 Vanness B, et al. Response to the Publication of USP <1207>. BioProcess Int. 15(1) 2017: 20–21.
A White Paper
This article is a summary of the Bio-Process Systems Alliance (BPSA) initiative from a task force made up of industry experts from end-user and supplier companies. The published white paper, Design, Control, and Monitoring of Single-Use Systems for Integrity Assurance, can be downloaded at http://bpsalliance.org/technical-guides.