Practical Considerations for Statistical Analyses in Continued Process Verification

Keith M. Bower

December 17, 2020


Several statistical techniques can be used to assist in monitoring biopharmaceutical product quality attributes as part of continued process verification (CPV) activities. These include run charts, control charts, and capability analyses. Below, I provide an overview and recommendations on statistical strategies when developing a CPV program, considering the expected behavior of manufacturing results in the biopharmaceutical industry.

Presence of Autocorrelated Data
In a previous study, I highlighted the tendency for data to be positively autocorrelated (values are closely related to each other in sequential order) in biopharmaceutical manufacturing processes (1). Ideally, each attribute identified in a CPV plan would be assessed to determine the potential causes of serially dependent data. A statistical approach then would be implemented to accommodate that behavior (e.g., control charting the residuals from a fitted time series model) (2). In practical terms, however, that is not feasible given the number of critical quality attributes (CQAs) and critical process parameters (CPPs) for manufacturing a biopharmaceutical product. Therefore, the recommendations herein provide a lean yet robust strategy that applies regardless of the level of serial dependency in the data.

Data Considerations
Although numeric data can be obtained for many product characteristics, a general rule is to implement statistical analyses on quantitative data only when a dataset includes five or more distinct values for an attribute. The analyses to be performed can follow a tiered structure based on the total number of manufactured and released commercial lots (N). For example, the structure could take the following form, illustrated in the brief sketch after the list:

  • Tier 1: N < 15

  • Tier 2: 15 ≤ N < 30

  • Tier 3: N ≥ 30.
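For illustration, here is a minimal Python sketch that assigns a tier from the number of released lots using the example cutoffs above; the function name and cutoffs simply restate this example structure and are not a prescribed implementation.

```python
def cpv_tier(n_lots: int) -> int:
    """Assign a CPV analysis tier from the total number of
    manufactured and released commercial lots (illustrative cutoffs)."""
    if n_lots < 15:
        return 1   # Tier 1: N < 15
    if n_lots < 30:
        return 2   # Tier 2: 15 <= N < 30
    return 3       # Tier 3: N >= 30

print(cpv_tier(12), cpv_tier(22), cpv_tier(45))  # -> 1 2 3
```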

As noted in the US Food and Drug Administration’s (FDA’s) process validation guideline for CPV (3), “We recommend that a statistician or person with adequate training in statistical process control techniques develop the data-collection plan and statistical methods and procedures used in measuring and evaluating process stability and process capability.” The following sections provide specific recommendations for developing a CPV strategy.

Process Monitoring
To assess whether a manufacturing process remains in a state of control, run charts and control charts can be used. As data accumulate over a product’s lifecycle, process monitoring can make use of increasingly sophisticated statistical tools. For example, switching from run charts in Tier 1 to Shewhart individuals control charts (X charts) from Tier 2 onward might be appropriate. Run rules can be used with each X chart to identify situations in which a process is out of statistical control.
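As a simple illustration of the Tier 1 option, the following sketch draws a run chart of hypothetical lot results in manufacturing order, with the median as the center line; the data, labels, and choice of center line are illustrative assumptions only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical Tier 1 scenario: 12 released lots plotted in manufacturing order
rng = np.random.default_rng(5)
results = rng.normal(loc=50.0, scale=1.5, size=12)
lots = np.arange(1, len(results) + 1)

plt.plot(lots, results, marker="o")
plt.axhline(np.median(results), linestyle="--", label="median")
plt.xlabel("Lot (manufacturing order)")
plt.ylabel("Attribute result")
plt.title("Run chart (illustrative data)")
plt.legend()
plt.show()
```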

Probabilities of false alarms for given combinations of run rules and expected level of autocorrelation can be considered when developing a process monitoring strategy. As noted in the process validation guideline (3), “Procedures should describe how trending and calculations are to be performed and should guard against overreaction to individual events.”

Correspondingly, run rules used in Tiers 2 and 3 can reflect available data and desired levels of sensitivity. For example, assuming independent results from a normal (Gaussian) distribution, using Nelson test 1 alone, the false-alarm rate is ~0.3%. Using Nelson tests 1, 2, and 3 jointly in Tier 3 results in a ~0.7% false-alarm rate with independent results, which increases at higher levels of positive autocorrelation in the data (4). Note that if an entire lot history is unavailable for control charting (e.g., if only a subset of lots is received from a contract manufacturing organization), then only Nelson test 1 might be appropriate.
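As a rough illustration (not the calculation from reference 4), the following Monte Carlo sketch estimates per-point false-alarm rates for simplified versions of Nelson tests 1, 2, and 3 applied to independent standard normal series of 30 lots; exact values depend on series length and on how runs are counted, so the output only approximates the figures cited above.

```python
import numpy as np

def nelson_flags(x, center, sigma):
    """Per-point flags for simplified Nelson tests 1-3.
    Test 1: a point beyond center +/- 3*sigma.
    Test 2: nine points in a row on the same side of the center line.
    Test 3: six points in a row steadily increasing or decreasing."""
    n = len(x)
    t1 = np.abs(x - center) > 3 * sigma
    t2 = np.zeros(n, dtype=bool)
    side = np.sign(x - center)
    for i in range(8, n):
        run = side[i - 8:i + 1]
        t2[i] = np.all(run == 1) or np.all(run == -1)
    t3 = np.zeros(n, dtype=bool)
    d = np.sign(np.diff(x))
    for i in range(5, n):
        run = d[i - 5:i]
        t3[i] = np.all(run == 1) or np.all(run == -1)
    return t1, t1 | t2 | t3

# Simulate many independent, normally distributed 30-lot histories
rng = np.random.default_rng(1)
n_series, n_lots = 10_000, 30
rate1 = rate123 = 0.0
for _ in range(n_series):
    t1, combined = nelson_flags(rng.standard_normal(n_lots), center=0.0, sigma=1.0)
    rate1 += t1.mean()
    rate123 += combined.mean()
print(f"Nelson test 1 alone : {rate1 / n_series:.3%}")
print(f"Nelson tests 1-3    : {rate123 / n_series:.3%}")
```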

The estimate of variation in X charts can be impaired significantly by the “traditional” (average moving range) approach when data are positively autocorrelated. Thus, some experts recommend fixing control chart limits only in Tier 3, when at least 30 lots are available, and calculating those limits using the Levey–Jennings approach (1, 5). Alternative control charts such as cumulative sum or exponentially weighted moving average charts also can be used to detect small shifts and drifts, although certain run rules implemented with an X chart might be preferred for ease of implementation.
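To make that distinction concrete, here is a minimal sketch contrasting X chart limits estimated from the average moving range with Levey–Jennings limits based on the overall sample standard deviation; the AR(1) demonstration data and function name are assumptions for illustration. With positively autocorrelated data, the moving-range estimate typically understates process variation and narrows the limits.

```python
import numpy as np

def x_chart_limits(x, method="levey-jennings"):
    """Center line and 3-sigma limits for an individuals (X) chart.
    'moving-range'   : traditional estimate, sigma ~ mean moving range / 1.128
    'levey-jennings' : sigma = overall sample standard deviation"""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    if method == "moving-range":
        sigma = np.abs(np.diff(x)).mean() / 1.128  # d2 constant for subgroups of 2
    else:
        sigma = x.std(ddof=1)
    return center - 3 * sigma, center, center + 3 * sigma

# Positively autocorrelated demo data: AR(1) process with phi = 0.7
rng = np.random.default_rng(2)
x = np.empty(30)
x[0] = rng.normal()
for t in range(1, 30):
    x[t] = 0.7 * x[t - 1] + rng.normal()

print("moving-range  :", x_chart_limits(x, "moving-range"))
print("levey-jennings:", x_chart_limits(x, "levey-jennings"))
```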

Assessing Normality
Process capability estimates and the expected performance of X charts can be impaired significantly when data are not well modeled by a normal distribution (6). If results for an attribute can be assumed to arise from a distribution with a common central tendency and random variation, adequately far from a natural limit (e.g., zero for cycle-time data), then a normal distribution might be reasonable.

However, sometimes a priori knowledge indicates that an alternative probability distribution should be used. For example, a Poisson distribution might be appropriate for counts of rare occurrences such as subvisible particles (7), and a lognormal distribution might be appropriate for potency assay results (8).

To assess normality, some experts recommend creating a normal probability plot using statistical software. If the plotted points show no serious deviation from a linear pattern, then the assumption of normality should be reasonable. Data can be transformed to achieve approximate normality, but that approach works best when the transformation is based on a scientific rationale (e.g., using a square-root transformation with data assumed to follow a Poisson distribution) (9).
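As one possible implementation with widely available statistical software, the sketch below builds normal probability plots with SciPy on simulated data, including a square-root transformation of counts assumed to follow a Poisson distribution; the datasets and parameters are hypothetical.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Hypothetical assay results: a roughly linear pattern supports normality
results = rng.normal(loc=100.0, scale=2.5, size=30)
stats.probplot(results, dist="norm", plot=plt)
plt.title("Normal probability plot (illustrative data)")
plt.show()

# Scientifically motivated transformation: square root of Poisson counts (9)
counts = rng.poisson(lam=4.0, size=30)
stats.probplot(np.sqrt(counts), dist="norm", plot=plt)
plt.title("Square-root-transformed counts (illustrative data)")
plt.show()
```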

Capability Analysis
To compare the potential manufacturing capability with specification acceptance criteria, estimates such as process capability (Cpk) or process performance (Ppk) can be used. As noted above, estimates of process capability are sensitive to the normality assumption, so the data and associated acceptance criteria might need to be transformed before analysis. If available data are inadequate to assess whether a process is stable, the Ppk estimate is used (10).

Because calculated values for Ppk and Cpk will be close for a stable process (11), Ppk estimates can be used as a platform approach across Tiers 1–3, with the caveat that the estimate can be misleading if a process is not in control. The process capability estimate should be >1 to be rated at least “marginally capable” (10). Note that the estimate of variation (standard deviation) from the Levey–Jennings approach for an X chart (until control chart limits are fixed in Tier 3) is the same value used in the Ppk estimate, providing a consistent reporting strategy.
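A minimal sketch of the Ppk calculation under these assumptions follows; the data, acceptance criteria, and function name are hypothetical, and the overall sample standard deviation is the same estimate used for Levey–Jennings limits.

```python
import numpy as np

def ppk(x, lsl=None, usl=None):
    """Process performance (Ppk) using the overall sample standard deviation:
    Ppk = min((USL - mean) / 3s, (mean - LSL) / 3s), one-sided if only one
    specification limit is provided."""
    x = np.asarray(x, dtype=float)
    mean, s = x.mean(), x.std(ddof=1)
    indices = []
    if usl is not None:
        indices.append((usl - mean) / (3 * s))
    if lsl is not None:
        indices.append((mean - lsl) / (3 * s))
    return min(indices)

# Hypothetical potency results with acceptance criteria of 90-110%
rng = np.random.default_rng(4)
potency = rng.normal(loc=101.0, scale=2.0, size=25)
print(f"Ppk = {ppk(potency, lsl=90, usl=110):.2f}")  # > 1: at least marginally capable
```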

Phase-Appropriate Strategy
For CPV activities, the types of statistical analyses and rationale for use can be justified using a risk-based approach. Considering the amount of available data and point in a product’s lifecycle, a CPV statistical strategy can be developed and then implemented as the third stage of process validation. General statistical strategies implemented for CPV should align with approaches implemented in earlier process validation stages, including process design and qualification. The development of a standard operating procedure (SOP) to delineate roles and responsibilities and identify specific statistical analyses to be performed for CPV — potentially using a tiered structure based on the number of manufactured and released lots — is recommended.

References
1 Bower KM. Determining Control Chart Limits for Continued Process Verification with Autocorrelated Data. BioProcess Int. 17(4) 2019: 14–16; https://bioprocessintl.com/manufacturing/process-monitoring-and-controls/determining-shewhart-control-chart-limits-for-continued-process-verification-with-autocorrelated-data.

2 Montgomery DC, Mastrangelo CM. Some Statistical Process Control Methods for Autocorrelated Data. J. Qual. Technol. 23(3) 1991; https://doi.org/10.1080/00224065.1991.11979321.

3 Process Validation: General Principles and Practices. US Food and Drug Administration: Rockville, MD, 2011.

4 Bower KM. Run Rules with Autocorrelated Data for Continued Process Verification. BioProcess Int. 18(10) 2020: 60–62; https://bioprocessintl.com/manufacturing/continuous-bioprocessing/run-rules-with-autocorrelated-data-for-continued-process-verification.

5 Levey S, Jennings ER. The Use of Control Charts in the Clinical Laboratory. Am. J. Clin. Pathol. 20(11) 1950: 1059–1066; https://doi.org/10.1093/ajcp/20.11_ts.1059.

6 Montgomery DC. Introduction to Statistical Quality Control, 5th ed. John Wiley and Sons: New York, NY, 2005.

7 <1788> Methods for the Determination of Particulate Matter in Injections and Ophthalmic Solutions. USP 41–NF 36. United States Pharmacopeial Convention: Rockville, MD, 2018: 8065.

8 <1030> Biological Assay Chapters: Overview and Glossary. USP 41–NF 36. United States Pharmacopeial Convention: Rockville, MD, 2018.

9 Box GEP, Hunter JS, Hunter WG. Statistics for Experimenters, 2nd ed. John Wiley and Sons: New York, NY, 2005.

10 Yu LX, et al. Using Process Capability to Ensure Pharmaceutical Product Quality. Pharm. Eng. 35(2) 2015: 35–42.

11 PDA Technical Report No. 60 (TR60): Process Validation — A Lifecycle Approach. Parenteral Drug Association: Bethesda, MD, 2013.

Keith M. Bower, MS, is a senior principal scientist in CMC statistics at Seagen Inc.; [email protected]; www.seagen.com.
