Host-cell proteins (HCPs) are major impurities of concern in biomanufacturing. When present in drug formulations, they can reduce efficacy (by compromising product stability), introduce toxicity, and increase a recipient's risk for long-term immunogenicity. Understanding HCP profiles and integrating effective removal strategies are important parts of developing new biological drugs, both to fulfill regulatory guidelines and to ensure patient safety through product quality.
HCP populations can be both complex and structurally diverse, and changes in upstream culture conditions can affect their concentrations and thus influence how well control strategies work. Accurate and reliable HCP quantitation is essential for monitoring the effects of process adjustments and for optimizing purification steps to ensure adequate removal of impurities.
Removal of HCPs from drug substances is critical to manufacturing high-quality drug products. It is predicated on thorough analysis of HCP contaminants, which vary considerably among expression systems and, to a lesser extent, with culture-media components and process parameters. A well-developed, broadly reactive, and qualified HCP immunoassay is vital to such analyses for demonstrating process consistency and final drug-substance purity. How well a particular assay recognizes all HCPs will depend on how well its antibodies recognize the actual HCP profile: Commercial "generic" enzyme-linked immunosorbent assay (ELISA) kits can differ substantially in their ability to detect similar types and levels of HCPs. Generic immunoassays are based on polyclonal antibodies raised against HCPs from the cell line cultured in a generic process. Such antibody preparations ideally would react with all potential HCP impurities, but it is impossible for any single assay to suffice in all cases. The reactivity of the antibodies will depend on how the antigen was prepared to induce them, the immunization method used, and how the antibodies were purified. In-house-developed platform and process-specific assays are necessary, but because of the high risk of drug failure, most companies put off establishing them until later stages of product development. In chapter <1132> "Residual Host Cell Protein Measurement in Biopharmaceuticals," the US Pharmacopeia (USP) recommends combining two-dimensional (2D) differential gel electrophoresis (DIGE) with Western blot or immunoaffinity approaches as complementary methods. Meanwhile, many laboratories are implementing liquid chromatography with mass spectrometry (LC-MS) workflows for HCP analysis, among other applications. Understanding the available options can help biopharmaceutical analysts implement a coverage strategy that maximizes accuracy while minimizing both risk to patient safety and the chance of product-approval delays.
Denise Krawitz is principal consultant at CMC Paradigms in California. With over 20 years of strategic and technical chemistry, manufacturing, and controls (CMC) experience, she has served in multiple technical and team leadership roles at Genentech, BioMarin, and Ambrx. Krawitz holds a doctor of philosophy in molecular and cell biology from the University of California at Berkeley. She also studied protein folding at the University of Regensburg in Germany on a Fulbright fellowship.
In January 2021, we discussed a number of topics and trends in HCP assay development. What follows is the core of our conversation.
The New Cool Tool
ELISA is the established "gold standard" for HCP analysis, but in recent years I've heard a lot of talk about MS as a more rapid and automatable option. Is that a replacement trend or more of an orthogonal development? MS and ELISA are very orthogonal technologies, and neither one is perfect. We've been talking for many years about the imperfections of ELISA when it comes to coverage, standards, and so on. We know it's not a perfect method, and it's unlikely ever to be. And although MS is the shiny new tool in our laboratories, it's actually not that new a technology. I've been using it for HCPs since 2002, when we were using the mouse genome to try to find Chinese hamster ovary (CHO) cell proteins. MS has become much more accessible, sophisticated, and powerful over the past 10 years.
What major changes have created that increase? The sensitivity of MS improves by orders of magnitude every few years, the software is getting much better, and the genome databases are improving. But what people don't talk about enough is that MS is also not perfect. It also does not detect everything. There are issues associated with sample preparation and with quantitation. Generally, it is a fantastic tool. I would never run an HCP program now without using MS. But we need to remember that it also has its own inherent biases and weaknesses. Every method has its own bias. The reason why I don't think MS will replace ELISA is that one works very well in quality control (QC) laboratory workflows, and one does not. I wouldn't move to a challenging method that is imperfect over a substantially easier method that is also imperfect. That said, I will continue to use MS; I'm just not a fan of putting it into a control system.
You would hope that their imperfections are complementary. And they are. It also kind of depends on the stage of development. By the time you get to a commercial stage, you need your HCP ELISA to function more for process control than product purity. So MS can be more important as a characterization tool at later stages in development and commercialization.
USP is working on new standards to support MS analysis of HCPs. Do you know what stage that project is at now? I think the effort at USP is not directed at getting MS into control systems; it's about standardizing and recommending best practices for identification, characterization, and quantitation of HCPs. They are going to be writing a chapter numbered above <999>, which means that it is not something that people must do. Chapters numbered 1000 and above are considered guidances and best practices, so even the HCP chapter <1132> is not enforceable.
Note that things are different in Europe with the European Pharmacopoeia. If one of its chapters gets referenced in a monograph, then it does become enforceable. That has happened in Europe for HCPs, but in the United States, itâs still just guidance.
The topics the USP is considering for the new chapter are quantitative/qualitative analysis and relative abundance, as well as some standardization on sample preparation, which actually can make a big difference in the results that you get from MS. Standards and data acquisition/processing are both major topics.
Think of this type of standard as an assay calibration: If you're trying to use MS to quantify total HCPs, then how do you do that? If you're using it to quantify individual proteins, then how do you do that? The first question is difficult to answer, and there are some great methods being considered; they are also imperfect, but they're very good, and I use them. (Editor's Note: See the article by Sjöblom, Duus, and Amstrup in this month's issue for discussion of a biolayer interferometry method.) If you know you have a problem with a particular HCP, and you want an MS assay for that, then you can develop a standard method specific to that individual protein. Such a highly specific quantitative assay could end up in a control system. As for generic HCP assays, I don't see MS replacing ELISA, but for individual proteins, or maybe a certain small subset of HCPs, that could be possible.
As for physical standards, it's generally recognized that the ideal would be intact proteins. Each such protein would go through the same preparation process as your sample. But it's very hard to generate intact-protein standards for all these HCPs. USP is starting with peptides, which are easier to standardize and distribute globally. They're good for quantification in an MS method, but using them as standards won't account for sample preparation. So you have to assume that you get 100% HCP recovery from the sample digest, because if you don't, then you're underquantifying HCPs in your sample. If you're using peptides to quantify your HCP biomass, then you're quantifying only what you have recovered, so you'd better be able to show that you get good recovery. In most cases, recovery of individual peptides can be very different. Theoretically, the stoichiometry of all peptides in a protein should be exactly the same, but methods are imperfect, and we rarely observe that with our MS methods.
There is a lot of value to peptide standards and having some standardization across the industry. Using a given method, I know I can detect this peptide at this dilution or at this quantity, or whatever. There is a ton of value to that. But generally speaking, there are weaknesses with only using peptide standards to quantify HCPs.
And around the world, people's knowledge of and approaches to sample preparation vary widely. Absolutely. If you only talk to MS people, they'll tell you it's fine. But integrating the technology into the overall biology of a system can be a challenge. Some MS folks are really good at both, and I treasure those people.
Managing Data and Risks
Data analysis appears to be an important aspect of LC-MS workflows for HCPs. What databases and software are most useful? There are a lot of good software packages out there. Many are tied to the MS platform: If you buy from Waters, then you get the Waters system; if you buy from Agilent, you get the Agilent system; and so on. Other software packages aren't tied to a system, so you can analyze all your data, then export the results and pull them into a different package. There are many ways to analyze the data.
I'm no expert in this MS methodology, so I don't use those packages directly. But I know that the way you set up search parameters and threshold levels for hits and false positives can be very important. Many people say, "Oh, I analyzed a sample, and I got 5,000 hits." And I doubt that. How many of those are false positives? You want to kill your CMC team? Give them a false positive.
What's the next step when you're trying to figure out what's a false positive? Well, databases can be messy. The same protein can have different names and multiple entries, so the database often is the weak link in trying to get a meaningful list of HCPs. You tell me you found 5,000 things, and honestly, I don't believe you. When I've gone through and curated those kinds of results, I've found database errors and demonstrated that a couple of times by cloning genes from the CHO cells, then sequencing them.
Sometimes it's an error, sometimes it's a genetic variation, and both of those things can be true. In at least one case, I saw a frameshift and was able to contact Kelvin Lee (at the University of Delaware), who does a lot of genome maintenance. He went back and looked at the original sequencing data, and yes, in fact, it was ambiguous and an error.
That was a while ago, by the way. It's getting better every day. But it's important to understand what database you're using and how you're sorting through and identifying hits. Some people are doing great work on this, and we always have a focus on it at the Biopharmaceutical Emerging Best Practice Association (BEBPA) meetings. Martha Staples and Frieder Kroener, for example, are well versed in separating what you can crank out of a software package from what's meaningful for your team. Those are often different answers.
What do people do when suspecting false positives or trying to deal with them? What's the next step? First, determine how many peptides you have identified from a given protein, going back to tandem mass spectrometry (MS-MS) to see what the quality of the sequence was and to make sure it's correct. Make sure that the sequence is tied unambiguously to a given protein. There are also sensitivity controls in our laboratories, making sure that you're not getting carryover from sample to sample. Those things are just good laboratory practices and part of being a diligent scientist: not necessarily trying to crank out the biggest list possible, but rather trying to understand what's relevant for your product and your process.
At the same time, you don't want to miss anything, right? You'd rather have a large list that you can pare down than a very small list that's clearly missing something. Ideally, you want to be somewhere in between. I've seen spreadsheets with 5,000 proteins on them, and you could sit there for weeks trying to sort through them and figure out what's relevant, and that's awful.
A false negative would be something that's there but not detected. So the question is whether you didn't detect it because it's not there, or because it's not at a high enough abundance, or because there's a limitation in your method.
How do you define high-enough abundance? It's not the same for every protein. Limit of detection is not going to be the same for every molecule. At some point, you have to invoke practicality. We could push the technology and find a protein that's present at 0.1 ppb. But is that important for my product or process just because the MS can tell me it's there? Unfortunately, it's not that simple. It depends on the patient population, indication, dose level, and all those risk-assessment considerations (1–4). But we don't need to push our technology to a point at which it's not clinically meaningful to the safety, efficacy, and stability of your product.
The practical approach some people are using is to develop standards for a mixture of 50 or so proteins, then show that their methods can detect pretty much any protein out of that standard panel down to 10 ng/mg of final product. If you can say, "Here is a universal protein standard of 48 different proteins (with a wide range of molecular weights and pI values), and I can detect every one of those down to 10 ppm when spiked into my antibody," well, that's pretty reasonable. That's a practical application of what limit of detection (LoD) can be applied to a method.
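The arithmetic behind that spiking exercise is simple mass ratios: 1 ng of HCP per 1 mg of product is 1 ppm. A minimal sketch of the recovery check, with entirely made-up protein names and measured values (no real panel or instrument data):

```python
def ppm(hcp_ng: float, product_mg: float) -> float:
    """Parts per million by mass: 1 ng of HCP per 1 mg of product = 1 ppm."""
    return hcp_ng / product_mg

# Hypothetical spike-recovery check for a universal standard panel:
# spike each protein at 10 ng per mg of antibody (10 ppm) and compare
# what the method measures against the spiked amount.
spiked = 10.0  # ng spiked per mg of product
measured = {"protein_A": 9.1, "protein_B": 10.4, "protein_C": 4.2}  # invented results

for name, value in measured.items():
    recovery = value / spiked
    print(f"{name}: {ppm(value, 1.0):.1f} ppm measured, {recovery:.0%} recovery")
```

A panel member recovered at well under 100% (like the invented protein_C here) is exactly the kind of result that makes a claimed 10-ppm detection limit method dependent rather than universal.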
That brings biosimilars to mind, and how you want them to be as much like the originators as possible. But now technology has advanced to the point that you can find things or do things in your process that might be even better than the originator could do. So how "similar" do you have to be? Well, for HCPs with biosimilars, it's actually pretty clear: You do not have to be the same. Given the length of patent protection, a lot of the originator products are made with older technologies and are just "dirtier." But that becomes obvious only now. There are a few published cases (5–10) comparing HCPs in an originator product and a biosimilar. The biosimilar always has fewer HCPs, and sometimes the profiles overlap. That becomes another data point indicating that the HCPs copurifying with your product probably have as much to do with your product as with your process.
At the online BEBPA conference in October 2020, one MS company was able to show some data on HCPs found in commercial products. I think they've done that work with clients working on biosimilars. I expect biosimilars to be cleaner than their original counterparts. And when companies overhaul the production or downstream process for a legacy product, it also needs to end up cleaner. You don't have to match HCPs, but the profile can't be worse.
The more you can do, the more you have to do. This indicates why CHO is so dominant and probably will remain so for some time: because we know so much about it. We know a lot more about its HCP makeup than about just about any other cell line. Any time you want to try something new, you need standards to compare it with. We have a long history of giving millions of doses of CHO-derived biologics to humans, with an excellent safety record, especially when it comes to HCPs. Only a handful of case studies have shown clinical issues related to HCPs. Some of that is because the industry takes this subject seriously and does the work to remove them, and we're getting better at doing that over time. And with the doses administered in clinical trials, it's hard to attribute any safety events to HCPs at all.
Safety issues can depend on the expression system. Consider Escherichia coli, for example. Our immune system is primed to detect certain E. coli proteins, so those might be a little more concerning if they show up in a product.
It's impossible to think about a safety profile without putting it into the context of a full risk assessment: What does a given HCP derive from? Who's getting the final product? Is it going into a pregnant woman, a baby, or a 90-year-old cancer patient? Is it going into an immune-suppressed population or an immune-activated population? There are many questions to think through. Risk assessment is difficult. But our MS specialists are getting better all the time, and we're continuing to name specific HCPs. We have to learn how to manage HCP risk when we know the names of the HCPs in biologics. There's probably not a single product derived from cells that is 100% free of them.
It seems impossible. That's right. As our technologies get more sensitive, we'll get better at detecting those things. Then the question becomes, "How do I know this is okay?" And that's a hard question to answer. One of the first times I had to deal with that was the issue of PLBL2 (hamster phospholipase B-like 2) at Genentech (11–13). We had to work with toxicologists and our clinical team to understand the full context of what was going on with that protein.
For More Discussion
The BioPharmaceutical Emerging Best Practices Association (BEBPA) is a nonprofit organization created to serve as an international forum for discussion of scientific issues and problems encountered in product and process development. The group provides a platform for industrial scientists to discuss common technical problems, suggest potential solutions, and openly discuss their merits. BEBPA conferences include two to three days of presentations, workshops, and round-table discussions on topics of current interest to the analytical biopharmaceutical community. BEBPA's annual bioassay conferences will be 22–25 March 2021 (virtual) and 22–24 September 2021 (Rome, Italy). The annual host-cell protein conference will be virtual this year, 17–19 May 2021. Find more information online at https://www.bebpa.org/conferences. On the same website, you can access presentation abstracts from past conferences as well as a number of white papers, survey results, and other resources.
We talked a little bit about specification limits. What's the latest thinking? There tend to be what I would call "generic" limits that people apply, particularly in earlier clinical development: usually 100 ng/mg (100 ppm). We reference that as the acceptable number. But if you look back to where that number came from, someone just made it up. It's been working for decades, but it's important for people to understand that there was no scientific basis for establishing it as a generic limit.
So there could have been other arbitrary numbers back in the day that didn't pan out very well? That's right. We're talking about CHO-based products. It's hard to say anything about the safety of HCPs without taking context into account. It is different for E. coli, though many people use 100 ppm for that too. But I'd want to do a little deeper digging with MS before I said 100 ppm was okay, to make sure nothing in there looks scary. Anyway, I don't see a lot of pushback on that 100-ppm number. Even though it's based on nothing, the industry has tons of experience to back it up.
As you go toward commercial manufacturing, those generic limits don't apply anymore. Some people argue that they can. But our limits at the commercial stage need to be tied to the product quality that was used in clinical trials, from which safety and efficacy were demonstrated. With a typical monoclonal antibody, you're unlikely to have more than 10 ng/mg in clinical materials. For a commercial specification, I wouldn't go as high as 100 ppm. If you always run your commercial process below 10 ng/mg, and suddenly you see a batch at 80 ng/mg, then you know something happened with your process. And that's when you need your HCP ELISA as a process-monitoring tool. I wouldn't release that batch without understanding what happened with my process. So it would be crazy, if all of your clinical experience is really low, to set your specification up high.
Nonetheless, some people try to do it. I think that at the biologics license application (BLA) stage, your specs need to be tied to clinical experience. This is where you pull in your biostatistician for better understanding.
At that stage, you should have lots of data to analyze. That's right. And you can set a limit based on a confidence interval, for example, but what you really want to know is whether your process is delivering something unexpected. Then you will want to investigate that. You can set an alert limit and acceptance criteria, or just acceptance criteria. Say you've never had more than 10 ng/mg; then maybe you want to set an alert limit at 15 ng/mg. So if you see something, you'll at least look at it before product gets released. There needs to be some mechanism that shows when the process is delivering something unexpected.
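One simple way such an alert limit can be derived from batch history is a mean-plus-three-standard-deviations rule; a sketch with invented batch data (a real exercise would involve a biostatistician and likely a tolerance interval rather than this simple rule):

```python
import statistics

# Hypothetical HCP ELISA results (ng/mg) from historical release batches.
batch_hcp = [6.2, 7.8, 5.9, 8.4, 7.1, 6.6, 9.0, 7.3, 6.8, 8.1]

mean = statistics.mean(batch_hcp)
sd = statistics.stdev(batch_hcp)  # sample standard deviation

# Batches above the alert limit trigger an investigation before release,
# even if they still pass the acceptance criterion.
alert_limit = mean + 3 * sd
acceptance_limit = 100.0  # generic early-phase limit, ng/mg (100 ppm)

new_batch = 80.0  # ng/mg
if new_batch > acceptance_limit:
    print("Reject: exceeds specification")
elif new_batch > alert_limit:
    print(f"Investigate: {new_batch} ng/mg exceeds alert limit {alert_limit:.1f} ng/mg")
else:
    print("Within historical experience")
```

This captures the point in the interview: a batch at 80 ng/mg can pass a generic 100-ppm specification yet still be wildly outside the process's demonstrated experience, which is exactly when the ELISA is doing its job as a process monitor.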
Does that relate to design space? So you have different levels from ideal to OK to acceptable to unacceptable? Absolutely. When people do those process validation studies to define that design space, often I'll say, "Do a worst-case linkage study. Find the worst possible parameters for purity and what your HCPs look like." That design-space investigation for HCP clearance also can be a part of your biostatistician's analysis. The important thing to say about setting specifications is that your commercial limits need to be tied to clinical experience; they cannot be based on some platform.
To determine acceptable coverage levels for ELISAs, laboratories use 2D Western blotting, immunoaffinity chromatography (IAC), and dye-based 2D DIGE. What are the main concerns in this kind of assay qualification effort? It's really ELISA characterization: You're asking how good the antibodies are. I'll simplify: Let's say that at harvest there are 100 HCPs. How many can my antibody detect? If only five of them, then I have 5% coverage and a terrible monitoring tool. If my antibody detects 95% of them, I'm detecting almost everything coming through, so I probably have a pretty good monitoring tool.
An ELISA monitors only a subset of the HCP profile: hopefully a large subset, but still only a subset. It's a statistical sampling of the HCPs that went into your downstream process. So if you have high coverage, that's a pretty good statistical sample; if it's low coverage, then it's a pretty poor statistical sample. If your process goes out of control, then as you're sampling these groups of proteins, you should see changes in the readout.
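The sampling argument can be made concrete with a toy simulation. This is only an illustration of the logic, under the simplifying (and unrealistic) assumption that each HCP elevated by a process upset is independently covered by the antibodies with probability equal to the coverage fraction:

```python
import random

random.seed(1)

def prob_upset_detected(coverage: float, n_shifted: int, trials: int = 100_000) -> float:
    """Toy model: simulate process upsets that each elevate `n_shifted` HCPs.
    Each shifted HCP is covered by the ELISA antibodies with probability
    `coverage`. Returns the fraction of upsets in which at least one shifted
    HCP is covered, so the assay readout has a chance to move."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < coverage for _ in range(n_shifted)):
            hits += 1
    return hits / trials

# With 70% coverage, an upset elevating 3 HCPs is almost always visible;
# with 5% coverage, it is usually missed entirely.
for cov in (0.05, 0.70, 0.95):
    print(f"coverage {cov:.0%}: detection chance ~ {prob_upset_detected(cov, 3):.2f}")
```

Analytically the detection chance here is 1 − (1 − c)^k for coverage c and k shifted proteins, which is why even moderate coverage catches broad upsets; the real complication, as the interview goes on to note, is that copurifying HCPs are not a random sample at all.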
When an HCP copurifies with a product, that's not random. Usually it is because that protein interacts with your product in some way. People used to say it was the same sorts of biophysical and biochemical characteristics letting it copurify. But we're getting more evidence that HCPs coming all the way through often have some sort of affinity for the protein of interest.
Wow, that's even worse. They're not just alongside; they're bound to it! That's right. It could be a hydrophobic interaction. Some products are glutathionylated in cell culture, so glutathione S-transferase would bind to those. And then after you figure out that connection, you just smack yourself in the head: "Of course." If I see an HCP copurify with a product, and I don't immediately understand why that's happening, then I just have to figure that out.
Does structural analysis help there? It could. Often it's a hydrophobic interaction, so you just need to wash your protein A column differently.
HCPs coming through with your product are not some random statistical sample. When you're talking about coverage, you're really talking about your ability to monitor process control. Because you can miss proteins that specifically bind your product, you need orthogonal techniques such as MS.
And then there's the other question of what percent coverage is good enough. Is it 5%? No. Is it 95%? Of course. But usually we're somewhere between those extremes. My experience has been that better than 70% is good. This is where the third piece comes in: determining what that coverage is, and that's really hard. I guarantee that every one of the different methods for doing it will give you different results.
You can use IAC: Set up a column of your ELISA antibodies and pour your HCPs over it to find out what binds and what doesn't. That's a great method, but there are some issues with it. For one thing, proteins in solution don't just float around separately. They bind to other molecules. So if protein 1 binds the resin, and protein 2 is just stuck onto protein 1, then it's going to look like you have coverage for protein 2 when you don't.
Another problem is that I can keep pouring more sample over that column and drive low-affinity interactions that aren't representative of real binding on an ELISA well plate.
Then there are 2D gels and blots, where one spot does not equal one protein. It's hard to tell visually what's going on. If you give two people the same image, you'll get two different numbers. They might be in the same ballpark, but they will differ. When you look at a 2D gel and see only a few proteins lighting up on your Western blot, then you know you have a problem. If the pattern is so complex that you have trouble counting, then you're more likely to be in good shape.
The higher the coverage, the messier the result? Pretty much. Health authorities expect a number for coverage, and it's hard to get an unbiased analysis of that number. So what you can do as a responsible assay developer is look at the totality of the data. Use a couple of different methods and show that you're getting that good statistical sampling. If not, then your ELISA can't tell you whether your process is out of control.
How is that subset determined? There are different ways to think about that. For a process-specific ELISA, if you had the time and resources, you could use MS on every column pool in downstream processing. You would determine what HCPs are cleared at each step. You could analyze the coverage with a combination of techniques and assess what HCPs you're actually detecting. And you'd call those that persist further into the process your "higher-risk HCPs," from a process standpoint, though, not necessarily for patients.
They're not necessarily the most dangerous, but they're the most problematic. That's right. So I would be happy if my antibodies covered and recognized all of those higher-risk proteins. But perfection doesn't exist. That's what I would drive for, and then if there are gaps (one HCP goes all the way through and the ELISA can't detect it), then I consult the process validation studies to see how well the process can control that particular impurity. I can use MS to ask that question. If it is well within control, then I don't have to worry about it so much. If it's not well within control, or it's been somewhat variable, then I need to think about controlling that HCP. If my ELISA doesn't detect it, then I need another method: an individual-protein ELISA or a multiple-reaction monitoring (MRM) MS method, for example, depending on the protein.
But it's not practical to do all that kind of testing for each step in a downstream process. You can, but it's expensive.
Is the alternative to look at your harvest and your drug substance, and then go from there? You have to test the harvest because that's how you establish coverage. It's upstream-process specific. From a downstream perspective, you can bookend it with capture and final product. If something comes through the protein A pool, for example, then it's probably a higher risk for your process. Then you look at the end and compare the two. That's the more practical approach.
Some people do a full process characterization. But how many processes do you analyze? Do you consider different variations of a process, both upstream and downstream? You can end up with an enormous amount of data. You can't do it comprehensively; there are too many variables, and MS is not high-throughput enough to work at that scale.
Is it something you do to support process changes? Absolutely, and process transfers.
But if I wanted to determine whether an HCP ELISA is acceptable for process control, then I begin with three questions: Does my standard represent the HCPs that enter my process? Do the antibodies recognize a significant portion of those proteins? And do those antibodies recognize HCPs that should be removed by the process? If the ELISA shows no reduction in HCPs over a given step, then itâs not recognizing the proteins that are removed by that step.
Then you can dig in a little deeper and say, "These are the HCPs in my capture pool," or "These are the HCPs most commonly seen copurifying with my product. Do my antibodies recognize those proteins?" That's another level that's not required as part of assay characterization, but I think it's good practice.
There has to be a practical limit. With HCPs, you could go down a long and expensive rabbit hole that is not necessarily relevant for your product. At a certain point, there's only so much you can do. It's hard to know which questions will be meaningful. It's reasonable to ask whether your HCP ELISA detects the three proteins that frequently copurify with your product. But I'd be a little nervous if health authorities started asking that specifically, because I'd want to know where they were going with it. Still, I recommend that my clients know the answer to that question.
When you hear people talking about laboratory automation freeing up your time to worry about other things, that's what they mean. You can't automate risk assessment, which is really important. It's going to become more so as our technologies get better.
1 Wang F, et al. Host-Cell Protein Risk Management and Control During Bioprocess Development: A Consolidated Biotech Industry Review. BioProcess Int. 16(5–6) 2018; https://bioprocessintl.com/business/risk-management/host-cell-protein-risk-management-and-control-during-bioprocess-development-a-consolidated-biotech-industry-review-part-1.
2 de Zafra CLZ, et al. Host Cell Proteins in Biotechnology-Derived Products: A Risk Assessment Framework. Biotechnol. Bioeng. 112(11) 2015: 2284–2291; https://doi.org/10.1002/bit.25647.
3 Jawa V, et al. Evaluating Immunogenicity Risk Due to Host Cell Protein Impurities in Antibody-Based Biotherapeutics. AAPS J. 22 July 2016: 1439–1452; https://doi.org/10.1208/s12248-016-9948-4.
4 Wang X, Hunter AK, Mozier NM. Host Cell Proteins in Biologics Development: Identification, Quantitation and Risk Assessment. Biotechnol. Bioeng. 103(3) 2009: 446–458; https://doi.org/10.1002/bit.22304.
5 CBER/CDER. Quality Considerations in Demonstrating Biosimilarity of a Therapeutic Protein Product to a Reference Product: Guidance for Industry. US Food and Drug Administration: Rockville, MD, April 2015; https://www.fda.gov/media/135612/download.
6 Mihara K, et al. Host Cell Proteins: The Hidden Side of Biosimilarity Assessment. J. Pharm. Sci. 104(12) 2015: 3991–3996; https://doi.org/10.1002/jps.24642.
7 Reichert J. Next Generation and Biosimilar Monoclonal Antibodies: Essential Considerations Towards Regulatory Acceptance in Europe, February 3–4, 2011, Freiburg, Germany. mAbs 3(3) 2011: 223–240; https://doi.org/10.4161/mabs.3.3.15475.
8 Fang J, et al. Advanced Assessment of the Physicochemical Characteristics of Remicade and Inflectra By Sensitive LC/MS Techniques. mAbs 8(6) 2016: 1021â1034; https://doi.org/10.1080/19420862.2016.1193661.
9 Liu J, et al. Assessing Analytical Similarity of Proposed Amgen Biosimilar ABP 501 to Adalimumab. BioDrugs 30, 2016: 321–338; https://doi.org/10.1007/s40259-016-0184-3.
10 EMA/CHMP/BWP/247713/2012. Guideline on Similar Biological Medicinal Products Containing Biotechnology-Derived Proteins As Active Substance: Quality Issues (Revision 1). European Medicines Agency: London, UK, 2014; https://www.ema.europa.eu/en/similar-biological-medicinal-products-containing-biotechnology-derived-proteins-active-substance#current-effective-version-section.
11 Vanderlaan M, et al. Hamster Phospholipase B-Like 2 (PLBL2): A Host-Cell Protein Impurity in Therapeutic Monoclonal Antibodies Derived from Chinese Hamster Ovary Cells. BioProcess Int. 13(4) 2015: 18–29, 55; https://bioprocessintl.com/analytical/downstream-validation/hamster-phospholipase-b-like-2-plbl2-a-host-cell-protein-impurity-in-therapeutic-monoclonal-antibodies-derived-from-chinese-hamster-ovary-cells.
12 Tran B, et al. Investigating Interactions Between Phospholipase B-Like 2 and Antibodies During Protein A Chromatography. J. Chromatog. A 1438, 2016: 31–38; https://doi.org/10.1016/j.chroma.2016.01.047.
13 Zhang S, et al. Putative Phospholipase B-Like 2 Is Not Responsible for Polysorbate Degradation in Monoclonal Antibody Drug Products. J. Pharm. Sci. 109(9) 2020: 2710â2718; https://doi.org/10.1016/j.xphs.2020.05.028.
Levy NE. Identification and Characterization of Host Cell Protein Product-Associated Impurities in Monoclonal Antibody Bioprocessing. Biotechnol. Bioeng. 111(5) 2014; https://doi.org/10.1002/bit.25158.
Nogal B, Chhiba K, Emery JC. Select Host Cell Proteins Coelute with Monoclonal Antibodies in Protein A Chromatography. Biotechnol. Prog. 28(2) 2012: 454â458; https://doi.org/10.1002/btpr.1514.
Seisenberger C, et al. Questioning Coverage Values Determined By 2D Western Blots: A Critical Study on the Characterization of Anti-HCP ELISA Reagents. Biotechnol. Bioeng. 2020: 1â11; https://doi.org/10.1002/bit.27635.
Singh SK, et al. Understanding the Mechanism of Copurification of "Difficult to Remove" Host Cell Proteins in Rituximab Biosimilar Products. Biotechnol. Prog. 36, 2020: 1–12; https://doi.org/10.1002/btpr.2936.
Sisodiya VN, et al. Studying Host Cell Protein Interactions with Monoclonal Antibodies Using High Throughput Protein A Chromatography. Biotechnol. J. 7, 2012: 1233â1241; https://doi.org/10.1002/biot.201100479.
Vanderlaan M. Experience with Host Cell Protein Impurities in Biopharmaceuticals. Biotechnol. Prog. 2018; https://doi.org/10.1002/btpr.2640.
Wilson MR, Easterbrook-Smith SB. Clusterin Binds By a Multivalent Mechanism to the Fc and Fab Regions of IgG. Biochim. Biophys. Acta 1159, 1992: 319–326; https://doi.org/10.1016/0167-4838(92)90062-i.
Zhang Q, et al. Characterization of the Co-Elution of Host Cell Proteins with Monoclonal Antibodies during Protein A Purification. Biotechnol. Prog. 32(3) 2016; https://doi.org/10.1002/btpr.2272.
Denise Krawitz, PhD, is principal consultant with CMC Paradigms LLC, 49 Oak Springs Drive, San Anselmo, CA 94960; firstname.lastname@example.org. She also teaches a FasTrain course on host-cell protein methods (https://fastraincourses.com). Cheryl Scott is cofounder and senior technical editor of BioProcess International, part of Informa Connect, PO Box 70, Dexter, OR 97431; email@example.com.