Anna Quinlan

January 13, 2015

Not long ago, chromatography automation meant strip recorders and peristaltic pumps. Today, few people would consider that to be true automation, and even fewer would settle for binders full of strip-recorder paper reels. Automation is becoming intelligent and in the process is making our workflows smarter. But how close is automation to being as smart as an experienced scientist? Bio-Rad Laboratories spoke with academics, biotechnology R&D scientists, and industrial process engineers about the evolution of chromatography automation — where it started, how far it has come, and what its limits might be.

Improved Software
Early fast protein liquid chromatography (FPLC) systems automated chromatography runs, but setting up a run had a steep learning curve. “It almost looked like you were programming code,” explains David Grabowski, team leader at R&D Systems. “You programmed these things in a list orientation. You’ve got parentheses where you’re typing in different column volumes, and then you’ve got different variables for that block and for your main block. If you weren’t trained, then it was very hard to use.” For a new employee or a student, being able to take advantage of automation meant learning two new skills: the basics of chromatography and how to program that FPLC.

But times have changed, says Edward Ha, chemist at Igenica Inc. “People who have never done protein purification are off and going in under 20 minutes with a canned method that we have on the machine.” The software that was once a hurdle, an additional skill to acquire, is now a great equalizer that arguably makes a chromatography novice smarter.

Increasing Productivity
Automation is making workflows smarter than ever before. Software packages with preprogrammed and customizable protocols not only give scientists instant basic chromatography know-how, but also let multiple users share one FPLC without reprogramming methods every time.

“We have different functional units within Igenica,” shares Ha, “and they’ll have to purify different proteins using SEC [size-exclusion chromatography], HIC [hydrophobic interaction chromatography], and affinity purifications such as nickel, protein A, and protein G. And it’s just been a godsend as far as ease of use.” Ha credits autosamplers for streamlining his workflow: “Getting an autosampler on the system was a huge achievement.”

Advances such as air sensors and automatic flow-rate adjustment allow scientists to run columns overnight unsupervised. Time that previously was spent programming runs or supervising a gravity column can be spent more productively, says David Grabowski. “Extending your workday by running columns overnight lets you spend the day analyzing data and running gels. You don’t have to waste your time watching little blue lines go up and down.”

Not only does automation make Grabowski’s team more productive, but it also reduces operator-introduced variability. “Being first to market is important,” explains Grabowski. “When we’re developing a protocol, we’re trying to eliminate as many variables as possible. By automating chromatography, we can eliminate that as one of the variables.” That capability is crucial for his team, which is tasked with developing standard operating procedures for hundreds of proteins every year.

The Importance of Experience
Automation in its present state cannot completely replace experience, points out Meng Guo Gi, a process engineer at Main Luck Pharmaceuticals. He praises features such as buffer scouting and tandem chromatography for streamlining his team’s design of experiments (DoE), but he cautions that current automation options “may not be suitable for unexpected conditions during process screening.”

Matthew Groves, assistant professor at the University of Groningen, agrees: “Once I’ve got a system set up, and it’s expressing and crystallizing, then I can use automation to rerun my protocols to test a new compound. But when you play with a new protein, automation is less straightforward.”

Groves adds, “Running the buffer scouting, you still need to define sensible ranges. You need to define your own ranges. Hypothetically there could be a simple piece of paper I could give my student that reads ‘Put buffer there and there, press that button, and that’s it, it will run.’ But no software is at that point right now.”

Others agree that automation is not yet at a point at which it can replace the human eye or brain. “Automation can’t be smarter than the human brain,” says Gi. “Some special conditions or proteins need manual analysis and operation.”

Looking Ahead
Groves believes chromatography systems that can make intelligent decisions are not too far off in the future. “In 10 years, an ideal automation experiment would be that I have my sample, I know it needs an ion exchange, so I load into the loop. It spends the first 10% of the sample screening through the pHs, establishes the best pH on a small scale, and then runs on the large scale.”
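As a rough illustration of the decision logic Groves describes, the screening step might look like the hypothetical sketch below. It is not any vendor’s software; run_small_scale() is a placeholder for a real scouting injection that would return some measure of binding or peak resolution.

```python
# Hypothetical sketch of "scout on 10% of the sample, then commit the rest."
# run_small_scale() is a placeholder, not a real instrument API; a real system
# would drive the FPLC and score each small-scale injection.
def run_small_scale(ph: float) -> float:
    """Placeholder scoring function standing in for a scouting injection."""
    return 1.0 - abs(ph - 7.5) / 10.0   # mock response: binding best near pH 7.5

def scout_and_run(ph_series: list[float], sample_ml: float,
                  scouting_fraction: float = 0.10) -> float:
    """Spend `scouting_fraction` of the sample screening pH and return the
    pH at which the remaining sample would be run at scale."""
    per_injection = sample_ml * scouting_fraction / len(ph_series)
    scores = {ph: run_small_scale(ph) for ph in ph_series}
    best_ph = max(scores, key=scores.get)
    print(f"screened {len(ph_series)} conditions at {per_injection:.2f} mL each; "
          f"best pH = {best_ph}")
    return best_ph

best = scout_and_run([6.0, 6.5, 7.0, 7.5, 8.0, 8.5], sample_ml=50.0)
# The remaining 90% of the 50 mL sample would now be loaded at `best`.
```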

But how far are we from highly automated systems that are as smart as an experienced chromatographer? Where an established protocol is run repeatedly (such as for the purification of variants or biologicals), being able to trust that the machine will do what it is supposed to do seems to be the greatest concern. “In the past, people did not trust overnight runs. They feared that they would come back in the morning, and their protein would be on the floor because a valve didn’t switch at the right time,” explains Grabowski. Improvements in engineering, along with features such as air sensors and automatic flow-rate adjustment in response to increases in back pressure, are eliminating the distrust people had of older machines.

When asked what additional automation would be most useful for routine purifications, those interviewed pointed to front-end automation such as sample and buffer preparation or automated analysis of fractions. “If there were some automated way to make buffers and filter them, then that would be pretty handy,” says Bruning. Grabowski put being able to “autosample your fractions and run them on a gel so that you can come in in the morning and just have a picture” high on his wish list.

However, that picture can grow murky quickly when you consider nonroutine purification workflows. Groves, who developed “launch and leave” beamline software for crystallographers, explains that “the advantage of doing it for a beamline was simply that when the data come off, they are computationally very accessible. I can write a script that does exactly the same thing that I would do. Peak finding is slightly more complex for chromatography. You have to worry about peaks not splitting off nicely or developing shoulders or asymmetry.”
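Peak picking itself is easy to automate in principle; the hard part is the tuning. The short sketch below (our illustration, using a general-purpose signal-processing routine rather than any chromatography package discussed here) shows how a partially resolved second peak disappears at a conservative prominence cutoff and only emerges when the cutoff is loosened, which is exactly where baseline noise begins to intrude.

```python
# Minimal illustration of the tuning problem in chromatogram peak picking.
import numpy as np
from scipy.signal import find_peaks

# Synthetic UV trace: a main peak at 12 mL and a partially resolved second
# peak at 14 mL (arbitrary absorbance units), plus a little detector noise.
volume = np.linspace(0, 25, 2500)                        # elution volume, mL
rng = np.random.default_rng(0)
trace = (1.00 * np.exp(-((volume - 12.0) / 0.8) ** 2)    # main peak
         + 0.40 * np.exp(-((volume - 14.0) / 0.7) ** 2)  # trailing minor peak
         + rng.normal(scale=0.005, size=volume.size))    # baseline noise

# A conservative prominence cutoff reports only the main peak; a looser one
# also finds the minor peak. Deciding where to draw that line is the judgment
# an experienced chromatographer makes by eye.
strict, _ = find_peaks(trace, prominence=0.3)
relaxed, _ = find_peaks(trace, prominence=0.1)

print("strict cutoff: peaks at", np.round(volume[strict], 1), "mL")
print("relaxed cutoff: peaks at", np.round(volume[relaxed], 1), "mL")
```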

Being able to scout buffer pH on 10% of your sample may not be too far off in the future. Fast flow rates and small columns that are directly scalable to large columns for final purification could help accomplish this task.
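The arithmetic behind “directly scalable” is straightforward: in a direct scale-up the resin, bed height, and linear flow velocity stay constant, so volumetric flow rate and sample load grow with the ratio of column cross-sectional areas. A quick back-of-the-envelope calculation (the numbers are illustrative, not from the interviews):

```python
# Direct scale-up at constant bed height and linear velocity: flow rate and
# load scale with the ratio of column cross-sectional areas, i.e. (D2/D1)**2.
def direct_scale_up(small_diameter_cm: float, large_diameter_cm: float,
                    small_flow_ml_min: float, small_load_mg: float):
    """Return (large-column flow rate in mL/min, large-column load in mg)."""
    area_ratio = (large_diameter_cm / small_diameter_cm) ** 2
    return small_flow_ml_min * area_ratio, small_load_mg * area_ratio

# Example: a 0.5 cm i.d. scouting column at 1 mL/min with a 2 mg load,
# scaled to a 5 cm i.d. column of the same bed height (100x the area).
flow, load = direct_scale_up(0.5, 5.0, 1.0, 2.0)
print(f"large column: {flow:.0f} mL/min, {load:.0f} mg load")
```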

Finally, chromatography software must be able to identify the peak that contains the protein of interest. “For a nickel column,” says Bruning, “it’s definitely possible because there’s one big peak, and you generally take it all.” However, it quickly can become more complicated: “With a Q column, you have to think about which fractions to go with.”

Jeff Habel, senior scientist in protein technologies R&D at Bio-Rad Laboratories, adds that “this system of the future will also have to make some qualitative judgments. If I’m a structural biologist, then I may want only the first half of the peak because you have some contaminant that’s eluting in the back half. If I’m an enzymologist, then impurities may not bother me as much. I may want the entire peak.”
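That judgment is less about detecting peaks than about how to pool the fractions underneath them. The toy example below (illustrative numbers, not Bio-Rad software) shows how the same set of fractions gets pooled differently depending on whether purity or total yield matters more to the user.

```python
# Toy fraction-pooling rule: keep fractions at or above a purity cutoff.
fractions = [  # (fraction id, mg of target protein, estimated purity)
    ("A1", 0.8, 0.99), ("A2", 2.5, 0.97), ("A3", 3.1, 0.90),
    ("A4", 2.0, 0.75), ("A5", 0.9, 0.60),
]

def pool(fractions, min_purity):
    """Return (fraction ids kept, total mg recovered) for a purity cutoff."""
    kept = [(fid, mg) for fid, mg, purity in fractions if purity >= min_purity]
    return [fid for fid, _ in kept], sum(mg for _, mg in kept)

# A structural biologist might take only the front of the peak (>= 95% pure);
# an enzymologist who tolerates impurities takes nearly the whole peak.
print("structural biology pool:", pool(fractions, 0.95))
print("enzymology pool:        ", pool(fractions, 0.60))
```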

Unlike peak finding, identification of fractions of interest cannot be accomplished computationally. Fractions will have to be assayed to identify which ones contain the protein of interest. An FPLC UV detector could be used to track chromogenic proteins, whereas automated gel loading and electrophoresis could be helpful for tracking contaminants. For other purification schemes, automated testing of fractions for desired enzymatic activities may be required.

Although such developments may seem like insurmountable feats of engineering, similar problems already have been solved for small-molecule and peptide chromatography. HPLC can be coupled to mass or NMR spectrometers, permitting direct identification of an analyte, or to diode array and fluorescence detectors to allow detection through postcolumn functionalization. Adapting similar techniques to protein identification can be envisioned easily.

“The automation system of the future will probably do exactly what Dr. Groves wants,” says Habel. “It will give you the best conditions for the first, second, and third columns; and then assemble them into a multidimensional chromatography method.” For many applications, he says, chromatography will become like sample preparation. “Being able to do separation, visualization of your protein, and even quantitation to let you know how much there is, all in an automated way — that becomes a very interesting idea for the laboratory of the future.”

That level of automation — something that is truly intelligent and able to replace human decision and intervention — may not lie in the immediate future. But the advances in automation seen in recent years suggest that the laboratory of the future that Groves and Habel envision may be a little closer than many would have predicted just a decade ago.

Anna Quinlan, PhD, is a technical editor at Bio-Rad Laboratories, 1000 Alfred Nobel Drive, Hercules, CA 94547; 1-510-408-2075; [email protected]. This article was adapted from bioradiations.com.
