Ensuring a healthy future for AI in biomanufacturing

An expert touts the value of written guidance and Explainable AI for establishing industry-wide trust.

Josh Abbott

August 7, 2024


“The integration of AI into biomanufacturing may outpace existing regulations and standards. Ensuring that AI systems comply with regulatory requirements and industry standards can be challenging.” At least, that’s the sentiment generated in less than two seconds by OpenAI’s ChatGPT when asked about the dangers of using artificial intelligence in biomanufacturing.

Artificial intelligence — including the generative AI found in ChatGPT — can provide us with impressive insights, but the conversation surrounding it is multifaceted and nuanced. In journalism, there can be a sense of underlying anxiety creeping around the outskirts of that conversation. After all, the first two sentences of this article came faster and easier than all the rest, inspiring existential questions about the viability of traditional journalism.

But even if the reasons differ, hesitancy about AI affects other industries too, including biomanufacturing.

We have all witnessed the growth and proliferation of various forms of AI in the past few years. It fuels versatile tools capable of enhancing countless global industries. Yet it is important to address questions about the ethics, utility, and accuracy of AI products. If AI is to be forever linked with biopharmaceutical development and manufacturing, it must be safely and responsibly integrated into our professions.

Last week, BioProcess Insider spoke with Taylor Chartier, CEO of Modicus Prime, about how the company's mpVision software improves process development and commercial drug manufacturing. She also discussed the future of AI in biomanufacturing and how industry leaders are working to standardize and regulate its use.

“Every biopharma organization moves through different stages of the AI-hype cycle in their AI initiatives,” she said. “Generally, we start at the so-called Peak of Inflated Expectations, where the prospective benefits of AI are widely recognized, and an enormous amount of resources are allocated toward AI, typically under a digitalization strategy.”

However, she said that in practice, many pharma companies are initially taken aback by the complexities inherent to AI integration, ending up in a phase known as the Trough of Disillusionment during execution. “AI's data-driven nature introduces unique intricacies; for example, an AI model may converge to a local minimum rather than a global minimum while reducing its prediction error. Additionally, models can be overfit on a subset of data, introducing bias, or underfit if not adequately trained to learn the multi-dimensional features of a dataset.”
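Chartier's point about local minima can be reproduced in a few lines. The following is a minimal, self-contained sketch (a toy loss function chosen for illustration, not anything specific to biomanufacturing models or mpVision): plain gradient descent on a non-convex loss settles in whichever minimum its starting point leads to, so the same training procedure can yield a worse model purely by accident of initialization.

```python
# Toy non-convex loss with two minima: a global one near x ≈ -1 and a
# higher local one near x ≈ +1 (the 0.3*x term breaks the symmetry).
def loss(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # d/dx of the loss above
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent: repeatedly step downhill from x.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_local = descend(1.5)    # starts on the right, gets trapped in the local minimum
x_global = descend(-1.5)  # starts on the left, reaches the global minimum
print(loss(x_local) > loss(x_global))  # the local minimum has a higher loss
```

Both runs reduce their prediction error at every step, yet only one finds the better solution, which is exactly why validation across runs and datasets matters.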

She added, “The key to bypassing this trough and reaching the Plateau of Productivity lies in rigorous validation and appropriate operational controls throughout the AI-enabled system's lifecycle. This includes guiding the maintenance and deployment of AI-enabled systems.”

But to reach the heights worthy of their investments, companies need trusted systems and guidance on how best to use AI, particularly when patient health is at stake. Modicus Prime’s mpVision software uses Explainable AI, a set of methods that make a model’s decision-making comprehensible to humans. “These models provide clear, understandable insights into their decision-making processes, ensuring regulatory compliance and fostering trust among stakeholders,” Chartier said.
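As a generic illustration of the concept (a hypothetical linear model with made-up weights, not mpVision's actual method), an interpretable model can attribute each prediction to named inputs, which is what makes its reasoning auditable by humans and regulators:

```python
# Hypothetical interpretable model: for a linear model, each input's
# contribution to a prediction can be attributed exactly.
# The feature names and weights below are invented for illustration.
weights = {"temperature_C": 2.1, "stir_rate_rpm": 0.05}
bias = 1.5

def predict_with_explanation(x):
    # Each contribution is weight * input value, so the prediction
    # decomposes into human-readable, per-feature terms.
    contributions = {name: weights[name] * x[name] for name in weights}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

sample = {"temperature_C": 31.0, "stir_rate_rpm": 110.0}
pred, expl = predict_with_explanation(sample)
# expl reports how much each named input moved the prediction
```

A black-box model might predict just as well, but only a decomposition like this lets a reviewer see which process variable drove the result.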

She said that after founding Modicus Prime “to provide pharma with the right AI computational techniques to help safely and more efficiently produce drugs for patients,” she was invited to join a team of experts in writing the International Society for Pharmaceutical Engineering’s (ISPE’s) GAMP Artificial Intelligence Good Practice Guide (AI GPG).

“The guide was developed in response to the industry's need for practical, actionable guidance on the application of AI and machine learning (ML) in GxP regulated areas,” Chartier said. It follows initial guidance published in previous GAMP guides covering topics such as data integrity and ML, but those documents lacked comprehensive guidance linking their overarching concepts with practical details. “The Good Practice Guide aims to address this gap by providing a holistic framework that spans from foundational concepts to practical implementation details, thereby supporting the effective operationalization of AI-enabled systems in the pharmaceutical industry.”

The guide addresses concerns that stakeholders in the sector might have about the use of AI. “The primary aim of the guide is to ensure that AI technologies are implemented in a way that maintains patient safety, product quality, and data integrity, which are the chief concerns in the healthcare life-sciences industry,” Chartier said. It also offers an understanding of AI’s strengths and limitations while encouraging collaboration among stakeholders from different parts of the industry.
