Artificial Intelligence in the Biopharmaceutical Industry: Treacherous or Transformative?
The popularity of OpenAI’s ChatGPT program exemplifies society’s growing awareness of the remarkable power of artificial intelligence (AI) and machine learning (ML). Digital transformation is both democratizing access to information and helping users to translate it into knowledge. AI and ML can augment our ability to collect and analyze data in ways similar to how robots increase our ability to examine and relocate physical objects.
Biopharmaceutical professionals also are recognizing the potential of AI/ML for applications along the pharmaceutical life cycle. Drug-development activities are becoming more complex and difficult as the amount and variety of process and product data grow. But AI is demonstrating its ability to handle tasks such as analysis of massive data sets, multivariate analysis, decision-making, issue identification, process automation, and system modeling and control.
We humans are good at addressing creative tasks and applying abstraction to complex problems. On the other hand, people tend not to be very good at tedious, repetitive physical tasks. Robots perform such activities more consistently and without productivity decreases from boredom and fatigue. As digital technologies advance beyond simple robotics, we are recognizing more clearly our limitations in examining, comprehending, organizing, editing, and correlating massive amounts of information. AI and ML are proving to be excellent assistive tools for such activities — and are uniquely capable of enabling such initiatives as Pharma 4.0.
In the pharmaceutical industry — where data integrity, regulatory compliance, and patient health are paramount — deep knowledge in AI-system design is critical. For instance, large language models are becoming increasingly complex and require specific expertise for effective implementation. Such expertise helps to ensure development of robust security measures for sensitive data, adherence to regulatory standards, streamlined drug-discovery processes, risk mitigation against biases and ethical concerns, and ultimately, positive impacts on patient outcomes. Proper understanding of AI design and implementation is essential to achieving successful, ethically sound AI solutions.
Danger, Harm, Hazard, and Risk
Lately, we have heard pronouncements from even renowned data scientists regarding the limitations, risks, and dangers of AI/ML (1). The remarks range from concerns about real-world examples (e.g., obvious errors from popular generative-AI applications) to theoretical predictions about AI-driven extermination of the human race. Such warnings have different degrees of validity and evidentiary support, but they also refer to diverse aspects of AI power and application, both current and future. For instance, concerns about bias often are attributed to AI systems, but those issues derive from human programmers’ training of AI with nonrepresentative samples. Nevertheless, the social, organizational, scientific, and economic complexities of AI/ML are important considerations in the life sciences.
As with all advanced technologies, the hazards presented by AI can be quite discrete and nuanced. Consider how some people recoil at the very concept of genetically modified organisms (GMOs) despite the significant difference in risks between using GMOs to produce recombinant insulin and using them to perform in utero gene therapy. Rather than considering risks associated with a broad sense of AI/ML, we must distinguish, for instance, between potential hazards of narrow-ML–supported applications (e.g., classifiers assisting in anomaly analysis) and highly networked applications based on adaptive algorithms in generative AI. AI applications based on complex systems developed for critical environments such as the pharmaceutical industry necessitate work from experts who can validate an entire process of AI model creation. That level of oversight ensures that the maximum benefit of AI output is obtained without compromising patient safety or risking adverse outcomes.
Three major concerns have arisen for AI/ML in the biopharmaceutical industry. First, AI models are trained on specific data sets, which might not represent accurately the diversity of process and equipment states. Second, many AI algorithms, such as deep-learning neural networks (NNs), can be complex and opaque, making it difficult for us to understand or document their decision-making processes (the “black-box” concern). Third, the lack of explainability in some AI algorithms could limit operational reliability and robustness. However, biopharmaceutical companies are mitigating all three concerns using improved practices and tools that have evolved alongside AI/ML maturation. Meanwhile, technologies such as the Linux Foundation’s Open Neural Network Exchange (ONNX) program are standardizing deep-learning models, helping to alleviate the “black-box” concern by enhancing interoperability and simplifying model comprehension across frameworks.
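As an illustration, the short Python sketch below exports a small stand-in PyTorch network to the ONNX format. The architecture, input dimensions, and file name are hypothetical, but the torch.onnx.export call is the standard route to a framework-neutral model file that interoperable tools can then inspect.

```python
# Minimal sketch: exporting a trained PyTorch model to the ONNX format so
# that it can be inspected and run across frameworks. The network here is
# a stand-in, not a production bioprocess model.
import torch
import torch.nn as nn

model = nn.Sequential(           # hypothetical trained process model
    nn.Linear(8, 16),            # e.g., eight process parameters in
    nn.ReLU(),
    nn.Linear(16, 1),            # one quality attribute out
)
model.eval()

dummy_input = torch.randn(1, 8)  # example input defining the graph shape
torch.onnx.export(
    model,
    dummy_input,
    "process_model.onnx",        # portable, framework-neutral artifact
    input_names=["process_parameters"],
    output_names=["quality_attribute"],
)
```

The exported graph then can be loaded by any ONNX-compatible runtime or visualization tool for inspection, independent of the framework used for training.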
Experts have expressed other general concerns, such as potential issues with data security and bias. Although such factors can influence AI/ML performance, they apply to supporting technologies rather than to AI/ML specifically. Particular considerations for AI/ML in the pharmaceutical industry pertain to the quality of the data used, the degree of networking among systems, the depth of an algorithm’s adaptivity or dynamicity, and the level of autonomous system control designed into a given application.
As with any powerful tool, faulty design, misapplication, neglect of control, and improper operation could compromise AI’s use. Nevertheless, much is being accomplished to improve supporting systems and therefore the accuracy, reliability, and security of AI-enabled applications. Indeed, numerous AI tools are available to mitigate those risks, ensuring robust design, proper application, effective control, and secure operation.
Criticisms of AI/ML sometimes appear in isolation and fail to put their analyses in proper context. AI-driven applications can fail to provide the expected level of performance, but underperformance must be scrutinized by considering limitations in the power, relevance, and reliability of existing alternatives, including human or computer-assisted analysis by classical statistical methods, differential equations, regressions, tabular accounting, and other such means.
Below, we consider the evolution of AI/ML applications and supporting technologies that are improving the security, reliability, and relevance of their outputs. Then, we examine developments in the power and performance of emerging AI applications in the biopharmaceutical industry.
Data Developments
Much is being accomplished in the fields of data science and governance relevant to AI/ML tools and applications. For instance, although a lack of structured, labeled, and digitized data has impeded AI/ML implementation in the biopharmaceutical industry, companies continue to make impactful improvements in collection, storage, curation, access/control, and transmission of massive data sets. Increasing adoption of ALCOA+ principles (for data attributability, legibility, contemporaneity, originality, accuracy, completeness, consistency, endurance, and availability) and FAIR guidelines (for findability, accessibility, interoperability, and reuse of digital assets) is ensuring the confidentiality, integrity, and availability of product and process information. Programs for risk-based management assess and curate data while guiding their collection and distribution. Data are gathered and disseminated according to analysis of their value and the consequences of failing to acquire them. Alerts have been developed to notify stakeholders when data issues or anomalies occur.
Data observability ensures that data are reliable, trustworthy, and appropriate. The term refers to the ability to understand and monitor data quality, performance, and behavior. Thus, observability involves analysis of data and their flow through a given program to gain insights into their use and value.
Observability analyzers create visualizations and reports that provide insights into data-use patterns and performance metrics. Especially as applied to large volumes of diverse data from complex sources, such programs help stakeholders to understand and interpret data’s value effectively. Drift monitoring also is proving to be a reliable tool for ensuring that AI/ML models will respond as expected in the face of changing real-world conditions.
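A minimal sketch of one common drift check follows: comparing a live process variable’s distribution against its training-set baseline with a two-sample Kolmogorov–Smirnov test. The simulated titer data, variable name, and alert threshold are illustrative only.

```python
# Minimal sketch of data-drift monitoring: compare the distribution of a
# live process variable against its training-set baseline using a
# two-sample Kolmogorov–Smirnov test. Data and thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=1)
training_titer = rng.normal(loc=5.0, scale=0.5, size=1000)   # baseline data
production_titer = rng.normal(loc=5.4, scale=0.5, size=200)  # recent data

statistic, p_value = ks_2samp(training_titer, production_titer)
if p_value < 0.01:  # alert threshold chosen for illustration only
    print(f"Drift alert: KS statistic {statistic:.3f}, p = {p_value:.4f}")
else:
    print("No significant drift detected")
```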
The move to leverage data intentionality is improving approaches to data generation and governance. In this context, intentionality refers to the generation of data sets to reflect an application’s objective: the goal driving data collection, storage, and processing. Incorporating a description of an AI model’s scope, purpose, and means of intended use is powerful and essential because a model can be managed properly only if those elements are understood.
Data typically are neutral and lack inherent intentionality. However, when data are collected and governed with a goal in mind, they gain intentionality through application of the specific objectives driving their acquisition, labeling, governance, and formatting.
Connectivity and Networking: As with any digital tool or activity, AI/ML applications have different connectivity capabilities. Both incoming data and program results are designed to participate in different levels of integration, ranging from isolated processing that requires manual data input and results harvesting to fully networked systems with autonomous reception of data from multiple sources and unsupervised control of the systems to which received data are applied.
The current goal for many companies is to establish a cloud-based network of interconnected systems and applications, making comprehensive information about business history, current project statuses, and resulting predictions available to any designated destination. Systems for enterprise resource planning (ERP), manufacturing execution (MES), laboratory information management (LIMS), warehouse management (WMS), and/or building management (BMS) must be linked properly to provide actionable feedback and receive instructions from AI/ML technologies autonomously. In the pharmaceutical industry, adopting a strategy based on the “data-centric” concept is critical for effectively managing and using vast data sets, as opposed to relying solely on distributed data systems. Cloud technologies are playing a pivotal role in facilitating data-centric approaches.
Cloud-based systems now manage vast stores of data and bolster computing power through such features as automated data governance and integration, advanced multimodal analytics, natural language processing (NLP) for querying, interactive data visualization, intelligent-result and excursion notifications, chatbots for user support, and enhanced data security and privacy measures. New techniques for data observability, intentionality, and governance are facilitating establishment of very large, representative, and properly labeled training data.
Digital Twins
Comprising a computer model and means for real-time data exchange, a digital twin (DT) is a virtual simulation of an object or system. Such models operate upon information received from sensors or analytics reporting on different aspects of a physical system — e.g., equipment temperature, materials levels, and product accumulation. As highly connected, dynamic mathematical models containing time-based derivative terms of relevant variables, DTs replicate system processes in real time.
DTs are distinctive in that they maintain multidirectional information flow and operate in parallel with their real-world counterparts. Whereas a classical simulation typically reports on a particular process, a DT simulates many operations concurrently by compiling results from multiple contemporaneous models. From such data, a virtual model can run simulations that are valuable in many activities, including the study, development, and optimization of product characteristics or system performance. A DT predicts specified outcomes, informs users about required actions, and even supports closed-loop process control. The Internet of Things (IoT), cloud functions, and AI/ML all are orchestrated to produce the virtual representation within a DT.
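The Python sketch below illustrates, in greatly simplified form, that parallel operation: a first-order model of vessel temperature advances in time while being corrected toward (simulated) sensor readings each cycle. All constants and variable names are hypothetical; a production DT would subscribe to live IoT data streams rather than generate synthetic readings.

```python
# Illustrative sketch of a digital-twin update loop: a first-order model
# of vessel temperature runs in parallel with simulated sensor readings
# and is nudged toward them each cycle. All values are hypothetical.
import numpy as np

dt = 1.0          # time step (minutes)
tau = 30.0        # assumed thermal time constant (minutes)
setpoint = 37.0   # controller setpoint (degrees C)
gain = 0.2        # blending factor pulling the model toward measurements

model_temp = 35.0
rng = np.random.default_rng(seed=7)

for step in range(10):
    # Model prediction: first-order approach toward the setpoint
    model_temp += dt * (setpoint - model_temp) / tau
    # Simulated sensor reading standing in for a live data feed
    sensor_temp = model_temp + rng.normal(0.0, 0.05)
    # Correct the twin toward the measurement (simple state update)
    model_temp += gain * (sensor_temp - model_temp)
    print(f"t={step * dt:4.0f} min  twin estimate: {model_temp:.2f} C")
```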
The AI/ML empowerment of DTs enables a new generation of generative learning systems and represents the perfect merger of technologies that are revolutionizing activities in pharmaceutical manufacturing. Thus, DTs are now the main drivers for AI/ML industrialization because of their ability to create virtual replicas of physical processes, equipment, and systems. Those capabilities allow for real-time monitoring, predictive maintenance, optimization, and simulation — all of which are crucial for enhancing efficiency, reducing downtime, and improving overall productivity and quality in industrial settings.
Algorithm Developments
Many powerful initiatives occurring in AI/ML design and application are improving the power, suitability, and safety of such technologies in the biopharmaceutical industry. Below are a few concepts of note.
Explainability: Interpretable or explainable artificial intelligence (XAI) enables human users to understand the reasoning behind an algorithm’s decisions or predictions. XAI also allows users to identify the contribution of each datum or algorithm component to the output. That capability builds stakeholder trust in model outputs, generates mechanisms for accountability, facilitates regulatory compliance, and enables stakeholders to understand and address errors and potential bias. Tools such as the ONNX program can transform AI/ML models into graphics illustrating the paths that data follow through the nodes of a trained model, helping to elucidate the model’s “skeleton.”
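As one concrete way of quantifying each input’s contribution, the sketch below applies permutation importance, which scores a feature by how much shuffling its values degrades model accuracy. The process-variable names and simulated data are illustrative, and this is only one of many XAI techniques.

```python
# Minimal sketch of one explainability technique: permutation importance,
# which scores each input by how much shuffling it degrades accuracy.
# Feature names are illustrative, not from a real process data set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["pH", "temperature", "DO", "feed_rate", "agitation"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:12s} contribution score: {importance:.3f}")
```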
Transparency in digital activities refers to an application’s ability to display its components. AI provides intelligent coordination and condenses results from activities such as distributed processing, integration, and analytics. Through NLP and advanced data visualization, a model can hide “how the sausage is made” — e.g., details about data access and location harmonization — from application programmers and operators. The ONNX package is a good example of newly available tools for increasing model transparency.
Foundation Models: In traditional AI/ML-based applications, each new use case requires a new model. Now, more generalizable foundation models are being trained on massive stores of unlabeled data, enabling such models to be adapted for different applications. A foundation model thus greatly reduces the amount of data needed to train new models (2).
Transfer learning refers to a class of techniques in AI/ML that enable knowledge acquired from one task to be used for a related task. For example, information learned during development of an efficient fill–finish process for a protein drug could be applied when developing a fill–finish workflow for a vaccine. Transferring information from previously learned tasks to new activities can improve learning efficiency and risk reduction significantly.
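A minimal sketch of the mechanics follows: the feature-extraction layers of a previously trained network are frozen, and only a new output head is trained on the related task. The architecture is a stand-in, not a model of any real fill–finish process.

```python
# Sketch of transfer learning: freeze the feature-extraction layers of a
# previously trained network and retrain only a new output head for a
# related task. The architecture and dimensions are illustrative.
import torch.nn as nn

pretrained = nn.Sequential(       # stand-in for a model trained on task A
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
)
for param in pretrained.parameters():
    param.requires_grad = False   # keep learned representations fixed

new_head = nn.Linear(16, 1)       # only this layer is trained on task B
model = nn.Sequential(pretrained, new_head)

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"Trainable parameters: {sum(p.numel() for p in trainable)}")
```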
Algorithm Life Cycle
Monitoring, Observability, and Assessment: All programs require monitoring to ensure that they function and perform as intended. That AI/ML is a developing field has resulted in some ambiguity in user terminology, expectations, and implementations — as well as in model behavior and results. The dynamic nature of some AI/ML models, real-world examples of drift in their behavior, and user expectations all necessitate continued process monitoring (3). Such monitoring supports assessment of model alignment with intended goals and priorities, performance of the code itself during operation, model and data drift, input-data quality and outliers, and integrated services.
Modern monitoring involves alerts based on near–real-time data about system resource use (for central processing units (CPUs), memory, and storage), response times, and error rates. Best practices are being established for monitoring data and associated metadata, algorithms/models, and generated outputs and predictions (4). Monitoring begins during model training and continues through an application’s life cycle into the operation of subsequent versions in the field.
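A toy sketch of such resource monitoring appears below, using the widely available psutil library. The alert thresholds are illustrative, and a production system would publish these metrics to a dashboard or alerting service rather than print them.

```python
# Minimal sketch of resource monitoring with alerting. psutil is a widely
# used system-metrics library; the thresholds below are illustrative.
import psutil

CPU_LIMIT = 85.0      # percent
MEMORY_LIMIT = 90.0   # percent

cpu = psutil.cpu_percent(interval=1.0)        # sample CPU use over 1 s
memory = psutil.virtual_memory().percent      # current memory use

checks = [("CPU", cpu, CPU_LIMIT), ("memory", memory, MEMORY_LIMIT)]
for name, value, limit in checks:
    if value > limit:
        print(f"ALERT: {name} use {value:.1f}% exceeds {limit:.0f}% limit")
    else:
        print(f"{name} use {value:.1f}% is within limits")
```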
Users now assert many definitions of observability, leading to confusion. Monitoring focuses on evaluating data rather than algorithms, capturing information about system inputs and outputs rather than analyzing system activity. Observability extends beyond monitoring: It encompasses extraction of data from a system, particularly through analysis of data drift, to comprehend the system’s internal state and pinpoint the origins of issues.
Thus, observability activities provide deeper insights into system behavior, even revealing how different components interact. Such analyses are particularly valuable for distributed systems in which monitoring individual components may not give enough information to diagnose problems. Observability provides power in establishing traceability and identifying the root causes of problems because it supports understanding of data flow through a system.
The goal of observability assessment is to use monitoring tools to gauge an algorithm’s overall effectiveness, accuracy, efficiency, reliability, and ethical conformance. The activity provides a high-level analysis to ensure that an entire system meets its intended objectives, adheres to ethical standards, and operates securely. Assessment usually is subdivided into studies of such parameters as performance, bias, reliability, scalability, and compliance. Evaluating how well an AI/ML system performs its intended tasks involves measuring accuracy, precision, recall, and related parameters.
Monitoring, observability, and assessment all require establishment of easy-to-understand metrics that are useful across applications to support actionable algorithm adjustments. Advanced techniques for data explainability, transparency, and security are helping biopharmaceutical companies to meet both regulatory and in-house requirements for risk tolerance while ensuring output accuracy and model alignment with user expectations. Meanwhile, ambiguity in user terminology and expectations continues to diminish.
Qualification and Validation: AI/ML qualification is the process of verifying that a finalized algorithm performs as described, is effective for its intended application, and adheres to relevant criteria and regulatory standards. The process helps to ensure that an algorithm maintains consistent outputs, minimizing risk for errors or deviations.
In biopharmaceutical applications, validation applies to AI/ML algorithms in two ways (Figure 1). In a narrow sense, the term refers to optimization of activities that an algorithm performs upon data. That process involves selection of an algorithm architecture (e.g., random-forest models and NNs) and establishment of critical configurations that are external to the model (e.g., hyperparameters) (5).
Figure 1: Artificial intelligence/machine learning (AI/ML) algorithms undergo two forms of validation: one sense distinctive to the AI/ML field and another sense that occurs in a good-practice (GxP) context for work in the life sciences.
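To illustrate that narrow, AI/ML-specific sense of validation, the sketch below tunes two hyperparameters of a random-forest classifier by cross-validated grid search. The parameter grid, scoring metric, and simulated data are illustrative only.

```python
# Minimal sketch of the AI/ML-specific sense of validation: selecting an
# architecture's external configuration (hyperparameters) by
# cross-validated search. The grid and data are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

param_grid = {
    "n_estimators": [50, 100, 200],  # settings external to the model
    "max_depth": [3, 5, None],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                 # five-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)
print("Best configuration:", search.best_params_)
print(f"Cross-validated accuracy: {search.best_score_:.3f}")
```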
A second definition of validation involves a more common use of the term, referring to activities required in providing products for use in good-practice (GxP) fields such as the biopharmaceutical industry. Challenges with traditional computer-system validation begin with definition of critical functionalities, appropriate testing, and establishment of acceptance criteria. The growing size and complexity of systems are increasing the difficulty of those activities. Software as a service (SaaS), risk-based validation, computer-software assurance (CSA), and Industry 4.0 initiatives all are influencing changes in validation requirements and solutions.
Past problems with algorithm explainability and data security have slowed AI/ML acceptance and approvals in healthcare and biomanufacturing. In the pharmaceutical industry, validation of AI necessitates adherence to best practices in software development because AI algorithms are code based: Their behavior is governed by mathematical and statistical expressions encapsulated within algorithmic logic and ultimately expressed as structured program statements. That approach differs from traditional software development, which follows a typical validation cycle from software-requirements specification (SRS) to functional specification (FS), design specification (DS), installation qualification (IQ), operational qualification (OQ), and finally, performance qualification (PQ). The fundamental distinction lies in the critical role of data as pivotal elements influencing a model’s final outputs.
However, many of the advances that we described previously are easing validation by improving the volume, quality, availability, and objectivity of data and the transparency, interpretability, and generalizability of algorithms. Approaches to meeting evolving regulatory guidelines are appearing as collaborations advance among data scientists, healthcare professionals, regulatory bodies, and pharmaceutical sponsors (6).
In the biopharmaceutical context, AI/ML validation requires integration of data, algorithm, model insights, and continuous model assessment to ensure full traceability of and appropriate governance over all involved elements. Collaboration among subject-matter experts, data scientists, and software testers is crucial to establishing and maintaining system quality, reliability, safety, and alignment throughout a drug product’s life cycle.
AI Capabilities
Risks in the application of AI/ML must be considered alongside the technology’s potential value. AI/ML-empowered classifiers, natural language processors, computer vision, and DTs are supporting such functions as virtual analysis assistance, forecasting, decision recommendation, proactive and autonomous process actuation, enterprise-system modeling, and unsupervised system control. AI/ML provides advanced capabilities for solving nonlinear, discontinuous, and multivariate problems in large data sets with highly interactive groups of variables. The technology facilitates classification and regression, and it rapidly derives insights from massive amounts of information. Thus, it supports such functions as pattern recognition and identification of data clusters and anomalies. AI/ML also provides additional power for data curation and labeling as well as for structure management of multivariate and polytomous terms (7).
In the biopharmaceutical industry, AI/ML approaches are advancing both new-therapy development and drug repurposing. Despite the numerous factors involved, many physicochemical properties required to predict a biologic’s pharmacokinetics and pharmacodynamics (PK/PD) can be calculated in silico. Those values, along with models of quantitative structure–activity relationships and off-target activity, can be used to predict a drug’s biological effects and clinical outcomes (8).
“Reality-centric” approaches to AI/ML have emerged in response to the inherent and unavoidable complexity of real-world model designing, training, testing, and deployment. The reality-centric initiative proposes a practical, use-driven approach to developing AI/ML tools and models (9).
AI/ML Applications in the Biopharmaceutical Industry
AI/ML is transforming methods in nearly every element of biopharmaceutical commercialization, from drug discovery and development to postmarketing surveillance. Healthcare applications must fulfill ethical and regulatory requirements. For instance, the World Health Organization (WHO) recently published guidance about AI/ML design, development, and clinical implementation (10). However, such technologies are opening new opportunities for increasing manufacturing-process efficiency and quality and for emulating human capabilities in monitoring and control (11). Below are a few examples of note.
Drug Discovery: DeepMind’s AlphaFold model has shown remarkable results in predicting protein structures from amino-acid sequences. Those successes have created hope that NNs trained upon well-characterized protein sequences and structures could help to engineer novel proteins with understood functionality. The biological activity of immunogens, receptor traps, enzymes, metalloproteins, and protein-binding proteins often is mediated by a small number of functional residues, and those domains must be presented properly by a therapeutic protein’s overall structure.
We are familiar with generative-AI programs such as OpenAI’s DALL·E application, which can produce images from user-specified text prompts. Similarly, researchers are demonstrating generative AI’s ability to design diverse types of functional proteins from simple molecular specifications. For example, Watson et al. recently described their development of the RFdiffusion NN-based model to design a novel therapeutic-protein structure containing prespecified functional sites in an effective orientation (12). The model was released as an open-source program in March 2023 (13).
In 2022, Wang et al. similarly reported on two deep-learning–based methods for scaffolding protein functional sites (14). Using the first method, they discovered amino-acid sequences that were predicted to fold into stable structures containing a needed functional site. With the second approach, they leveraged a structure-prediction network to recover the sequence and full structure of a protein given only a desired functional site.
Drug Development: Mechanistic models now provide insight into protein transcription and translation processes, coding-sequence positional effects, protein secondary structures, and other biophysical parameters. Such tools are proving to be powerful in optimizing coding sequences and signal peptides, increasing the efficiency of product development, and supporting process intensification (15).
Clinical Trials: Pharmaceutical R&D represents a significant investment of time and resources. Drug development accounts for a major share of overall R&D expenses, primarily because of the extensiveness of clinical trials conducted before regulatory approval. To enhance trial processes, the US Food and Drug Administration (FDA) introduced its Critical Path Initiative (CPI) in 2004. The CPI seeks to integrate scientific breakthroughs (e.g., in genomics) and advanced technologies. In tandem with health authorities, drug sponsors have begun adopting novel approaches such as electronic data capture (EDC) and advanced analytics tools for predictive modeling and data visualization.
The increasing pace of technological advancement provides a fertile ground for pharmaceutical enterprises to capitalize on insights derived from vast data sets, improving decision-making and leading to further advances in drug development. AI/ML now ranks high among such opportunities. Remarkably, according to a recent report, the FDA now encourages companies to perform in silico “clinical trials” based on computational modeling and simulation (CM&S). Such simulations are expected to augment and perhaps eventually replace classical clinical studies (16).
Manufacturing Operations: AI/ML support for Industry 4.0 in biomanufacturing has grown steadily for years. Consider AI’s recent application to soft sensors for real-time management of fermentation processes (17). Yet the number of applications is increasing rapidly because of newly developed foundation models and recent implementation of DTs.
Difficulties with using bioprocess data stem from their high dimensionality and complexity, which arise from multiple interacting variables. Advances in process monitoring and analytics have increased capabilities for processing high volumes of diverse data, supporting demand for improved process control. Currently, multivariate analysis and advanced modeling techniques are the only ways to leverage available types and amounts of product and process data. ML applications and AI-enabled DTs improve researchers’ and engineers’ ability to analyze, model, and control biomanufacturing processes based on data-driven approaches. In turn, users can address difficulties related to continuous variability associated with biopharmaceutical operations (18).
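As a small illustration of handling that dimensionality, the sketch below applies principal component analysis (PCA), one common multivariate technique, to compress correlated process variables into a few latent dimensions. The simulated data stand in for real batch records.

```python
# Sketch of one common multivariate approach: principal component analysis
# (PCA) compresses correlated bioprocess variables into a few latent
# dimensions. The simulated data stand in for real batch records.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=3)
base = rng.normal(size=(100, 1))                  # shared underlying factor
process_data = np.hstack([
    base + rng.normal(scale=0.1, size=(100, 1)),  # e.g., pH
    base + rng.normal(scale=0.1, size=(100, 1)),  # e.g., dissolved oxygen
    rng.normal(size=(100, 1)),                    # independent variable
])

scaled = StandardScaler().fit_transform(process_data)
pca = PCA(n_components=2).fit(scaled)
print("Variance explained by two components:",
      np.round(pca.explained_variance_ratio_, 3))
```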
Advanced Therapies: Historically, biopharmaceutical manufacturing mainly involved single, standardized batches of protein biologics that were later vialed in high-throughput automated equipment. Therapies based on autologous cells and virally vectored nucleic acids necessitate increasingly “personalized” production processes. For example, gene-modified cell therapies entail the same general tasks as classical biologics do but require real-time adjustment based upon the performance of materials and equipment in recent procedures, assayed features of patient starting material, and patient-specific factors that emerge during manufacturing. AI/ML is an excellent candidate to supply rapid, unbiased, multiparametric instructions for next steps in manufacturing or for active control of it.
Personalized Medicine: ML-based translational pharmacogenomic tools are being used to advance personalized medicine. Pharmacogenomics reveals how an individual’s genome will influence a therapy’s efficacy and safety.
Results from such assessments can help clinicians to develop treatment regimens that maximize clinical benefits and minimize risks. Recent advancements in genome sequencing and multiomic approaches even can reveal specific information, such as heterogeneity between a patient’s phenotype and harvested cells. But the enormous volume of molecular data generated by diagnostics requires suitable analysis and advanced algorithm development.
PK/PD modeling and simulation can facilitate both therapy development and point-of-care disease diagnosis and prognostication. By addressing multiple dynamic variables comprehensively, AI/ML-based systems can provide timely advice to operators and clinicians in designing and managing therapeutic interventions, ultimately helping to improve patient outcomes (19).
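For illustration, the sketch below simulates the simplest mechanistic PK case: a one-compartment model with first-order elimination, dC/dt = -kC, after an intravenous bolus dose. All parameter values are hypothetical, not drawn from any real therapeutic.

```python
# Minimal sketch of mechanistic PK simulation: a one-compartment model
# with first-order elimination, dC/dt = -k * C, after an IV bolus dose.
# Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

dose = 100.0   # mg, IV bolus
volume = 5.0   # L, apparent volume of distribution
k_elim = 0.3   # 1/h, first-order elimination rate constant

def pk_model(t, c):
    return -k_elim * c            # first-order elimination

c0 = [dose / volume]              # initial plasma concentration (mg/L)
times = np.linspace(0, 12, 7)     # hours
solution = solve_ivp(pk_model, (0, 12), c0, t_eval=times)

for t, c in zip(solution.t, solution.y[0]):
    print(f"t = {t:4.1f} h  concentration = {c:5.2f} mg/L")
```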
Measurable, Manageable Risks
A number of challenges have arisen in implementing narrow AI/ML applications in medicine. Concerns also have grown regarding the wider adoption of generative AI/ML in society. Some worries stem from a failure to distinguish the discrete, nuanced risks of individual, even static, AI/ML-supported activities from the anthropomorphisms and unrelated risks that we have projected onto narrow-ML algorithms.
There is a difference between intelligence and consciousness, and most AI/ML algorithms present no risk of becoming sentient. When an AI/ML program beats a grandmaster at chess, it is not happy about doing so. We now recognize that risks associated with focused AI/ML programs applied along the biopharmaceutical life cycle are limited, defined, and measurable — and those concerns are being addressed systematically.
True risks of current AI/ML applications must be evaluated while
• considering technological advances that could diminish or control such risks
• understanding that AI/ML is relatively new to the biopharmaceutical industry and will improve in time
• acknowledging AI’s demonstrated ability to facilitate activities along the entire drug-product life cycle.
We can only imagine what power further advances such as quantum reinforcement learning could provide to the biopharmaceutical industry. By embracing AI within the pharmaceutical landscape, we pave the way for groundbreaking advancements that could improve patient outcomes significantly, underscoring the importance of continuous learning and exploration in this transformative field. Now is not the time to halt progress; rather, we must push forward with innovation and embrace the possibilities that AI offers for the benefit of patients and society in general.
References
1 Pause Giant AI Experiments: An Open Letter. Future of Life Institute: Campbell, CA, 22 March 2023; https://futureoflife.org/open-letter/pause-giant-ai-experiments.
2 Schneider J. Foundation Models in Brief: A Historical, Socio-Technical Focus. arXiv 17 December 2022; https://doi.org/10.48550/arXiv.2212.08967.
3 Oladele S. A Comprehensive Guide on How To Monitor Your Models in Production. Neptune Labs: Palo Alto, CA, 8 September 2023; https://neptune.ai/blog/how-to-monitor-your-models-in-production-guide.
4 Islam A. Best Practices for Machine Learning Model Monitoring. MarkTechPost 22 January 2023; https://www.marktechpost.com/2023/01/22/best-practices-for-machine-learning-model-monitoring.
5 Manzano T, et al. Getting Closer to AI Adoption in the Pharmaceutical Industry. PDA Lett. 8 December 2020; https://www.pda.org/pda-letter-portal/home/full-article/getting-closer-to-ai-adoption-in-the-pharmaceutical-industry.
6 AI Summit 2023. Association of Food and Drug Officials/Regulatory Affairs Professionals Society, Healthcare Products Collaborative: Cincinnati, OH, 14–16 November 2023; https://www.raps.org/events/artificial-intelligence-summit-11-2023.
7 Whitford WG, Manzano T. Growing Value of Artificial Intelligence in Biopharmaceutical Operations. BioProcess Int. 20(5) 2022: 28–30, 37; https://bioprocessintl.com/2022/may-2022/growing-value-of-artificial-intelligence-in-biopharmaceutical-operations.
8 Mohan R, et al. AI/ML Models To Predict the Severity of Drug-Induced Liver Injury for Small Molecules. Chem. Res. Toxicol. 36(7) 2023: 1129–1139; https://doi.org/10.1021/acs.chemrestox.3c00098.
9 van der Schaar M, Rashbass A. The Case for Reality-Centric AI. van der Schaar Lab: Cambridge, UK, 17 February 2023; https://www.vanderschaar-lab.com/the-case-for-reality-centric-ai.
10 WHO Calls for Safe and Ethical AI for Health. World Health Organization: Geneva, Switzerland, 16 May 2023; https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health.
11 Whitford W, Manzano T. AI Applications for Multivariate Control in Drug Manufacturing. A Handbook of Artificial Intelligence in Drug Delivery. Philip AK, et al., Eds. Academic Press: Cambridge, MA, 2023: 55–82.
12 Watson JL, et al. De Novo Design of Protein Structure and Function with RFdiffusion. Nature 620, 2023: 1089–1100; https://doi.org/10.1038/s41586-023-06415-8.
13 RFdiffusion Now Free and Open Source. University of Washington, Institute for Protein Design: Seattle, WA, 30 March 2023; https://www.ipd.uw.edu/2023/03/rf-diffusion-now-free-and-open-source.
14 Wang J, et al. Scaffolding Protein Functional Sites Using Deep Learning. Science 377(6604) 2022: 387–394; https://doi.org/10.1126/science.abn2100.
15 Chung H, et al. Quantitative Synthetic Biology for Biologics Production. BioProcess Int. 21(4si) 2023: 17–20; https://www.bioprocessintl.com/sponsored-content/quantitative-synthetic-biology-for-biologics-production.
16 Credibility of Computational Models Program: Research on Computational Models and Simulation Associated with Medical Devices. US Food and Drug Administration: Silver Spring, MD, 27 January 2022; https://www.fda.gov/medical-devices/medical-device-regulatory-science-research-programs-conducted-osel/credibility-computational-models-program-research-computational-models-and-simulation-associated.
17 Ondracka A, et al. CPV of the Future: AI-Powered Continuous Process Verification for Bioreactor Processes. PDA J. Pharm. Sci. Technol. 77(3) 2023: 146–165; https://doi.org/10.5731/pdajpst.2021.012665.
18 Manzano T, Whitford W. AI-Enabled Digital Twins in Biopharmaceutical Manufacturing. BioProcess Int. 21(7–8) 2023: 22–27.
19 Mystridis GA, et al. Artificial Intelligence/Machine Learning and Mechanistic Modeling Approaches as Translational Tools To Advance Personalized Medicine Decisions. Adv. Molec. Path. 5(1) 2022: 131–139; https://doi.org/10.1016/j.yamp.2022.06.003.
Toni Manzano is cofounder and chief scientific officer of Aizon, C/ de Còrsega, 301, 08008 Barcelona, Spain; 34-937-68-46-78; [email protected]; https://www.aizon.ai.
Longtime BPI editorial advisor William Whitford is life sciences strategic solutions leader at Arcadis; [email protected]; https://www.arcadis.com.