Innovative tools and advanced automation are changing the product landscape for life sciences, pharmaceutical, and biotechnology organizations.

We have all heard of breakthroughs in the ability to bring the power of artificial intelligence (AI) and the scale of big data and cloud computing to the life sciences, pharmaceutical, and biotechnology industries. These advancements range from disease identification and advanced image analysis to robot-assisted surgery, human genome mapping, and personalized medicine. The same technologies can also be applied to optimize and automate simple or complex business processes such as enrollments or claims processing. Machine learning (ML) is an integral part of the product development lifecycle, spanning from back office functions to business processes and operations.

Recommendation Engine

Nearly everyone has encountered a predictive model. If you have had a product suggested while shopping on Amazon or a movie recommended while looking for something to watch on Netflix, an algorithm is running behind the scenes using all the data necessary to predict the product or service that is likely to be best for you. These algorithms are designed to build models with the purpose of uncovering connections or patterns that can be used to make better decisions without human intervention. Although machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data is a more recent development.
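As a minimal sketch of the idea, consider item-based collaborative filtering: items whose rating patterns across users are similar get recommended together. All of the users, items, and ratings below are hypothetical.

```python
import math

# Hypothetical user -> {item: rating} data.
ratings = {
    "alice": {"item_a": 5, "item_b": 4, "item_c": 1},
    "bob":   {"item_a": 4, "item_b": 5, "item_c": 2},
    "carol": {"item_a": 1, "item_b": 2, "item_c": 5},
}

def item_vector(item):
    """Ratings for one item across all users (0 if unrated)."""
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(item, candidates):
    """Recommend the candidate whose rating pattern best matches `item`."""
    return max(candidates, key=lambda c: cosine(item_vector(item), item_vector(c)))

# item_b is rated like item_a by the same users, so it gets recommended.
print(most_similar("item_a", ["item_b", "item_c"]))  # item_b
```

Production recommenders work on millions of users and items with far richer models, but the core pattern-matching step is the same: find what behaves similarly in the data.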

Now, we can apply this ‘recommendation engine’ concept to the pharmaceutical and biotechnology industries. Every year in the United States “…7,000 to 9,000 people die as a result of a medication error,” according to the National Center for Biotechnology Information. “Additionally, hundreds of thousands of other patients experience but often do not report an adverse reaction or other complications related to a medication. The total cost of looking after patients with medication-associated errors exceeds $40 billion each year, with over 7 million patients affected.”

Implementing a recommendation engine at each stage of the patient care process could significantly reduce or eliminate these errors. For example, at the time of prescribing, a recommendation engine could apply a library of algorithms to make the best possible data-driven decision on not only the correct medication, but also the dosage and frequency, all customized to the individual patient. This technology could also serve as another point of validation at the time of drug dispensing, as well as to monitor and validate patient usage.

Computer Vision

Can a computer or software application actually see like a human? The answer is sort of, but the way a computer learns to see is very different from the way a person does; it requires machine learning. A computer acquires an image and translates it into a numeric representation that it can understand, but that alone tells it nothing. It must take that representation and compare it to other representations until it understands the meaning of the newly acquired image. This can be a very simple pixel-by-pixel comparison, or a complex algorithm such as a neural network that aligns this process as closely with human thinking as possible. This capability is often used to identify and classify images at a scale beyond what any person could accomplish.
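The simple pixel-by-pixel comparison mentioned above can be sketched in a few lines: store labeled reference images as lists of pixel intensities, then label an unknown image by its closest match. The tiny 3x3 "images" here are, of course, contrived.

```python
# Hypothetical labeled reference images, flattened to pixel intensity lists.
labeled = {
    "cross":  [0, 1, 0,
               1, 1, 1,
               0, 1, 0],
    "square": [1, 1, 1,
               1, 0, 1,
               1, 1, 1],
}

def distance(a, b):
    """Total pixel-by-pixel difference between two images."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(image):
    """Label the image by its closest known representation."""
    return min(labeled, key=lambda name: distance(image, labeled[name]))

noisy_cross = [0, 1, 0,
               1, 1, 0,   # one corrupted pixel
               0, 1, 0]
print(classify(noisy_cross))  # cross
```

Neural networks replace the raw pixel distance with learned features, which is what lets them tolerate lighting, rotation, and scale changes that would defeat this naive comparison.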

Applying this capability to medical devices can provide tremendous possibilities. If a device can ‘see’ inside the human body, it can also identify an anomaly like a tumor, and even classify that tumor as malignant or benign based on its understanding of previous images. Devices can also ‘see’ video in addition to images. Like the image identification and classification process, the same can be done for motion within the body such as the rate of blood flow to critical organs, the flow of fluid through the lymphatic system, or even biochemical responses from the brain to the rest of the body. This new sense of ‘digital sight’ is powered by big data and machine learning, and will drive incredible innovation within the life sciences industry.

Machine learning capabilities can predict medical device failure, and make proactive recommendations to remedy errors.

Digital Twin

A digital twin is a virtual (or digital) representation of something from the physical world, such as a person, product, or process. In addition to being a digital replica or model, the digital twin can also ingest data from its physical counterpart. This information transfer is often enabled by Internet of Things (IoT) technology embedded within the products the digital twins are modeled after. Adoption of this technology is growing rapidly: the global digital twin market size was valued at $3.1 billion in 2020 and is projected to reach $48.2 billion by 2026. The power of the digital twin lies in its ability to run product simulations to test changes, optimize performance, and reduce the chance of failure or degradation.

The potential value of the digital twin for medical devices is immeasurable. For example, approximately 200,000 cardiac pacemakers are implanted in patients in the U.S. annually and over one million worldwide. While complete device failure is rare, numerous issues may occur, such as lead failure, generator problems, or other malfunctions. Imagine that both pacemaker and patient have a digital twin, not only constantly monitoring device performance, but using that data to run numerous simulations to determine the most likely outcome over the course of time. This machine learning capability could predict the type of failure, the probability of occurrence, and even make proactive recommendations to remedy the situation as needed. In this way, digital twins can further optimize the performance of devices like pacemakers and drive down failure rates.
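A minimal sketch of this simulation idea, with entirely hypothetical numbers: a twin object ingests battery telemetry from its physical counterpart, then runs Monte Carlo drain simulations to estimate depletion risk over a service window.

```python
import random

random.seed(7)  # reproducible simulation runs

class PacemakerTwin:
    """Digital twin of a pacemaker: mirrors device state, runs simulations."""

    def __init__(self, battery_pct):
        self.battery_pct = battery_pct  # latest reading from the real device

    def ingest(self, battery_pct):
        """Update the twin from device telemetry (e.g., an IoT uplink)."""
        self.battery_pct = battery_pct

    def depletion_risk(self, months, monthly_drain=2.2, noise=0.6,
                       floor=10.0, trials=10_000):
        """Estimated probability the battery falls below `floor` within `months`."""
        failures = 0
        for _ in range(trials):
            level = self.battery_pct
            for _ in range(months):
                level -= random.gauss(monthly_drain, noise)  # noisy monthly drain
            if level < floor:
                failures += 1
        return failures / trials

twin = PacemakerTwin(battery_pct=40.0)
twin.ingest(38.0)                       # fresh telemetry arrives
risk = twin.depletion_risk(months=12)
print(f"12-month depletion risk: {risk:.1%}")
```

A real twin would model many failure modes at once (leads, generator, battery) and feed its risk estimates back to clinicians, but the pattern of ingest-then-simulate is the same.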

Self-Healing (Preventative Maintenance)

We have all owned devices and applications that fail or slow down over time. You have probably had a laptop that needed maintenance, upgrades, or patches in order to stay current, close vulnerabilities, or run the latest applications. Often, this maintenance is performed when the patch or upgrade is released and you consent to the update.

This broad approach to device management has no specific bearing on the performance of your particular laptop or the unique problems you may encounter. However, machine learning algorithms running on your laptop can constantly monitor applications, the operating system, and hardware, looking for patterns that signal performance problems or potential failure. When such a pattern is discovered, automatic updates can be performed, configuration settings can be changed, or recommendations can be made to replace components, reducing or eliminating these issues. This self-healing concept means that a product can perceive that it is (or soon will be) operating incorrectly and take preventative action accordingly.
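A minimal sketch of this monitor-detect-remediate loop, using a simple statistical baseline in place of the far richer models real systems would employ; the telemetry values are hypothetical.

```python
import statistics

def is_anomalous(history, reading, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(reading - mean) > threshold * stdev

def monitor(history, reading, remediate):
    """Check a new reading; trigger the remediation hook if it is anomalous."""
    if is_anomalous(history, reading):
        remediate(reading)   # e.g., throttle a process, roll back a patch
        return "remediated"
    return "ok"

cpu_temps = [61.0, 62.5, 60.8, 61.7, 62.1, 61.4]   # recent baseline readings
actions = []
print(monitor(cpu_temps, 61.9, actions.append))  # normal reading -> ok
print(monitor(cpu_temps, 79.0, actions.append))  # spike -> remediated
```

The design choice here is to separate detection from remediation: the same monitor can trigger anything from a configuration change to a service ticket, depending on what the hook does.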

Life sciences companies and academic institutions are researching incredible self-healing materials, like rubber that can repair its own cracks. But consider first the digital aspect of medical devices. Just like the laptop example, each layer – application, operating system, and physical components – generates data for monitoring. There is a relatively new discipline combining artificial intelligence and IT operations, or AIOps for short. The term was coined by Gartner in 2016 and is defined as the combination of big data and machine learning to automate IT operations processes, including event correlation, anomaly detection, and causality determination.

AIOps allows us to use the telemetry data from our medical devices in machine learning models to not only make predictions and recommendations, but also to initiate automated processes to make the necessary changes to ‘heal’ these devices. This application of machine learning can become a vital part of meeting regulatory requirements such as Corrective Action and Preventive Action (CAPA) that help ensure safety, produce better outcomes, and improve the customer experience.

Natural Language Processing and Voice Recognition

Natural language processing (NLP) is a field dedicated to teaching computers how to understand our written language. Voice recognition is the ability of a computer or application to ‘hear’ and translate human verbal communication into a format it can understand, and in many cases is simply the translation of voice to text. The combination of voice recognition and natural language processing gives the computer the capability to communicate with a person by receiving, understanding, and even acting on a verbal message. This technology has proliferated with smart speakers and personal digital assistants such as Alexa and Siri. So how can this technology be applied in the life sciences field?
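A toy sketch of the receive-understand-act pipeline described above: assuming a voice recognition layer has already turned speech into text, the simplest possible natural language step maps keywords in the utterance to an action. The intents and actions below are hypothetical.

```python
# Hypothetical keyword -> action mapping for a patient-facing assistant.
INTENTS = {
    "refill": "order_prescription_refill",
    "dose":   "read_dosage_schedule",
    "help":   "connect_to_nurse_line",
}

def understand(transcript):
    """Map a transcribed utterance to an action, or None if unrecognized."""
    words = transcript.lower().split()
    for keyword, action in INTENTS.items():
        if keyword in words:
            return action
    return None

print(understand("I need a refill for my medication"))  # order_prescription_refill
print(understand("What's the weather"))                 # None
```

Real assistants replace keyword lookup with statistical intent classifiers and entity extraction, but the pipeline shape (transcribe, interpret, act) is the same one Alexa and Siri follow.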

There are many potential applications for natural language processing and voice recognition technology, such as transcribing a doctor’s notes during a patient encounter or monitoring patients at home with simple voice interactions. However, diagnosing illness through voice analysis is one of the most compelling possibilities. According to a recent study, “biomarkers derived from human voice can offer insight into neurological disorders, such as Parkinson's disease, because of their underlying cognitive and neuromuscular function.” The same study showed that “peak accuracy of 85% provided by the machine learning models exceed the average clinical diagnosis accuracy of non-experts (73.8%) and average accuracy of movement disorder specialists (79.6% without follow-up, 83.9% after follow-up).”
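As a heavily simplified sketch of the idea (not the cited study's actual models), imagine each voice sample reduced to a small feature vector, e.g. (jitter, shimmer), and an unknown voice labeled by its nearest class centroid. All feature values here are invented for illustration.

```python
import math

# Hypothetical voice biomarker samples: (jitter, shimmer) per recording.
samples = {
    "control":    [(0.4, 2.1), (0.5, 2.3), (0.3, 1.9)],
    "parkinsons": [(1.2, 4.0), (1.4, 4.4), (1.1, 3.8)],
}

def centroid(points):
    """Mean feature vector of a class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(features):
    """Label by Euclidean distance to each class centroid."""
    return min(
        samples,
        key=lambda label: math.dist(features, centroid(samples[label])),
    )

print(classify((1.3, 4.1)))  # closer to the parkinsons centroid
```

The study's models would use many more acoustic features and proper training and validation; the point of the sketch is only that a voice, once reduced to numbers, becomes a classification problem.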

This increase in the diagnostic accuracy of Parkinson’s disease offers a glimpse of the future of this technology, which will only improve over time, potentially widening the gap between machine learning and human predictions. Voice biomarker technology has been used to detect everything from obvious illnesses and injuries to conditions that cannot be distinguished by the human ear, such as heart disease and even COVID-19.

Conclusion

We are in the early stages of an artificial intelligence revolution across every industry. In this article, we have explored various types of artificial intelligence and machine learning and discussed how they may be utilized in the life sciences, pharmaceutical, and biotechnology industries. Implementing innovative technologies like artificial intelligence, cloud computing, big data, and automation in these areas can improve functionality and drive the development of more effective products.