Discussion and practical use of artificial intelligence (AI) in the laboratory is, perhaps to the surprise of some, not a recent phenomenon. In the mid-1980s, researchers were developing computerized AI systems able "to develop automatic decision rules for follow-up analysis of [clinical laboratory] tests depending on prior information, thus avoiding the delays of traditional sequential testing and the costs of unnecessary parallel testing."[1] In fact, discussion of AI in general was ongoing even in the mid-1950s.[2][3]
Hiring demand for laboratorians with AI experience (2015–18) has historically been higher in non-healthcare industries such as manufacturing, mining, and agriculture, shedding light on how AI adoption in the clinical setting may be lagging. According to the Brookings Institution, "Even for the relatively-skilled job postings in hospitals, which includes doctors, nurses, medical technicians, research lab workers, and managers, only approximately 1 in 1,250 job postings required AI skills." They add: "AI adoption may be slow because it is not yet useful, or because it may not end up being as useful as we hope. While our view is that AI has great potential in health care, it is still an open question."[4]
Today, AI is being put to practical use not only in clinical diagnostic laboratories but also in clinical research, life science, and research and development (R&D) labs, among others. Practical uses of AI can be found in:
Materials science: The creation of "a modular robotic platform driven by a model-based optimization algorithm capable of autonomously optimizing the optical and electronic properties of thin-film materials by modifying the film composition and processing conditions ..."[19]
Materials science: "Most of the applications of [machine learning (ML)] in chemical and materials sciences, as we have said, feature supervised learning algorithms. The goal there is to supplement or replace traditional modeling methods, at the quantum chemical or classical level, in order to predict the properties of molecules or materials directly from their structure or their chemical composition ... Our research group was applying the same idea on a narrower range of materials, trying to confirm that for a given chemical composition, geometrical descriptors of a material’s structure could lead to accurate predictions of its mechanical features."[20]
Life science: "In biological experiments, we generally cannot as easily declare victory, but we can use the systems biology approach of cycling between experimentation and modelling to see which sequences, when tested, are most likely to improve the model. In artificial intelligence, this is called active learning, and it has some similarity to the way in which we as humans learn as infants: we get some help from parents and teachers, but mainly model the world around us by exploring it and interacting with it. Ideally then, we would recreate such an environment for our machine learning algorithms in the laboratory, where we start with an initial ‘infant’ model of a certain regulatory system or protein function and let the computer decide what sequence designs to try out – a deep learning version of the ‘robot scientist’. Microbes are ideal organisms for such an approach, given the ease and speed with which they can be grown and genetically manipulated. Combined with laboratory automation, many microbial experiments can (soon) be performed with minimal human intervention, ranging from strain construction and screening, such as operated by Amyris, Gingko, Transcriptic, etc., to full-genome engineering or even the design of microbial ecologies."[12]
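The active-learning cycle described above can be sketched in miniature. Everything in this sketch is invented for illustration: the "experiment" is a stand-in function, and the toy model scores its uncertainty simply by distance to the nearest already-tested design, where a real system would use a trained sequence model.

```python
# Toy "ground truth": activity of a candidate sequence design, unknown
# to the model and only revealed by running an experiment.
def run_experiment(x):
    return 1 if (x * 7) % 10 > 4 else 0

# Toy model: remembers tested designs; its uncertainty about a new design
# grows with distance to the nearest labeled example (uncertainty sampling).
class ToyModel:
    def __init__(self):
        self.labeled = {}

    def uncertainty(self, x):
        if not self.labeled:
            return 1.0
        nearest = min(abs(x - seen) for seen in self.labeled)
        return min(1.0, nearest / 10.0)

    def update(self, x, y):
        self.labeled[x] = y

candidates = list(range(100))
model = ToyModel()

# Active-learning loop: let the model pick the design it is least sure
# about, run that experiment, and fold the result back into the model.
for _ in range(10):
    x = max(candidates, key=model.uncertainty)
    model.update(x, run_experiment(x))

print(len(model.labeled))  # 10 distinct designs were tested
```

Note how the loop spreads its ten experiments across the design space rather than re-testing near known points, which is the behavior the "infant model" analogy describes.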
Digital pathology: "The collaboration combines two AI solutions, VistaPath’s Sentinel, the world’s first automated tissue grossing platform, and Gestalt’s AI Requisition Engine (AIRE), a leading-edge AI algorithm for accessioning, to raise the bar in AI-driven pathology digitization. Designed to make tissue grossing faster and more accurate, VistaPath’s Sentinel uses a high-quality video system to assess specimens and create a gross report 93% faster than human technicians with 43% more accuracy. It not only improves on quality by continuously monitoring the cassette, container, and tissue to reduce mislabeling and specimen mix-up, but also increases traceability by retaining original images for downstream review."[25]
Chemistry and molecular science: "The benefits of combining automated experimentation with a layer of artificial intelligence (AI) have been demonstrated for flow reactors, photovoltaic films, organic synthesis, perovskites and in formulation problems. However, so far no approaches have integrated mobile robotics with AI for chemical experiments. Here, we built Bayesian optimization into a mobile robotic workflow to conduct photocatalysis experiments within a ten-dimensional space."[22]
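The Bayesian-optimization workflow quoted above can be illustrated with a toy one-dimensional loop. The objective, surrogate, and acquisition rule below are simplified stand-ins (a distance-based uncertainty estimate instead of a Gaussian process, one parameter instead of ten), not the published method.

```python
def photocatalysis_yield(x):
    # Hidden objective with a peak near x = 0.7 (hypothetical).
    return -(x - 0.7) ** 2 + 1.0

def surrogate_mean(x, observed):
    # Predict the yield of the nearest already-observed point.
    nearest = min(observed, key=lambda p: abs(p[0] - x))
    return nearest[1]

def surrogate_std(x, observed):
    # Uncertainty grows with distance from observed points.
    return min(abs(p[0] - x) for p in observed)

def acquisition(x, observed, kappa=2.0):
    # Upper confidence bound: trade off exploitation vs. exploration.
    return surrogate_mean(x, observed) + kappa * surrogate_std(x, observed)

grid = [i / 100 for i in range(101)]
observed = [(0.0, photocatalysis_yield(0.0))]

# Each iteration "runs" the experiment the acquisition function rates best.
for _ in range(20):
    x_next = max(grid, key=lambda x: acquisition(x, observed))
    observed.append((x_next, photocatalysis_yield(x_next)))

best_x, best_y = max(observed, key=lambda p: p[1])
print(best_x, round(best_y, 3))
```

Twenty simulated experiments are enough for this loop to home in on the hidden optimum; the mobile-robot version applies the same propose-measure-update cycle to physical reagents.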
Chemistry and immunology: "Chemistry and immunology laboratories are particularly well-suited to leverage machine learning because they generate large, highly structured data sets, Schulz and others wrote in a separate review paper. Labor-intensive processes used for interpretation and quality control of electrophoresis traces and mass spectra could benefit from automation as the technology improves, they said. Clinical chemistry laboratories also generate digital images—such as urine sediment analysis—that may be highly conducive to semiautomated analyses, given advances in computer vision, the paper noted."[26]
Clinical research: "... retrospective analysis of existing patient data for descriptive and clustering purposes [and] automation of knowledge extraction, ranging from text mining, patient selection for trials, to generation of new research hypotheses ..."[5]
Clinical research: "AI ... offers a further layer to the laboratory system by analyzing all experimental data collected by experiment devices, whether it be a sensor or a collaborative robot. From data collected, AI is able to produce hypotheses and predict which combination of materials or temperature is desired for the experiment. In short, this system will allow scientists to be aided by a highly intelligent system which is constantly monitoring and analyzing the experimental output. In this way, AI will help an experiment from its inception to conclusion."[27]
Clinical research/medical diagnostics: "Artificial intelligence (AI) in the laboratory is primarily used to make sense of big data, the almost impossibly large sets of data that biologists and pharmaceutical R&D teams are accustomed to working with. AI algorithms can parse large amounts of data in a short amount of time and turn that data into visualizations that viewers can easily understand. In certain data-intensive fields, such as genomic testing and virus research, AI algorithms are the best way to sort through the data and do some of the pattern recognition work."[28]
Medical diagnostics: "Finally, in the laboratory, AI reduces the number of unnecessary blood samples when diagnosing infection. Instead of the 'gold standard blood sample' that takes 24-72 hours, the algorithm can predict the outcome of the blood sample with almost 80% accuracy based on demographics, vital signs, medications, and laboratory and radiology results. These are all examples of how Artificial Intelligence can be used to test better and faster with information that already exists. This saves time and costs."[10]
Medical diagnostics: "Chang sees two overarching classes of AI models: those that tackle internal challenges in the lab, such as how to deliver more accurate results to clinicians; and those that seek to identify cohorts of patients and care processes to close quality gaps in health delivery systems. The lab, however, 'isn’t truly an island,' said Michelle Stoffel, MD, PhD, associate chief medical information officer for laboratory medicine and pathology at M Health Fairview and the University of Minnesota in Minneapolis. 'When other healthcare professionals are working with electronic health records or other applications, there could be AI-driven tools, or algorithms used by an institution’s systems that may draw on laboratory data.'"[26]
Medical diagnostics: AI is used for the formulation of reference ranges, improvement of quality control, and automated interpretation of results. "Continuous monitoring of specimen acceptability, collection and transport can result in the prompt identification and correction of problems, leading to improved patient care and a reduction in unnecessary redraws and delays in reporting results."[8]
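One of these tasks, formulating reference ranges, can be sketched directly. The function below computes a nonparametric reference interval as the central 95% of results from a healthy reference population, following the common 2.5th–97.5th percentile convention; the potassium values are fabricated for illustration.

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    ordered = sorted(values)
    n = len(ordered)

    def percentile(p):
        # Linear interpolation between closest ranks.
        rank = (p / 100) * (n - 1)
        low = int(rank)
        frac = rank - low
        if low + 1 < n:
            return ordered[low] * (1 - frac) + ordered[low + 1] * frac
        return ordered[low]

    return percentile(lower_pct), percentile(upper_pct)

# Fabricated potassium results (mmol/L) for a reference population.
results = [3.5 + 0.01 * i for i in range(101)]  # 3.5 .. 4.5
low, high = reference_interval(results)
print(round(low, 3), round(high, 3))  # prints: 3.525 4.475
```

An AI-assisted version of this task would go further, for example partitioning the population by age and sex automatically, but the interval arithmetic underneath is this simple.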
Reproduction science: "The field of AI is the marriage of humans and computers while reproductive medicine combines clinical medicine and the scientific laboratory of embryology. The application of AI has the potential to disconnect healthcare professionals from patients through algorithms, automated communication, and clinical imaging. However, in the embryology laboratory, AI, with its focus on gametes and embryos, can avoid the same risk of distancing from the patient. Areas of application of AI in the laboratory would be to enhance and automate embryo ranking through analysis of images, the ultimate goal being to predict successful implantation. Might such a trend obviate the need for embryo morphological assessment, time-lapse imaging and preimplantation genetic testing for aneuploidy (PGT-A), including mosaicism. Additionally, AI could assist with automation through analysis of testicular sperm samples searching for viable gametes, embryo grading uniformity."[15]
Chromatography-heavy sciences: "A great example of this is AI in the Liquid Chromatography Mass Spectrometry (LC-MS) field. LC-MS is a great tool used to measure various compounds in the human body, including everything from hormone levels to trace metals. One of the ways AI has already integrated with LC-MS is how it cuts down on the rate limiting steps of LC-MS, which more often than not are sample prep and LC separations. One system that Physicians Lab has made use of is parallel processing using SCIEX MPX 2.0 High Throughput System. This system can couple parallel runs with one LCMS instrument, resulting in twice the speed with no loss to accuracy. It can do this by staggering two runs either using the same method, or different methods entirely. What really makes this system great is its ability to automatically detect carryover and inject solvent blanks to clean the instrument. The system will then continue its analyzing, while automatically reinjecting samples that may be affected by the carryover. It will also flag high concentration without user input, allowing for easy detection of possibly faulty samples. This allows it to operate without users from startup to shut down. Some of the other ways that it can be used to increase efficiency are by using integrated network features to work on anything from streamlining management to increased throughput."[11]
Most any lab: "Predictive analytics, for example, is one tool that the Pistoia Alliance is using to better understand laboratory instruments and how they might fail over time... With the right data management strategies and careful consideration of metadata, how to best store data so that it can be used in future AI and ML workflows is essential to the pursuit of AI in the laboratory. Utilizing technologies such as LIMS and ELN enables lab users to catalogue data, providing context and instrument parameters that can then be fed into AI or ML systems. Without the correct data or with mismatched data types, AI and ML will not be possible, or at the very least, could provide undue bias trying to compare data from disparate sources."[29]
Most any lab: "When the actionable items are automatically created by Optima, the 'engine' starts working. An extremely sophisticated algorithm is able to assign the tasks to the resources, both laboratory personnel and instruments, according to the system configuration. Optima, thanks to a large amount of time dedicated to research the best way to automate this critical process, is able to automate most of the lab resource scheduling."[30]
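The kind of resource scheduling described is often approached with greedy assignment. The sketch below is hypothetical (Optima's actual algorithm is not public, and the task and resource names are invented): each task goes to the compatible resource, analyst or instrument, that becomes free earliest.

```python
import heapq

tasks = [  # (task, duration in minutes, required capability)
    ("prep samples", 30, "analyst"),
    ("run HPLC", 60, "hplc"),
    ("review results", 20, "analyst"),
    ("run HPLC batch 2", 60, "hplc"),
]

# One min-heap of (time next free, name) per capability.
resources = {
    "analyst": [(0, "analyst A"), (0, "analyst B")],
    "hplc": [(0, "HPLC-1")],
}
for heap in resources.values():
    heapq.heapify(heap)

schedule = []
for name, duration, capability in tasks:
    # Greedy rule: take the earliest-available compatible resource.
    free_at, resource = heapq.heappop(resources[capability])
    schedule.append((name, resource, free_at, free_at + duration))
    heapq.heappush(resources[capability], (free_at + duration, resource))

for entry in schedule:
    print(entry)
```

A production scheduler adds constraints this toy ignores (shifts, priorities, instrument calibration windows), but the core pattern of matching queued tasks against resource availability is the same.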
A number of challenges exist in the realm of effectively and securely implementing AI in the laboratory. These include:
Data access limitations, including "where to get it, how to share it, and how to know when you have enough to train a machine-learning system that will produce good results"[4][26][32][33]
Failure to meet the needs of the professionals using it[7]
Given those challenges, some considerations should be kept in mind when implementing AI-based components in the laboratory. Examples include:
Clinical diagnostics: "From an industry and regulatory perspective, however, only the intended uses supported from the media manufacturer can be supported from AI applications, unless otherwise justified and substantive evidence is presented for additional claims support. This means strict adherence to specimen type and incubation conditions. Considering that the media was initially developed for human assessment using the well-trained microbiologist eye, and not an advanced imaging system with or without AI, this paradigm should shift to allow advancements in technology to challenge the status-quo of decreasing media read-times especially, as decreased read-times assist with laboratory turnaround times and thus patient management. Perhaps with an increasing body of evidence to support any proposed indications for use, either regulatory positions should be challenged, or manufacturers of media and industry AI-development specialists should work together to advance the field with new indications for use.
While the use of AI in the laboratory setting can be highly beneficial there are still some issues to be addressed. The first being phenotypically distinct single organism polymorphisms that may be interpreted by AI as separate organisms, as may also be the case for a human assessment, as well as small colony variant categorization. As detailed earlier, the broader the inputs, the greater the generalization of the model, and the higher the likelihood of algorithm accuracy. In that respect, understanding and planning around these design constraints is critical for ultimate deployment of algorithms. Additionally, expecting an AI system to correctly categorize “contamination” is a difficult task as often this again seemingly innocuous decision is dependent on years of experience and understanding the specimen type and the full clinical picture with detailed clinical histories. In this respect, a fully integrated AI-LIS system where all data is available may assist, but it is currently not possible to gather this granular detail needed to make this assessment reliable."[9]
Clinical diagnostics and pathology: "Well, if I’ve learned anything in my research into this topic, it’s that AI implementation needs to be a two-way street. First, any company who is active in this space must reach out to pathologists and laboratory medicine professionals to understand their daily workflows, needs, and pain points in as much detail as possible. Second, pathologists, laboratory medicine professionals, and educators must all play their important part – willingly offering their time and expertise when it is sought or proactively getting involved. And finally, it’s clear that there is an imbalanced focus on certain issues – with privacy, respect, and sustainability falling by the wayside."[31]
Healthcare: "While we are encouraged by the promise shown by AI in healthcare, and more broadly welcome the use of digital technologies in improving clinical outcomes and health system productivity, we also recognize that caution must be exercised when introducing any new healthcare technology. Working with colleagues across the NHS Transformation Directorate, as well as the wider AI community, we have been developing a framework to evaluate AI-enabled solutions in the health and care policy context. The aim of the framework is several-fold but is, at its core, a tool with which to highlight to healthcare commissioners, end users, patients and members of the public the considerations to be mindful when introducing AI to healthcare settings."[34]
Most any lab: A code of AI ethics should address objectivity, privacy, transparency, accountability, and sustainability in any AI implementation.[31]
Most any lab: "Another approach is to implement an AI program alongside a manual process, assessing its performance along the way, as a means to ease into using the program. 'I think one of the most impactful things that laboratorians can do today is to help make sure that the lab data that they’re generating is as robust as possible, because these AI tools rely on new training sets, and their performance is really only going to be as good as the training data sets they’re given,' Stoffel said."[26]