Analysis of cannabidiol, delta-9-tetrahydrocannabinol, and their acids in CBD/hemp oil products

In this 2020 paper published in Medical Cannabis and Cannabinoids, ElSohly et al. present the results of an effort to demonstrate a relatively simple gas chromatography–mass spectrometry (GC-MS) method for accurately measuring the cannabidiol, tetrahydrocannabinol, cannabidiolic acid, and tetrahydrocannabinolic acid content of CBD oil and hemp oil products. Noting the problems with inaccurate labels and the sudden proliferation of CBD products for sale, the authors emphasize the importance of a precise and reproducible method for verifying those products' claimed cannabinoid and acid precursor concentrations. From their results, they conclude that their validated method achieves that goal.

Data without software are just numbers

A growing trend in academic research is adherence to the FAIR principles, which hold that research data should be findable, accessible, interoperable, and reusable. These principles, in theory, support the important concept of reproducibility. However, what about the software used to generate the data? Often that software is a home-grown solution, and the software and its developers are rarely cited in academic research, which does little to support reproducibility. As such, researchers such as Davenport et al. have written on the topic of improving the reproducibility of research outputs by addressing good software development and citation practices. In their 2020 paper published in Data Science Journal, they present a brief essay on the topic, offering background and suggestions to researchers on how to improve research software development, use, and citation. They conclude that "[e]ncouraging the use of modern methods and professional training will improve the quality of research software" and, by extension, the reproducibility of research results themselves.

Screening for more than 1,000 pesticides and environmental contaminants in cannabis by GC/Q-TOF

Sure, there are laboratory methods for detecting a small number of specific contaminants in cannabis substrates (target screening), but what about more than a thousand at one time (suspect screening)? In this 2020 paper published in Medical Cannabis and Cannabinoids, Wylie et al. demonstrate a method to screen cannabis extracts for more than 1,000 pesticides, herbicides, fungicides, and other pollutants using gas chromatography paired with a high-resolution accurate-mass quadrupole time-of-flight mass spectrometer (GC/Q-TOF), in conjunction with several databases. They note that while some governmental bodies mandate that a specific subset of contaminants be tested for in cannabis products, some cultivators may still use unapproved pesticides and other chemicals that aren't officially tested for, putting medical and recreational cannabis users alike at risk. As proof of concept, the authors describe their suspect screening materials and methods, and the results of using ever-improving mass spectrometry techniques to find hundreds of pollutants at one time. Rather than make specific claims about the method, the authors largely let the results of testing confiscated cannabis samples speak to its viability.

Persistent identification of instruments

In this 2020 paper published in Data Science Journal, Stocker et al. present their initial attempts at developing a schema for persistently identifying scientific measuring instruments, much in the same way journal articles and datasets can be persistently identified using digital object identifiers (DOIs). They argue that "a persistent identifier for instruments would enable research data to be persistently associated with" vital metadata about the instruments used to produce those data, "helping to set data into context." As such, they demonstrate the development and implementation of a schema that addresses the management of instruments and the data they produce. After discussing their methodology, results, the interpretation of those results, and adopted uses of the schema, the authors conclude by declaring the "practical viability" of the schema "for citation, cross-linking, and retrieval purposes" and by promoting the schema's further development and adoption as a necessary task.
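To make the concept concrete, below is a minimal, purely hypothetical sketch (in Python) of what an instrument metadata record resolved from a persistent identifier might contain; the field names, identifiers, and values are illustrative assumptions, not the authors' published schema.

```python
# Hypothetical illustration only: the field names, PID, and DOI below are placeholders,
# not the schema described by Stocker et al.
instrument_record = {
    "identifier": "https://example.org/pid/instrument/12345",  # placeholder persistent identifier
    "name": "Example quadrupole mass spectrometer",
    "owner": "Example Research Institute",
    "manufacturer": "Example Instruments Inc.",
    "model": "Q-2020",
    "related_identifiers": [
        # e.g., a dataset produced by this instrument, referenced by a placeholder DOI
        {"relation": "HasGeneratedData", "identifier": "https://doi.org/10.xxxx/example-dataset"},
    ],
}

# A dataset's own metadata could then point back to the instrument identifier,
# "setting the data into context" in the sense the authors describe.
dataset_metadata = {
    "title": "Example measurement series",
    "measured_with": instrument_record["identifier"],
}
print(dataset_metadata["measured_with"])
```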

Cannabis contaminants limit pharmacological use of cannabidiol

In this 2020 review published in the journal Frontiers in Pharmacology, Montoya et al. discuss the various potential contaminants found in cannabis products and how those contaminants may create negative consequences medically, particularly for immunocompromised individuals. Though the authors take a largely global perspective on the topic, they note at several points the lack of consistent standards—particularly in the United States—and what that means for the long-term health of cannabis users, especially as legalization efforts continue to move forward. In addition to contaminants such as microbes, heavy metals, pesticides, plant growth regulators, and polycyclic aromatic hydrocarbons, the authors also address the dangers that come with inaccurate laboratory analyses and labeling of cannabinoid content in cannabidiol (CBD)-based products. They conclude that "it is imperative to develop universal standards for cultivation and testing of products to protect those who consume cannabis."

Development of an informatics system for accelerating biomedical research

Appearing originally in a 2019 issue of F1000Research, this recently revised second version sees Navale et al. expand on their development, implementation, and various uses of the Biomedical Research Informatics Computing System (BRICS), "designed to address the wide-ranging needs of several biomedical research programs" in the U.S. With a focus on using common data elements (CDEs) for improving data quality, consistency, and reuse, the authors explain their approach to BRICS development, how it accepts and processes data, and how users can effectively access and share data for improved translational research. The authors also provide examples of how the open-source BRICS software is being used by the National Institutes of Health and other organizations. They conclude that not only does BRICS further biomedical digital data stewardship under the FAIR principles, but it also "results in sustainable digital biomedical repositories that ensure higher data quality."

Mini-review of laboratory operations in biobanking: Building biobanking resources for translational research

In this 2020 article published in Frontiers in Public Health, Cicek and Olson of Mayo Clinic discuss the importance of biorepository operations and their focus on the storage and management of biospecimens. In particular, they address the importance of maintaining quality biospecimens and enacting processes and procedures that fulfill the long-term goals of biobanking institutions. After a brief introduction of biobanks and their relation to translational research, the authors discuss the various aspects that go into long-term planning for a successful biobank, including the development of standard operating procedures and staff training programs, as well as the implementation of informatics tools such as the laboratory information management system (LIMS). They conclude by emphasizing that "[b]iorepository operations require an enormous amount of support, from lab and storage space, information technology expertise, and a LIMS to logistics for biospecimen tracking, quality management systems, and appropriate facilities" in order to be most effective in their goals.

Laboratory information system requirements to manage the COVID-19 pandemic: A report from the Belgian national reference testing center

Though the COVID-19 pandemic has been in force for months, seemingly little has been published (even ahead of print) in academic journals on the technological challenges of managing the growing mountain of research and clinical diagnostic data. Weemaes et al. are among the exceptions, with this June pre-print publication in the Journal of the American Medical Informatics Association. The authors, from Belgium's University Hospitals Leuven, present their approach to rapidly expanding their laboratory's laboratory information system (LIS) to address the sudden workflow bottlenecks associated with COVID-19 testing. Using a change management framework to drive rapid development, the authors were able "to streamline sample ordering through a CPOE system, and streamline reporting by developing a database with contact details of all laboratories in Belgium," while also improving epidemiological reporting and enabling exploratory data mining for scientific research. They also address the often overlooked element of "meaningful use" of their LIS.

Comprehensive analyses of SARS-CoV-2 transmission in a public health virology laboratory

With COVID-19, we tend to think of small outbreaks arising from people being in close proximity at restaurants, parties, and other events. But what about laboratories? Zuckerman et al. share their account of a small COVID-19 outbreak at Israel's Central Virology Laboratory (ICVL) in mid-March 2020 and how they used their laboratory technology to determine the transmission sources. With eight known individuals testing positive for the SARS-CoV-2 virus overall, the researchers walk step by step through their quarantine and testing processes to "elucidate person-to-person transmission events, map individual and common mutations, and examine suspicions regarding contaminated surfaces." Their analyses found person-to-person contact—not contaminated surface contact—was the transmission path. They conclude that their overall analysis verifies the value of molecular testing and capturing complete viral genomes towards determining transmission vectors "and confirms that the strict safety regulations observed in ICVL most likely prevented further spread of the virus."

Extending an open-source tool to measure data quality: Case report on Observational Health Data Sciences and Informatics (OHDSI)

In this 2020 short report, Dixon et al. present lessons learned from their attempts to expand data quality measures in the open-source tool Observational Health Data Sciences and Informatics (OHDSI). Noting a general lack of data quality assessment and improvement mechanisms in health information systems, the researchers sought to improve OHDSI for public health surveillance use cases. After explaining the practical uses of OHDSI, the authors state their case for why measuring completeness, timeliness, and information entropy within OHDSI would be useful. Though they did not get approval to add timeliness to the system, they conclude that high value remains "in adapting existing infrastructure and tools to support expanded use cases rather than to just create independent tools for use by a niche group."
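To illustrate what such measures capture, here is a minimal sketch in Python (not the authors' implementation and not OHDSI code) of how completeness and Shannon entropy might be computed for a single column of observational data; the column name and values are invented for the example.

```python
import math
from collections import Counter

def completeness(values):
    """Fraction of records that are populated (not None/missing)."""
    if not values:
        return 0.0
    return sum(v is not None for v in values) / len(values)

def shannon_entropy(values):
    """Shannon entropy (in bits) of the distribution of non-missing values."""
    non_missing = [v for v in values if v is not None]
    counts = Counter(non_missing)
    total = len(non_missing)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical 'smoking_status' column pulled from an observational database
column = ["never", "former", None, "current", "never", None, "never"]
print(f"Completeness: {completeness(column):.2f}")     # 0.71
print(f"Entropy: {shannon_entropy(column):.2f} bits")  # 1.37
```

Low completeness flags fields that may be unsuitable for surveillance use, while higher entropy suggests a more informative distribution of values; timeliness would additionally require comparing record timestamps, which is not shown here.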

Advancing laboratory medicine in hospitals through health information exchange: A survey of specialist physicians in Canada

This February 2020 paper published in BMC Medical Informatics and Decision Making examines the state of health information exchange in Québec and other parts of Canada and how its application to laboratory medicine might be improved. In particular, laboratory information exchange (LIE) systems that "improve the reliability of the laboratory testing process" and integrate "with other clinical information systems (CISs) physicians use in hospitals" are examined in this work. Surveying hospital-based specialist physicians, Raymond et al. paint a picture of how varying clinical information management solutions are used, what functionality is being used and not used, and how physicians view the potential benefits of the clinical systems they use. They conclude that there is very much a "complementary nature" between systems and that "system designers should take a step back to imagine a way to design systems as part of an interconnected network of features."

A high-throughput method for the comprehensive analysis of terpenes and terpenoids in medicinal cannabis biomass

In this 2020 paper published in the journal Metabolites, Krill et al. of Australia's AgriBio present their findings from an effort to reduce the run times and extraction complexity associated with quantifying terpenes in cannabis biomass. Noting the evolving needs of high-throughput cannabis breeding programs, the researchers present a method "based on a simple hexane extract from 40 mg of biomass, with 50 μg/mL dodecane as internal standard, and a gradient of less than 30 minutes." After presenting current background on terpene extraction, the researchers discuss the various aspects of their method and provide the details of the materials and equipment used. They conclude that their method "covers a large cross-section of commonly detected cannabis volatiles, is validated for a large proportion of compounds it covers, and offers significant improvement in terms of sample preparation and sample throughput over previously published studies."

Existing data sources in clinical epidemiology: Laboratory information system databases in Denmark

In this data resource review, researchers at several Danish hospitals discuss how the country uses two laboratory information systems (LISs) that collect routine biomarker data, as well as how those data can be accessed for research. The researchers explain how data are collected into the LISs, how data quality is managed, and how the data are used, providing several real-world examples. They then discuss the strengths and weaknesses of the data as they relate to epidemiology, as well as how the data can be accessed. They emphasize "that access to data on routine biomarkers expands the detailed biological and clinical information available on patients in the Danish healthcare system," while the "full potential is enabled through linkage to other Danish healthcare registries."

HEnRY: A DZIF LIMS tool for the collection and documentation of biospecimens in multicentre studies

Effective biobanking of biospecimens for multicenter studies today requires more than spreadsheets and paper documents; a software system capable of improving workflows and sharing while keeping sensitive personal information deidentified is essential. Both commercial off-the-shelf (COTS) and open-source biobanking laboratory information management systems (LIMS) are available but, as the University of Cologne found, those options may prove too complex to implement or too cost-intensive for multicenter research networks. The university took matters into its own hands and developed the HIV Engaged Research Technology (HEnRY) LIMS, which has since expanded into a broader, open-source biobanking solution that can be applied to contexts beyond HIV research. This 2020 paper discusses the LIMS and its development and application, concluding that it offers "immense potential for emerging research groups, especially in the setting of limited resources and/or complex multicenter studies."

Fast SARS-CoV-2 detection by RT-qPCR in preheated nasopharyngeal swab samples

This "short communications" paper published in the International Journal of Infectious Diseases describes the results of simplifying reverse transcription and real-time quantitative PCR (RT-qPCR) analyses for SARS-CoV-2 by skipping the RNA extraction step and instead testing three different types of direct heating of the nasopharyngeal swab. Among the three methods tested—heating directly without additives, heating with a formamide-EDTA buffer, and heating with an RNAsnap buffer—the direct heating method "provided the best results, which were highly consistent with the SARS-CoV-2 infection diagnosis based on standard RNA extraction," while also processing nearly half the time. The authors warn, however, that "choice of RT-qPCR kits might have an impact on the sensitivity of the Direct protocol" when trying to replicate their results.

Bringing big data to bear in environmental public health: Challenges and recommendations

When it comes to analyzing the effects on humans of chemicals and other substances that make their way into the environment, "large, complex data sets" are often required. However, those data sets are often disparate and inconsistent, which has made epidemiological studies of air pollution and other types of environmental contamination difficult and limited in effectiveness, a problem that has only compounded in the age of big data. Comess et al. address the challenges associated with today's environmental public health data in this recent paper published in Frontiers in Artificial Intelligence, discussing how augmented intelligence, artificial intelligence (AI), and machine learning can enhance understanding of those data. However, they note, additional work must be put into improving not only analysis but also how data are collected and shared, how researchers are trained in scientific computing and data science, and how familiarity with the benefits of AI and machine learning is fostered. They conclude their paper with a table of five distinct challenges and recommended ways to address them so as to "create an environment that will foster the data revolution" in environmental public health and beyond.

Enzyme immunoassay for measuring aflatoxin B1 in legal cannabis

In this 2020 paper published in the journal Toxins, Di Nardo et al. of the University of Turin present their findings from adapting an existing enzyme immunoassay to cannabis testing for more accurate detection of mycotoxins, including aflatoxins such as aflatoxin B1 (AFB1). Citing benefits such as fewer steps, more cost-efficient equipment, and lower training demands, the authors viewed the application of enzyme immunoassay to testing cannabis for mycotoxins as a worthy endeavor. Di Nardo et al. discuss at length how they converted an immunoassay for measuring aflatoxins in eggs to one for the cannabis substrate, as well as the various challenges and caveats associated with the resulting methodology. The authors concluded that, compared with techniques such as ultra-high-performance liquid chromatography coupled to high-resolution tandem mass spectrometry, enzyme immunoassay more readily allows for "wide applications in low resource settings and for the affordable monitoring of the safety of cannabis products, including those used recreationally and as a food supplement."

The regulatory landscape of precision oncology laboratory medicine in the United States: Perspective on the past five years and considerations for future regulation

What is the current status of laboratory developed tests (LDTs) and the U.S. Food and Drug Administration's (FDA) approach to them? What are the arguments for and against their regulation? In this 2020 review article published in Practical Laboratory Medicine, University of Washington's Eric Konnick covers the current U.S. regulatory environment for LDTs, as well as recent developments since the October 2014 FDA release of LDT draft guidance. Konnick then provides extensive commentary on the scope of LDT regulation in the U.S., including perceived risks to patients, LDT accuracy and equivalency, and how regulatory certainty fits into LDT innovation. He closes with recommendations for approaching future LDT regulation, concluding that "identifying tools that can be leveraged to improve laboratory test quality may offer many benefits that do not necessarily require a burdensome regulatory framework."

Laboratory testing methods for novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)

In this 2020 article published in Frontiers in Cell and Developmental Biology, D'Cruz et al. provide an overview of the current laboratory methods available to test for coronaviruses, with a focus on SARS-CoV-2, the virus responsible for COVID-19. After providing an introduction to COVID-19 and its virus, the authors discuss the most common methods—such as quantitative reverse transcription PCR (qRT-PCR), enzyme-linked immunosorbent assay (ELISA), and lateral flow immunoassay (LFI)—as well as emerging diagnostic methods involving isothermal nucleic acid amplification, CRISPR, and next-generation sequencing (NGS). They conclude with several visuals comparing the methods and when they are used, and they emphasize the importance of these test methods (as well as laboratory preparedness) in addressing rapidly evolving viral infection scenarios.

Bridging the collaboration gap: Real-time identification of clinical specimens for biomedical research

In this 2020 article published in the Journal of Pathology Informatics, Durant et al. of the Yale New Haven Health system present the results of their effort to automate the identification of laboratory biospecimens and the notification of biomedical researchers about their availability. Noting biospecimens' value to basic, translational, and clinical research, the authors wanted to get around the "technical, logistic, regulatory, and ethical challenges" of accessing biospecimen data. They developed Prism, a tool built on open-source technology for efficiently identifying biospecimens and notifying investigators of their availability in real time. The authors present details of two use cases and conclude that their "solution is highly scalable to meet the needs of even large academic centers and reference laboratories," while also finding that virtualizing the associated workflow "within a microservices environment does not introduce a performance penalty."