Compared to other U.S. states, California arguably has some of the strictest laws regarding the laboratory testing of cannabis. Economically, what have been some of the effects of these regulations? Valdes-Donoso et al. attempt to contribute to that conversation in this 2019 paper published in California Agriculture. Using state regulations, expert opinions, primary data from California’s laboratories, and data from cannabis testing equipment manufacturers, the authors attempt to estimate the cost per pound of testing and sampling under the state’s regulatory framework. This includes what they consider to be particularly costly: cases where cannabis fails testing and the batch is rejected. They conclude their research by discussing the economic and regulatory implications of their findings, including supply and demand issues, the costs of legal vs. illegal cannabis, and comparisons to other state-mandated agricultural testing programs.
In this brief paper published in Frontiers in Marine Science, Armstrong et al. present the details of their OceanWorks integrated data analytics platform (IDAP), which was later open sourced as the Apache Science Data Analytics Platform (SDAP). Confronted with disparate data management solutions for oceanographic research data, the authors developed OceanWorks to provide an integrated platform capable of advanced data queries, data analysis, anomaly detection, data matching, data subsetting, and more. Since its creation, OceanWorks has been deployed in multiple NASA environments to handle a wide variety of data management tasks at various deployment intensities. They conclude that under its open-source SDAP iteration, the software platform will “continue to evolve and leverage any new open-source big data technology” in order “to deliver fast, web-accessible services for working with oceanographic measurements.”
Over the last decade, various researchers have proposed numerous methods for improving the security and privacy of critical data on virtual machines hosted on remote cloud servers, particularly when mobile applications are involved. Annane and Ghazali review those research efforts in this 2019 paper published in the International Journal of Interactive Mobile Technologies and detail the various gaps in that research. After carefully examining the strengths and drawbacks of numerous techniques, the authors identify the three biggest challenges not well addressed in that research: scalability, credibility, and robust communication techniques. They also suggest future research that will address “three secure policies to protect users’ sensitive data against co-resident and hypervisor attacks, as well as preserve the communication of users’ sensitive data when deployed on a different cloud host.”
In this brief article by Greene et al. of the National Institute of Standards and Technology (NIST), details of the organization’s attempt to open access to its scientific data infrastructure are provided. After introducing the types of data held at NIST, as well as the U.S. government’s “open access to research” (OAR) initiative, the authors describe the technological architecture that went into making NIST’s data more open. They describe the OAR application workflow all the way to the end, where efforts to “enable data interoperability as much as possible in order to maximize the usability of NIST data” were put in place. The authors conclude that their system not only meets OAR requirements, making basic data access a breeze, but also “will facilitate the use of AI and machine learning applications and help solve many complexities in mining data-rich resources.”
Development of standard operating protocols for the optimization of Cannabis-based formulations for medical purposes
In this 2019 paper by Baratta et al., published in Frontiers in Pharmacology, the authors examine a variety of different procedures for preparing decoctions and oil-based extracts using the Cannabis plant. Using the different formulations as a base, they wanted to determine the efficacy of the various “standard operating procedures for the preparation and optimization of Cannabis-based galenic formulations.” After discussing the materials and methods, and reviewing the results, the authors discuss the ramifications of their research, particularly in contrast to the procedures recommended by the Italian government. They conclude that their “β-4” method of oil preparation yielded “significantly higher [recovered THC and CBD] compared to those with water-based extraction (decoctions) or other current oil-based extraction techniques.”
Where do technology, security, and the DNA synthesis market intersect? Most notable is the need for minimizing biological risks and maximizing the safety and security of DNA synthesis practices around the world. Diggans and Leproust of Twist Bioscience highlight this major intersection and discuss what they and the International Gene Synthesis Consortium (IGSC) view as the most important aspects of DNA synthesis to address in order to mitigate risks and improve global safety and security. Using the IGSC’s Harmonized Screening Protocol as a base, the authors provide background on the subject and address current industry best practices and how they could be improved using cybersecurity and strong software testing methodologies. They also address how research funding priorities should be addressed to build and maintain databases, reduce risks, and “democratize access to sequence screening.” The authors conclude that a multi-faceted approach of practices such as “red teaming,” stronger investments in screening—particularly with oligonucleotide pools—and stronger efforts to “teach and promote the evaluation of the security implications of new synthetic biology techniques or materials” to practicing synthetic biologists will yield positive results for the future of DNA synthesis.
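To make the idea of sequence screening concrete, here is a toy sketch: flag a synthesis order if any window of its DNA exactly matches a watchlist of sequences of concern. Real screening under frameworks like the IGSC Harmonized Screening Protocol relies on curated databases and alignment-based matching; the watchlist sequence, window length, and function here are invented purely for illustration.

```python
# Toy biosecurity sequence screening: exact window matching against a
# watchlist. The "sequence of concern" below is an arbitrary placeholder.

WATCHLIST = {"ATGCGTACCGGTA"}  # hypothetical 13-base sequence of concern

def flag_order(order_seq, window=13):
    """Return True if any window of the order matches the watchlist."""
    windows = {order_seq[i:i + window]
               for i in range(len(order_seq) - window + 1)}
    return bool(windows & WATCHLIST)

print(flag_order("TTTATGCGTACCGGTATTT"))  # True: contains a listed window
print(flag_order("TTTTTTTTTTTTTTTTTTT"))  # False: no match
```

Exact matching like this is easily evaded (e.g., by single-base changes or split orders), which is one reason the authors argue for stronger, better-funded screening approaches rather than naive lookups.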
Japan Aerospace Exploration Agency’s public-health monitoring and analysis platform: A satellite-derived environmental information system supporting epidemiological study
In this 2019 paper published in Geospatial Health, Oyashi et al. present details on the Japan Aerospace Exploration Agency’s (JAXA) Public-health Monitoring and Analysis Platform (JPMAP). The web-based system was designed to take a wide variety of environmental data and present it so as to allow epidemiologists to draw local, regional, national, or even global insights. Using more than 30 years of archived Earth-observed satellite data from multiple sources, the authors demonstrate the system and explain its underlying technology. While recognizing some of the inherent limitations optical-sensor-based data pose for user goals, the team concludes that its JPMAP system successfully takes “various satellite-derived environmental information related to epidemiological data and pre-processes the data to improve its accessibility for epidemiological research.”
Where do government regulations, privacy, big data, and the management of the energy grid infrastructure collide? The implementation and management of modern smart grids, that’s where. In this 2019 paper by Hatzakis et al., a major distribution system operator (DSO) in the Netherlands is at the heart of discovering how Europe’s General Data Protection Regulation (GDPR) is affecting modern implementations of smart energy grids. From cybersecurity risks to consumers’ concerns about the privacy of their energy data, this article reviews various ethical issues, concluding with “the need for clarification in practice of privacy policies (particularly of GDPR) to lift concerns about the capability of organizations to remain within its boundaries without holding back progress.”
This brief perspective article by Anwar et al. of the BHF Centre for Cardiovascular Science at the University of Edinburgh examines the potential for using health informatics to improve heart failure outcomes in clinical settings. Noting that “decompensated heart failure accounts for up to five percent of all acute unscheduled hospital admissions and has the longest length of stay of any cardiac condition,” the authors remind readers that despite significant evidence-based practices backed by science, there continues to be a disconnect between those practices and actual clinical practice. They then turn to a 2019 cohort study of nearly 100,000 U.K. patients to provide hints at how collecting, managing, and effectively using heart failure data in contemporary clinical practice can be beneficial to many. The authors conclude that “we need to collate healthcare data from across both primary and secondary care settings in real time and use robust methodology to evaluate major changes in clinical practice or policy decisions” and present a visual of what such a platform could look like.
Cybersecurity for biopharmaceutical manufacturing? The “rapid pace of innovation dictates that it is not too early to consider the cyberbiosecurity implications” of the potential dangers that can arise in the industry, argue Mantle et al. in this 2019 paper published in Frontiers in Bioengineering and Biotechnology. From compromised master cell banks to intentional corruption of “the design, reading, and writing of DNA sequences to produce pathogenic, self-replicating entities,” the authors demonstrate worst-case scenarios and practical considerations that scientists should be weighing when incorporating various forms of automation into biopharmaceutical manufacturing. After presenting their scenarios and considerations, they conclude that “current best practices from industrial manufacturing and state-of-the-art cybersecurity could serve as a starting point to safeguard and mitigate against cyberbiosecurity threats to biomanufacturing.”
A bibliometric analysis of Cannabis publications: Six decades of research and a gap on studies with the plant
What are the trends and glaring holes in scientific publications regarding the Cannabis plant? We know some parts of the world have been more proactive in said research, often thanks to slightly more lax regulations than other countries. But where is the research coming from, and what still must be addressed? Matielo et al. tackle these questions in a bibliometric analysis of roughly six decades of publications. They found an increase in studies relating cannabis to human genetics but a significant dearth of publications on the genetics of the plant itself; only since about 2005 have genetic studies of the plant picked up. The authors found several other patterns in their analysis, detailed in the paper’s “Discussion” section.
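The core mechanics of this kind of bibliometric trend analysis amount to tallying publication records by year and by topic. The records below are invented placeholders, not data from the study, and real analyses would draw on thousands of indexed records.

```python
# Illustrative bibliometric tallying over a (fabricated) record list.
from collections import Counter

records = [
    {"year": 2003, "topic": "human genetics"},
    {"year": 2006, "topic": "plant genetics"},
    {"year": 2006, "topic": "human genetics"},
    {"year": 2010, "topic": "plant genetics"},
]

# Publications per year reveal growth trends; per-topic counts reveal gaps.
pubs_per_year = Counter(r["year"] for r in records)
pubs_per_topic = Counter(r["topic"] for r in records)

print(pubs_per_year[2006])               # 2
print(pubs_per_topic["plant genetics"])  # 2
```

Comparing topic counts across decades is exactly how a dearth of plant-genetics publications, relative to human-genetics work, would surface in such an analysis.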
This 2017 paper by Mudge et al. proposes a means for analyzing the Cannabis plant and its flowers in a way that reduces the use of chlorinated solvents and provides a safer work environment for laboratorians. The authors test methanol as a greener alternative and present the results of their tests, concluding that the method is “a significant improvement over previous methods that can be used in a variety of settings and has the potential to be expanded for inclusion of new cannabinoids as required.” Along with being greener and improving lab safety, the authors also suggest a reduction in material cost as a benefit of the method.
Many laboratorians and researchers who have investigated open-source laboratory information management systems (LIMS) have run across LabKey Server. LabKey has been especially useful to those working with high-throughput assays, flow cytometry, genotyping/sequencing, proteomics, specimen tracking, and observational study data management. It’s also an extensible LIMS, as can be seen in this 2019 journal article by Brusniak et al., who describe the process of updating LabKey to handle a “generalized engineered protein compounds workflow that tracks entities and assays from creation to preclinical experiments.” Noting rapid advances in protein therapeutics, the authors extended LabKey into Optide-Hunter to handle optimized peptides (thus, “optides”) and their production. After discussing its inner workings, the authors conclude that their open-source Optide-Hunter solution fits the bill as a “cost-effective and flexible LIMS for early-stage experimental pipeline development for engineered protein therapeutics development.”
In this brief perspective article by Caswell et al., of various U.S. Department of Energy laboratories, the topic of mitigating the risks—expected and unexpected—associated with public biological databases is briefly presented, pointing out “several existing research areas that can be leveraged to protect against accidental and intentional modifications and misuse.” The authors provide background on the data integrity and vulnerability concerns of these databases, including sources of errors and exploitation of weaknesses in the systems inadvertently or by bad-faith actors. Then they suggest a wide variety of tools, methodologies, and other approaches that are available to gatekeepers of this type of data. They close by comparing these databases to the creation of the internet and its “wide penetration of open functionality” but initial lack of having “integrity and security in mind,” stating that now is the time to focus on mitigating risks to the integrity of public biological databases.
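One familiar safeguard from the kind of toolbox the authors describe is tamper detection via cryptographic hashing: store a digest per database record and recompute it later to detect modification. This is a minimal sketch, not the authors' proposal; the record format is a made-up placeholder.

```python
# Minimal record-integrity check using SHA-256 digests.
import hashlib

def record_digest(record: str) -> str:
    """SHA-256 hex digest of a database record's canonical text."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

original = "ACCESSION=X001;SEQ=ATGGTT"   # hypothetical record
stored_digest = record_digest(original)   # digest saved at ingest time

tampered = "ACCESSION=X001;SEQ=ATGCTT"    # single-base modification
print(record_digest(original) == stored_digest)  # True: record unchanged
print(record_digest(tampered) == stored_digest)  # False: change detected
```

Hashing catches both accidental corruption and intentional edits, though it says nothing about who made a change; provenance tracking and access controls, also raised by the authors, address that separately.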
Determining the hospital information system (HIS) success rate: Development of a new instrument and case study
Hospital information systems (HIS) represent an important information and function management system for hospitals, providing assistance with diagnosis, management, and education functions for improved service and practice. But how effective are they at what they’re designed to do? This work by Ebnehoseini et al. offers a mechanism for gauging HIS effectiveness through statistics and the information systems success model (ISSM). The authors applied their methodology to the Ibn-e Sina and Dr. Hejazi Psychiatry Hospital in Mashhad, Iran, and determined a 65% success rate for the hospital’s HIS. They conclude that their methodology “can be adopted for HIS evaluation in future studies” by others and that their results provide a clearer picture into “the viewpoints of HIS users in a developing country.”
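An ISSM-style evaluation of this sort typically aggregates user-survey Likert scores across the model's success dimensions into an overall percentage. The sketch below shows one plausible aggregation (unweighted means); the dimension names follow the ISSM, but the scores are invented and the study's actual statistics may differ.

```python
# Hypothetical HIS success-rate aggregation over the six ISSM dimensions.
# Scores are fabricated 1-5 Likert responses, not data from the study.

def his_success_rate(dimension_scores, max_score=5):
    """Mean of per-dimension mean Likert scores, expressed as a percentage."""
    dim_means = [sum(scores) / len(scores)
                 for scores in dimension_scores.values()]
    overall = sum(dim_means) / len(dim_means)
    return 100 * overall / max_score

scores = {
    "system quality":      [3, 4, 3],
    "information quality": [4, 3, 3],
    "service quality":     [3, 3, 3],
    "use":                 [4, 4, 3],
    "user satisfaction":   [3, 3, 4],
    "net benefits":        [3, 4, 3],
}
print(round(his_success_rate(scores), 1))  # 66.7
```

Averaging per dimension first (rather than pooling all responses) keeps a heavily surveyed dimension from dominating the overall figure, which is one reasonable design choice among several.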
With a growing number of big data and artificial intelligence (AI) implementations (referred to as “smart information systems” or SIS) in enterprises, important ethical questions related to cybersecurity must be asked. Has proper informed consent been given? Have discovered vulnerabilities been handled in such a way as to limit potential harm to users? How are trust and transparency handled? These and other questions are asked by Macnish et al. in this 2019 research report published in ORBIT Journal. Noting that technical and ethical issues both underpin the tenets of cybersecurity, the authors discuss the ethical issues of using SIS in cybersecurity and provide their own case study of a major U.K. business using SIS in its cybersecurity practices. After presenting the ethical implications of the case study, they conclude “that ethical concerns regarding SIS in cybersecurity go further than mere privacy issues” and “claim that there is a need to improve the ethics of research in SIS.”
In this 2018 paper published in Scientific Reports, Mudge et al. of the University of British Columbia attempt to demonstrate how a growing loss of biodiversity is occurring across strains of Cannabis due to cultivation practices. The authors introduce current research on the “understanding of metabolite commonality and diversity within plant species” and hypothesize that simply measuring Δ9-tetrahydrocannabinol (THC) potency and cannabidiol (CBD) content is not sufficient to paint a proper picture of cannabinoid composition and the resulting impact of breeding programs. The authors then present the results of their analytical research, discuss their findings, and conclude that, at least in Canada, “evaluating single metabolite classes [of available Cannabis strains] does not provide sufficient information to understand the phytochemical diversity available.”
We take another look at the security of biological data, this time through the eyes of Berger and Schneck, and the discussion of major players such as China. The authors first introduce advances in biotechnology and the benefits and challenges that arise from them today, including exploitation of the resulting data “by state actors, malicious nonstate actors, and hackers.” They then discuss common approaches used to protect this sort of data, as well as the various vulnerabilities that arise from those approaches (or from the lack of them). The authors close their paper by offering four specific strategies “toward protecting biological data from unauthorized acquisition and use, enhancing efforts to preserve data integrity and provenance, and enabling future benefit of biotechnological advances.”
In this 2019 paper published in Frontiers in Public Health, Wholey et al. propose a curriculum for an effective and modern public health informatics (PHI) higher education program. Drawing competencies from the computer, information, and organizational sciences, the researchers provide program design objectives, competencies, and challenges involved in implementing such a curriculum focused on the professional role of the informatician. Modeled on Carnegie Mellon University’s information systems program, the proposed curriculum has already seen success, the authors report, with its implementation at the School of Public Health at the University of Minnesota. They conclude that the proposed curriculum “is a starting point,” keeping in mind that practices, knowledge, and technology change often, requiring program vigilance for improvement to best ensure “an effective and relevant workforce that can navigate the changing trends of public health to improve population health.”
Here is one more journal article on the topic of cyberbiosecurity, this time discussing related vulnerabilities and the need for a resilient infrastructure to limit them. Schabacker et al. of the Argonne National Laboratory present a base “assessment framework for cyberbiosecurity, accounting for both security and resilience factors in the physical and cyber domains.” They first discuss the differences between “emerging” and “converging” technologies and how they contribute to vulnerabilities, and then they lead into risk mitigation strategies. The authors also provide clear definitions of associated terminology for common understanding, including the topics of dependency and interdependency. Then they discuss the basic framework, keeping vulnerabilities in mind. They close with a roadmap for a common vulnerability assessment framework, encouraging the application of “lessons learned from parallel efforts in related fields,” and reflection on “the complex multidisciplinary cyberbiosecurity environment.”