Cybersecurity for biopharmaceutical manufacturing? The “rapid pace of innovation dictates that it is not too early to consider the cyberbiosecurity implications” of the potential dangers that can arise in the industry, argue Mantle et al. in this 2019 paper published in Frontiers in Bioengineering and Biotechnology. From compromised master cell banks to intentional corruption of “the design, reading, and writing of DNA sequences to produce pathogenic, self-replicating entities,” the authors present worst-case scenarios and practical considerations that scientists should weigh when incorporating various forms of automation into biopharmaceutical manufacturing. After presenting their scenarios and considerations, they conclude that “current best practices from industrial manufacturing and state-of-the-art cybersecurity could serve as a starting point to safeguard and mitigate against cyberbiosecurity threats to biomanufacturing.”
A bibliometric analysis of Cannabis publications: Six decades of research and a gap on studies with the plant
What are the trends and glaring holes in scientific publications regarding the Cannabis plant? We know some parts of the world have been more proactive in such research, often thanks to somewhat more lax regulations than elsewhere. But where is the research coming from, and what still must be addressed? Matielo et al. explored these questions in a bibliometric analysis of roughly six decades of publications. They found an increase in studies relating cannabis to its effects on human genetics but a significant dearth of publications on the genetics of the plant itself; only since about 2005 have genetic studies of the plant picked up. The authors found several other patterns in their analysis, included in the paper’s “Discussion” section.
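The core of such a bibliometric analysis is a tally of publications per topic per year. A minimal sketch of that tally in Python, using fabricated example records rather than the authors' actual dataset:

```python
from collections import Counter

# Each record is (publication_year, topic). These entries are
# fabricated examples, not data from Matielo et al.
records = [
    (1998, "human genetics"), (2006, "plant genetics"),
    (2011, "plant genetics"), (2012, "human genetics"),
    (2015, "plant genetics"),
]

# Count publications by (topic, year) pair.
by_topic_year = Counter((topic, year) for year, topic in records)

# Example question: how many plant-genetics papers since 2005?
plant_after_2005 = sum(c for (t, y), c in by_topic_year.items()
                       if t == "plant genetics" and y >= 2005)
print(plant_after_2005)  # → 3
```

Real bibliometric pipelines pull these records from indexing services and also tally countries, journals, and author networks, but the counting step looks much like this.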
This 2017 paper by Mudge et al. proposes a means for analyzing the Cannabis plant and its flowers in a way that reduces the use of chlorinated solvents and provides a safer work environment for laboratorians. The authors test methanol as a greener alternative and present the results of their tests, concluding that the method is “a significant improvement over previous methods that can be used in a variety of settings and has the potential to be expanded for inclusion of new cannabinoids as required.” Along with being greener and improving lab safety, the authors also suggest a reduction in material cost as a benefit of the method.
Many laboratorians and researchers who have investigated open-source laboratory information management systems (LIMS) have run across LabKey Server. LabKey has been especially useful to those working with high-throughput assays, flow cytometry, genotyping/sequencing, proteomics, specimen tracking, and observational study data management. It’s also an extensible LIMS, as can be seen in this 2019 journal article by Brusniak et al., in which they describe the process of extending LabKey to handle a “generalized engineered protein compounds workflow that tracks entities and assays from creation to preclinical experiments.” Noting rapid advances in protein therapeutics, the authors developed LabKey into Optide-Hunter to handle optimized peptides (thus, optides) and their production. After discussing its inner workings, the authors conclude that their open-source Optide-Hunter solution fits the bill for a “cost-effective and flexible LIMS for early-stage experimental pipeline development for engineered protein therapeutics development.”
In this brief perspective article by Caswell et al., of various U.S. Department of Energy laboratories, the topic of mitigating the risks—expected and unexpected—associated with public biological databases is briefly presented, pointing out “several existing research areas that can be leveraged to protect against accidental and intentional modifications and misuse.” The authors provide background on the data integrity and vulnerability concerns of these databases, including sources of errors and the exploitation of weaknesses in the systems, whether inadvertent or by bad-faith actors. Then they suggest a wide variety of tools, methodologies, and other approaches that are available to gatekeepers of this type of data. They close by comparing these databases to the creation of the internet and its “wide penetration of open functionality” but initial lack of having “integrity and security in mind,” stating that now is the time to focus on mitigating risks to the integrity of public biological databases.
Determining the hospital information system (HIS) success rate: Development of a new instrument and case study
Hospital information systems (HIS) represent an important information and function management system for hospitals, providing assistance with diagnosis, management, and education functions for improved service and practice. But how effective are they at what they’re designed to do? This work by Ebnehoseini et al. offers a mechanism for gauging HIS effectiveness through statistics and the information systems success model (ISSM). The authors applied their methodology to the Ibn-e Sina and Dr. Hejazi Psychiatry Hospital in Mashhad, Iran and determined a 65% success rate for the hospital’s HIS. They conclude that their methodology “can be adopted for HIS evaluation in future studies” by others and that their results provide a clearer picture into “the viewpoints of HIS users in a developing country.”
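A success rate like the 65% figure is, at its core, an aggregate of user ratings across ISSM dimensions. A minimal sketch of that aggregation follows; the dimension names, weights, and scores here are hypothetical stand-ins, not the authors' actual instrument or survey data:

```python
# Sketch: roll per-dimension survey scores (0-100) up into a single
# success rate. All dimensions, weights, and scores are hypothetical.

def his_success_rate(dimension_scores, weights=None):
    """Weighted average of per-dimension scores; equal weights by default."""
    dims = list(dimension_scores)
    if weights is None:
        weights = {d: 1.0 for d in dims}
    total_w = sum(weights[d] for d in dims)
    return sum(dimension_scores[d] * weights[d] for d in dims) / total_w

scores = {
    "system quality": 70.0,
    "information quality": 62.0,
    "service quality": 68.0,
    "user satisfaction": 60.0,
    "net benefits": 65.0,
}
rate = his_success_rate(scores)
print(f"Overall HIS success rate: {rate:.0f}%")  # → Overall HIS success rate: 65%
```

In practice, the per-dimension scores would themselves come from validated Likert-scale questionnaires and statistical testing, which this sketch omits.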
With a growing number of big data and artificial intelligence (AI) implementations (referred to as “smart information systems” or SIS) in enterprises, important ethical questions related to cybersecurity must be asked. Has proper informed consent been given? Have discovered vulnerabilities been handled in such a way as to limit potential harm to users? How are trust and transparency handled? These and other questions are asked by Macnish et al. in this 2019 research report published in ORBIT Journal. Noting that technical and ethical issues both underpin the tenets of cybersecurity, the authors discuss the ethical issues of using SIS in cybersecurity and provide their own case study of a major U.K. business using SIS in its cybersecurity practices. After presenting the ethical implications of the case study, they conclude “that ethical concerns regarding SIS in cybersecurity go further than mere privacy issues” and “claim that there is a need to improve the ethics of research in SIS.”
In this 2018 paper published in Scientific Reports, Mudge et al. of the University of British Columbia attempt to demonstrate how a growing loss of biodiversity is occurring across strains of Cannabis due to cultivation practices. The authors introduce current research on the “understanding of metabolite commonality and diversity within plant species” and hypothesize that simply measuring Δ9-tetrahydrocannabinol (THC) potency and cannabidiol (CBD) content is not sufficient to paint a proper picture of cannabinoid composition and the resulting impact of breeding programs. The authors then get into the results of their analytical research, discuss their findings, and conclude that, at least in Canada, “evaluating single metabolite classes [of available Cannabis strains] does not provide sufficient information to understand the phytochemical diversity available.”
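The authors' point that two headline numbers cannot capture phytochemical diversity can be illustrated with a standard diversity index computed over a fuller metabolite profile. The sketch below uses the Shannon index and made-up cannabinoid proportions (not the paper's data or exact statistical approach): two strains with the same dominant cannabinoids can differ sharply in overall diversity.

```python
import math

def shannon_diversity(proportions):
    """Shannon index H = -sum(p * ln p) over metabolite proportions."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Two hypothetical strains: identical major-cannabinoid content, but
# strain_a retains minor cannabinoids that strain_b has lost to breeding.
strain_a = [0.60, 0.20, 0.10, 0.05, 0.05]   # more even metabolite profile
strain_b = [0.60, 0.20, 0.20, 0.0, 0.0]     # minor cannabinoids absent

# A potency-only comparison (first two entries) sees no difference;
# the diversity index does.
print(shannon_diversity(strain_a) > shannon_diversity(strain_b))  # → True
```

This is the intuition behind evaluating whole metabolite classes rather than single compounds: diversity lives in the minor constituents that potency testing ignores.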
We take another look at the security of biological data, this time through the eyes of Berger and Schneck, and the discussion of major players such as China. The authors first introduce advances in biotechnology and the benefits and challenges that arise from them today, including exploitation of the resulting data “by state actors, malicious nonstate actors, and hackers.” They then discuss common approaches used to protect this sort of data, as well as the various vulnerabilities that arise from those approaches (or the lack thereof). The authors close their paper by offering four specific strategies “toward protecting biological data from unauthorized acquisition and use, enhancing efforts to preserve data integrity and provenance, and enabling future benefit of biotechnological advances.”
In this 2019 paper published in Frontiers in Public Health, Wholey et al. propose a curriculum for an effective and modern public health informatics (PHI) higher education program. Drawing competencies from computer, information, and organizational sciences, the researchers provide program design objectives, competencies, and challenges involved in implementing such a curriculum focused on the professional role of the informatician. Based on the information systems program of Carnegie Mellon, the authors report that their proposed curriculum has already seen success with its implementation at the School of Public Health at the University of Minnesota. They conclude that the proposed curriculum “is a starting point,” keeping in mind that practices, knowledge, and technology change often, requiring program vigilance for improvement to best ensure “an effective and relevant workforce that can navigate the changing trends of public health to improve population health.”
Here is one more journal article on the topic of cyberbiosecurity, this time discussing related vulnerabilities and the need for a resilient infrastructure to limit them. Schabacker et al. of the Argonne National Laboratory present a base “assessment framework for cyberbiosecurity, accounting for both security and resilience factors in the physical and cyber domains.” They first discuss the differences between “emerging” and “converging” technologies and how they contribute to vulnerabilities, and then they lead into risk mitigation strategies. The authors also provide clear definitions of associated terminology for common understanding, including the topics of dependency and interdependency. Then they discuss the basic framework, keeping vulnerabilities in mind. They close with a roadmap for a common vulnerability assessment framework, encouraging the application of “lessons learned from parallel efforts in related fields,” and reflection on “the complex multidisciplinary cyberbiosecurity environment.”
From software development companies to pharmaceutical manufacturers, businesses, governments, and non-profit entities alike are beginning to adopt tools meant to improve communication, information sharing, and productivity. In some cases, this takes the form of an enterprise social media platform (ESMP) with chat rooms, discussion boards, and file repositories. But while this can be beneficial, adding a focused attempt at sharing to work culture can create its own share of problems, argues Norwegian University of Science and Technology’s Halvdan Haugsbakken, particularly due to varying definitions and expectations of what information “sharing” actually is. Using a case study of regional government in Norway, Haugsbakken found “when the end-users attempted to translate sharing into a manageable practice—as the basis for participation in a knowledge formation process—they interpreted sharing as a complicated work practice, with the larger consequence of producing disengaged users.” This result “suggests a continued need for the application of theoretical lenses that emphasize interpretation and practice in the implementation of new digital technologies in organizations.”
Following up on the cyberbiosecurity article posted a few weeks ago, this one by Murch et al. steps away from the strong agriculture focus of the prior and examines cyberbiosecurity from a broader perspective. Not only do the authors provide background into the convergence of cybersecurity, cyber-physical security, and biosecurity, but they also provide a look at how it extends to biomanufacturing facilities. They conclude that cyberbiosecurity could be applied to various domains—from personalized genomics and medical device manufacturing to food production and environmental monitoring—though “[d]irect and ordered engagements of the pertinent sectors of the life sciences, biosecurity, and cyber-cybersecurity communities,” as well as tangential engagement within academia and government, must continue to occur in order “to harmonize the emerging enterprise and foster measurable value, success, and sustainability.”
The cannabinoid content of legal cannabis in Washington State varies systematically across testing facilities and popular consumer products
In 2017, Washington State’s cannabis testing laboratories were going through a mini-crisis, with questions of credibility being raised about certain laboratories’ testing methodologies and practices. How much were differences in tested samples based on differences in methodologies? What more can be done by laboratories in the industry? This 2018 paper by Jikomes and Zoorob answers these questions and more, using “a large dataset from Washington State’s seed-to-sale traceability system.” Their analyses of this dataset showed “systematic differences in the results obtained by different testing facilities in Washington,” leading to “cannabinoid inflation” in certain labs. The authors conclude that despite the difficulties posed by cannabis remaining illegal at the federal level, efforts to standardize testing protocols must continue to be made in order to protect consumers and increase confidence in the testing system.
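The heart of a "cannabinoid inflation" analysis is comparing each lab's mean reported potency against the population-wide mean. A crude sketch of that comparison, with fabricated lab names, potencies, and threshold (the paper's actual analysis is far more rigorous, controlling for products and using proper statistics):

```python
from statistics import mean

# Fabricated reported THC percentages per lab; lab_B reports
# systematically higher values for comparable samples.
results = {
    "lab_A": [18.2, 19.1, 17.8, 18.5],
    "lab_B": [21.9, 22.4, 23.0, 22.1],
    "lab_C": [18.0, 18.7, 17.5, 18.9],
}

# Flag labs whose mean sits well above the overall mean.
overall = mean(v for vals in results.values() for v in vals)
flagged = [lab for lab, vals in results.items()
           if mean(vals) - overall > 2.0]  # arbitrary threshold, % THC
print(flagged)  # → ['lab_B']
```

A naive threshold like this cannot distinguish a lab that inflates results from one that simply receives more potent samples; that is exactly why the authors leaned on a traceability dataset large enough to compare like with like.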
We know that the research process creates data, and data is increasingly valuable. But can we visualize all the nooks and crannies that data is coming from in a more technologically connected society? Take for instance the food and agriculture sectors, which influence more than 20 percent of the U.S. economy. Herd data records for the dairy industry, pedigree information for the food animal industry, and soil and machine condition data from row crop farmers are only a few examples of data sources in these sectors. But what of the security of that data? Duncan et al. take a brief look at these sectors and provide insight into the ways that cybersecurity, biosecurity, and their intersections are increasingly important. They conclude by making suggestions for “[w]orkforce development, effective communication strategies, and cooperation across sectors and industries” to better “increase support and compliance, reducing the risks and providing increased protection for the U.S. bioeconomy.”
In this 2018 paper published in Sensors, Perez-Castillo et al. present DAQUA-MASS, their own take on the ISO 8000-61 data quality standard but updated for the world of the internet of things and smart, connected products (SCPs), in particular sensor networks. While recognizing that data quality has been studied significantly over the decades, little work has gone into the formal policies for SCP and sensor data quality. After presenting data challenges in SCP environments, related work, their model, and their methodology, the authors conclude that their “data quality model, along with the methodology, offers a unique framework to enable designers of IoT projects—including sensor networks and practitioners in charge of exploiting IoT systems—to assure that the business processes working over these systems can manage data with adequate levels of quality.”
Security architecture and protocol for trust verifications regarding the integrity of files stored in cloud services
Have you considered the security of your files hosted in the cloud? Of course you have, and reputable providers offer solid guarantees of safety. But what if you could monitor the status of your files, as well as the behavior of your cloud service provider? In this 2018 paper, Pinheiro et al. demonstrate that such monitoring is practical, using an architecture that includes a “protocol based on trust and encryption concepts to ensure cloud data integrity without compromising confidentiality and without overloading storage services.” They conclude that not only can such monitoring be done efficiently and rapidly, but also that the “architecture proved to be quite robust during the tests, satisfactorily responding to the fault simulations,” including unplanned server shutdown.
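To see how a client can check file integrity without holding a full copy of the file, consider a basic challenge-response scheme built on salted hashes. This is a simplified illustration in the spirit of such protocols, not the authors' actual design:

```python
import hashlib
import os

def provider_response(file_bytes, nonce):
    """What an honest storage provider returns for a given challenge nonce."""
    return hashlib.sha256(nonce + file_bytes).hexdigest()

original = b"quarterly-report.pdf contents"

# At upload time, the client precomputes responses for a batch of
# random nonces, then may discard its local copy of the file.
challenges = [os.urandom(16) for _ in range(3)]
expected = {n: provider_response(original, n) for n in challenges}

# Later, the client issues one unused nonce; because the nonce was
# unpredictable, the provider cannot answer without the intact file.
n = challenges[0]
intact = provider_response(original, n) == expected[n]
tampered = provider_response(original + b"!", n) == expected[n]
print(intact, tampered)  # → True False
```

Precomputing a finite list of challenges is the obvious weakness of this toy version; production schemes use techniques such as homomorphic verifiable tags to support unlimited challenges, which is where the real protocol-design work lies.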
For information technology professionals and informaticists alike, when handling data, the idea of “garbage in, garbage out” remains a popular refrain. Collecting data isn’t enough; its quality for future analysis, sharing, and use is also important. Similarly, with the growth of the internet, the amount of health-related information being pumped online increases, but its quality isn’t always attended to. In this 2018 paper by Al-Jefri et al., the topic of online health information quality (IQ) gets addressed in the form of a developed framework “that can be applied to websites and defines which IQ criteria are important for a website to be trustworthy and meet users’ expectations.” The authors conclude with various observations, including differences in how education, gender, and linguistic background affect users’ ability to gauge information quality, and an apparent overall lack of public concern for the ethical trustworthiness of online health information.
In this 2018 article published in Future Internet, Teixeira et al. test five machine learning algorithms in a supervisory control and data acquisition (SCADA) system testbed to determine whether machine learning is useful in cybersecurity research. Given the increasing number and sophistication of network-based attacks on industrial and research sensor networks (among others), the authors assessed the prior research of others in the field and integrated their findings into their own SCADA testbed dedicated to controlling a water storage tank. After training the algorithms and testing the system with attacks, they concluded that the Random Forest and Decision Tree algorithms were best suited for the task, showing “the feasibility of detecting reconnaissance attacks in [industrial control system] environments.”
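A decision tree detector of this kind learns thresholds on traffic features that separate normal operation from attack traffic. As a toy stand-in for the paper's tree-based models, the sketch below fits a single decision stump (a one-node tree) on one fabricated feature, distinct ports probed per minute; the data, feature, and threshold are all illustrative, not the authors' dataset:

```python
def fit_stump(samples):
    """samples: list of (feature_value, label), label 1 = attack.
    Returns the threshold t maximizing training accuracy for the
    rule 'predict attack when feature_value >= t'."""
    best_t, best_acc = None, -1.0
    for t in sorted(v for v, _ in samples):
        acc = sum((v >= t) == bool(y) for v, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Fabricated labeled traffic: port-scan reconnaissance probes many
# distinct ports per minute; normal traffic touches only a few.
traffic = [(2, 0), (3, 0), (1, 0), (40, 1), (55, 1), (4, 0), (62, 1)]

threshold = fit_stump(traffic)
predict = lambda v: int(v >= threshold)
print(predict(50), predict(3))  # → 1 0
```

Real decision trees and random forests stack many such splits over many features, but each node makes exactly this kind of learned threshold test, which is why they suit tabular network-traffic features well.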
Semantics for an integrative and immersive pipeline combining visualization and analysis of molecular data
The field of bioinformatics has really taken off over the past decade, and with it the number of data sources and the need for improved visualization tools, including in the realm of three-dimensional visualization of molecular data. As such, Trellet et al. have developed the infrastructure for “an integrated pipeline especially designed for immersive environments, promoting direct interactions on semantically linked 2D and 3D heterogeneous data, displayed in a common working space.” The group discusses in detail bioinformatics ontologies and semantic representation of bioinformatics knowledge, as well as voice-based query management within such a system. They conclude that their “pipeline might be a solid base for immersive analytics studies applied to structural biology,” including the ability to propose “contextualized analysis choices to the user” during interactive sessions.