What are the links between pathology in the clinical setting and bioinformatics? Are residents in pathology gaining enough training in bioinformatics? And why should they learn about it in the first place? Clay and Fisher provide their take on these questions in this 2017 review paper published in Cancer Informatics. From their point of view, bioinformatics training is vital to practicing pathology “in the ‘information age’ of diagnostic medicine,” and training should be taken more seriously at the residency and fellowship levels. They conclude that “in order for bioinformatics education to firmly integrate into the fabric of resident education, its importance and broad application to the practice of pathology must be recognized and given a prominent seat at the education table.”
This 2015 paper by Faria-Campos et al. of the Brazilian Universidade Federal de Minas Gerais presents the reader with an overview of FluxCTTX, essentially a cytotoxicity module for the Flux laboratory information management system (LIMS). Citing a lack of laboratory informatics tools that can handle the specifics of cytotoxicity assays, the group developed FluxCTTX and tested it in five different laboratory environments, concluding that it can better “guarantee the quality of activities in the process of cytotoxicity tests and enforce the use of good laboratory practices (GLP).”
PathEdEx – Uncovering high-explanatory visual diagnostics heuristics using digital pathology and multiscale gaze data
The visual and non-visual analysis techniques of pathology diagnosis are still in their relative infancy, representing a complex conglomeration of various data sources and techniques “which currently can be described more like a subjective exercise than a well-defined protocol.” Enter Shin et al., who developed the PathEdEx system, an informatics computational framework designed to pair digital pathology images with pathologists’ eye/gaze patterns associated with those images to better understand and develop diagnostic methods and educational material for future pathologists. “All in all,” they conclude, “the PathEdEx informatics tools have great potential to uncover, quantify, and study pathology diagnostic heuristics and can pave a path for precision diagnostics in the era of precision medicine.”
Where are electronic laboratory notebooks (ELNs) going, and what do they lack? How does data from several user groups paint the picture of the ELN and its functionality and shortcomings? In this 2017 paper published in Journal of Cheminformatics, researchers from the University of Southampton and BioSistemika examine the market and users of ELNs/paper laboratory notebooks, intent on identifying areas in which ELNs could be improved. They conclude that optimally there should be “an ELN environment that can serve as an interface between paper lab notebooks and the electronic documents that scientists create,” one that is interoperable and utilizes semantic web and cloud technologies, particularly given that “current technology is such that it is desirable that ELN solutions work alongside paper for the foreseeable future.”
Like many other fields of science, earth science is increasingly swimming in data. Unlike other fields, the discussion of earth science data and its analysis hasn’t been as vigorous, comparatively speaking. Kempler and Mathews of NASA speak of their efforts within the Earth Science Information Partners’ (ESIP) earth science data analytics (ESDA) program in this 2017 paper published in Data Science Journal. The duo shares their experiences and provides a glossary of ESDA terms “that is useful in articulating data management and research needs, as well as a working list of techniques and skills relevant to the different types of ESDA.”
This brief paper published in BMC Bioinformatics provides a sociological examination of the world of bioinformatics and how it’s perceived institutionally. Bartlett et al. argue that, institutionally, less focus is placed on bioinformatics processes and more on the data input and output, putting the contributions of bioinformaticists into a “black box” and losing scientific credit in the process. The researchers conclude that “[i]n the pursuit of relevance and impact, future scientific careers will increasingly involve playing the role of a fractional scientist … combining a variety of expertise and epistemic aspirations…” to become “tomorrow’s bioinformatic scientists.”
In this 2017 journal article published in the Data Science Journal, data scientist Sabina Leonelli reflects on a paradigm shift in biology where “specific data production technologies [are used] as proxy for assessing data quality,” creating problems along the way, particularly for the open data movement. Leonelli’s major concern: “Ethnographic research carried out in such environments evidences a widespread fear among researchers that providing extensive information about their experimental set-up will affect the perceived quality of their data, making their findings vulnerable to criticisms by better-resourced peers,” hindering data and provenance sharing. And the conclusion? Endorsing specific data production and management technologies as indicators of data quality can cloud the goals of open data initiatives.
In this 2017 article published in Frontiers in Neuroinformatics, Grigis et al., like many before them, note that data in scientific areas of research such as genetics, imaging, and the social sciences have become “massive, heterogeneous, and complex.” Their solution is a Python-based one that integrates the CubicWeb open-source semantic framework and other tools “to overcome the challenges associated with data sharing and collaborative requirements” found in population imaging studies. The resulting data sharing service (DSS) proves to be flexible, easy to integrate, and expandable on demand, they conclude.
Wikipedia defines bibliometrics as a “statistical analysis of written publications, such as books or articles.” Related to information and library science, bibliometrics has been helping researchers make better sense of the trends and impacts made across numerous fields. In this 2017 paper, Heo et al. use bibliometric methods new and old to examine the field of bioinformatics via related journals over a period of 20 years to better understand how the field has changed in that time. They conclude that “the characteristics of the bioinformatics field become more distinct and more specific, and the supporting role of peripheral fields of bioinformatics, such as conceptual, mathematical, and systems biology, gradually increases over time, though the core fields of proteomics, genomics, and genetics are still the major topics.”
As Khan and Mathelier note in their abstract, one of the more common tasks of a bioinformatician is to take lists of genes or genomic regions from high-throughput sequencing and compare them visually. Noting the lack of a comprehensive tool to visualize such complex datasets, the authors developed Intervene, a tool for computing intersections of multiple genomic and list sets. They conclude that “Intervene is the first tool to provide three types of visualization approaches for multiple sets of gene or genomic intervals,” and they have made the source code, web app, and documentation freely available to the public.
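The kind of multi-set comparison Intervene automates can be illustrated with a minimal sketch. This is not Intervene’s API; the set names and gene lists below are invented for illustration, and the sketch only computes the pairwise and common intersections that would feed a heatmap- or Venn-style view.

```python
from itertools import combinations

# Hypothetical gene lists standing in for the kinds of sets a
# bioinformatician might compare; names and contents are illustrative.
gene_sets = {
    "ChIP_A": {"TP53", "MYC", "EGFR", "BRCA1"},
    "ChIP_B": {"MYC", "EGFR", "KRAS"},
    "RNA_up": {"EGFR", "KRAS", "TP53", "CDK4"},
}

# Pairwise intersection sizes: the raw numbers behind a heatmap view.
pairwise = {
    (a, b): len(gene_sets[a] & gene_sets[b])
    for a, b in combinations(gene_sets, 2)
}

# Elements shared by every set: the center region of a Venn-style view.
common = set.intersection(*gene_sets.values())

print(pairwise)
print(common)  # {'EGFR'}
```

Scaling this up to dozens of sets is exactly where a dedicated tool earns its keep: the number of possible intersection regions grows exponentially, which is why Intervene also offers UpSet-style plots rather than only Venn diagrams.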
Users’ perspectives on a picture archiving and communication system (PACS): An in-depth study in a teaching hospital in Kuwait
The picture archiving and communication system (PACS) is an increasingly important information management component of hospitals and medical centers, allowing for the digital acquisition, archiving, communication, retrieval, processing, distribution, and display of medical images. But do staff members using it find that a PACS makes their job easier and more effective? This journal article by Buabbas et al. represents another attempt by medical researchers to quantify and qualify the impact of the PACS on radiologists and technologists using the system. In their case, the authors concluded that “[d]espite some of the technical limitations of the infrastructure, most of the respondents rated the system positively and as user-friendly” but, like any information system, there are still a few areas of improvement that need attention.
Effective information extraction framework for heterogeneous clinical reports using online machine learning and controlled vocabularies
Even in the digital realm (think electronic medical records), extracting usable information from narrated medical reports can be a challenge given heterogeneous data structures and vocabularies. While many systems have been created over the years to tackle this task, researchers from Emory and Stony Brook University have taken a different approach: online learning. Here Zheng et al. present their methodology and findings associated with their Information and Data Extraction using Adaptive Online Learning (IDEAL-X) system, concluding that “the online learning–based method combined with controlled vocabularies for data extraction from reports with various structural patterns … is highly effective.”
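The interplay of a controlled vocabulary with online learning can be sketched in a few lines. To be clear, this is not the authors’ IDEAL-X implementation: the vocabulary, report text, and the `extract`/`learn` helper functions are all invented to illustrate the general pattern of matching terms against a vocabulary and folding user corrections back in so that later reports benefit.

```python
import re

# Illustrative controlled vocabulary mapping terms to categories.
vocabulary = {"adenocarcinoma": "DIAGNOSIS", "carcinoma": "DIAGNOSIS"}

def extract(report: str, vocab: dict) -> dict:
    """Return {category: term} for each vocabulary term found in the report."""
    found = {}
    for term, category in vocab.items():
        if re.search(r"\b" + re.escape(term) + r"\b", report, re.IGNORECASE):
            found.setdefault(category, term)
    return found

def learn(vocab: dict, term: str, category: str) -> None:
    """Fold a user-confirmed term back into the vocabulary (the 'online' step)."""
    vocab[term.lower()] = category

report = "Findings consistent with squamous cell carcinoma, grade 2."
print(extract(report, vocabulary))  # only "carcinoma" is matched so far

# A reviewer tags a term the vocabulary missed; re-extraction now catches it.
learn(vocabulary, "grade 2", "GRADE")
print(extract(report, vocabulary))
```

The appeal of the online approach is visible even in this toy: no retraining pass over a corpus is needed, since each correction immediately updates the model used for the next report.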
Selecting a laboratory information management system for biorepositories in low- and middle-income countries: The H3Africa experience and lessons learned
What’s important for a biorepository laboratory information management system (LIMS), and what options are out there? What unique constraints in Africa make that selection more difficult? This brief 2017 paper from the Human Heredity and Health in Africa (H3Africa) Consortium outlines their take on finding the right LIMS solution for three of their regional biorepositories in Africa. The group emphasizes in the end that “[c]hoosing a LIMS in low- and middle-income countries requires careful consideration of the various factors that could affect its successful and sustainable deployment and use.”
Baobab Laboratory Information Management System: Development of an open-source laboratory information management system for biobanking
This journal article, published in Biopreservation and Biobanking in early 2017, presents the development philosophy and implementation of a custom-modified version of Bika LIMS called Baobab LIMS, designed for biobank clients and researchers. Bendou et al., who enlisted customization help directly from Bika Lab Systems, describe how “[t]he need to implement biobank standard operation procedures as well as stimulate the use of standards for biobank data representation motivated the implementation of Baobab LIMS, an open-source LIMS for biobanking.” The group concludes that while the open-source LIMS is quite usable as is, it will require further development of more “generic and configurable workflows.” Despite this, the authors anticipate that the software will be useful to the biobanking community.
Most scientists know that much of the data created in academic research efforts ends up being locked away in silos, difficult to share with others. But what are scientists doing about it? In this 2016 paper published in Scientific Data, Wilkinson et al. outline a distinct set of principles created to reduce those information silos: the FAIR Principles. The authors state the primary goal of the FAIR Principles is to “put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals.” After describing the principles and giving examples of projects that adhere to them, the authors conclude that the principles have the potential to “guide the implementation of the most basic levels of good Data Management and Stewardship practice, thus helping researchers adhere to the expectations and requirements of their funding agencies.”
The problem? Disparate data sources, from weather and wave forecasts to navigation charts and natural hazard assessments, made oceanography research in Southern Italy more cumbersome. Solution? Create a secure, standardized, and interoperable data platform that can merge all that and other information together into one powerful and easy-to-use platform. Thus the TESSA (Development of Technology for Situational Sea Awareness) program was born. D’Anca et al. discuss the creation and use of TESSA as a geospatial tool that merges real-time and archived data to help researchers in Southern Italy. The authors conclude that TESSA is “a valid prototype easily adopted to provide an efficient dissemination of maritime data and a consolidation of the management of operational oceanographic activities,” even in other parts of the world.
MASTR-MS: A web-based collaborative laboratory information management system (LIMS) for metabolomics
In development since at least the summer of 2009, the open-source MASTR-MS laboratory information management system (LIMS) was designed to better handle the data and metadata of metabolomics, the study of an entity’s metabolites. In this 2017 paper published in Metabolomics, the development team of MASTR-MS discuss the current state of their LIMS, how it’s being used, and what the future holds for it. They conclude by stating the software’s “comprehensive functions and features enable researchers and facilities to effectively manage a wide range of different project and experimental data types, and it facilitates the mining of new and existing [metabolomic] datasets.”
The effect of a test ordering software intervention on the prescription of unnecessary laboratory tests – A randomized controlled trial
When designing something as simple as a menu of laboratory tests into a piece of clinical software, it’s relatively easy to not think of the ramifications of the contents of such a menu. In this 2017 article published in BMC Medical Informatics and Decision Making, Martins et al. argue that there are consequences to what’s included in a laboratory test drop-down menu, primarily that the presence — or lack thereof — of a test type may influence how frequently that test is prescribed. The group concludes that “[r]emoving unnecessary tests from a quick shortcut menu of the diagnosis and laboratory tests ordering system had a significant impact and reduced unnecessary prescription of tests,” which in turn led “to the reduction of negative patient effects and to the reduction of unnecessary costs.”
In this journal article published in JMIR Medical Informatics in 2017, Alsaffar et al. review research from mid-2014 that looked at the state of open-source electronic health record (EHR) systems, primarily via SourceForge. The authors, noting a lack of research concerning the demographics and motivation of open-source EHR projects, present their findings, concluding that “lack of a corporate entity in most F/OSS EHR projects translates to a marginal capacity to market the respective F/OSS system and to navigate [HITECH] certification.”
PCM-SABRE: A platform for benchmarking and comparing outcome prediction methods in precision cancer medicine
In this 2017 paper published in BMC Bioinformatics, Eyal-Altman et al. explain the use and benefits of their KNIME-based cancer outcome analysis software PCM-SABRE (Precision Cancer Medicine – Survival Analysis Benchmarking, Reporting and Evaluation). The group demonstrates its effectiveness by reconstructing the previous work of Chou et al., showing how such a tool improves reproducibility. The researchers conclude that when used effectively, PCM-SABRE’s “resulting pipeline can be shared with others in an intuitive yet executable way, which will improve, if adopted by other investigators, the comparability and interpretability of future works attempting to predict patient survival from gene expression data.”