In 2010, O’Connor et al. published a paper on their experience developing and implementing a cloud-based query engine that can support thousands of genome datasets. Still actively developed as of 2015, the SeqWare Query Engine “can load and query variants (SNVs, indels, translocations, etc) with a rich level of annotations including coverage and functional consequences.” The research team concluded in their paper that the software was then “capable of supporting large scale genome sequencing research projects involving hundreds to thousands of genomes as well as future large scale clinical deployments utilizing advanced sequencer technology that will soon involve tens to hundreds of thousands of genomes.”
SaDA: From sampling to data analysis—An extensible open source infrastructure for rapid, robust and automated management and analysis of modern ecological high-throughput microarray data
In 2015, Singh et al. published their notes and reflections on developing SaDA, designed “for storing, retrieving and analyzing data originated from microorganism monitoring experiments.” The group developed the software after discovering a lack of free, open-source software for microarray data management and analysis. The group concluded that “the platform has the potential to become an appropriate tool for a wide range of users focused not only in water based environmental research but also in other studies aimed at exploring and analyzing complex ecological habitats.”
Launching genomics into the cloud: Deployment of Mercury, a next generation sequence analysis pipeline
Seeking to overcome some of the challenges of massively parallel DNA sequencing, Reid et al. developed the Mercury analysis pipeline and deployed it on Amazon Web Services. Publishing their results in 2014 in BMC Bioinformatics, the group demonstrated the success of “a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.”
Appearing in the magazine Open Source Business Resource (today called Technology Innovation Management Review) in 2011, this non-journal article by Sandro Groganz describes how “[o]pen source vendors can benefit from business ecosystems that form around their products,” using open-source OXID eShop and its vendor OXID eSales as a representative example. Groganz concludes that “an open source community is not a state but a process,” one that “creates a foundation for long-term growth and sustainability” when appropriately embraced.
adLIMS: A customized open source software that allows bridging clinical and basic molecular research studies
The San Raffaele Telethon Institute for Gene Therapy wanted a new LIMS to manage the data coming from their PCR techniques and next generation sequencing (NGS) methods. Not finding something suitable, the group developed its own LIMS, adLIMS. This 2015 research paper by Calabria et al., published in BMC Bioinformatics, covers the academic aspects of the information collection and development process.
This 2014 research published in BMC Bioinformatics sees Grimes and Ji presenting their “highly configurable and extensible” web-based laboratory information management system (LIMS) for next generation DNA sequencing, MendeLIMS. The group concludes that the software is “an invaluable tool for the management of our clinical sequencing studies,” primarily due to its ability to reduce sample tracking errors and give “a comprehensive view of data being sequenced.”
In this 2014 article published in BMC Bioinformatics, Dander et al., not content with the disparity among existing cancer treatment and bioinformatics platforms, discussed the results of creating Personalized Oncology Suite (POS). The web-based, scalable software system “integrates clinical data, NGS data and whole-slide bioimages from tissue sections” and, as the team concludes, “can be used not only in the context of cancer immunology but also in other studies in which NGS data and images of tissue sections are generated.”
Incorporating domain knowledge in chemical and biomedical named entity recognition with word representations
Munkhdalai et al. express their belief that named entity recognition (NER) is vital to future text mining efforts in the biochemical sciences in this 2015 paper published in Journal of Cheminformatics. As such, the researchers set out to create a scalable biomedical NER system utilizing natural language processing (NLP) tasks. The group concluded that their work has yielded a well-performing “integrated system that can be applied for chemical and drug NER or biomedical NER.”
In this 2015 open-access research paper published in PeerJ Computer Science, Ganzinger and Knaup present a reference model for developing information technology infrastructure for biomedical research networks. The researchers found the model to be useful when “used by research networks as a basis for a resource efficient acquisition of their project specific requirements.”
In 2013, Barker et al. concluded their testing of a home-grown open-access undergraduate bioinformatics course called 4273π Bioinformatics for Biologists. Happy with the results of their course, the group published a paper about it in BMC Bioinformatics, concluding that their Raspberry Pi-based “4273π [operating system] is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.”
University-level practical activities in bioinformatics benefit voluntary groups of pupils in the last 2 years of school
The topic of teaching methodologies in bioinformatics at the pre-university level is brought up by Barker et al. in this 2015 journal article published in the International Journal of STEM Education. Using Raspberry Pi and 4273π in their implementation, the group found that “our preliminary study supports the feasibility of bringing university-level, practical bioinformatics activities to school pupils.”
Zheng et al. “developed an online machine learning based information extraction system called IDEAL-X” in 2015. They published the results of their experience in the Journal of Pathology Informatics, concluding “[b]y combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search.”
Factors associated with adoption of health information technology: A conceptual model based on a systematic review
This 2014 article published by JMIR Medical Informatics sees Kruse et al. attempting to identify internal and external factors that affect health information technology (HIT) adoption. The group concludes that “[c]ommonalities exist in the literature for internal organizational and external environmental factors associated with the adoption of the EHR and/or CPOE.”
Published for the first time on LIMSwiki, this 2015 article from Dr. John Joyce explains how he developed a general starting procedure for screening free and open-source applications to determine which are best suited for use. In the article summary, Joyce states that the end result is a high-level survey tool “synthesized [from] a general survey process to allow us to quickly assess the status of any given type of FLOSS applications, allowing us to triage them and identify the most promising candidates for in-depth evaluation.”
In 2013, Deroulers et al. published in the journal Diagnostic Pathology their research and development story of open-source software tools for quickly opening large pathology images. The group concludes that their new tools “open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster.”
Beyond information retrieval and electronic health record use: Competencies in clinical informatics for medical education
This 2014 article by Hersh et al. represents an effort to better describe the competencies of clinical informatics as they relate to curriculum development at the Oregon Health & Science University. The group concludes by identifying “a substantial number of informatics competencies and a large body of associated knowledge that the 21st century clinician needs to learn and apply,” though not without recognizing their effectiveness must still be evaluated.
In this 2014 article published in Perspectives in Clinical Research, Bellary et al. argue that the development of electronic case report forms (eCRFs) requires clear standardization, organization, and implementation. The team concludes that “it is important to have design principles in mind well in advance before CRF designing is initiated” or risk poorly collected data and lack of user-friendliness.
iLAP: A workflow-driven software for experimental protocol development, data acquisition and analysis
Noticing a lack of software tools that would both assist with the “management of large datasets and digital recording of laboratory procedures,” Stocker et al. set out in 2009 to create iLAP, “a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data.”
Jennifer Alford-Teaster et al. suggest in an issue of Journal of Health & Medical Informatics that integrating geoinformatics with medical- and health-related data could provide many benefits. They conclude that such an integration “has high potential for advancing currently used research methods to monitor and evaluate new technologies as they are translated from experimental settings into communities and populations.”
In this article, originally published in BMC Medical Informatics & Decision Making, Sharon Mickan et al. ask whether handheld computers improve physicians’ information access and support clinical decision making at the point of care. The group concludes that such technology may in fact “improve [physicians’] information seeking, adherence to guidelines and clinical decision making.”