Posted on January 28, 2016
By John Jones
Journal articles

Forensic science researcher Bruce Levy of the University of Illinois at Chicago presents his ideas on how forensic pathologists and clinical informaticians can address historical and current shortcomings in putting sudden and violent death data to better use. Levy argues that improved collaborations and data standards "will enable forensic pathology to maximize its effectiveness by providing timely and actionable information to public health and public safety agencies."
Posted on January 21, 2016
By John Jones
Journal articles

Nanomedicine researchers at the University of Southern Denmark required a "dedicated LIMS for the management of large vector construct and cell line libraries." With no other open-source options available, the team developed their own, OpenLabFramework. Documenting their process, List et al. conclude "OLF can be deployed using different database management systems either locally, to a server, or to the cloud," and "[t]he incorporation of modern technologies, such as mobile devices and printing of barcode labels may increase productivity even further."
Posted on January 12, 2016
By John Jones
Journal articles

In this research paper, Nguyen et al. of Monash University in Australia describe "the design, implementation and operation of a multi-modality research imaging data management system that manages imaging data obtained from biomedical imaging scanners" at their facility. Faced with the limitations of existing image management software and frameworks, the group custom built a system based on DaRIS and XNAT, "designed and implemented to enable researchers to acquire, manage and analyse large, longitudinal biomedical imaging datasets."
Posted on January 11, 2016
By John Jones
Journal articles

Stahl et al. of the Universités de Montpellier needed a system that could "streamline [biological] data storage and annotation collaboratively." Not finding a system to their liking, the group developed a Joomla!-based LIMS called Djeen and published their research on the development process in 2013. The group concludes: "Djeen allows managing project associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group."
Posted on December 28, 2015
By John Jones
Journal articles

In 2010, O'Connor et al. published a paper on their experience developing and implementing a cloud-based query engine that can support thousands of genome datasets. Still actively developed as of 2015, the SeqWare Query Engine "can load and query variants (SNVs, indels, translocations, etc) with a rich level of annotations including coverage and functional consequences." The research team concluded in their paper that the software was then "capable of supporting large scale genome sequencing research projects involving hundreds to thousands of genomes as well as future large scale clinical deployments utilizing advanced sequencer technology that will soon involve tens to hundreds of thousands of genomes."
Posted on December 23, 2015
By John Jones
Journal articles

In 2015, Singh et al. published their notes and reflections on developing SaDA, designed "for storing, retrieving and analyzing data originated from microorganism monitoring experiments." The group developed the software after discovering a lack of free, open-source software for microarray data management and analysis. The group concluded that "the platform has the potential to become an appropriate tool for a wide range of users focused not only in water based environmental research but also in other studies aimed at exploring and analyzing complex ecological habitats."
Posted on December 23, 2015
By John Jones
Journal articles

Seeking to overcome some of the challenges of massively parallel DNA sequencing, Reid et al. developed the Mercury analysis pipeline and deployed it on Amazon Web Services. Publishing their results in 2014 in BMC Bioinformatics, the group demonstrated the success of "a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples."
Posted on December 17, 2015
By John Jones
Journal articles

Appearing in the magazine Open Source Business Resource (today called Technology Innovation Management Review) in 2011, this non-journal article by Sandro Groganz describes how "[o]pen source vendors can benefit from business ecosystems that form around their products," using open-source OXID eShop and its vendor OXID eSales as a representative example. Groganz concludes that "an open source community is not a state but a process," one that "creates a foundation for long-term growth and sustainability" when appropriately embraced.
Posted on December 16, 2015
By John Jones
Journal articles

The San Raffaele Telethon Institute for Gene Therapy wanted a new LIMS to manage the data coming from their PCR techniques and next generation sequencing (NGS) methods. Not finding something suitable, the group developed its own LIMS, adLIMS. This 2015 research paper by Calabria et al., published in BMC Bioinformatics, covers the academic aspects of the information collection and development process.
Posted on December 9, 2015
By John Jones
Journal articles

This 2014 research published in BMC Bioinformatics sees Grimes and Ji presenting their "highly configurable and extensible" web-based laboratory information management system (LIMS) for next generation DNA sequencing, MendeLIMS. The group concludes that the software is "an invaluable tool for the management of our clinical sequencing studies," primarily due to its ability to reduce sample tracking errors and give "a comprehensive view of data being sequenced."
Posted on December 2, 2015
By John Jones
Journal articles

In this 2014 article published in BMC Bioinformatics, Dander et al., not content with the disparity among existing cancer treatment and bioinformatics platforms, discussed the results of creating Personalized Oncology Suite (POS). The web-based, scalable software system "integrates clinical data, NGS data and whole-slide bioimages from tissue sections" and, as the team concludes, "can be used not only in the context of cancer immunology but also in other studies in which NGS data and images of tissue sections are generated."
Posted on November 24, 2015
By John Jones
Journal articles

In this 2015 paper published in the Journal of Cheminformatics, Munkhdalai et al. express their belief that named entity recognition (NER) is vital to future text mining efforts in the biochemical sciences. As such, the researchers set about creating a scalable biomedical NER system utilizing natural language processing (NLP) tasks. The group concluded that their work has yielded a well-performing "integrated system that can be applied for chemical and drug NER or biomedical NER."
Posted on November 17, 2015
By John Jones
Journal articles

In this 2015 open-access research paper published in PeerJ Computer Science, Ganzinger and Knaup present a reference model for developing information technology infrastructure for biomedical research networks. The researchers found the model to be useful when "used by research networks as a basis for a resource efficient acquisition of their project specific requirements."
Posted on November 14, 2015
By John Jones
Journal articles

In 2013, Barker et al. concluded their testing of a home-grown open-access undergraduate bioinformatics course called 4273π Bioinformatics for Biologists. Happy with the results of their course, the group published a paper about it in BMC Bioinformatics, concluding their Raspberry Pi-based "4273π [operating system] is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost."
Posted on November 14, 2015
By John Jones
Journal articles

The topic of teaching methodologies in bioinformatics at the pre-university level is brought up by Barker et al. in this 2015 journal article published in the International Journal of STEM Education. Using Raspberry Pi and 4273π in their implementation, the group found that "our preliminary study supports the feasibility of bringing university-level, practical bioinformatics activities to school pupils."
Posted on November 14, 2015
By John Jones
Journal articles

Zheng et al. "developed an online machine learning based information extraction system called IDEAL-X" in 2015. They published the results of their experience in the Journal of Pathology Informatics, concluding "[b]y combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search."
Posted on November 14, 2015
By John Jones
Journal articles

This 2014 article published in JMIR Medical Informatics sees Kruse et al. attempting to identify internal and external factors that affect health information technology (HIT) adoption. The group concludes that "[c]ommonalities exist in the literature for internal organizational and external environmental factors associated with the adoption of the EHR and/or CPOE."
Posted on October 13, 2015
By John Jones
Journal articles

Published for the first time on LIMSwiki, this 2015 article from Dr. John Joyce explains how he developed a general starting procedure for screening free/libre and open-source software (FLOSS) applications to determine which best merit in-depth evaluation. In the article summary, Joyce states that the end result is a high-level survey tool "synthesized [from] a general survey process to allow us to quickly assess the status of any given type of FLOSS applications, allowing us to triage them and identify the most promising candidates for in-depth evaluation."
Posted on October 5, 2015
By John Jones
Journal articles

In 2013, Deroulers et al. published in the journal Diagnostic Pathology an account of their research and development of open-source software tools for quickly opening large pathology images. The group concludes that their new tools "open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster."
Posted on September 22, 2015
By John Jones
Journal articles

This 2014 article by Hersh et al. represents an effort to better describe the competencies of clinical informatics as they relate to curriculum development at the Oregon Health & Science University. The group concludes by identifying "a substantial number of informatics competencies and a large body of associated knowledge that the 21st century clinician needs to learn and apply," though not without recognizing that their effectiveness must still be evaluated.