Efficient sample tracking with OpenLabFramework

Nanomedicine researchers at the University of Southern Denmark required a "dedicated LIMS for the management of large vector construct and cell line libraries." With no other open-source options available, the team developed their own, OpenLabFramework. Documenting their process, List et al. conclude "OLF can be deployed using different database management systems either locally, to a server, or to the cloud," and "[t]he incorporation of modern technologies, such as mobile devices and printing of barcode labels may increase productivity even further."

Design, implementation and operation of a multimodality research imaging informatics repository

This research paper by Nguyen et al. of Monash University in Australia "describes the design, implementation and operation of a multi-modality research imaging data management system that manages imaging data obtained from biomedical imaging scanners" at their facility. Faced with the limitations of existing image management software and frameworks, the group custom built a system based on DaRIS and XNAT "to enable researchers to acquire, manage and analyse large, longitudinal biomedical imaging datasets."

Djeen (Database for Joomla!’s Extensible Engine): A research information management system for flexible multi-technology project administration

Stahl et al. of the Universités de Montpellier needed a system that could "streamline [biological] data storage and annotation collaboratively." Not finding a system to their liking, the group developed a Joomla!-based LIMS called Djeen and published their research on the development process in 2013. The group concludes: "Djeen allows managing project[s] associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group."

SeqWare Query Engine: Storing and searching sequence data in the cloud

In 2010, O'Connor et al. published a paper on their experience developing and implementing a cloud-based query engine that can support thousands of genome datasets. Still actively developed as of 2015, the SeqWare Query Engine "can load and query variants (SNVs, indels, translocations, etc) with a rich level of annotations including coverage and functional consequences." The research team concluded in their paper that the software was then "capable of supporting large scale genome sequencing research projects involving hundreds to thousands of genomes as well as future large scale clinical deployments utilizing advanced sequencer technology that will soon involve tens to hundreds of thousands of genomes."

SaDA: From sampling to data analysis—An extensible open source infrastructure for rapid, robust and automated management and analysis of modern ecological high-throughput microarray data

In 2015, Singh et al. published their notes and reflections on developing SaDA, designed "for storing, retrieving and analyzing data originated from microorganism monitoring experiments." The group developed the software after discovering a lack of free, open-source software for microarray data management and analysis. The group concluded that "the platform has the potential to become an appropriate tool for a wide range of users focused not only in water based environmental research but also in other studies aimed at exploring and analyzing complex ecological habitats."

Launching genomics into the cloud: Deployment of Mercury, a next generation sequence analysis pipeline

Seeking to overcome some of the challenges of massively parallel DNA sequencing, Reid et al. developed the Mercury analysis pipeline and deployed it on Amazon Web Services. Publishing their results in 2014 in BMC Bioinformatics, the group demonstrated the success of "a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples."

Benefits of the community for partners of open source vendors

Appearing in the magazine Open Source Business Resource (today called Technology Innovation Management Review) in 2011, this non-journal article by Sandro Groganz describes how "[o]pen source vendors can benefit from business ecosystems that form around their products," using open-source OXID eShop and its vendor OXID eSales as a representative example. Groganz concludes that "an open source community is not a state but a process," one that "creates a foundation for long-term growth and sustainability" when appropriately embraced.

adLIMS: A customized open source software that allows bridging clinical and basic molecular research studies

The San Raffaele Telethon Institute for Gene Therapy wanted a new LIMS to manage the data coming from their PCR techniques and next generation sequencing (NGS) methods. Not finding something suitable, the group developed its own LIMS, adLIMS. This 2015 research paper by Calabria et al., published in BMC Bioinformatics, covers the academic aspects of the information collection and development process.

MendeLIMS: A web-based laboratory information management system for clinical genome sequencing

This 2014 research published in BMC Bioinformatics sees Grimes and Ji presenting their "highly configurable and extensible" web-based laboratory information management system (LIMS) for next generation DNA sequencing, MendeLIMS. The group concludes that the software is "an invaluable tool for the management of our clinical sequencing studies," primarily due to its ability to reduce sample tracking errors and give "a comprehensive view of data being sequenced."

Personalized Oncology Suite: Integrating next-generation sequencing data and whole-slide bioimages

In this 2014 article published in BMC Bioinformatics, Dander et al., not content with the disparity among existing cancer treatment and bioinformatics platforms, discussed the results of creating Personalized Oncology Suite (POS). The web-based, scalable software system "integrates clinical data, NGS data and whole-slide bioimages from tissue sections" and, as the team concludes, "can be used not only in the context of cancer immunology but also in other studies in which NGS data and images of tissue sections are generated."

Incorporating domain knowledge in chemical and biomedical named entity recognition with word representations

In this 2015 paper published in the Journal of Cheminformatics, Munkhdalai et al. express their belief that named entity recognition (NER) is vital to future text mining efforts in the biochemical sciences. As such, the researchers set about creating a scalable biomedical NER system utilizing natural language processing (NLP) tasks. The group concluded that their work has yielded a well-performing "integrated system that can be applied for chemical and drug NER or biomedical NER."

Requirements for data integration platforms in biomedical research networks: A reference model

In this 2015 open-access research paper published in PeerJ Computer Science, Ganzinger and Knaup present a reference model for developing information technology infrastructure for biomedical research networks. The researchers found the model to be useful when "used by research networks as a basis for a resource efficient acquisition of their project specific requirements."

4273π: Bioinformatics education on low cost ARM hardware

In 2013, Barker et al. concluded their testing of a home-grown open-access undergraduate bioinformatics course called 4273π Bioinformatics for Biologists. Happy with the results of their course, the group published a paper about it in BMC Bioinformatics, concluding their Raspberry Pi-based "4273π [operating system] is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost."

University-level practical activities in bioinformatics benefit voluntary groups of pupils in the last 2 years of school

The topic of teaching methodologies in bioinformatics at the pre-university level is brought up by Barker et al. in this 2015 journal article published in the International Journal of STEM Education. Using Raspberry Pi and 4273π in their implementation, the group found that "our preliminary study supports the feasibility of bringing university-level, practical bioinformatics activities to school pupils."

Support patient search on pathology reports with interactive online learning based data extraction

Zheng et al. "developed an online machine learning based information extraction system called IDEAL-X" in 2015. They published the results of their experience in the Journal of Pathology Informatics, concluding "[b]y combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search."

Factors associated with adoption of health information technology: A conceptual model based on a systematic review

This 2014 article published in JMIR Medical Informatics sees Kruse et al. attempting to identify internal and external factors that affect health information technology (HIT) adoption. The group concludes that "[c]ommonalities exist in the literature for internal organizational and external environmental factors associated with the adoption of the EHR and/or CPOE."

Generalized procedure for screening free software and open-source software applications

Published for the first time on LIMSwiki, this 2015 article from Dr. John Joyce explains how he developed a general starting procedure for screening free and open-source applications to determine which one is best for use. In the article summary, Joyce states that the end result is a high-level survey tool "synthesized [from] a general survey process to allow us to quickly assess the status of any given type of FLOSS applications, allowing us to triage them and identify the most promising candidates for in-depth evaluation."

Analyzing huge pathology images with open source software

In 2013, Deroulers et al. published in the journal Diagnostic Pathology their research and development story of open-source software tools for quickly opening large pathology images. The group concludes that their new tools "open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster."

Beyond information retrieval and electronic health record use: Competencies in clinical informatics for medical education

This 2014 article by Hersh et al. represents an effort to better describe the competencies of clinical informatics as they relate to curriculum development at the Oregon Health & Science University. The group concludes with "a substantial number of informatics competencies and a large body of associated knowledge that the 21st century clinician needs to learn and apply," though not without recognizing their effectiveness must still be evaluated.

Basics of case report form designing in clinical research

In this 2014 article published in Perspectives in Clinical Research, Bellary et al. argue that the development of electronic case report forms (eCRFs) requires clear standardization, organization, and implementation. The team concludes that "it is important to have design principles in mind well in advance before CRF designing is initiated" or risk poorly collected data and lack of user-friendliness.