Development of an electronic information system for the management of laboratory data of tuberculosis and atypical mycobacteria at the Pasteur Institute in Côte d’Ivoire

In this 2019 paper, Koné et al. of the Pasteur Institute of Côte d’Ivoire provide insight into their self-developed laboratory information system (LIS) specifically designed to meet the needs of clinicians treating patients infected with Mycobacterium tuberculosis. After discussing its design, architecture, installation, training sessions, and assessment, the group describes system launch and how its laboratorians perceived the change from paper to digital. With some discussion, they conclude that the conversion has given them improved, more real-time “indicators on the follow-up of samples, the activity carried out in the laboratory, and the state of resistance to antituberculosis treatments.”

Codesign of the Population Health Information Management System to measure reach and practice change of childhood obesity programs

Attempting to implement a regional public health initiative affecting thousands of children is daunting enough, but collecting, analyzing, and reporting the critical data that demonstrates efficacy can be even more challenging. This 2018 article published in Public Health Research & Practice demonstrates one approach to such an endeavor in New South Wales, Australia. Green et al. discuss the design and implementation of their Population Health Information Management System (PHIMS) to integrate and act upon data associated with not one but two related public health programs targeting the prevention of childhood obesity. The article also discusses some of the challenges of the project, from funding and training all 15 New South Wales local health districts to ensuring support across the districts for consistent operation and security despite differing IT infrastructures. They conclude that despite the challenges, their award-winning PHIMS solution has been vital to the two programs’ success.

Open data in scientific communication

In this brief paper published in Folia Forestalia Polonica, Series A – Forestry, Dorota Grygoruk of Poland’s Forest Research Institute presents the development of the open data concept within the context of Poland and other countries, while also addressing how data sharing and management are challenged by the paradigm. Grygoruk first defines the open data and open access concepts and then describes how policy in Poland and the European Union has been adopted to specify those concepts within institutions. The author then analyzes the challenges of implementing data sharing inherent to research data management, including within the context of forestry informatics. The conclusion? The “organizational and technological solutions that enable analysis” are increasingly vital, and “it becomes necessary for research institutions to implement data management policies,” including data sharing policies.

Simulation of greenhouse energy use: An application of energy informatics

In the inaugural issue of the journal Energy Informatics, Watson et al. of the University of Georgia – Athens provide research and insight into how databases, data streams, and schedulers can be joined with an information system to drive more cost-effective energy production for greenhouses. Combining past research and new technologies, the authors turn their sights to food security and the importance of developing more efficient systems for greater sustainability. They conclude that an energy informatics framework applied to controlled-environment agriculture can significantly reduce energy usage for lighting, though “engaging growers will be critical to adoption of information-systems-augmented adaptive lighting.”
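The scheduling idea at the heart of such a framework can be sketched in a few lines. This is a hypothetical illustration, not Watson et al.'s actual system: the function name, tariff values, and the simple cheapest-hours heuristic are all invented for the sake of the example.

```python
def schedule_lighting(hourly_prices, hours_needed):
    """Pick the cheapest hours in which to run supplemental greenhouse lighting.

    hourly_prices: list of electricity prices, index = hour of day.
    hours_needed: hours of supplemental light the crop requires per day.
    Returns the sorted list of hours during which to switch the lights on.
    """
    # Rank all hours by price, cheapest first (stable sort keeps hour order on ties).
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    # Take the cheapest hours and report them in chronological order.
    return sorted(ranked[:hours_needed])

# Hypothetical tariff: cheap overnight power, expensive daytime peak.
prices = [0.08] * 6 + [0.15] * 12 + [0.10] * 6
print(schedule_lighting(prices, 8))  # -> [0, 1, 2, 3, 4, 5, 18, 19]
```

A real adaptive-lighting scheduler would also fold in sensor data streams (ambient light, crop requirements) rather than a fixed hour count, which is where the databases and data streams the authors describe come in.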

Learning health systems need to bridge the “two cultures” of clinical informatics and data science

In this brief collaborative article by various researchers in the United Kingdom, a statement of fact is quickly set out for the reader: a considerable gap separates health data science and clinical informatics, and it must be addressed. Wasting no time, Scott et al. dig into the U.K. context of “the operational realities of health data quality and the implications for data science.” Collected clinical data is “problematic,” they claim, and clinical informaticians don’t always link the “two cultures”: using clinical data and knowledge as a primary tool on the one hand, and using it to improve human health outcomes on the other. They close by recognizing existing efforts to bridge the gap between the two cultures and make recommendations of their own, such as recognizing “the interdisciplinary nature of biomedical informatics” and addressing the need for “a significant expansion of clinical informatics capacity and capability.”

The problem with dates: Applying ISO 8601 to research data management

Kristin Briney, Data Services Librarian at the University of Wisconsin – Milwaukee, gives a brief commentary on the perils of managing research data with inconsistent or non-standardized date formats. Tapping into the stories of statisticians and ecologists, Briney notes that despite being a more western, Gregorian-based system, the international standard ISO 8601 provides benefits of consistency, formatting, extensibility, and sorting. And while ISO 8601 doesn’t play nicely with Microsoft Excel, the author provides several ways around the problem. She concludes that “ISO 8601 is a natural partner for research data management” and encourages other researchers to adopt the standard.
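The sorting benefit Briney describes is easy to demonstrate: ISO 8601 dates (YYYY-MM-DD) sort chronologically even when treated as plain strings, while ambiguous local formats do not. A quick illustration using Python's standard library (the sample values are invented):

```python
from datetime import date

# ISO 8601 dates sort chronologically even as plain strings.
samples = ["2017-11-02", "2016-03-15", "2017-01-09"]
print(sorted(samples))  # ['2016-03-15', '2017-01-09', '2017-11-02']

# Ambiguous local formats (MM/DD/YYYY? DD/MM/YYYY?) sort lexically,
# not chronologically, and the reader can't even tell which month is meant.
ambiguous = ["11/02/2017", "03/15/2016", "01/09/2017"]
print(sorted(ambiguous))  # ['01/09/2017', '03/15/2016', '11/02/2017']

# Standard libraries read and write the format directly.
d = date.fromisoformat("2016-03-15")
print(d.isoformat())  # 2016-03-15
```

This lexical-equals-chronological property is also why ISO 8601 dates make good prefixes for data file names.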

Health sciences libraries advancing collaborative clinical research data management in universities

What can medical librarians do to better support patrons? How can clinical medicine and research librarians work together to foster an environment of improved research cycles and patient outcomes? Bardyn et al. address these concerns and others through a demonstration of what the University of Washington’s Translational Research and Information Lab (TRAIL) program has accomplished since its inception. The authors introduce basic concepts in clinical and translational research and then provide background and methodology for how they improved researcher-focused spaces, clinical research support services, and research data management services. They conclude that “initiatives like TRAIL are vital to supporting universities’ clinical data research efforts,” noting that “[i]n uniting leading on-campus health sciences organizations, such initiatives build off the strengths of each partner” and encourage new skill sets to be developed to support cross-discipline research on campus.

Privacy preservation techniques in big data analytics: A survey

So much data is being collected from healthcare recipients, online banking users, online shoppers, and more, and, worse, we often let companies do it without reading the terms of use, according to Rao et al. in this 2018 paper. So what can be done to better preserve our data privacy? The authors first look at four major threats to data privacy (surveillance, disclosure, discrimination, and personal embracement and abuse), then delve into seven different techniques for preserving the privacy of our data. Weighing their pros and cons, the authors finally propose a hybrid solution revolving around the concept of the “data lake” and privacy-preserving algorithms.
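As a toy illustration of one family of techniques such surveys cover, generalization and suppression of quasi-identifiers might look like the sketch below. The field names and coarsening rules are hypothetical and not drawn from Rao et al.; real anonymization additionally needs guarantees such as k-anonymity or differential privacy.

```python
def generalize_record(record):
    """Generalize quasi-identifiers so an individual is harder to re-identify.

    Hypothetical illustration: coarsen the ZIP code, bucket the age,
    and suppress the direct identifier entirely.
    """
    out = dict(record)
    out["zip"] = record["zip"][:3] + "**"                 # coarsen ZIP code
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"                 # replace age with a bucket
    out.pop("name", None)                                 # suppress direct identifier
    return out

print(generalize_record({"name": "Ada", "zip": "53211", "age": 34}))
# -> {'zip': '532**', 'age': '30-39'}
```

The trade-off the survey weighs is exactly the one visible here: each generalization step lowers re-identification risk but also lowers the analytic value of the released data.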

A review of the role of public health informatics in healthcare

This brief article published in the Journal of Taibah University Medical Sciences in 2017 looks at public health informatics (PHI) from the perspective of a researcher in the Kingdom of Saudi Arabia. Aziz discusses the concept of PHI and then looks at the various surveillance systems within PHI. He then delves into the challenges posed by paper-based systems and how electronic systems can alleviate them. He closes with a discussion of PHI in the Kingdom of Saudi Arabia and concludes that various “applications and initiatives are currently available to meet the growing needs for faster and accurate data collection methods” in the country, as well as around the world.

The development and application of bioinformatics core competencies to improve bioinformatics training and education

In this 2018 article by Mulder et al., a broad collective of knowledge and experience is brought together to better shape the competencies required for a modern bioinformatics education program and their training contexts. Need is immense, yet methodologies are diverse, necessitating cooperation to refine core competencies for different groups. The authors describe the development of these competencies and then provide practical use cases for them. They conclude the competencies “provide a basis for the community of bioinformatics educators, despite widely divergent goals and student populations, to draw upon their common experiences in designing, refining, and evaluating their own training programs.” However, they also caution that the competencies shouldn’t be viewed as “a prescription for a specific set of curricula or curricular standards.”

Approaches to medical decision-making based on big clinical data

This paper by Malykh and Rudetskiy “discusses different approaches to building a clinical decision support system based on big data,” with a focus on non-biased processing methods and their comparative assessments. After an in-depth analysis of methods and objectives, the authors present their findings from the clinical decision support data and their significance. They conclude that case-based and precedent-based approaches each have their advantages, including more accurate recommendations and faster system speeds, but are not without disadvantages. The authors suggest future research is needed to address “problems with optimization of provided metrics, compression of state descriptions, and construction of training procedures.”

A new numerical method for processing longitudinal data: Clinical applications

When it comes to longitudinal data, what analysis methods are we using today? How can they be applied to clinical data? In this 2018 paper, Stura et al. look at, for example, repeated data from measuring patient reactions and behaviors in response to a therapy. Yet problems arise when analyzing this type of data, and “more robust statistical methods” are required. The authors combine several methods to develop a “numerical tool based on optimization methods coupled with interpolation techniques.” They conclude that it provides several benefits, including output displayed as “a (continuous) growth curve, allowing the analysis of each growth function independently of the others.”
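A minimal sketch of the interpolation half of such an approach, assuming NumPy and invented visit data; the authors' actual tool also couples in optimization methods, which are not shown here.

```python
import numpy as np

# Toy longitudinal series: one patient measured at irregular clinic visits.
visit_days = np.array([0, 14, 45, 90])
measurements = np.array([50.0, 52.5, 58.0, 61.0])

# Interpolating onto a common daily grid yields a continuous growth curve
# that can be analyzed independently of other patients' curves.
grid = np.arange(0, 91)                             # days 0..90
curve = np.interp(grid, visit_days, measurements)   # piecewise-linear interpolation

print(curve[45])  # matches the observed value at day 45: 58.0
```

Linear interpolation is the simplest choice; smoother alternatives (e.g. splines) trade simplicity for differentiable curves, which matters when growth rates themselves are analyzed.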

Big data management for healthcare systems: Architecture, requirements, and implementation

In this 2018 paper, El Aboudi and Benhlima discuss healthcare data management architectures from the perspective of Northern Africa. As part of their discussion, the authors propose “an extensible big data architecture based on both stream computing and batch computing in order to enhance further the reliability of healthcare systems by generating real-time alerts and making accurate predictions on patient health condition.” They conclude that, when implemented well, such an architecture makes a healthcare system “capable of handling the high amount of data generated by different medical sources in real time.”

CÆLIS: Software for assimilation, management, and processing data of an atmospheric measurement network

In this 2018 paper published in Geoscientific Instrumentation, Methods and Data Systems, Fuertes et al. describe and demonstrate the uses of CÆLIS, software designed to simplify the processing, management, and use of atmospheric particulate data. After describing the architecture and database model, the authors detail its functionality and real-world applications. They conclude that the automation the software brought to aerosol measurement and analysis has been significant, in that the software “has reduced the number of human errors and allowed one to perform more in-depth and exhaustive analysis” while also allowing users to “perform queries and extract data in a fast and very flexible way.”

Support Your Data: A research data management guide for researchers

In this 2018 paper published in Research Ideas and Outcomes, Borghi et al. of the University of California Curation Center discuss their suite of research data management (RDM) tools, Support Your Data. The tools “include a rubric designed to enable researchers to self-assess their current data management practices and a series of short guides which provide actionable information about how to advance practices as necessary or desired.” Based on three key RDM trends, the researchers felt a need to provide a complementary set of tools for researchers to better address those trends. They conclude by offering several use cases for the tools and planning “next steps” for improving the tools.

Big data in the era of health information exchanges: Challenges and opportunities for public health

Baseman et al. “conducted an assessment of big data that is available to a [public health agency]—laboratory test results and clinician-generated notifiable condition report data—through its participation in a [health information exchange]” and published their results in Informatics. They identified five major challenges to “secondary use of HIE data for meeting public health communicable disease surveillance needs” and then found ways to turn those challenges into opportunities for the public health system, ultimately optimizing it through various forms of big data analysis and management.

How could the ethical management of health data in the medical field inform police use of DNA?

This brief article published in Frontiers in Public Health takes a look at the collective management of genetic analysis techniques and “the ethico-legal frameworks” associated with forensic science and biomedicine. Krikorian and Vailly introduce the ethics and data collection methods surrounding police use of genetic material, noting how the ethics have shifted alongside new techniques. They then discuss the legal and political ramifications that accompany those ethics. They conclude that questions persist “about the conditions for the existence or for the absence of political controversies that call for further sociological investigations about the framing of the issue and the social and political logic at play.” Additionally, they note “the need for promoting dialogue among the various professionals using this technology in police work” as well as “with healthcare professionals.”

Promoting data sharing among Indonesian scientists: A proposal of a generic university-level research data management plan (RDMP)

In this 2018 paper, Irawan and Rachmi propose a generic research data management plan (RDMP) for the university ecosystem. After introducing the RDMP concept and the paper’s layout, the authors describe the seven major components of their plan, presented as an assessment form: data collection; documentation and metadata; storage and backup; preservation; sharing and re-use; responsibilities and resources; and ethics and legal compliance. They conclude that the assessment form can help researchers “to describe the setting of their research and data management requirements from a potential funder … [and] also develop a more detailed RDMP to cater to a specific project’s environment.”

systemPipeR: NGS workflow and report generation environment

In this 2016 paper published in BMC Bioinformatics, Backman and Girke discuss the R/Bioconductor package systemPipeR. Recognizing that the analysis of next-generation sequencing (NGS) data remains a significant challenge, the authors turned to the R programming language and the Bioconductor environment to make workflows that were “time-efficient and reproducible.” After giving some background and then discussing the development and implementation, they conclude that systemPipeR helps researchers “reduce the complexity and time required to translate NGS data into interpretable research results, while a built-in reporting feature improves reproducibility.”

A data quality strategy to enable FAIR, programmatic access across large, diverse data collections for high performance data analysis

A data quality strategy (DQS) is useful for researchers, organizations, and others, primarily because it allows them “to establish a level of assurance, and hence confidence, for [their] user community and key stakeholders as an integral part of service provision.” Evans et al. of the Australian National University, recognizing this importance, discuss the implementation of their DQS at the Australian National Computational Infrastructure (NCI), detailing their strategy and providing examples in this 2017 paper. They conclude that “[a]pplying the DQS means that scientists spend less time reformatting and wrangling the data to make it suitable for use by their applications and workflows—especially if their applications can read standardized interfaces.”
