Interfacing Reality Overview

By Venkatesan Mukunth

(Manager, Software Group, Agaram Instruments (P) Ltd, Chennai, India. Mukunth@agaramindia.com)

 

Instrument interfacing has been the talk of lab informatics for several years now. Interfacing is often a key subject when a LIMS project is initiated, yet by the time the project is finalised it is typically pushed to phase 2. According to a recent survey by SDI (Strategic Directions International Inc.), 43% of respondents did not interface any instruments to their LIMS.

 

What is the real reason for such resistance, or inability, to achieve better interfacing coverage?

 

In a broader sense:

Most LIMS packages either have no interfacing capability or come with limited capability that does not address real-world situations. LIMS client licences are also too expensive to buy purely for interfacing purposes.

 

Several third-party LIMS interfacing solutions have practical implementation and usability problems. The cost of implementing such systems is high, and implementation usually requires services from the supplier.

 

Custom interfaces are very expensive and rarely worth developing, because they become outdated quickly as laboratory practices improve, instruments change, instrument software is upgraded and regulatory requirements evolve. Custom interfaces suit a lab whose instruments produce regular, non-varying output, which is rarely the case.

 

Need of the hour:

Instrument interfacing can no longer be a simple data-IN/data-OUT application. Interfacing needs to be more mature: it should support an open architecture, be configurable and flexible, use structured data storage to interact with LIMS, and support the workflow needs of the laboratory.

 

What types of solutions are available in the market?

 

The low-end products are quite generic and provide a keystroke (DDE) type of interface to popular applications such as Excel and Access. They are limited to passing data from an RS232 port to another application that can receive keyboard entry.

 

In practice these applications can only write to an application that is active in the foreground, and they depend on the user to point manually at the location where the data should be captured, for example placing the cursor in a specific Excel or Access cell. There is no security for such open applications, constant manual attention is needed, they are not a good solution for voluminous data, the authenticity of the data cannot be proven, and the chances of human error are high because the user must physically point at each capture location. In short, no real automation is involved.
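As a rough illustration of this keystroke-style approach (not any particular product), the sketch below reads lines from an RS232 port and "types" them into whichever window currently has keyboard focus. The port name, baud rate and the assumption that a spreadsheet cell is focused are all hypothetical.

```python
# Sketch of a keystroke-style RS232 bridge: read a line from the serial port
# and type it into whatever application currently has keyboard focus.
# Port name, baud rate and the focused target window are assumptions.
import serial      # pyserial
import pyautogui   # simulates keystrokes

ser = serial.Serial("COM1", 9600, timeout=1)   # assumed balance on COM1

while True:
    raw = ser.readline().decode("ascii", errors="ignore").strip()
    if not raw:
        continue
    pyautogui.write(raw)        # "types" the reading into the focused cell
    pyautogui.press("enter")    # move to the next cell, exactly as a user would
```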

 

Then came interfacing solutions capable of collecting data from RS232 ports and ASCII files, parsing it and saving the extracted results as CSV or other properly delimited files; some could also send the parsed data to other database applications seamlessly. These solutions did not use a relational structure for data storage and retrieval but relied on flat files.
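A minimal sketch of this second generation of tools, assuming a hypothetical balance report in which each result line holds a sample name and a weight; the parsed results are simply appended to a flat CSV file.

```python
# Minimal parser sketch: turn a hypothetical ASCII balance report into a CSV
# flat file. The report format (one "name  weight g" line per sample) is an
# assumption for illustration only.
import csv
import re

LINE = re.compile(r"^(?P<sample>\S+)\s+(?P<weight>\d+\.\d+)\s*g$")

def parse_report(report_path: str, csv_path: str) -> None:
    with open(report_path) as src, open(csv_path, "a", newline="") as dst:
        writer = csv.writer(dst)
        for line in src:
            match = LINE.match(line.strip())
            if match:   # skip headers, footers and blank lines
                writer.writerow([match["sample"], match["weight"]])

parse_report("balance_report.txt", "results.csv")   # hypothetical file names
```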

 

These solutions focused only on parser technology or device-driver development. They could not handle test-workflow-related problems: there was no automated capture of data to specific samples, they forced the user to work in a method defined by the application, data traceability was not possible, and they simply created more flat files to be handled and managed. They were mere data-IN/data-OUT solutions and did not address the issues in between data IN and data OUT, which are the real-world laboratory workflow issues.

 

 

What is a test-workflow-related problem?

 

Consider a simple test such as moisture analysis. A typical workflow for such a test could be:

 

  1. Weigh the empty container
  2. Weigh the container with sample
  3. Dry the container with the sample, say for 2 hours
  4. Weigh the container again
  5. Calculate loss on drying
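For step 5, the loss-on-drying calculation itself is trivial; the sketch below shows it with made-up weights, purely to make the workflow concrete.

```python
# Loss on drying for one sample, using made-up weights (grams).
tare = 12.0140          # step 1: empty container
wet = 17.0260           # step 2: container + sample
dry = 16.7745           # step 4: container + sample after drying

sample_weight = wet - tare
loss_on_drying = (wet - dry) / sample_weight * 100   # % LOD

print(f"LOD = {loss_on_drying:.2f} %")   # -> LOD = 5.02 %
```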

 

What could be the practical issues for such a simple study?

 

  1. The analyst should have the ability to perform analysis on any sample in any order
  2. The analyst should be able to capture the data automatically without having to point at specific cells for data capture
  3. The workflow for data capture should be pre-definable, and the system should behave as per the test requirement (i.e. the first weight transferred is the “empty container” weight, the second is the “sample + container” weight, and once this is collected the system automatically proceeds to sample 1 in the list to collect data for step 4)
  4. If necessary, switch to any specific SampleID in the list to collect data
  5. Handle situations where a repeat analysis needs to be performed because the results are not satisfactory or are outside the prescribed limits or SOP
  6. The system should handle emergency situations such as collecting data for a priority sample: temporarily change the order of capture and then proceed with the pre-defined workflow
  7. Make decisions in complex situations where data collection depends on results collected previously

 

How can we solve such practical issues?

 

  1. The system should use the SampleID as the key for targeting results captured from an instrument
  2. The system should be able to generate SampleIDs by itself, accept SampleID information from the LIMS, or extract a specific piece of data from the instrument output and treat it as the SampleID
  3. The system should be capable of populating results to the specific SampleID automatically in all the above cases, and should also allow a result to be targeted to any SampleID manually by simply selecting the CurrentSampleID for an instrument
  4. The system should have workflow-based capture capability that automates analysis and data capture according to the test workflow, without the user having to look at the application while performing the actual study on an instrument
  5. The system should be able to switch to any SampleID the user wishes to capture data for, enable analysis in any order, and help in practical situations that require a shift from the routine workflow, such as handling priority samples or correcting mistakes made due to unforeseen circumstances
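A minimal sketch of these points, assuming a simple in-memory model: every result coming off an instrument is keyed to a SampleID, either one extracted from the output or the CurrentSampleID set for that instrument. All class and method names here are hypothetical.

```python
# Sketch of SampleID-keyed capture. Names are hypothetical and only
# illustrate points 1-5 above.
from typing import Optional

class Capture:
    def __init__(self, sample_ids: list[str]):
        self.results: dict[str, list[float]] = {sid: [] for sid in sample_ids}
        self.queue = list(sample_ids)          # pre-defined workflow order
        self.current: Optional[str] = self.queue[0] if self.queue else None

    def set_current(self, sample_id: str) -> None:
        """Manually switch to a specific SampleID (priority / repeat samples)."""
        self.results.setdefault(sample_id, [])
        self.current = sample_id

    def on_reading(self, value: float, sample_id: Optional[str] = None) -> None:
        """Target a reading to the SampleID from the output, else CurrentSampleID."""
        target = sample_id or self.current
        self.results[target].append(value)

cap = Capture(["S-001", "S-002", "S-003"])
cap.on_reading(12.0140)                 # goes to CurrentSampleID S-001
cap.set_current("S-003")                # priority sample jumps the queue
cap.on_reading(11.9876)
```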

 

 

Complex inter-relationship of data and validation issues:

 

There is always a need to compare, calculate and validate results from multiple instruments and from different samples analysed on the same or different instruments.

 

The system should be able to perform cross-checks, calculations and validations based on data collected from multiple samples, compare results and make decisions about the test workflow. Ultimately it should provide access to information collected by different users, instruments and samples across the network, so that the user or the system can make decisions on calculation, validation, flagging and transfer of data to LIMS.
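As a hedged illustration of such a cross-check, the sketch below compares results for the same sample and parameter captured on two different instruments and flags pairs that differ by more than a tolerance. The database file, table and column names are assumptions, not an actual schema.

```python
# Cross-instrument check sketch: flag samples whose results on two instruments
# differ by more than a tolerance. Table and column names are assumptions.
import sqlite3

TOLERANCE = 0.5  # absolute difference allowed between instruments

conn = sqlite3.connect("logilab.db")   # hypothetical results database
rows = conn.execute(
    """
    SELECT a.sample_id, a.value AS value_a, b.value AS value_b
    FROM results a
    JOIN results b
      ON a.sample_id = b.sample_id
     AND a.parameter = b.parameter
     AND a.instrument < b.instrument
    """
).fetchall()

for sample_id, value_a, value_b in rows:
    if abs(value_a - value_b) > TOLERANCE:
        print(f"{sample_id}: instruments disagree ({value_a} vs {value_b})")
```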

 

Review and Validation Issues:

 

There is always a need to review or redo the data capture and re-extract information to confirm the validity of the system under practical conditions.

 

The system should be able to store and restore the raw data captured from instrument output. This helps in validating the system and performing PQ (Performance Qualification) over a period of time, giving the opportunity to validate and re-validate the deployed system.

 

Data Transfer Issues:

 

The simplest mode of transfer to LIMS is to create a properly delimited flat file. Existing systems can perform seamless transfer to LIMS, but the data is still transferred from the same old flat files, and they lack true bi-directional capability.

 

The need of the hour is seamless bi-directional communication with the LIMS. This can be achieved only by storing all data in a relational structure.
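A minimal sketch of what bi-directional exchange over a shared relational store could look like: pending sample requests are pulled from a LIMS staging table and validated results are written back. The table and column names (lims_requests, lims_results) are hypothetical.

```python
# Bi-directional exchange sketch over a shared relational store.
# The staging tables (lims_requests, lims_results) are hypothetical.
import sqlite3

conn = sqlite3.connect("shared.db")

def pull_pending_samples() -> list[str]:
    """LIMS -> interface: fetch samples waiting to be analysed."""
    rows = conn.execute(
        "SELECT sample_id FROM lims_requests WHERE status = 'PENDING'"
    ).fetchall()
    return [r[0] for r in rows]

def push_result(sample_id: str, parameter: str, value: float) -> None:
    """Interface -> LIMS: write a validated result back."""
    conn.execute(
        "INSERT INTO lims_results (sample_id, parameter, value) VALUES (?, ?, ?)",
        (sample_id, parameter, value),
    )
    conn.commit()
```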

 

 

Data Storage, Retrieval and Archival:

 

Storing data as flat files poses security and retrieval problems: file servers need to be deployed to achieve a pseudo client/server architecture, and exchanging data between instruments, samples and users becomes a major issue.

 

Under such circumstances it is always better to store the data in a relational database (SQL Server/Oracle), which gives better security, retrievability, availability and long-term archival capability.

 

 

Deliverable Interface:

 

LogiLabPRO was designed and developed to address all of the above practical issues and requirements. It works with SQL Server/Oracle as the backend and is designed as a two-tier client/server application with the database residing on a central server. Clients can create their own lists of instruments, methods, parsers and LIMS connections, and can capture and process data from instruments reporting over RS232, TCP/IP or ASCII files. LogiLabPRO runs on Windows (NT, 2000, XP) and can also be deployed as a thin-client solution based on Citrix® MetaFrame XP. The database structure and relationships are open to end users for complete solution deployment.

 

Core Technology:

After capturing the instrument data, LogiLabPRO stores the raw data as BLOBs inside the database, then parses the data and extracts it into user-defined ‘field’ names. Fields are freely available for placing inside a protected, user-designed spreadsheet for capturing, viewing, validating and processing data. Data capture is completely independent of front-end concerns such as cell positioning or capture to a specific SampleID.

 

 

Data Structure:

The data hierarchy is client and instrument centred. Each instrument can have several methods to handle the different outputs created by the same instrument; for example, different samples can create different outputs.

 

[Figure: LogiLabPRO data hierarchy]

Instruments
Methods
Sample Split
Fields
Header Parser
Sample Parser

 

 

Output containing results for multiple samples can be split into single samples, each with a parser for the header and the results. The output of the whole process is a set of “Fields”. Methods can be linked with “Sample Names”, and each sample can be associated with a method and a “Sheet” for data capture. Each sheet can be populated with fields from different instruments, giving a single point of viewing for data from multiple instruments when you want to view or handle relationships between data from different instruments.
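One way to picture this hierarchy is as a small relational schema. The sketch below is only an assumption about how such tables could be laid out, not LogiLabPRO's actual schema.

```python
# Hypothetical relational sketch of the hierarchy above - not the actual
# LogiLabPRO schema, just an illustration of the relationships described.
import sqlite3

schema = """
CREATE TABLE instruments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE methods     (id INTEGER PRIMARY KEY,
                          instrument_id INTEGER REFERENCES instruments(id),
                          name TEXT, header_parser TEXT, sample_parser TEXT);
CREATE TABLE samples     (id INTEGER PRIMARY KEY, sample_id TEXT UNIQUE,
                          method_id INTEGER REFERENCES methods(id), sheet TEXT);
CREATE TABLE fields      (id INTEGER PRIMARY KEY,
                          sample_id INTEGER REFERENCES samples(id),
                          name TEXT, value TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```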

 

Sample Logging:

 

Samples can be logged into LogiLabPRO using the built-in login method, with several possibilities: manual, external-file based, external-application based, barcode based, or login from the LIMS database into the LogiLabPRO database. Login procedures can also be fully automated by writing a simple script to log samples in from the LIMS database.

 

 

Data Capture:

 

Once the samples are logged, data can be captured from instruments. There is a definite mechanism for populating results to the appropriate SampleID.

Case 1:

When the instrument output does not contain SampleID information, results are populated for the CurrentSampleID chosen in the selection box for each instrument. The CurrentSampleID can also be incremented automatically, which saves the user from setting it for each sample. The workflow setup allows a data-capture pattern to be defined for a specific test, and the system simply follows the workflow rules to capture data. Workflow rules can also be broken during data capture to handle priority or repeat samples temporarily. During analysis, samples can also be logged into the sheet manually by entering a new unique ID, or by typing an ID that already exists in the list of samples, which LogiLabPRO automatically interprets as a repeat sample.
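A hedged sketch of Case 1: readings arrive without a SampleID, so each is assigned to the CurrentSampleID, which auto-advances through the logged samples according to a simple workflow rule (two weights per sample, as in the moisture test above). The names and the two-readings-per-sample rule are illustrative assumptions.

```python
# Case 1 sketch: instrument output has no SampleID, so each reading is
# assigned to the CurrentSampleID, which auto-advances after the expected
# number of readings per sample. Names and the workflow rule are assumptions.
samples = ["S-001", "S-002", "S-003"]          # logged sample list
READINGS_PER_SAMPLE = 2                        # e.g. tare weight + gross weight

captured: dict[str, list[float]] = {s: [] for s in samples}
current = 0                                    # index of CurrentSampleID

def on_reading(value: float) -> None:
    global current
    sample_id = samples[current]
    captured[sample_id].append(value)
    if len(captured[sample_id]) >= READINGS_PER_SAMPLE and current < len(samples) - 1:
        current += 1                           # auto-increment CurrentSampleID
```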

 

Case 2:

Whenever the instrument data contains SampleID information, no manual selection is needed within the application. A barcode reader can also be used as the CurrentSampleID identifier, informing LogiLabPRO of the CurrentSampleID.

 

Data processing and data transfer to LIMS:

Once the data is captured, calculations can be triggered. Everything from simple spreadsheet-based calculations to complex inter-sample and inter-instrument calculations can be performed by triggering script code. The scripting tool can also be used to validate results before transfer to LIMS. Scripting gives total flexibility in handling data transfer to and from LIMS, and several procedures can be written to handle complex business rules for specific tests.
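A small sketch of what such a validation step could look like: a specification limit is applied to each captured result and only passing results would be flagged as ready for transfer. The limits, parameter names and result structure are assumptions, not part of the product.

```python
# Validation-script sketch: apply specification limits before transfer to LIMS.
# The limits and the result structure are assumptions for illustration.
SPEC_LIMITS = {"moisture_pct": (0.0, 5.0)}     # parameter -> (low, high)

def validate(results: dict[str, float]) -> dict[str, str]:
    """Return a PASS/FAIL flag per parameter; FAILed results are held back."""
    flags = {}
    for parameter, value in results.items():
        low, high = SPEC_LIMITS.get(parameter, (float("-inf"), float("inf")))
        flags[parameter] = "PASS" if low <= value <= high else "FAIL"
    return flags

print(validate({"moisture_pct": 5.02}))        # -> {'moisture_pct': 'FAIL'}
```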

 

Raw data and meta-data:

Raw data is always stored without modification and can always be retrieved for each SampleID. It is also possible to attach, and automatically capture, other meta-data created by instruments along with each SampleID. A watchdog service monitors folders for specific files or file types and automatically captures the file for the CurrentSampleID.
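A hedged sketch of such a watchdog: a folder is polled for new files of a given type, and each new file is attached to the CurrentSampleID. The folder path, file pattern and attach() stand-in are assumptions for illustration.

```python
# Folder-watchdog sketch: poll a directory for new PDF reports and attach each
# new file to the CurrentSampleID. Path, pattern and attach() are assumptions.
import time
from pathlib import Path

WATCH_DIR = Path("C:/instrument_reports")   # hypothetical export folder
PATTERN = "*.pdf"

def attach(sample_id: str, path: Path) -> None:
    print(f"attaching {path.name} to {sample_id}")   # stand-in for a DB insert

def watch(current_sample_id: str, interval: float = 5.0) -> None:
    seen: set[Path] = set(WATCH_DIR.glob(PATTERN))
    while True:
        for path in set(WATCH_DIR.glob(PATTERN)) - seen:
            attach(current_sample_id, path)
            seen.add(path)
        time.sleep(interval)
```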

 

 

Query and Reporting:

The database can be queried to create your own reports. Queries can be created by users and stored for execution, and a generic Crystal Reports template displays the results of any stored query. Additional reports can be created using Crystal Reports.

 

Compliance:

 

LogiLabPRO has all the functionality necessary for managing 21 CFR Part 11 compliance: protection of electronic records, an audit trail of all user actions that create, edit or delete electronic records, and a dedicated user manager and user-rights manager.