Invitation to Participate in the Comparison of Image Analysis Algorithms for Intracellular Screening


Introduction

We derive quantitative data from images, but our approach to assessing how well we do so is subjective at best. The goal of this undertaking is to place different software offerings on a common quantitative scale so that their results and quality can be compared directly.


Purpose

There are many important features by which image analysis algorithms and systems can be compared: speed, flexibility, user interaction, and price. Here we attempt to assess only one of them: the ability of an algorithm to extract from images numerical data that is maximally sensitive to the assayed biological process and minimally sensitive to artifacts and all other sources of variability.


Assays

For this comparison we chose two types of assays that undoubtedly require intracellular imaging: cytoplasm to nucleus translocation (CNT) and Transfluor™. We currently have two CNT image sets and two Transfluor™ image sets available. If you have interesting image sets, especially of the same plate scanned on different instruments, we would gladly include them in the comparison. If the comparison proves useful, we will expand it to other assays.


Organization of Data

The image set should comprise a dilution series of an effector with at least three replicate wells (or replicate images within the same well, or both) for each concentration. The images should be in a common file format. Sufficient annotation must be provided in addition to the images to properly analyze and report the data (i.e., effector doses and replicate identities). People who want to take part in the comparison can request CDs with images from the SIG chairman; the CDs are free for participants.
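
As a concrete illustration, the annotation could be as simple as a table pairing each image with its well, effector dose, and replicate. The sketch below shows only one possible layout; the file name, column names, and CSV format are our assumptions, not a prescribed standard.

    # Hypothetical annotation file "annotation.csv" with columns:
    #   image_file,well,dose_nM,replicate
    # e.g.: plate1_A01_s1.tif,A01,0.0,1
    import csv
    from collections import defaultdict

    def group_images_by_dose(annotation_csv):
        """Group image file names by effector dose across replicate wells."""
        by_dose = defaultdict(list)
        with open(annotation_csv, newline="") as f:
            for row in csv.DictReader(f):
                by_dose[float(row["dose_nM"])].append(row["image_file"])
        return dict(by_dose)

    images_by_dose = group_images_by_dose("annotation.csv")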


Who Can Participate

Anybody who can analyze images of the cytoplasm to nucleus translocation assay and/or the Transfluor™ assay can take part in the comparison. This includes both software developers and users.


Image Analysis

Participants run the algorithms on their own machines and provide results at either the cell level or the field level, as appropriate for the algorithm. Run times and machine specifications can also be included. Algorithms should be described or referenced (for commercial algorithms, the version number should be provided). All user-adjustable parameters must be disclosed so that anybody can run the algorithms on the same image sets and verify the results. All images in a set must be run with the same parameters; however, it is allowed and even encouraged to run an algorithm with several reasonable parameter sets and report all of the results. We rely on participants to adhere honestly to these rules; there is no mechanism to enforce them.
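
By way of example, a submission might bundle the algorithm description, every user-adjustable parameter value, and the per-cell (or per-field) measurements in a single record so that others can rerun and verify it. The sketch below is only an illustration; the JSON format, field names, and algorithm name are our assumptions.

    import json

    # Illustrative submission record: all user-adjustable parameters are
    # listed explicitly so that others can reproduce and verify the run.
    submission = {
        "algorithm": "ExampleSegmenter",  # hypothetical name; describe or cite yours
        "version": "2.1",
        "parameters": {                   # every user-adjustable setting
            "nucleus_threshold": 0.35,
            "cytoplasm_ring_width_px": 4,
        },
        "level": "cell",                  # "cell" or "field"
        "results": [
            {"image": "plate1_A01_s1.tif", "cell_id": 1, "nuc_cyto_ratio": 1.8},
        ],
        "machine": "example workstation",  # optional
        "run_time_s": 412.0,               # optional
    }

    with open("submission.json", "w") as f:
        json.dump(submission, f, indent=2)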


Results

Acceptance of this invitation indicates agreement to make your results public. Originally we planned for the SIG chairman to collect the results, calculate statistical quality metrics (e.g., Z- and V-factors and EC50s with confidence intervals), and arrange the results for presentation on the SBS website and/or publication in a journal. While we still hope to do this in the future, a more practical approach has emerged: participants in the algorithm comparison publish their results independently, referencing this image library. We will provide links to such publications and presentations on this page.
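
For reference, the Z-factor (Zhang, Chung, and Oldenburg, 1999) measures how well the positive and negative controls separate relative to their variability, and the V-factor extends the same idea to the whole dose-response curve. A minimal Z-factor computation, assuming per-well readouts for the two controls, might look like this sketch:

    from statistics import mean, stdev

    def z_factor(pos, neg):
        """Z-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
        return 1.0 - 3.0 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

    # Example with three replicate wells per control (illustrative numbers):
    print(z_factor(pos=[2.1, 2.0, 2.2], neg=[1.0, 1.1, 0.9]))  # ~0.45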


Kurt Scudder and Ilya Ravkin, SIG Co-chairmen

