Vol. 23 no. 13 2007, pages 1648–1657
doi:10.1093/bioinformatics/btm145

Data and text mining

An efficient method for the detection and elimination of systematic error in high-throughput screening

Vladimir Makarenkov1,*, Pablo Zentilli1, Dmytro Kevorkov1, Andrei Gagarin1, Nathalie Malo2,3 and Robert Nadon2,4

1Département d'informatique, Université du Québec à Montréal, C.P. 8888, succursale Centre-Ville, Montréal, QC, Canada, H3C 3P8, 2McGill University and Genome Quebec Innovation Centre, 740 Dr. Penfield Ave., Montreal, QC, Canada, H3A 1A4, 3Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, 1020 Pine Av. West, Montreal, QC, Canada, H3A 1A4 and 4Department of Human Genetics, McGill University, 1205 Dr. Penfield Ave., N5/13, Montreal, QC, Canada, H3A 1B1

*To whom correspondence should be addressed.

Received on December 7, 2006; revised on February 22, 2007; accepted on April 10, 2007
Advance Access publication April 26, 2007
Associate Editor: Jonathan Wren

ABSTRACT

Motivation: High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. the selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect hit selection in HTS. The introduced method, called well correction, proceeds by correcting the distribution of measurements within the wells of a given HTS assay. We use simulated and experimental data to illustrate the advantages of the new method over other widely used methods of data correction and hit selection in HTS.
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.

1 INTRODUCTION

High-throughput screening (HTS) is a modern technology used for the identification of pharmacologically active compounds (i.e. hits). In screening laboratories, testing more than 100 000 compounds a day has become routine, and automated mass screening for pharmacologically active compounds is now widespread. It serves for the identification of chemical compounds as starting points for optimization (primary screening), for the determination of activity, specificity, and physiological and toxicological properties of large libraries (secondary screening), and for the verification of structure-activity hypotheses in focused libraries (tertiary screening) (Heyse, 2002). The lack of standardized data validation and quality assurance processes has been recognized as one of the major hurdles to the successful implementation of high-throughput experimental technologies (Kaul, 2005). Therefore, automated quality assessment and data correction systems need to be applied to biochemical data in order to recognize and eliminate experimental artefacts that might confound with important biological or chemical effects. Descriptions of several methods for quality control and correction of HTS data can be found in Brideau et al. (2003), Gagarin et al. (2006a), Gunter et al. (2003), Heuer et al. (2003), Heyse (2002), Kevorkov and Makarenkov (2005), Makarenkov et al. (2006), Malo et al. (2006) and Zhang et al. (1999, 2000).

Various sources of systematic error can affect experimental HTS data, and thus introduce bias into the hit selection process (Heuer et al., 2003), including:

- systematic errors caused by aging, reagent evaporation or cell decay, which can be recognized as smooth trends in the plate means/medians;
- errors in liquid handling and malfunction of pipettes, which can generate localized deviations from expected values;
- variation in incubation time, time drift in measuring different wells or different plates, and reader effects, which can be recognized as smooth attenuations of measurements over an assay.

Random errors produce noise that causes minor variation of the hit distribution surface. Systematic errors generate repeatable local artifacts and smooth global drifts, which become more noticeable when computing a hit distribution surface (Kevorkov and Makarenkov, 2005). Often systematic errors create border, row or column effects, with the measurements in certain rows or columns being systematically over- or underestimated (Brideau et al., 2003).
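To make the notion of a hit distribution surface concrete, the following minimal sketch (an illustration for this discussion, not the authors' implementation) counts, for each well position, how often that well would be flagged as a hit across the plates of an inhibition assay; the NumPy representation, the array shape and the 3-SD flagging rule are assumptions of the example.

import numpy as np

def hit_distribution_surface(data, k=3.0):
    """Count, for each well position, the plates on which it is a hit.

    data: array of shape (n_plates, n_rows, n_cols) of raw measurements.
    k: number of SDs below the plate mean used as the hit threshold
       (inhibition assay, so hits are unusually low values).
    """
    # Per-plate mean and SD, broadcast back over the wells of each plate.
    mu = data.mean(axis=(1, 2), keepdims=True)
    sd = data.std(axis=(1, 2), keepdims=True)
    hits = data < mu - k * sd      # Boolean hit mask, one entry per well.
    return hits.sum(axis=0)        # (n_rows, n_cols) surface of hit counts.

# Simulated assay: 1250 plates of 8 x 12 wells with a deliberate
# (hypothetical) systematic under-reading in one column.
rng = np.random.default_rng(0)
data = rng.normal(loc=100.0, scale=10.0, size=(1250, 8, 12))
data[:, :, 3] -= 25.0
print(hit_distribution_surface(data))

On error-free data the surface should be roughly flat; the biased column shows up as a band of inflated hit counts, exactly the kind of repeatable local artifact described above.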
This article introduces a new method allowing one to minimize the impact of systematic error on the hit selection process. We propose to examine the hit distribution of the raw data and to fit the data variation within each well in order to correct the data at hand. The comparison of the new method to other data correction techniques used in HTS is described in the Simulations section. The latter section is followed by an application example, in which we carried out the identification of active compounds in the raw and well-corrected HTS assay generated at McMaster University.

2 MATERIALS AND METHODS

2.1 Experimental data

In this article, we examine an experimental data set generated at the HTS Laboratory of McMaster University. This test assay was proposed as a benchmark for the McMaster Data Mining and Docking Competition (http://hts.mcmaster.ca/Downloads/82BFBEB4-F2A4-4934-B6A8-804CAD8E25A0.html; Elowe et al., 2005). It consists of a screen for compounds that inhibit Escherichia coli dihydrofolate reductase. Each compound was screened twice: two copies of 625 plates were run through the screening machines, giving 1250 plates in total, each having wells arranged in eight rows and 12 columns (columns 1 and 12, which contain controls, were not considered in this study). The assay conditions reported in Elowe et al. (2005) were the following: assays were carried out at 25°C and performed in duplicate. Each 200 μl reaction mixture contained 40 μM NADPH, 30 μM DHF, 5 nM DHFR, 50 mM Tris (pH 7.5), 0.01% (w/v) Triton and 10 mM β-mercaptoethanol. Test compounds from the screening library were added to the reaction before initiation by enzyme, at a final concentration of 10 μM. The Supplementary Materials section contains more detail on the screening method and plate layout (Fig. 1S) for this assay.
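The following sketch shows one way such an assay could be organized in memory for the correction methods discussed below; the read_plate callable and the stacked-array layout are assumptions of the example, not part of the McMaster benchmark distribution.

import numpy as np

# Assay dimensions taken from the benchmark description above:
# 1250 plates (two copies of 625), each with 8 rows and 12 columns,
# where columns 1 and 12 hold controls.
N_PLATES, N_ROWS, N_COLS = 1250, 8, 12

def load_assay(read_plate):
    """Stack per-plate 8 x 12 arrays into one (plates, rows, cols) array.

    read_plate: hypothetical callable returning the raw 8 x 12
    measurements of plate i; how plates are stored on disk is
    outside the scope of this sketch.
    """
    data = np.stack([read_plate(i) for i in range(N_PLATES)])
    # Mask the control columns (1-based columns 1 and 12) so that
    # downstream statistics are computed on screened compounds only.
    compounds = np.ones(N_COLS, dtype=bool)
    compounds[[0, N_COLS - 1]] = False
    return data, data[:, :, compounds]   # full array, 8 x 10 compound wells

With this layout, each of the 8 × 10 = 80 compound well positions contributes one measurement per plate, which is precisely the across-plate view of a well that the well correction method operates on.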
2.2 Data preprocessing and correction in HTS

The analysis of experimental HTS data requires preprocessing to ensure the statistical meaningfulness and accuracy of the data analysis and the hit selection. Ideally, inactive samples should have similar mean values and generate a constant surface. In a real case, however, random errors produce random noise. When a large number of plates is considered, the noise residuals should compensate for each other in the computation of mean values, whereas systematic repeatable artifacts become more visible as the number of plates increases (Kevorkov and Makarenkov, 2005). The following steps can be carried out to preprocess experimental HTS data:

(1) Hit and outlier elimination (optional). This elimination can be carried out in order to reduce the influence of hits and outliers on the plates' means and SDs (standard deviations). It can be particularly important when analyzing screens with few (<100) plates.

(2) Normalization of all plates of the assay. To enable the comparison of measurements from different plates and data tested under different conditions, all measurements should be normalized.

(3) Data correction of all samples. This step can be conducted using the median polish procedure (Tukey, 1977), the background correction procedure (Kevorkov and Makarenkov, 2005) or the well correction method discussed in this article. The B-score transformation procedure (Brideau et al., 2003; Malo et al., 2006), which additionally corrects for row and column biases, can also be carried out. The comparison of the data correction techniques is presented in the Simulation study section.

Also, background plates (i.e. control plates) can be inserted throughout a screen. Background plates are separate plates containing only control wells and no screening compounds. They are particularly useful for calculating the background levels of an assay and help determine whether an assay has sufficient signal that can be reliably detected (Fassina, 2006). Such additional plates enable one to create background signatures that lead to plate-based correction factors, on a per-well, per-row or per-column basis, for all other assay plates. The use of background plates gets around the main assumption made for the well correction procedure, namely that, when examined across plates, wells should not systematically contain compounds from the same family (see the description of the well correction method subsequently). The main inconvenience of this method is that it does not take into account possible errors that might occur in the plates processed between two background plates.

Note that for certain methods the corrected data can easily be denormalized in order to obtain a data set scaled as the original one. Analysis of the hit distribution surface of the corrected data can then be carried out using the χ²-contingency test (see the results in Section 3.2).

2.3 Hit selection process

In the HTS workflow, the bias correction process is followed by hit selection (i.e. the inference of active compounds). The selection of hits in the corrected data is often done using a preselected threshold (e.g. x̄ − 3SD in the case of an inhibition assay). Hit selection is a process that should not only consider the statistical treatment of HTS data. It should also be used in conjunction with the structure-activity-relationships
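The preselected-threshold rule of Section 2.3, together with the plate normalization of step (2) and the denormalization mentioned above, can be sketched as follows; the z-score form of the normalization is a common choice assumed here for illustration, not a prescription of this article.

import numpy as np

def normalize_plates(data):
    """Step (2): per-plate z-score normalization.

    Each plate is shifted and scaled to zero mean and unit SD so that
    plates measured under different conditions become comparable.
    Returns the normalized data and the per-plate (mean, SD) needed
    to denormalize corrected values back to the original scale.
    """
    mu = data.mean(axis=(1, 2), keepdims=True)
    sd = data.std(axis=(1, 2), keepdims=True)
    return (data - mu) / sd, (mu, sd)

def denormalize(z, stats):
    """Recover a data set scaled as the original one."""
    mu, sd = stats
    return z * sd + mu

def select_hits(z, k=3.0):
    """Flag hits in an inhibition assay on the normalized scale.

    On z-scores, the mean-minus-k-SD threshold reduces to flagging
    wells whose normalized value falls below -k.
    """
    return z < -k

For example, z, stats = normalize_plates(data) followed by select_hits(z) flags exactly the wells that the x̄ − 3SD rule would select plate by plate, while denormalize(z, stats) restores the original measurement scale.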