Two-Sample Testing Using Deep Learning


Matthias Kirchler¹,², Shahryar Khorasani¹, Marius Kloft²,³, Christoph Lippert¹,⁴
¹Hasso Plattner Institute for Digital Engineering, University of Potsdam, Germany
²Technical University of Kaiserslautern, Germany
³University of Southern California, Los Angeles, United States
⁴Hasso Plattner Institute for Digital Health at Mount Sinai, New York, United States

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy. PMLR: Volume 108. Copyright 2020 by the author(s).

Abstract

We propose a two-sample testing procedure based on learned deep neural network representations. To this end, we define two test statistics that perform an asymptotic location test on data samples mapped onto a hidden layer. The tests are consistent and asymptotically control the type-1 error rate. Their test statistics can be evaluated in linear time (in the sample size). Suitable data representations are obtained in a data-driven way, by solving a supervised or unsupervised transfer-learning task on an auxiliary (potentially distinct) data set. If no auxiliary data is available, we split the data into two chunks: one for learning representations and one for computing the test statistic. In experiments on audio samples, natural images and three-dimensional neuroimaging data, our tests yield significant decreases in type-2 error rate (up to 35 percentage points) compared to state-of-the-art two-sample tests such as kernel methods and classifier two-sample tests. (We provide code at https://github.com/mkirchler/deep-2-sample-test.)

1 INTRODUCTION

For almost a century, statistical hypothesis testing has been one of the main methodologies in statistical inference (Neyman and Pearson, 1933). A classic problem is to validate whether two sets of observations are drawn from the same distribution (null hypothesis) or not (alternative hypothesis). This procedure is called a two-sample test.

Two-sample tests are a pillar of applied statistics and a standard method for analyzing empirical data in the sciences, e.g., medicine, biology, psychology, and social sciences. In machine learning, two-sample tests have been used to evaluate generative adversarial networks (Bińkowski et al., 2018), to test for covariate shift in data (Zhou et al., 2016), and to infer causal relationships (Lopez-Paz and Oquab, 2016).

There are two main types of two-sample tests: parametric and non-parametric ones. Parametric two-sample tests, such as Student's t-test, make strong assumptions on the distribution of the data (e.g., Gaussian). This allows us to compute p-values in closed form. However, parametric tests may fail when their assumptions on the data distribution are invalid. Non-parametric tests, on the other hand, make no distributional assumptions and thus can potentially be applied in a wider range of application scenarios. Computing non-parametric test statistics, however, can be costly, as it may require applying re-sampling schemes or computing higher-order statistics.

A non-parametric test that has gained a lot of attention in the machine-learning community is the kernel two-sample test and its test statistic, the maximum mean discrepancy (MMD). MMD computes the average distance between the two samples mapped into the reproducing kernel Hilbert space (RKHS) of a universal kernel (e.g., the Gaussian kernel). MMD critically relies on the choice of the feature representation (i.e., the kernel function) and thus might fail for complex, structured data such as sequences or images, and other data where deep learning excels.
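Since the MMD is central to what follows, here is a minimal sketch of the standard (biased, quadratic-time) MMD² estimate with a Gaussian kernel. This is our own illustration, not the authors' released code; the function names `gaussian_kernel` and `mmd2_biased` are hypothetical, and the paper itself uses linear-time variants of this statistic.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_biased(X, Y, bandwidth=1.0):
    """Biased quadratic-time estimate of MMD^2 between samples X and Y."""
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 5))   # sample from p
Y = rng.normal(0.5, 1.0, size=(200, 5))   # sample from q (shifted mean)
print(mmd2_biased(X, Y))                  # noticeably larger than under p = q
```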
Another non-parametric two-sample test is the classifier two-sample test (C2ST). C2ST splits the data into two chunks, training a classifier on one part and evaluating it on the remaining data. If the classifier predicts significantly better than chance, the test rejects the null hypothesis. Since a part of the data set needs to be put aside for training, not the full data set is used for computing the test statistic, which limits the power of the method. Furthermore, the performance of the method depends on the selection of the train-test split.

In this work, we propose a two-sample testing procedure that uses deep learning to obtain a suitable data representation. It first maps the data onto a hidden layer of a deep neural network that was trained (in an unsupervised or supervised fashion) on an independent, auxiliary data set, and then it performs a location test. Thus we are able to work on any kind of data that neural networks can work on, such as audio, images, videos, time-series, graphs, and natural language. We propose two test statistics that can be evaluated in linear time (in the number of observations), based on MMD and Fisher discriminant analysis, respectively. We derive asymptotic distributions of both test statistics. Our theoretical analysis proves that the two-sample test procedure asymptotically controls the type-1 error rate, has asymptotically vanishing type-2 error rate, and is robust both with respect to transfer learning and approximate training.

We empirically evaluate the proposed methodology in a variety of applications from the domains of computational musicology, computer vision, and neuroimaging. In these experiments, the proposed deep two-sample tests consistently outperform the closest competing method (including deep kernel methods and C2STs) by up to 35 percentage points in terms of the type-2 error rate, while properly controlling the type-1 error rate.

2 PROBLEM STATEMENT & NOTATION

We consider non-parametric two-sample statistical testing, that is, answering the question whether two samples are drawn from the same (unknown) distribution or not. We distinguish between the case that the two samples are drawn from the same distribution (the null hypothesis, denoted by $H_0$) and the case that the samples are drawn from different distributions (the alternative hypothesis $H_1$).

We differentiate between type-1 errors (i.e., rejecting the null hypothesis although it holds) and type-2 errors (i.e., not rejecting $H_0$ although it does not hold). We strive for both the type-1 error rate to be upper bounded by some significance level $\alpha$, and the type-2 error rate to converge to 0 for unlimited data. The latter property is called consistency and means that, with sufficient data, the test can reliably distinguish between any pair of probability distributions.

Let $p, q, p'$ and $q'$ be probability distributions on $\mathbb{R}^d$ with common dominating Borel measure $\mu$. We abuse notation somewhat and denote the densities with respect to $\mu$ also by $p, q, p'$ and $q'$. We want to perform a two-sample test on data drawn from $p$ and $q$, i.e., we test the null hypothesis $H_0: p = q$ against the alternative $H_1: p \neq q$. $p'$ and $q'$ are assumed to be in some sense similar to $p$ and $q$, respectively, and act as an auxiliary task for tuning the test (the case of $p = p'$ and $q = q'$ is perfectly valid, in which case this is equivalent to a data-splitting technique).

We have access to four (independent) sets $X_n, Y_m, X'_{n'}$, and $Y'_{m'}$ of observations drawn from $p, q, p'$, and $q'$, respectively. Here $X_n = \{X_1, \ldots, X_n\} \subset \mathbb{R}^d$ and $X_i \sim p$ for all $i$ (analogous definitions hold for $Y_m, X'_{n'}$, and $Y'_{m'}$). Empirical averages with respect to a function $f$ are denoted by $\overline{f(X_n)} := \frac{1}{n}\sum_{i=1}^{n} f(X_i)$.

We investigate function classes of deep ReLU networks with a final tanh activation function:

$$\mathcal{TF}_N := \Big\{ \tanh \circ W_{D-1} \circ \sigma \circ \cdots \circ \sigma \circ W_1 : \mathbb{R}^d \to \mathbb{R}^H \;\Big|\; W_1 \in \mathbb{R}^{H \times d},\ W_j \in \mathbb{R}^{H \times H} \text{ for } j = 2, \ldots, D-1,\ \prod_{j=1}^{D-1} \|W_j\|_{\mathrm{Fro}} \leq \beta_N,\ D \leq D_N \Big\}.$$

Here, the activation functions $\tanh$ and $\sigma(z) := \mathrm{ReLU}(z) = \max(0, z)$ are applied elementwise, $\|\cdot\|_{\mathrm{Fro}}$ is the Frobenius norm, $H = d + 1$ is the width, and $D_N$ and $\beta_N$ are depth and weight restrictions on the networks. This can be understood as the mapping onto the last hidden layer of a neural network, concatenated with a tanh activation.
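As a concrete reading of this function class, the following PyTorch-style sketch builds a ReLU network of width H = d + 1 whose output is the last hidden layer passed through tanh. This is our illustration, not the authors' code; the class name `DeepTanhFeatures` is hypothetical, and the Frobenius-norm product constraint from the definition is not enforced here.

```python
import torch
import torch.nn as nn

class DeepTanhFeatures(nn.Module):
    """ReLU network whose output is the last hidden layer followed by tanh,
    i.e., a member of the class TF_N described above (biases omitted; the
    weight constraint prod_j ||W_j||_Fro <= beta_N is not enforced)."""
    def __init__(self, d, depth):
        super().__init__()
        H = d + 1  # width fixed to input dimension + 1
        layers = [nn.Linear(d, H, bias=False)]          # W_1
        for _ in range(depth - 2):
            layers += [nn.ReLU(), nn.Linear(H, H, bias=False)]  # sigma, W_j
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return torch.tanh(self.body(x))

phi = DeepTanhFeatures(d=5, depth=4)
x = torch.randn(8, 5)
print(phi(x).shape)  # torch.Size([8, 6])
```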
3 DEEP TWO-SAMPLE TESTING

In this section, we propose two-sample testing based on two novel test statistics, the Deep Maximum Mean Discrepancy (DMMD) and the Deep Fisher Discriminant Analysis (DFDA). The test asymptotically controls the type-1 error rate, and it is consistent (i.e., the type-2 error rate converges to 0). Furthermore, we will show that consistency is preserved both under transfer learning on a related task and when the training step is solved only approximately.

3.1 Proposed Two-sample Test

Our proposed test consists of the following two steps. 1. We train a neural network over an auxiliary training data set. 2. We then evaluate the maximum mean discrepancy test statistic (Gretton et al., 2012a) (or a variant of it), using as kernel the mapping from the input domain onto the network's last hidden layer.

3.1.1 Training Step

Let the training data be $X'_{n'}$ and $Y'_{m'}$, and denote $N = n' + m'$. We run a (potentially inexact) training algorithm to find $\phi_N \in \mathcal{TF}_N$ with

$$\left\| \frac{1}{N}\left( \sum_{i=1}^{n'} \phi_N(X'_i) - \sum_{i=1}^{m'} \phi_N(Y'_i) \right) \right\| + \eta \;\geq\; \max_{\phi \in \mathcal{TF}_N} \left\| \frac{1}{N}\left( \sum_{i=1}^{n'} \phi(X'_i) - \sum_{i=1}^{m'} \phi(Y'_i) \right) \right\|.$$

Here, $\eta \geq 0$ is a fixed leniency parameter (independent of $N$); finding true global optima in neural networks is a hard problem, and an $\eta > 0$ allows us to settle for good-enough, local solutions.

Interpretation as Empirical Risk Minimization. If we identify $X'_i$ with $(Z'_i, 1)$ and $Y'_i$ with $(Z'_{n'+i}, -1)$ in a regression setting, this is equivalent to an (inexact) empirical risk minimization with loss function $L(t, \hat{t}) = 1 - t\hat{t}$:

$$\max_{\phi} \left\| \frac{1}{N} \sum_{i=1}^{N} t_i \phi(Z'_i) \right\| = \max_{\phi} \max_{\|w\| \leq 1} \frac{1}{N} \sum_{i=1}^{N} t_i w^\top \phi(Z'_i),$$

which is equivalent to

$$\min_{\phi} \min_{\|w\| \leq 1} \frac{1}{N} \sum_{i=1}^{N} L\big(t_i, w^\top \phi(Z'_i)\big).$$
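A minimal sketch of this training step, under the assumptions above and reusing the hypothetical `DeepTanhFeatures` module from before (our illustration, not the authors' implementation): gradient ascent on the norm of the mean feature difference, with the leniency parameter η corresponding to stopping after finitely many steps.

```python
import torch

# Assumes DeepTanhFeatures from the previous sketch is in scope.
def train_feature_map(X_train, Y_train, depth=4, steps=500, lr=1e-2):
    """Approximately maximize || (1/N) (sum phi(X'_i) - sum phi(Y'_i)) ||,
    the (inexact) training objective above."""
    d = X_train.shape[1]
    phi = DeepTanhFeatures(d=d, depth=depth)
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    N = len(X_train) + len(Y_train)
    for _ in range(steps):
        diff = (phi(X_train).sum(0) - phi(Y_train).sum(0)) / N
        loss = -diff.norm()      # negate: optimizer minimizes, we ascend
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi

X_tr = torch.randn(120, 5)        # auxiliary sample from p'
Y_tr = torch.randn(80, 5) + 0.5   # auxiliary sample from q'
phi_N = train_feature_map(X_tr, Y_tr)
```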
Recommended publications
  • Appendix A Basic Statistical Concepts for Sensory Evaluation
Appendix A provides a quick introduction to statistics used for sensory evaluation data, including measures of central tendency and dispersion. The logic of statistical hypothesis testing is introduced. Simple tests on pairs of means (the t-tests) are described with worked examples. The meaning of a p-value is reviewed.

Contents: A.1 Introduction; A.2 Basic Statistical Concepts (A.2.1 Data Description, A.2.2 Population Statistics); A.3 Hypothesis Testing and Statistical Inference (A.3.1 The Confidence Interval, A.3.2 Hypothesis Testing, A.3.3 A Worked Example, A.3.4 A Few More Important Concepts, A.3.5 Decision Errors); A.4 Variations of the t-Test (A.4.1 The Sensitivity of the Dependent t-Test for Sensory Data); A.5 Summary: Statistical Hypothesis Testing; A.6 Postscript: What p-Values Signify and What They Do Not; A.7 Statistical Glossary; References.

A.1 Introduction. The main body of this book has been concerned with using good sensory test methods that can generate quality data in well-designed and well-executed studies. Now we turn to summarize the applications of statistics to sensory data analysis. Although statistics are a necessary part of sensory research, the sensory scientist would do well to keep in mind O'Mahony's admonishment: statistical analysis, no matter how clever, cannot be used to save a poor experiment. The techniques of statistical analysis do, however, serve several useful purposes, mainly in the efficient summarization of data and in allowing the sensory scientist to make reasonable conclusions from the information gained. It is important when taking a sample or designing an experiment to remember that no matter how powerful the statistics used, the inferences made from a sample are only as good as the data in that sample.
  • One Sample t Test
Copyright © October 17, 2020 by NEH. One Sample t Test. Nathaniel E. Helwig, University of Minnesota.

1 How Beer Changed the World

William Sealy Gosset was a chemist and statistician who was employed as the Head Experimental Brewer at Guinness in the early 1900s. As a part of his job, Gosset was responsible for taking samples of beer and performing various analyses to ensure that the beer was of the proper composition and quality. While at Guinness, Gosset taught himself experimental design and statistical analysis (which were new and exciting fields at the time), and he even spent some time in 1906–1907 studying in Karl Pearson's laboratory. Gosset's work often involved collecting a small number of samples, and testing hypotheses about the mean of the population from which the samples were drawn. For example, in the brewery, Gosset may want to test whether the population mean amount of some ingredient (e.g., barley) was equal to the intended value. And, on the farm, Gosset may want to test how different farming techniques and growing conditions affect the barley yields. Through this work, Gosset noticed that the normal distribution was not adequate for testing hypotheses about the mean in small samples of data with an unknown variance.

As a reminder, if $X \sim N(\mu, \sigma^2)$, then $\bar{x} \sim N(\mu, \sigma^2/n)$, where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$, which implies

$$Z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1),$$

i.e., the Z test statistic follows a standard normal distribution. However, in practice, the population variance $\sigma^2$ is often unknown, so the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$ must be used in its place.
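To make the distinction concrete, here is a short sketch (ours, not from Helwig's notes) computing the one-sample t statistic by hand and then via scipy; the data and hypothesized mean `mu0` are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=5.2, scale=1.0, size=12)  # small sample, unknown variance
mu0 = 5.0                                    # hypothesized population mean

xbar, s, n = x.mean(), x.std(ddof=1), len(x)
t_stat = (xbar - mu0) / (s / np.sqrt(n))     # t replaces Z when s estimates sigma
p_val = 2 * stats.t.sf(abs(t_stat), df=n - 1)

print(t_stat, p_val)
print(stats.ttest_1samp(x, popmean=mu0))     # same result via scipy
```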
  • A Novel Joint Location-Scale Testing Framework for Improved Detection of Variants with Main or Interaction Effects
A novel joint location-scale testing framework for improved detection of variants with main or interaction effects. David Soave,¹,² Andrew D. Paterson,¹,² Lisa J. Strug,¹,² Lei Sun¹,³* — ¹Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, M5T 3M7, Canada; ²Program in Genetics and Genome Biology, Research Institute, The Hospital for Sick Children, Toronto, Ontario, M5G 0A4, Canada; ³Department of Statistical Sciences, University of Toronto, Toronto, Ontario, M5S 3G3, Canada. *Correspondence to: Lei Sun, Department of Statistical Sciences, 100 St George Street, University of Toronto, Toronto, ON M5S 3G3, Canada. E-mail: [email protected]

Abstract: Gene-based, pathway and other multivariate association methods are now commonly implemented in whole-genome scans but, paradoxically, do not account for G×G and G×E interactions, which often motivate their implementation in the first place. Direct modeling of potential interacting variables, however, can be challenging due to heterogeneity, missing data or multiple testing. Here we propose a novel and easy-to-implement joint location-scale (JLS) association testing procedure that can account for complex genetic architecture without explicitly modeling interaction effects, and is suitable for large-scale whole-genome scans and meta-analyses. We focus on Fisher's method and use it to combine evidence from the standard location/mean test and the more recent scale/variance test (JLS-Fisher), and we describe its use for single-variant, gene-set and pathway association analyses. We support our findings with analytical work and large-scale simulation studies. We recommend the use of this analytic strategy to rapidly and correctly prioritize susceptibility loci across the genome for studies of complex traits and complex secondary phenotypes of 'simple' Mendelian disorders.
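The JLS-Fisher idea of combining a mean test and a variance test can be sketched in a few lines. This is a hypothetical illustration of the general principle, not the authors' software, and it uses generic location/scale tests rather than the genetics-specific ones in the paper; Fisher's method assumes the combined p-values are independent, a point the paper treats with care.

```python
import numpy as np
from scipy import stats

def jls_fisher(x, y):
    """Combine a location test and a scale test via Fisher's method:
    -2 * sum(log p_i) ~ chi^2 with 2k degrees of freedom under H0."""
    p_loc = stats.ttest_ind(x, y, equal_var=False).pvalue  # location/mean
    p_scale = stats.levene(x, y).pvalue                    # scale/variance
    chi2 = -2 * (np.log(p_loc) + np.log(p_scale))
    return stats.chi2.sf(chi2, df=4)  # k = 2 tests -> 2k = 4 df

rng = np.random.default_rng(1)
x = rng.normal(0, 1.0, 100)
y = rng.normal(0, 1.6, 100)  # same mean, different variance
print(jls_fisher(x, y))      # small: the scale component drives the signal
```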
  • Basic Statistical Concepts for Sensory Evaluation
BASIC STATISTICAL CONCEPTS FOR SENSORY EVALUATION

"It is important when taking a sample or designing an experiment to remember that no matter how powerful the statistics used, the inferences made from a sample are only as good as the data in that sample. No amount of sophisticated statistical analysis will make good data out of bad data. There are many scientists who try to disguise badly constructed experiments by blinding their readers with a complex statistical analysis." —O'Mahony (1986, pp. 6, 8)

INTRODUCTION

The main body of this book has been concerned with using good sensory test methods that can generate quality data in well-designed and well-executed studies. Now we turn to summarize the applications of statistics to sensory data analysis. Although statistics are a necessary part of sensory research, the sensory scientist would do well to keep in mind O'Mahony's admonishment: statistical analysis, no matter how clever, cannot be used to save a poor experiment. The techniques of statistical analysis do, however, serve several useful purposes, mainly in the efficient summarization of data and in allowing the sensory scientist to make reasonable conclusions from the information gained in an experiment. One of the most important conclusions is to help rule out the effects of chance variation in producing our results. "Most people, including scientists, are more likely to be convinced by phenomena that cannot readily be explained by a chance hypothesis" (Carver, 1978, p. 587). Statistics function in three important ways in the analysis and interpretation of sensory data.
  • Nonparametric Statistics
14 Nonparametric Statistics

Contents: 14.1 Introduction: Distribution-Free Tests; 14.2 Single-Population Inferences; 14.3 Comparing Two Populations: Independent Samples; 14.4 Comparing Two Populations: Paired Difference Experiment; 14.5 Comparing Three or More Populations: Completely Randomized Design; 14.6 Comparing Three or More Populations: Randomized Block Design; 14.7 Rank Correlation.

Where we've been: We presented methods for making inferences about means (Chapters 7–10) and about the correlation between two quantitative variables (Chapter 11). These methods required that the data be normally distributed or that the sampling distributions of the relevant statistics be normally distributed.

Where we're going: Develop the need for inferential techniques that require fewer or less stringent assumptions than the methods of Chapters 7–10 and 11 (14.1); introduce nonparametric tests that are based on ranks, i.e., on an ordering of the sample measurements according to their relative magnitudes (14.2–14.7); present a nonparametric test about the central tendency of a single population (14.2); present a nonparametric test for comparing two populations with independent samples (14.3) — see the sketch after this entry; present a nonparametric test for comparing two populations with paired samples (14.4); present a nonparametric test for comparing three or more populations using a designed experiment (14.5–14.6); present a nonparametric test for rank correlation (14.7).

Statistics in Action: How vulnerable are New Hampshire wells to groundwater contamination? Methyl tert-butyl ether (commonly known as MTBE) is a volatile, flammable, colorless liquid manufactured by the chemical reaction of methanol and isobutylene. … the measuring device, the MTBE values for these wells are recorded as .2
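As one example of the rank-based tests this chapter introduces, here is a minimal sketch (ours, not from the textbook) of comparing two independent samples with the Wilcoxon rank-sum / Mann-Whitney U test; the lognormal data are illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
a = rng.lognormal(mean=0.0, sigma=0.5, size=40)  # skewed, non-normal data
b = rng.lognormal(mean=0.4, sigma=0.5, size=40)

# Rank-based test: no normality assumption on either population.
print(mannwhitneyu(a, b, alternative='two-sided'))
```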
  • The NPAR1WAY Procedure (an individual chapter from the SAS/STAT® 14.2 User's Guide)
SAS/STAT® 14.2 User's Guide: The NPAR1WAY Procedure. This document is an individual chapter from the SAS/STAT® 14.2 User's Guide. The correct bibliographic citation for the manual is: SAS Institute Inc. 2016. SAS/STAT® 14.2 User's Guide. Cary, NC: SAS Institute Inc. Copyright © 2016, SAS Institute Inc., Cary, NC, USA. All Rights Reserved.
  • Chapter 10 Analysis of Variance
Chapter 10: Analysis of Variance

Chapter table of contents: Introduction (p. 205); One-Way Analysis of Variance (p. 209); Nonparametric One-Way Analysis of Variance (p. 215); Factorial Analysis of Variance (p. 219); Linear Models (p. 224); References (p. 231).

Introduction. Analysis of variance is a technique for exploring the variation of a continuous response variable (dependent variable). The response variable is measured at different levels of one or more classification variables (independent variables). The variation in the response due to the classification variables is computed, and you can test this variation against the residual error to determine the significance of the classification effects. [Figure 10.1: Analysis of Variance Menu]

The Analyst Application provides several types of analyses of variance (ANOVA). The One-Way ANOVA task compares the means of the response variable over the groups defined by a single classification variable; see the section "One-Way Analysis of Variance" beginning on page 209 for more information, and the sketch below. The Nonparametric One-Way ANOVA task performs tests for location and scale differences over the groups defined by a single classification variable; eight nonparametric tests are offered (see the section "Nonparametric One-Way Analysis of Variance" beginning on page 215). The Factorial ANOVA task compares the means of the response variable over the groups defined by one or more classification variables. This type of analysis is useful when you have multiple ways of classifying the response values.
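A minimal one-way ANOVA in the sense described above — our Python sketch of the same statistical idea, unrelated to the SAS Analyst interface; the three groups are illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(11)
# Response measured at three levels of a single classification variable.
g1 = rng.normal(10.0, 2.0, 30)
g2 = rng.normal(11.5, 2.0, 30)
g3 = rng.normal(9.5, 2.0, 30)

# Tests between-group variation against residual (within-group) error.
print(f_oneway(g1, g2, g3))
```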
  • Statistical Inference with Paired Observations and Independent Observations in Two Samples
Statistical inference with paired observations and independent observations in two samples. Benjamin Fletcher Derrick. A thesis submitted in partial fulfilment of the requirements of the University of the West of England, Bristol, for the degree of Doctor of Philosophy. Applied Statistics Group, Engineering Design and Mathematics, Faculty of Environment and Technology, University of the West of England, Bristol. February, 2020.

Abstract: A frequently asked question in quantitative research is how to compare two samples that include some combination of paired observations and unpaired observations. This scenario is referred to as 'partially overlapping samples'. Most frequently the desired comparison is that of central location. Depending on the context, the research question could be a comparison of means, distributions, proportions or variances. Approaches that discard either the paired observations or the independent observations are customary. Existing approaches evoke much criticism. Approaches that make use of all of the available data are becoming more prominent. Traditional and modern approaches for the analyses for each of these research questions are reviewed. Novel solutions for each of the research questions are developed and explored using simulation. Results show that proposed tests which report a direct measurable difference between two groups provide the best solutions. These solutions advance traditional methods in this area that have remained largely unchanged for over 80 years. An R package is detailed to assist users to perform these new tests in the presence of partially overlapping samples.

Acknowledgments: I pay tribute to my colleagues in the Mathematics Cluster and the Applied Statistics Group at the University of the West of England, Bristol.
  • 1. Preface 2. Introduction 3. Sampling Distribution
Summary Papers — Applied Multivariate Statistical Modeling — Sampling Distribution — Chapter Three. Author: Mohammed R. Dahman, Ph.D. in MIS. ED.10.18. License: CC-By Attribution 4.0 International. Citation: Dahman, M. R. (2018, October 22). AMSM — Sampling Distribution — Chapter Three. https://doi.org/10.31219/osf.io/h5auc

1. Preface. Differences between population parameters and sample statistics are introduced, followed by a comprehensive analysis of the sampling distribution (i.e., its definition and properties). We then discuss the essential reference distributions: the Z distribution, the chi-square distribution, the t distribution and the F distribution. At the end, we introduce the central limit theorem and the common sampling strategies. Please check the dataset from (Dahman, 2018a), chapter one (Dahman, 2018b), and chapter two (Dahman, 2018c).

2. Introduction. As a starting point I have to draw your attention to the importance of this unit. The bad news is that I have to use some rigid mathematics and harsh definitions. However, the good news is that I assure you that by the end of this chapter you will be able to understand every word, in addition to all the math formulas. From chapter two (Dahman, 2018a), we have understood the essential definitions of population, sample, and distribution. Now I feel very much comfortable to share this example with you. Let's say I have a population (A), and I'm able to draw from this population any sample (s). The facts are: 1. The size of (A) is (N); the mean is (µ); the variance is (σ²); the standard deviation is (σ); the proportion is (π); the correlation coefficient is (ρ).
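A small simulation (our illustration, not from the chapter) shows what the sampling distribution of the mean looks like in practice: sample means center on µ with spread close to σ/√n, as the central limit theorem predicts even for a skewed population.

```python
import numpy as np

rng = np.random.default_rng(7)
population = rng.exponential(scale=2.0, size=100_000)  # skewed population A
mu, sigma = population.mean(), population.std()

n = 50
means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])

print(mu, means.mean())                 # sample means center on mu
print(sigma / np.sqrt(n), means.std())  # spread close to sigma / sqrt(n)
```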
  • Simultaneously Testing for Location and Scale Parameters of Two Multivariate Distributions
Revista Colombiana de Estadística, July 2019, Volume 42, Issue 2, pp. 185 to 208. DOI: http://dx.doi.org/10.15446/rce.v42n2.70815

Simultaneously Testing for Location and Scale Parameters of Two Multivariate Distributions. Atul Chavan, Digambar Shirke — Department of Statistics, Shivaji University, Kolhapur, India.

Abstract: In this article, we propose nonparametric tests for simultaneously testing equality of location and scale parameters of two multivariate distributions by using nonparametric combination theory. Our approach is to combine the data-depth-based location and scale tests using a combining function to construct a new data-depth-based test for testing both location and scale parameters. Based on this approach, we have proposed several tests. Fisher's permutation principle is used to obtain p-values of the proposed tests. Performance of the proposed tests has been evaluated in terms of empirical power for symmetric and skewed multivariate distributions and compared to the existing test based on data depth. The proposed tests are also applied to a real-life data set for illustrative purposes. Key words: Combining function; Data depth; Permutation test; Two-sample test.
  • Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Sensors 2013, 13, 14041-14054; doi:10.3390/s131014041. Sensors, ISSN 1424-8220, www.mdpi.com/journal/sensors. Open Access Article.

Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette. Wenzhu Huang, Wentao Zhang* and Fang Li. Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; E-Mails: [email protected] (W.H.); [email protected] (F.L.). *Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +86-10-8230-4229; Fax: +86-10-8230-4082. Received: 16 August 2013; in revised form: 22 September 2013 / Accepted: 11 October 2013 / Published: 17 October 2013.

Abstract: This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBG) include higher strain resolution for AE detection and the ability to account for two different types of AE source in localization.
  • Better-Than-Chance Classification for Signal Detection
Better-Than-Chance Classification for Signal Detection. Jonathan D. Rosenblatt (Department of IE&M and Zlotowsky Center for Neuroscience, Ben Gurion University of the Negev, Israel), Yuval Benjamini (Department of Statistics, Hebrew University, Israel), Roee Gilron (Movement Disorders and Neuromodulation Center, University of California, San Francisco), Roy Mukamel (School of Psychological Science, Tel Aviv University, Israel), Jelle Goeman (Department of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands). December 15, 2017. arXiv:1608.08873v2 [stat.ME].

Abstract: The estimated accuracy of a classifier is a random quantity with variability. A common practice in supervised machine learning is thus to test if the estimated accuracy is significantly better than chance level. This method of signal detection is particularly popular in neuroimaging and genetics. We provide evidence that using a classifier's accuracy as a test statistic can be an underpowered strategy for finding differences between populations, compared to a bona-fide statistical test. It is also computationally more demanding than a statistical test. Via simulation, we compare test statistics that are based on classification accuracy to others based on multivariate test statistics. We find that the probability of detecting differences between two distributions is lower for accuracy-based statistics. We examine several candidate causes for the low power of accuracy tests. These causes include: the discrete nature of the accuracy test statistic, the type of signal accuracy tests are designed to detect, their inefficient use of the data, and their regularization. When the purpose of the analysis is not signal detection, but rather the evaluation of a particular classifier, we suggest several improvements to increase power.
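The accuracy-as-test-statistic approach the paper critiques can be sketched quickly (our illustration, not the authors' code): train a classifier on one half of the data, then test whether held-out accuracy beats chance with a binomial test.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (100, 10)),
               rng.normal(0.3, 1, (100, 10))])   # two populations
y = np.repeat([0, 1], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

n_correct = int((clf.predict(X_te) == y_te).sum())
# Reject H0 (chance-level accuracy) if held-out accuracy beats 0.5.
print(binomtest(n_correct, n=len(y_te), p=0.5, alternative='greater').pvalue)
```

Note how the train-test split discards half the data from the test statistic — exactly the inefficiency the abstract points to.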