Adaptive Learning Methods and Their Use in Flaw Classification Sriram Chavali Iowa State University


Adaptive learning methods and their use in flaw classification

by

Sriram Chavali

A thesis submitted to the graduate faculty in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE

Department: Aerospace Engineering and Engineering Mechanics
Major: Aerospace Engineering
Major Professor: Dr. Lester W. Schmerr

Iowa State University
Ames, Iowa
1996

Copyright © Sriram Chavali, 1996. All rights reserved.

Recommended Citation: Chavali, Sriram, "Adaptive learning methods and their use in flaw classification" (1996). Retrospective Theses and Dissertations. 111. https://lib.dr.iastate.edu/rtd/111

This thesis is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].

Graduate College, Iowa State University: This is to certify that the Master's thesis of Sriram Chavali has met the thesis requirements of Iowa State University.

TABLE OF CONTENTS

ACKNOWLEDGMENTS  viii
ABSTRACT  ix
1 INTRODUCTION  1
  1.1 NDE Flaw Classification  1
  1.2 Background  3
  1.3 Scope of the Thesis  4
2 CLUSTERING  5
  2.1 Introduction  5
  2.2 Similarity Measures  6
  2.3 Hierarchical Clustering  8
    2.3.1 Agglomerative hierarchical clustering  9
    2.3.2 Implementation of clustering in CLASS  12
  2.4 Results  13
3 BACKPROPAGATION NEURAL NETWORKS  17
  3.1 Introduction  17
  3.2 Overview of Neural Networks  17
  3.3 Historical Perspective  18
  3.4 Single Layer Perceptron  19
  3.5 Multi-Layer Perceptron  23
  3.6 Determining a Neuron's Output  25
  3.7 Learning  29
    3.7.1 Adjusting the weights of the output layer  29
    3.7.2 Adjusting the weights of the hidden layers  31
  3.8 Momentum  33
  3.9 Batching  33
  3.10 Advantages and Disadvantages of Backpropagation Learning  35
  3.11 Implementation of the backpropagation neural network algorithm in CLASS  35
  3.12 Results  37
    3.12.1 The parity problem  38
4 FEATURE DOMAINS  43
  4.1 Sampling Discrete Time Signals  43
  4.2 Frequency Domain  44
    4.2.1 Fourier series
    4.2.2 The discrete Fourier transform  46
    4.2.3 Computation of the discrete Fourier transform  48
    4.2.4 The Cooley-Tukey FFT algorithm  49
  4.3 Cepstrum Analysis
  4.4 Implementation of Frequency Domain and Cepstral Domain in CLASS  54
5 CONCLUSIONS AND FUTURE WORK
APPENDIX  User Manual for CLASS  57
BIBLIOGRAPHY  62

LIST OF TABLES

Table 2.1  Clustering algorithm results indicating the number of correctly classified samples  15
Table 3.1  XOR decision table  23
Table 3.2  Backpropagation neural network training on XOR with α = 0.3 and η = 0.7  37
Table 3.3  Backpropagation neural network training on XOR with α = 0.7 and η = 0.3  38
Table 3.4  Backpropagation neural network training on XOR with α = 0.5 and η = 0.5  39
Table 3.5  Parity problem of size 3  39
Table 3.6  Parity problem of size 4  40
Table 3.7  Backpropagation neural network training on the parity problem of size 3 with α = 0.3 and η = 0.7  41
Table 3.8  Backpropagation neural network training on the parity problem of size 4 with α = 0.3 and η = 0.7  41

LIST OF FIGURES

Figure 1.1  Flow chart for classification  2
Figure 2.1  Clustering of points in feature space  6
Figure 2.2  The effect of distance threshold on clustering  7
Figure 2.3  A dendrogram for hierarchical clustering  10
Figure 2.4  A flowchart of agglomerative hierarchical clustering  11
Figure 2.5  Clustering algorithm folder  14
Figure 2.6  Clustering algorithm report generated by CLASS  16
Figure 3.1  A graphical representation of an artificial neural network  18
Figure 3.2  A taxonomy of six neural nets  19
Figure 3.3  A simple neuron  20
Figure 3.4  Linear inseparability of the XOR problem  22
Figure 3.5  A three layer perceptron  24
Figure 3.6  A sigmoid function  26
Figure 3.7  Types of decision regions that can be formed by single- and multi-layer perceptrons with one and two layers of hidden units and two inputs. Shading denotes decision regions for class A. All the nodes use nonlinear activation functions.  27
Figure 3.8  An artificial neuron with an activation function  28
Figure 3.9  Training a weight in the output layer. The subscripts p and q refer to a specific neuron. The subscripts j and k refer to a layer.  30
Figure 3.10  Training a weight in a hidden layer. The subscripts m and p refer to a specific neuron. The subscripts i, j and k refer to a layer.  32
Figure 3.11  Flow chart illustrating the backpropagation training process  34
Figure 3.12  Backpropagation neural network folder  36
Figure 3.13  BPNN algorithm report generated by CLASS  42
Figure 4.1  Aliasing in a discrete time signal  44
Figure 4.2  Flow graph of the decimation-in-time decomposition of an N-point DFT into two N/2-point DFTs for N = 8  49
Figure 4.3  Flow graph of basic butterfly computation  50
Figure 4.4  Flow graph of Equations 4.28 and 4.29  51
Figure 4.5  Tree diagram depicting bit-reversed sorting  52
Figure 4.6  Flow graph of 8-point discrete Fourier transform  53
Figure A.1  CLASS folders  58
Figure A.2  Clustering algorithm folder  59
Figure A.3  Backpropagation neural network folder  61

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Lester Schmerr, for his guidance, direction and support of my research. He introduced me to the fields of nondestructive evaluation and artificial intelligence and was always there to help me with my numerous questions and doubts. I would also like to thank Dr. Chien-Ping Chiou for being patient with all my questions during the last year. I would also like to thank all the professors who taught me in their classes during the last two years.

I would like to thank my LaTeX guru, Dr. John Samsundar. But for John's suggestions and help, I would have had a tough time getting around the idiosyncrasies of LaTeX. I would like to thank my officemates, Dr. Terry Lerch, Raju Vuppala and Dr. Alex Sedov. They were always there to help me in doing the small things which are so important.

I would like to thank my brother Srikanth, who helped me stay focused during the last year. I would also like to thank all my friends for their support and help. Finally, I would like to thank my parents for the encouragement, support and ideals that they imparted to me.

ABSTRACT

An important goal of nondestructive evaluation is the detection and classification of flaws in materials. This process of 'flaw classification' involves the transformation of the 'raw' data into other domains, the extraction of features in those domains, and the use of those features in a classification algorithm that determines the class to which the flaw belongs. In this work, we describe a flaw classification software system, CLASS, and the updates made to it. Both a hierarchical clustering algorithm and a backpropagation neural network algorithm were implemented and integrated with CLASS. A fast Fourier transform routine was also added to CLASS in order to enable the use of frequency domain and cepstral domain features. This extended version of CLASS is a very user-friendly software system, which requires the user to have little knowledge of the actual learning algorithms. CLASS can be easily extended further, if needed, in the future.

1 INTRODUCTION

1.1 NDE Flaw Classification

Non-destructive evaluation (NDE) methods place various forms of energy into materials and try to examine the materials without harming them or affecting their performance. Examples of NDE methods and the types of energy they use are: ultrasound (acoustic energy), eddy currents (electrical energy), and X-rays (penetrating radiation). One important application of these NDE methods is to find, classify and characterize flaws based on their response to the types of energy present. Here we are interested only in the process of classifying flaws, which involves, for example, such tasks as distinguishing cracks from non-crack-like flaws.
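The classification pipeline described here (transform the raw signal to another domain, extract features there, then assign a class from those features) can be illustrated with a minimal sketch. This is not the CLASS implementation: the choice of FFT-magnitude features, the nearest-centroid decision rule, and the class names and centroid values are all assumptions made purely for the example.

```python
import numpy as np

def extract_features(signal, n_features=4):
    """Transform a raw time-domain signal into frequency-domain features.
    Here the features are the magnitudes of the first few FFT bins
    (an assumed, deliberately simple feature choice)."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[:n_features]

def classify(features, centroids):
    """Assign the feature vector to the class whose centroid is nearest in
    Euclidean distance -- a simple stand-in for a trained classifier."""
    distances = {label: np.linalg.norm(features - c)
                 for label, c in centroids.items()}
    return min(distances, key=distances.get)

# Hypothetical flaw classes with made-up feature-space centroids:
# "crack" responses assumed to concentrate energy in FFT bin 2,
# "void" responses in bin 0.
centroids = {
    "crack": np.array([0.0, 0.0, 30.0, 0.0]),
    "void":  np.array([10.0, 0.0, 0.0, 0.0]),
}

# A synthetic "measured" signal: a 2 Hz sinusoid sampled at 64 points,
# so its spectral energy falls in FFT bin 2.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
signal = np.sin(2 * np.pi * 2 * t)

label = classify(extract_features(signal), centroids)
```

Because the synthetic signal's energy sits in bin 2, its feature vector lies far closer to the assumed "crack" centroid than to "void", so the nearest-centroid rule returns "crack". A real system would learn the class regions from training data (e.g. via the clustering or backpropagation algorithms this thesis describes) rather than fixing centroids by hand.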