Transfer Learning for Cell Image Reconstruction

Andrei Raureanu
University of Twente
P.O. Box 217, 7500AE Enschede
The Netherlands
[email protected]

ABSTRACT
Whole slide scanners are a useful tool for performing cell analysis efficiently. A major problem is that scanning is not always perfect, which results in artifacts such as blur in parts of the scan. This causes further problems for the specialists using the scanner, as they have to inspect the blurry areas manually and reach an objective conclusion. To address this issue, two Convolutional Autoencoder models are designed and implemented to reconstruct cell slide images. The performance of the models at removing blur from sections of cell slides is then investigated. The robustness of the Autoencoders is also tested on artificially generated cell images to which Gaussian blur has been applied. Both trained models successfully deblur cell images, with only minor performance decreases when the blur is caused by the camera lens focusing below the focal plane. Reconstruction of synthetic cells is also achievable, with only a 15% performance decrease when deblurring cell images with high amounts of Gaussian blur applied to them.

Keywords
denoising, Autoencoder, neural network, cell imaging, machine learning, image reconstruction

1. INTRODUCTION
In the biomedical industry, whole slide scanners are one of the many tools used to analyse cells for research purposes or disease diagnosis. The underlying process, cell imaging, consists of staining the cell dish with a fluorescent substance, taking pictures of different areas using a microscope at high magnification (20x - 40x), and then patching them together to create one high-resolution scan. The scan image can then be used for further analysis of the cells, such as classification or counting.

However, this technique cannot deliver high-quality images all the time. It is estimated that 5% of scans present artifacts [14]. One of the most common problems encountered in cell imaging is out-of-focus blur [5]. It is caused by the fact that cells do not sit at the same level of the Z-axis on the chamber slide, meaning that individual cells can be higher or lower than each other on the dish. This results in some scan patches being blurry due to incorrect focus of the lens, even when the microscope has an automated focus system. The blurry areas in the scan then need to be inspected manually, which is tedious and time-consuming.

Various algorithms and systems have been proposed to deal with this issue [9, 15, 12]. There are deep learning solutions that detect blurry areas and segment them [11], which is relevant as those areas are the ones most in need of deblurring. This allows the microscope to re-scan just the blurry area without performing a full scan. Unfortunately, even rescanning a small section requires a heavy time investment.

In recent years, Autoencoders have gained a lot of popularity, owing to their ability to reconstruct input containing artifacts into a clear counterpart. Because of this, Autoencoders look very suitable for the scope of this project. Another advantage of Autoencoders is that they do not need labeled data for training, which makes them ideal for unsupervised learning; it is also worth noting that obtaining large quantities of labeled data for training a specific model can be difficult.

In this research, two convolutional Autoencoders are trained using two different methods to reconstruct blurry cell images into their non-blurry counterparts. The focus is on out-of-focus blur anomalies, as these are the most common problem in cell imaging.

2. PROBLEM STATEMENT
Research has been done on ways to detect areas with blur anomalies in cell images for re-scanning purposes [17]; some examples even use deep learning for this [11]. Despite that, there have been very few attempts at "fixing" the out-of-focus image directly. This paper presents two systems that are trained to reconstruct out-of-focus cell images, thus de-blurring them. The two systems are also able to reconstruct images captured by scanners other than the one they were trained on.

2.1 Research Questions
Following the problem statement, these research questions are presented:

• RQ1: To what extent can an Autoencoder reconstruct out-of-focus cell images?
• RQ2: How robust is the Autoencoder for out-of-focus reconstruction when presented with cell images recorded with a different microscope?
• RQ2.1: To what extent can it reconstruct out-of-focus cell images recorded with a different microscope?

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
35th Twente Student Conference on IT, July 2nd, 2021, Enschede, The Netherlands.
Copyright 2021, University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science.

Figure 1. An example of a pair of cell images resulting from Hoechst staining. a) shows an in-focus cell image, while b) shows the same cell slide out-of-focus due to incorrect microscope calibration.

3. RELATED WORK
There has been substantial research into the detection of out-of-focus images in whole slide scanning. Senaras et al. [11] presented a deep learning solution, in the form of a convolutional neural network, to detect and segment blurry areas in digital whole slide images. In another paper, Yang et al. [17] argued that relative differences between two or more images generally lead to inconsistencies, which is why they created a deep neural network that predicts an absolute measure of quality based on the level of blur. What differs from Senaras' [11] model is that their training data was synthetically defocused to 11 absolute defocus levels.

Besides deep learning solutions, algorithms and systems have also been proposed for detection. One such example is the system designed by Zerbe et al. [18]. They make use of the image analysis software ImageJ and the Tenenbaum gradient to assess and classify image patch sharpness based on the degree of blur present. The full microscope image is split into image patches. Then, using batch processing across multiple computing nodes, each image patch is classified. Finally, all the patches are aggregated into a sharpness map.

In a similar fashion, Shakhawat et al. [12] propose a method to find the origin of the artifact, i.e. whether the distortion happened during the slide preparation stage or during scanning, with the help of a support vector machine that grades the quality of the region containing artifacts in the whole cell image scan.

There are records of Autoencoders being used for image processing. Gupta et al. [6] perform motion blur removal using Coupled Autoencoders. They use two Autoencoders, the first being the source and the second being the target. The first makes use of a blurry image, formed from the clear image with a blur kernel applied to it, and the second Autoencoder uses the clear image sample. The coupling then learns the mapping of the blurry image to the clear representation. This is done by having the first Autoencoder learn the latent representation [...] the blur from an image by sharpening it. For this they use convolutional layers inside the Autoencoder to downsample the training data, extract the relevant features from it, and then convert the image into a sharper version of itself.

From what has been gathered so far, there is a solid foundation of artifact recognition and classification, along with previous use cases of Autoencoders in image-related tasks, but not enough work on using Autoencoders on cell images specifically for deblurring. In this paper, two different Autoencoders are proposed for dealing with this problem.

4. METHODS
For this research, two Autoencoder architectures have been designed and implemented. They are compared against each other using the metrics mentioned, to see which approach works best. The first Autoencoder model is trained on the whole images from the training set, meaning no segmentation is used. The second Autoencoder, however, is trained on a data set derived from the original Human U2OS cell data set, with each image segmented into 36 patches that correspond to areas of the original image. Each image patch is then used as training data for the second Autoencoder.

Moving forward, the first Autoencoder is referred to as the "whole image Autoencoder" and the second as the "split image Autoencoder". The diagrams for both Autoencoders can be found in Appendix C1 and C2, respectively.

4.1 Whole image Autoencoder
The whole image Autoencoder starts with an Input layer of 128x128x1, which corresponds to the image shape of the training set. Afterwards, two pairs of Conv2D and MaxPooling2D layers are added, with a BatchNormalization layer in between them. The first Conv2D layer has 32 filters with a kernel size of (3, 3), while the second Conv2D layer has 64 filters, also with a kernel size of (3, 3).
Both Conv2D layers use the ReLU activation function with padding "same".
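The encoder layers described above, together with the patch segmentation from Section 4, can be sketched as follows. This is a minimal illustration assuming a Keras/TensorFlow implementation: the mirrored decoder, the (2, 2) pooling sizes, the sigmoid output, the optimizer/loss, and the 6x6 patch grid (inferred from the 36 patches) are assumptions not stated in the text, and split_into_patches is a hypothetical helper, not code from the paper.

```python
import numpy as np
from tensorflow.keras import layers, models


def split_into_patches(img: np.ndarray, grid: int = 6) -> np.ndarray:
    """Cut a square image into a grid x grid set of equal patches
    (36 patches for grid=6, as used for the split image Autoencoder)."""
    ph, pw = img.shape[0] // grid, img.shape[1] // grid
    patches = [
        img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
        for r in range(grid)
        for c in range(grid)
    ]
    return np.stack(patches)


def build_whole_image_autoencoder() -> models.Model:
    # Encoder exactly as described: two Conv2D/MaxPooling2D pairs with
    # BatchNormalization in between; 32 then 64 filters, (3, 3) kernels,
    # ReLU activation, padding "same".
    inp = layers.Input(shape=(128, 128, 1))
    x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D((2, 2))(x)
    # Hypothetical mirrored decoder (not specified in the excerpt above):
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D((2, 2))(x)
    x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
    x = layers.UpSampling2D((2, 2))(x)
    out = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)
    return models.Model(inp, out)


patches = split_into_patches(np.zeros((132, 132)))  # e.g. a 132x132 slide image
print(patches.shape)        # (36, 22, 22)
model = build_whole_image_autoencoder()
print(model.output_shape)   # (None, 128, 128, 1)
```

Since both Conv2D layers use "same" padding, only the MaxPooling2D layers reduce the spatial resolution (128 -> 64 -> 32), and the decoder's upsampling restores the 128x128x1 output so the reconstruction can be compared pixel-wise against the sharp target.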