Machine Learning for Blob Detection in High-Resolution 3D Microscopy Images


Degree project in Computer Science and Engineering, second cycle, 30 credits
Stockholm, Sweden, 2018

Author: Martin ter Haak (EIT Digital Data Science)
Date: June 6, 2018
Supervisor: Vladimir Vlassov
Examiner: Anne Håkansson
KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science (EECS)

Abstract

The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties like intensity or shape. Bio-image analysis is a common application, where blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing of ribonucleic acid (RNA), for example, the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells.

Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must seek the optimal parameters for each step; these are often specific to one image and cannot be generalised to other images. Moreover, some of the existing tools are not suited to the scale of the microscopy images, which are often in very high resolution and 3D.

Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images. The research question is therefore how ML can be used effectively to perform blob detection.

A blob detector is proposed that first extracts a set of relevant and non-redundant image features, then classifies pixels as blobs, and finally uses a clustering algorithm to split up connected blobs. The detector works out-of-core, meaning that it can process images that do not fit in memory by dividing them into chunks. The results prove the feasibility of this blob detector and show that it can compete with other popular software for blob detection. Unlike other tools, however, the proposed blob detector does not require parameter tuning, which makes it easier to use and more reliable.

Keywords: Biomedical Image Analysis; Blob Detection; Machine Learning; 3D; Computer Vision; Image Processing
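The abstract describes the proposed detector only at a high level. The following is a minimal sketch of such a pipeline, assuming SciPy and scikit-learn are available; the feature set, the random forest classifier, and the MeanShift clustering step are illustrative stand-ins for the alternatives the thesis actually compares, not its implementation.

```python
# Minimal sketch of an ML-based blob detection pipeline (illustrative, not the thesis code).
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import MeanShift


def extract_features(volume, sigmas=(1.0, 2.0, 4.0)):
    """Per-voxel features: raw intensity plus multi-scale Gaussian and
    Laplacian-of-Gaussian (blob-like) responses."""
    volume = volume.astype(np.float32)
    feats = [volume]
    for s in sigmas:
        feats.append(ndi.gaussian_filter(volume, sigma=s))
        feats.append(-ndi.gaussian_laplace(volume, sigma=s))  # bright blobs give positive response
    return np.stack(feats, axis=-1)                           # shape: (z, y, x, n_features)


def train_pixel_classifier(volume, blob_mask):
    """Fit a per-voxel classifier from a labelled training crop
    (blob_mask is a boolean array marking blob voxels)."""
    feats = extract_features(volume)
    X = feats.reshape(-1, feats.shape[-1])
    y = blob_mask.ravel().astype(np.int8)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X, y)
    return clf


def detect_blobs(volume, clf):
    """Classify voxels, then cluster the blob voxels to split touching blobs.
    Returns one (z, y, x) centre per detected blob."""
    feats = extract_features(volume)
    pred = clf.predict(feats.reshape(-1, feats.shape[-1]))
    blob_voxels = np.argwhere(pred.reshape(volume.shape) == 1)
    if len(blob_voxels) == 0:
        return np.empty((0, 3))
    # MeanShift is one of the clustering algorithms the thesis considers; the bandwidth
    # (roughly the expected blob radius, in voxels) is an assumed value here.
    clustering = MeanShift(bandwidth=3.0).fit(blob_voxels)
    return clustering.cluster_centers_
```

In this arrangement the only training input is a small labelled crop; no per-image parameters need to be tuned at detection time, which is the property the abstract emphasises. Applying the trained detector chunk by chunk, as the out-of-core design requires, is sketched after the table of contents.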
Abstract (Swedish)

The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties such as intensity or shape. Biological image analysis is a common application in which blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing of ribonucleic acid (RNA), the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells.

Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must find the optimal parameters for each step, which are often specific to that particular image and cannot be generalised to other images. Moreover, some of the existing tools are not suitable for the size of the microscopy images, which are often in very high resolution and 3D.

Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images. The research question is therefore how ML can be used effectively to perform blob detection.

A blob detection algorithm is proposed that first extracts a set of relevant and non-redundant image features, then classifies pixels as blobs, and finally uses a clustering algorithm to split up connected blobs. The algorithm works out-of-core, which means that it can process images that do not fit in memory by dividing them into smaller parts. The results show that the detection algorithm is feasible and that it can compete with other popular software for blob detection. In contrast to other tools, however, the proposed detection algorithm does not require tuning of its parameters, which makes it easier to use and more reliable.

Keywords: Biomedical image analysis; Blob detection; Machine learning; 3D; Computer vision; Image processing

Acknowledgements

First, I would like to express my gratitude towards my examiner, Assoc. Prof. Anne Håkansson at KTH Royal Institute of Technology, for guiding me from the first project proposal all the way to the final deliverable. She was always open to answering the most troublesome questions or providing critical feedback, and thanks to her meticulous remarks I was able to reshape and tweak my work to reach the quality it has now.

I would also like to thank my supervisor Jacob Kowalewski at Single Technologies, under whom I performed this research. Not only did he provide me with the required resources at any moment, he also did not hesitate to free up time for discussion. That I was able to finish the project well within the set time is largely due to his dependable commitment. Moreover, his ideas and suggestions have strongly contributed to the approach applied in this project.

Furthermore, I would like to thank Single Technologies for providing me with a very interesting thesis subject and a pleasant working space, and my co-workers for the nice chats and the friendly ambience around the office.

Finally, I would like to thank my university supervisor, Assoc. Prof. Vladimir Vlassov, who provided me with some highly needed hints so that I could proceed with my research.

Martin ter Haak
Stockholm, May 2018
Contents

0.1 Acronyms and abbreviations

1 Introduction
  1.1 Background
  1.2 Problem
  1.3 Purpose
  1.4 Goals
    1.4.1 Benefits, ethics and sustainability
  1.5 Research methodology
  1.6 Delimitations
  1.7 Outline
2 An introduction to in situ RNA sequencing
3 Blob detection
  3.1 Automatic scale selection
  3.2 Algorithms
    3.2.1 Template matching
    3.2.2 Thresholding
    3.2.3 Local extrema
    3.2.4 Differential extrema
    3.2.5 Machine learning
    3.2.6 Super-pixel classification
4 Machine learning
  4.1 Classification
    4.1.1 Naive Bayes
    4.1.2 Logistic regression
    4.1.3 K-Nearest Neighbour
    4.1.4 Decision Tree
    4.1.5 Random Forest
    4.1.6 AdaBoost
    4.1.7 Support Vector Machines
    4.1.8 Neural network
    4.1.9 Validation
  4.2 Clustering
    4.2.1 K-means
    4.2.2 Agglomerative clustering
    4.2.3 MeanShift
    4.2.4 Spectral clustering
    4.2.5 Other clustering algorithms
    4.2.6 Validation
  4.3 Dimensionality reduction
    4.3.1 Principal Component Analysis (PCA)
5 Related work
  5.1 Blob detection
  5.2 Machine learning for biomedical image analysis
6 Methodology
  6.1 Blob detection process
    6.1.1 Feature extraction
    6.1.2 Feature compression
    6.1.3 Pixel classification
    6.1.4 Pixel clustering
    6.1.5 Blob extraction
    6.1.6 Blob filtration
    6.1.7 Chunking
  6.2 Experiments
    6.2.1 A: Feature extraction
    6.2.2 B: Feature compression
    6.2.3 C: Pixel classification
    6.2.4 D: Pixel clustering
    6.2.5 E: Run on whole image
    6.2.6 F: Comparison with state-of-the-art
    6.2.7 Summary
  6.3 Data collection
    6.3.1 Characteristics
    6.3.2 Labelling
  6.4 Experimental design
    6.4.1 Test system
    6.4.2 Software
    6.4.3 Data analysis
    6.4.4 Overall reliability and validity
7 Analysis
  7.1 Results from A: Feature extraction
  7.2 Results from B: Feature compression and C: Pixel classification
  7.3 Results from D: Pixel clustering
  7.4 Results from E: Run on whole image
  7.5 Results from F: Comparison with state-of-the-art
8 Conclusions
  8.1 Discussion
  8.2 Future work
Bibliography
A Experiment F software configurations
  A.1 Crops
  A.2 MFB detector
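The outline ends with the chunking step (Section 6.1.7) that makes the detector out-of-core, as stated in the abstract. Below is a minimal sketch of such chunked processing, assuming a raw volume read through numpy.memmap; the chunk size, overlap margin, and the detect_fn callback are illustrative assumptions rather than the thesis's configuration.

```python
# Minimal sketch of out-of-core (chunked) processing of a large 3D volume.
import numpy as np


def iter_chunks(shape, chunk=(64, 256, 256), overlap=8):
    """Yield (read_slice, core_slice, offset) triples covering a 3D volume; each read
    slice includes an overlap margin so blobs on chunk borders are not cut off."""
    for z in range(0, shape[0], chunk[0]):
        for y in range(0, shape[1], chunk[1]):
            for x in range(0, shape[2], chunk[2]):
                start = np.array([z, y, x])
                stop = np.minimum(start + chunk, shape)
                rstart = np.maximum(start - overlap, 0)
                rstop = np.minimum(stop + overlap, shape)
                read = tuple(slice(a, b) for a, b in zip(rstart, rstop))
                core = tuple(slice(a, b) for a, b in zip(start, stop))
                yield read, core, rstart


def detect_out_of_core(path, shape, dtype, detect_fn):
    """Run a per-chunk detector over a raw volume on disk without loading it all.
    detect_fn(chunk) is assumed to return an array of blob centres in chunk coordinates."""
    volume = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    centres = []
    for read, core, offset in iter_chunks(shape):
        chunk = np.asarray(volume[read])          # only this chunk is held in memory
        for c in detect_fn(chunk) + offset:       # convert back to global coordinates
            # Keep a detection only if its centre lies in the chunk core, so blobs
            # falling in the overlap region are not counted twice.
            if all(s.start <= v < s.stop for v, s in zip(c, core)):
                centres.append(c)
    return np.array(centres)
```

A detector like detect_blobs from the earlier sketch could be passed as detect_fn; the overlap margin ensures that a blob straddling a chunk border is still seen in full by exactly one chunk.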