Abstractband

16 pages · PDF · 1020 KB

Abstractband · Volume 23 · Supplement 1 · September 2013

Clinical Neuroradiology
Official Journal of the German, Austrian, and Swiss Societies of Neuroradiology

Abstracts of the 48th Annual Meeting of the German Society of Neuroradiology (48. Jahrestagung der Deutschen Gesellschaft für Neuroradiologie), held as a joint annual meeting of the DGNR and the Austrian Society of Neuroradiology (ÖGNR), 10–12 October 2013, Gürzenich, Cologne. www.cnr.springer.de

Editors:
C. Ozdoba, Bern, Switzerland ([email protected])
L. Solymosi (Editor-in-Chief, responsible), Department of Neuroradiology, University Würzburg, Josef-Schneider-Straße 11, 97080 Würzburg, Germany ([email protected])
H. Urbach, Freiburg, Germany ([email protected])
M. Bendszus, Heidelberg, Germany ([email protected])
T. Krings, Toronto, ON, Canada ([email protected])

Published on behalf of the German Society of Neuroradiology (president: O. Jansen), the Austrian Society of Neuroradiology (president: J. Trenkler), and the Swiss Society of Neuroradiology (president: L. Remonda).

Clin Neuroradiol 2013 · No. 1 © Springer-Verlag
Clin Neuroradiol DOI 10.1007/s00062-013-0248-4

ABSTRACTS
48th Annual Meeting of the German Society of Neuroradiology, joint annual meeting of the DGNR and ÖGNR
10–12 October 2013, Gürzenich, Cologne
Congress presidents: Prof. Dr. Arnd Dörfler (Erlangen), Prim. Dr. Johannes Trenkler (Vienna)
This supplement was funded by the German Society of Neuroradiology.

Table of contents
Welcome address ........................... 3
Oral presentations ........................ 4
Posters ................................... 44
Author index .............................. 109

Welcome address (Grußwort)

Dear colleagues, dear friends of neuroradiology,

welcome to neuroRAD 2013 in Cologne, the 48th Annual Meeting of the German Society of Neuroradiology (DGNR), which has meanwhile grown into the largest neuroradiology meeting in Europe. This year's annual meeting is held jointly with our partner society, the Austrian Society of Neuroradiology (ÖGNR).

With its characteristic mix of continuing education, scientific presentations, workshops, satellite symposia and keynote lectures, neuroRAD 2013 offers a varied programme in the inspiring atmosphere of the Gürzenich in Cologne. Well over 200 abstract submissions promise a scientific programme of the highest standard. As in previous years, one main topic is interventional neuroradiology, in particular mechanical recanalization therapy for acute stroke. Besides technical aspects and interdisciplinary discussion, the focus here will be on the (forthcoming) randomized trials. An update on endovascular aneurysm therapy rounds off this topic.
A further main topic is the rapidly evolving field of medical technology and multimodal imaging. Particular attention is paid to the value of innovative techniques such as high-field MRI and hybrid imaging methods for our clinical decision-making. The programme is complemented by the topics of inflammatory and metabolic CNS diseases and epilepsy.

We are especially looking forward to top-class, internationally renowned speakers. A highlight of the congress will certainly be the lecture by Anne G. Osborn (Salt Lake City); further keynote speakers include D. Prayer (Vienna), Ch. Strother (Madison), J. Byrne (Oxford) and F. Barkhof (Amsterdam).

Last but not least, we place particular emphasis on high-quality and at the same time entertaining continuing education for our radiographers (MTRAs), whose skills contribute substantially to the quality and added value of neuroradiology. Here, too, we can promise you a varied programme.

The German and Austrian Societies of Neuroradiology look forward to welcoming you to Cologne from 10 to 12 October 2013.

Arnd Dörfler, Congress President of the DGNR · Michael Knauth, President of the DGNR · Johannes Trenkler, Congress President of the ÖGNR

Oral presentations (Vorträge)

0009
Peri-interventional subarachnoid hemorrhage during mechanical recanalization with stent retrievers
*U. Yilmaz1, S. Walter2, H. Körner1, P. Papanagiotou3, C. Roth3, A. Simgen1, S. Behnke2, A. Ragoschke-Schumm2, K. Fassbender2, W. Reith1
1Universitätsklinikum des Saarlandes, Klinik für diagnostische und interventionelle Neuroradiologie, Homburg, Germany
2Universitätsklinikum des Saarlandes, Klinik für Neurologie, Homburg, Germany
3Klinikum Bremen Mitte, Klinik für diagnostische und interventionelle Neuroradiologie, Bremen, Germany

Purpose: Mechanical recanalization with stent retrievers has developed in recent years into a promising method with the highest recanalization rates of the procedures available for acute ischemic stroke. However, post-interventional cranial CT (CCT) reveals subarachnoid hemorrhage in up to 16% of cases without angiographically detectable perforation. Little is known about the influence of these hemorrhages on the clinical course.

Methods: The post-interventional CCT scans of 217 consecutive patients treated endovascularly with stent retrievers at our institution for acute occlusions of intracranial arteries were analyzed.

Results: Subarachnoid hemorrhage was detected on the post-interventional CCT scans of 5.5% of the patients. Seven cases were included in the further analysis and compared with a control group. Analysis of the angiographic data showed no significant differences in procedure duration or in the number of recanalization maneuvers performed. Neurological outcome up to discharge likewise showed no significant differences. Secondary symptomatic hemorrhage occurred in one case in each group. Only the rate of asymptomatic hemorrhage within the first 24 h after the procedure was significantly higher in the group with peri-interventional SAH (57 vs. 0%, P = 0.018).

Conclusions: Peri-interventional subarachnoid hemorrhages occurring without angiographic evidence of perforation during mechanical recanalization with stent retrievers had no significant influence on the neurological outcome of the affected patients in this retrospective case-control study.

0018
Limited 2- and 3-slice perfusion CT compared with whole-brain perfusion for the indication of thrombolysis in the extended time window in acute cerebral ischemia
*T. Engelhorn1, M. Knott1, S. Kloska1, T. Struffert1, S. Schwab2, M. Köhrmann2, A. Dörfler1
1Universitätsklinikum Erlangen, Abteilung für Neuroradiologie, Erlangen, Germany
2Universitätsklinikum Erlangen, Neurologische Klinik, Erlangen, Germany

Purpose: Selected patients with cerebral ischemia can still benefit from intravenous thrombolysis in the extended time window beyond 4.5 h, provided there is a positive MRI or CT perfusion mismatch (PM). Many of the 4- to 64-row CT scanners currently in use, however, offer only a limited 2- or 3-slice perfusion study. The aim of this study was therefore to compare 2- and 3-slice imaging with whole-brain imaging for establishing the indication for thrombolysis in the extended time window.

Methods: 125 patients with a supratentorial perfusion deficit who had undergone stroke MRI in the extended time window were selected retrospectively. From the PWI dataset, 2 or 3 slices were selected as they are typically acquired in limited CT perfusion. Infarct volume (DWI) and penumbra volume (PWI, time-to-peak) were then evaluated on all slices. A PM was considered present if the penumbra volume was more than 20% larger than the infarcted volume. The results of whole-brain imaging were compared with those of 2- and 3-slice imaging.

Results: Whole-brain imaging showed a perfusion deficit in all patients and a PM in 70.4%. The 2- and 3-slice imaging showed a perfusion deficit in 72% and 76% of the patients, respectively, and a PM in 52% and 54% (sensitivity = 73.9% and 76.1%; P > 0.23). Compared with 2- and 3-slice imaging, whole-brain imaging showed a perfusion deficit and a PM in significantly more patients (P < 0.01).

Conclusion: Compared with whole-brain perfusion imaging, it must be assumed that CT scanners with a limited number of perfusion slices miss the ischemia in up to 30% of stroke patients in the extended time window and miss the PM in more than 25% of patients.

[Fig. 1, caption truncated in the original: Correlation of the median CBF values of all 13 pigs; the horizontal and vertical bars show the ...]
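The mismatch criterion and the reported sensitivity in abstract 0018 reduce to simple arithmetic. The following minimal Python sketch illustrates both; the function name, the example volumes, and the assumption that the reported percentages refer to all 125 patients are ours for illustration and are not part of the authors' analysis.

```python
def has_mismatch(penumbra_ml: float, infarct_ml: float, threshold: float = 0.20) -> bool:
    """Perfusion-diffusion mismatch (PM): penumbra volume > 20 % larger than the infarct volume."""
    return penumbra_ml > (1.0 + threshold) * infarct_ml

# Hypothetical example: 42 mL penumbra (PWI, time-to-peak) vs. 30 mL infarct core (DWI)
print(has_mismatch(42.0, 30.0))  # True, because 42 > 1.2 * 30 = 36

# Plausibility check of the reported sensitivity, assuming the percentages refer
# to all 125 patients (the abstract does not give absolute counts):
n_patients = 125
pm_whole_brain = round(0.704 * n_patients)  # 88 patients with PM on whole-brain imaging
pm_two_slice = round(0.52 * n_patients)     # 65 patients with PM on 2-slice imaging
print(f"sensitivity = {pm_two_slice / pm_whole_brain:.1%}")  # ~73.9 %, matching the abstract
```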