Computational Methods for Quantitative Single-Cell Time-Lapse Microscopy

Total Pages: 16

File Type: PDF, Size: 1020 KB

Research Collection Doctoral Thesis

Computational methods for quantitative single-cell time-lapse microscopy

Author(s): Hilsenbeck, Oliver
Publication Date: 2019
Permanent Link: https://doi.org/10.3929/ethz-b-000335898
Rights / License: In Copyright - Non-Commercial Use Permitted

This page was generated automatically upon download from the ETH Zurich Research Collection. For more information please consult the Terms of use.

DISS. ETH NO. 25681

Computational methods for quantitative single-cell time-lapse microscopy

A thesis submitted to attain the degree of DOCTOR OF SCIENCES of ETH ZURICH (Dr. sc. ETH Zurich)

presented by
OLIVER HILSENBECK
M.Sc., Technische Universität München and Ludwig-Maximilians-Universität München
born on 07.08.1987
citizen of Germany

accepted on the recommendation of
Prof. Dr. Timm Schroeder, examiner
Prof. Dr. Jörg Stelling, co-examiner
Prof. Dr. Tanja Stadler, co-examiner

2019

Contents

Summary
Zusammenfassung
Chapter 1: Introduction
  1.1 Motivation
  1.2 Scope and structure of this thesis
Chapter 2: fastER: a user-friendly tool for ultrafast and robust cell segmentation in large-scale microscopy
  2.1 Abstract
  2.2 Introduction
  2.3 Materials and methods
  2.4 Results
  2.5 Discussion
  2.6 Acknowledgements
Chapter 3: Software tools for single-cell tracking and quantification of cellular and molecular properties
  3.1 Software tools for single-cell tracking and quantification
  3.2 Acknowledgements
  3.3 Authors contributions
Chapter 4: An open file format for cell lineage trees with continuous tracking and quantification data
  4.1 An open file format for cell lineage trees
Chapter 5: A probabilistic distance measure to locate errors in cell tracking data
  5.1 Abstract
  5.2 Introduction
  5.3 Methods
  5.4 Results
  5.5 Discussion and outlook
  5.6 Acknowledgments
Chapter 6: Discussion and conclusion
  6.1 Discussion and conclusion
Appendix A: Supplementary material to chapter 2
  A.1 Additional algorithmic details
  A.2 Additional evaluation details
  A.3 Population dynamics of in-vitro blood formation
  A.4 Supplementary figures
  A.5 Supplementary movies
Appendix B: Supplementary material to chapter 3
  B.1 Supplementary notes
  B.2 Supplementary tables
  B.3 Supplementary figures
  B.4 Supplementary videos
Appendix C: Supplementary material to chapter 4
  C.1 Tree file format
Acknowledgements
References

Summary

Quantitative long-term single-cell time-lapse microscopy is crucial to advance our understanding of cell behavior and its molecular control. Such experiments produce huge amounts of imaging data and thus require robust and efficient algorithms for automated analysis. Automated detection of cell outlines (cell segmentation) and cell tracking, however, are often challenging because high cell densities, cell-to-cell variability, complex cellular shapes, varying image illumination and low signal-to-noise ratios cannot always be avoided. To our knowledge, existing methods are mostly limited to specific analysis problems, lack usability, and are either not robust enough or computationally expensive, thus limiting their application to large-scale microscopy. In addition, no method exists that produces perfect results; manual inspection and correction of automatically generated analysis results is therefore crucial to avoid wrong conclusions. Existing tools, however, mostly provide no or only very limited functionality for this purpose.

In this cumulative thesis, we present fastER, a trainable tool for automated cell segmentation that is orders of magnitude faster than existing methods while producing state-of-the-art segmentation quality. It supports various cell types and image acquisition modalities, yet is easy to use even for non-experts. In addition, we present tTt and qTfy, which enable robust and efficient quantitative analysis of long-term single-cell imaging data: tTt allows efficient manual inspection and correction of existing tracking data (e.g. from auto-tracking tools) as well as fully manual analysis, and the cell lineage trees exported by tTt can then be quantitatively analyzed with qTfy. Furthermore, we developed an open file format for cell lineage trees, including arbitrary quantification data on the level of track points, cells or trees; this is crucial to facilitate data exchange between different analysis tools and laboratories. Finally, we present a probabilistic distance measure to annotate existing tracking data with confidence levels so that errors can be located and corrected more efficiently with tTt. Quantitative evaluation of its classification performance seems promising, but more tests are required to understand how useful this tool will be in practice.
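The lineage-tree file format itself is specified in chapter 4 and appendix C.1 and is not reproduced in this listing. As a purely illustrative sketch of the kind of structure such a format has to represent - quantification data attached at the track-point, cell and tree level - the following Python snippet may help; all field names here are assumptions, not the actual specification.

```python
# Minimal, hypothetical sketch of a cell lineage tree with quantification data
# at the track-point, cell and tree level. Field names are illustrative only
# and do not reproduce the file format defined in the thesis.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class TrackPoint:
    frame: int                    # time point index
    x: float                      # cell position in the image
    y: float
    quantification: Dict[str, float] = field(default_factory=dict)  # e.g. fluorescence


@dataclass
class Cell:
    cell_id: int
    parent_id: Optional[int]      # None for the founder cell of the tree
    track: List[TrackPoint] = field(default_factory=list)
    quantification: Dict[str, float] = field(default_factory=dict)  # per-cell values


@dataclass
class LineageTree:
    tree_id: str
    cells: Dict[int, Cell] = field(default_factory=dict)
    quantification: Dict[str, float] = field(default_factory=dict)  # per-tree values

    def children(self, cell_id: int) -> List[Cell]:
        """Return the daughter cells of a given cell."""
        return [c for c in self.cells.values() if c.parent_id == cell_id]


# Example: a founder cell dividing into two daughters.
tree = LineageTree(tree_id="example")
tree.cells[1] = Cell(1, None, [TrackPoint(0, 10.0, 12.0, {"GFP": 140.0})])
tree.cells[2] = Cell(2, 1)
tree.cells[3] = Cell(3, 1)
print([c.cell_id for c in tree.children(1)])  # [2, 3]
```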
Recommended publications
  • Management of Large Sets of Image Data: Capture, Databases, Image Processing, Storage, Visualization (Karol Kozak)
    Management of large sets of image data: Capture, Databases, Image Processing, Storage, Visualization. 1st edition, © 2014 Karol Kozak & bookboon.com. ISBN 978-87-403-0726-9. Contents: 1 Digital image; 2 History of digital imaging; 3 Amount of produced images – is it a danger?; 4 Digital image and privacy; 5 Digital cameras (5.1 Methods of image capture); 6 Image formats; 7 Image metadata – data about data; 8 Interactive visualization (IV); 9 Basics of image processing; 10 Image processing software; 11 Image management and image databases; 12 Operating system (OS) and images; 13 Graphics processing unit (GPU); 14 Storage and archive; 15 Images in different disciplines (15.1 Microscopy, 15.2 Medical imaging, 15.3 Astronomical images, 15.4 Industrial imaging); 16 Selection of best digital images; References.
  • Final Thesis in the Department of Electrical Engineering & Computer Science at the Hamburg University of Applied Sciences
    Bachelor thesis, Adriana Bostandzhieva: Design and Implementation of System for Managing Training Data for Artificial Intelligence Algorithms. Faculty of Engineering and Computer Science, Department of Information and Electrical Engineering. Bachelor thesis based on the study regulations for the Bachelor of Engineering degree programme Information Engineering at the Department of Information and Electrical Engineering of the Faculty of Engineering and Computer Science of the Hamburg University of Applied Sciences. Supervising examiner: Prof. Dr.-Ing. Lutz Leutelt; second examiner: Prof. Dr. Klaus Jünemann; day of delivery: 3 July 2019. Keywords: AI, training data, database, labels, video. Abstract: This paper is part of a pilot project of the Hamburg University of Applied Sciences. The project aims to utilise object detection algorithms and visual data to analyse complex road scenes. The aim of this thesis is to determine the best tool for labelling data for training artificial intelligence algorithms, to specify what data should be saved, and to determine which database should be used to save the data. The validity of the findings is proved by building a small prototype to showcase integration between the labelling tool and the database. (German title: Entwicklung und Aufbau eines System zur Verwaltung von Trainingsdaten für Algorithmen der künstlichen Intelligenz; German abstract, truncated: This work is part of a pilot project of the Hamburg University of Applied Sciences.)
  • A 3D Interactive Multi-Object Segmentation Tool Using Local Robust Statistics Driven Active Contours
    Gao, Yi, Ron Kikinis, Sylvain Bouix, Martha Shenton, and Allen Tannenbaum. 2012. A 3D Interactive Multi-Object Segmentation Tool Using Local Robust Statistics Driven Active Contours. Medical Image Analysis 16, no. 6: 1216–1227. doi:10.1016/j.media.2012.06.002. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:28548930. Author affiliations: Psychiatry Neuroimaging Laboratory and Surgical Planning Laboratory, Brigham & Women's Hospital, Harvard Medical School, Boston, MA 02115; Departments of Electrical and Computer Engineering and Biomedical Engineering, Boston University, Boston, MA 02115. Abstract (excerpt): Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community.
  • Open Source Computer Vision-Based Layer-Wise 3D Printing Analysis
    Open Source Computer Vision-based Layer-wise 3D Printing Analysis. Aliaksei L. Petsiuk (Department of Electrical & Computer Engineering, Michigan Technological University, Houghton, MI 49931, USA) and Joshua M. Pearce (Department of Electrical & Computer Engineering and Department of Material Science & Engineering, Michigan Technological University, Houghton, MI 49931, USA; Department of Electronics and Nanoengineering, School of Electrical Engineering, Aalto University, Espoo, FI-00076, Finland); [email protected], [email protected]. Highlights: developed a visual servoing platform using monocular multistage image segmentation; the presented algorithm prevents critical failures during additive manufacturing; the developed system allows tracking printing errors on the interior and exterior. Abstract: The paper describes an open source computer vision-based hardware structure and software algorithm that analyzes 3-D printing processes layer-wise, tracks printing errors, and generates appropriate printer actions to improve reliability. This approach is built upon multiple-stage monocular image examination, which allows monitoring both the external shape of the printed object and the internal structure of its layers. Starting with side-view height validation, the developed program analyzes the virtual top view for outer shell contour correspondence using multi-template matching and iterative closest point algorithms, as well as inner layer texture quality, clustering the spatial-frequency filter responses with Gaussian mixture models and segmenting structural anomalies with an agglomerative hierarchical clustering algorithm. This allows evaluation of both global and local parameters of the printing modes. The experimentally verified analysis time per layer is less than one minute, which can be considered a quasi-real-time process for large prints. The system can work as an intelligent printing suspension tool designed to save time and material.
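The contour-correspondence step summarized above relies on multi-template matching; the paper's own implementation is not shown in this snippet. As a hedged stand-in, a plain single-template match with OpenCV illustrates the basic operation (file names and the acceptance threshold are assumptions, not values from the paper).

```python
# Hedged sketch: locate a reference shell-contour template in a top-view layer
# image using normalized cross-correlation. File names and the threshold are
# illustrative assumptions only.
import cv2

layer = cv2.imread("layer_topview.png", cv2.IMREAD_GRAYSCALE)    # assumed input
template = cv2.imread("shell_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(layer, template, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.8:  # assumed acceptance threshold
    print("template found at", max_loc, "score", round(max_val, 3))
else:
    print("no confident match; layer may deviate from the expected shell contour")
```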
  • Abstract Volume (Abstractband)
    Volume 23, Supplement 1, September 2013. Clinical Neuroradiology, Official Journal of the German, Austrian, and Swiss Societies of Neuroradiology. Abstracts of the 48th Annual Meeting of the German Society of Neuroradiology (Deutsche Gesellschaft für Neuroradiologie), joint annual meeting of the DGNR and ÖGNR, 10–12 October 2013, Gürzenich, Cologne. www.cnr.springer.de. Editors: C. Ozdoba, Bern, Switzerland (Editor-in-Chief, responsible); L. Solymosi, Department of Neuroradiology, University Würzburg, Germany; H. Urbach, Freiburg, Germany; M. Bendszus; T. Krings, Toronto, ON, Canada. Published on behalf of the German Society of Neuroradiology (president: O. Jansen), the Austrian Society of Neuroradiology (president: J. Trenkler), and the Swiss Society of Neuroradiology (president: L. Remonda). Clin Neuroradiol 2013, No. 1, © Springer-Verlag. DOI 10.1007/s00062-013-0248-4. Congress presidents: Prof. Dr. Arnd Dörfler (Erlangen) and Prim. Dr. Johannes Trenkler (Vienna). This supplement was funded by the German Society of Neuroradiology.
  • A Flexible Image Segmentation Pipeline for Heterogeneous Multiplexed Tissue Images Based on Pixel Classification
    A flexible image segmentation pipeline for heterogeneous multiplexed tissue images based on pixel classification. Vito Zanotelli & Bernd Bodenmiller, January 14, 2019. Abstract: Measuring objects and their intensities in images is a basic step in many quantitative tissue image analysis workflows. We present a flexible and scalable image processing pipeline tailored to highly multiplexed images. This pipeline allows the single cell and image structure segmentation of hundreds of images. It is based on supervised pixel classification using Ilastik to distill the segmentation-relevant information from the multiplexed images in a semi-supervised, automated fashion, followed by standard image segmentation using CellProfiler. We provide a helper Python package as well as customized CellProfiler modules that allow a straightforward application of this workflow. As the pipeline is entirely built on open source tools, it can be easily adapted to more specific problems and forms a solid basis for quantitative multiplexed tissue image analysis. 1 Introduction: Image segmentation, i.e. division of images into meaningful regions, is commonly used for quantitative image analysis [2, 8]. Tissue level comparisons often involve segmentation of the images into macrostructures, such as tumor and stroma, and calculating intensity levels and distributions in such structures [8]. Cytometry-type tissue analysis aims to segment the images into pixels belonging to the same cell, with the goal to ultimately identify cellular phenotypes and cell types [2]. Classically these approaches are mostly based on a single, hand-selected nuclear marker that is thresholded to identify cell centers. If available, a single membrane marker is used to expand the cell centers to full cell masks, often using watershed-type algorithms.
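The pipeline above combines Ilastik pixel classification with CellProfiler modules. As a rough stand-in for the final segmentation step only - not the authors' actual modules; the file name, probability threshold and size filter are assumptions - a scikit-image sketch of turning a probability map into labeled objects:

```python
# Hedged sketch: convert an Ilastik-style pixel probability map into labeled
# cell objects. File name, threshold and size filter are illustrative only.
import numpy as np
from skimage import io, measure, morphology

prob = io.imread("cell_probabilities.tif").astype(np.float32)  # assumed export
if prob.max() > 1.0:                 # rescale 8/16-bit exports to [0, 1]
    prob /= prob.max()

mask = prob > 0.5                    # assumed probability threshold
mask = morphology.remove_small_objects(mask, min_size=50)  # drop small debris

labels = measure.label(mask)         # connected components = candidate cells
props = measure.regionprops(labels)
print(f"{labels.max()} objects, mean area "
      f"{np.mean([p.area for p in props]):.1f} px")
```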
  • About This Document: OpenCV and MATLAB MEX Files
    About this document: This document was created as part of a final project for a BSc degree at the Academic College of Tel Aviv-Yaffo, by Rachely Esman and Yoad Snapir, under the supervision of Tal Hassner. As part of the project, we used Intel's OpenCV library, calling its functions from MATLAB. We found the subject tricky and thus decided to share the experience of our work with others by describing the common pitfalls. You can use this document freely according to the copyrights noted below, but under your sole responsibility. OpenCV and MATLAB MEX files: This guide provides a quick walkthrough of compiling C++ OpenCV modules as MEX runtime files of MATLAB under MS Visual Studio 2005, MATLAB 7.5.0 and Windows 32-bit, and will probably be clear enough for other platforms and versions. Recommended knowledge: a good understanding of C++ compilation and linkage concepts, general DLL usage, the Visual Studio 2005 environment, and basic MATLAB and MEX file knowledge. General concepts - MEX files: MEX files are actually simple .DLL files compiled with the extension .MEXW32 using the ordinary compilation tools provided with VS2005. Those .DLL files have a fixed entry point "mexFunction" with a fixed signature (list of IN/OUT parameters). Follow this link to get an overview of MEX files and information on using them: http://www.mathworks.com/support/tech-notes/1600/1605.html#intro. MEX files are compiled (and linked) from within the MATLAB environment using the following syntax: mex 'srcfile1.cpp' 'srcfile2.cpp' 'objfile1.obj' … Before compiling, you need to set up the compilation options using the command … Copyright © 2008 R.
  • Protocol of Image Analysis
    Protocol of Image Analysis: step-by-step instructional guide using the software Fiji, Ilastik and Drishti, by Stella Gribbe. 1. Installation: The open-source software packages Fiji, Ilastik and Drishti are needed in order to perform image analysis as described in this protocol. Install the software for each program on your system using the addresses below. In your web browser, open the Fiji download page: https://fiji.sc/#download. Select the software option suited for your operating system and download it. In your web browser, open the Ilastik download page: https://www.ilastik.org/download.html. For this work, Ilastik version 1.3.2 was installed, as it was the most recent version. Select the download option suited for your operating system and follow the download instructions on the webpage. In your web browser, open the Drishti page on GitHub: https://github.com/nci/drishti/releases. Select the versions for your operating system and download 'Drishti version 2.6.3', 'Drishti version 2.6.4' and 'Drishti version 2.6.5'. 2. Pre-processing: Firstly, reduce the size of the CR2 files from your camera if the image files are exceedingly large. There is a trade-off between image resolution and both computation time and feasibility. Identify your necessary minimum image resolution and compress your image files to the largest extent possible, for instance by converting them to JPG files. Large image data may cause the software to crash if your internal memory capacity is too small. Secondly, convert your image files to inverted 8-bit grayscale JPG files and adjust the brightness and contrast parameters to enhance the distinguishability of structures.
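The protocol performs this pre-processing interactively in Fiji. A hedged batch equivalent in Python with Pillow might look like the following; the folder names, input extension and JPEG quality are assumptions, not part of the protocol.

```python
# Hedged sketch of the described pre-processing: convert camera images to
# inverted 8-bit grayscale JPGs with a simple contrast stretch. Paths and the
# JPEG quality are illustrative assumptions; the protocol itself uses Fiji.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("raw_images")        # assumed input folder
dst = Path("preprocessed")
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.tif")):
    img = Image.open(path).convert("L")      # 8-bit grayscale
    img = ImageOps.invert(img)               # inverted grayscale, as in the protocol
    img = ImageOps.autocontrast(img)         # crude brightness/contrast adjustment
    out = dst / (path.stem + ".jpg")
    img.save(out, quality=90)
    print("wrote", out)
```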
  • Anastasia Tyurina [email protected]
    Summary: A specialist in applying or creating mathematical methods to solve problems of developing technologies. A rare expert in solving problems from the stage of a stated "word problem" through proof of concept to production software development. Such successful uses of an educational background in mathematics, intellectual courage, and tenacious character include: developed a unique method of statistical analysis of spectral composition in 1D and 2D stochastic processes for quality control in ultra-precision mirror polishing; developed novel methods of detection, tracking and classification of small moving targets for aerial IR and EO sensors, using SIFT and SURF features and innovative feature signatures of motion of interest; developed image processing software for bioinformatics, point source (diffraction object) detection, semiconductor metrology, electron microscopy, failure analysis, diagnostics, system hardware support and pattern recognition; developed statistical software for surface metrology assessment, characterization and generation of statistically similar surfaces to assist development of new optical systems; documented, published and patented original results, helping employers' technical communications; supported sales with prototypes and presentations; worked well with people – colleagues, customers, researchers, scientists, engineers. Tools: MATLAB, Octave, OpenCV, ImageJ, Scion Image, Aphelion Image, GIMP, Photoshop, C/C++ (Visual C environment), GNU development tools, UNIX (Solaris, SGI IRIX), Linux, Windows, MS DOS. Positions and experience: Second Star Algonumerix, 2008-present, founder and CEO, http://www.secondstaralgonumerix.com/. 1) Developed a method of statistical assessment, characterisation and generation of random surface metrology for super-precision X-ray mirror manufacturing in collaboration with Lawrence Berkeley National Laboratory, University of California, Berkeley.
  • Automated Segmentation and Single Cell Analysis for Bacterial Cells
    Automated Segmentation and Single Cell Analysis for Bacterial Cells - description of the workflow and a step-by-step protocol. Authors: Michael Weigert and Rolf Kümmerli. 1 General Information: This document presents a new workflow for automated segmentation of bacterial cells, and the subsequent analysis of single-cell features (e.g. cell size, relative fluorescence values). The described methods require the use of three freely available open source software packages: 1. ilastik: http://ilastik.org/ [1]; 2. Fiji: https://fiji.sc/ [2]; 3. R: https://www.r-project.org/ [4]. Ilastik is an interactive, machine learning based, supervised object classification and segmentation toolkit, which we use to automatically segment bacterial cells from microscopy images. Segmentation is the process of dividing an image into objects and background, a bottleneck in many of the current approaches for single cell image analysis. The advantage of ilastik is that the segmentation process does not require priors (e.g. information on cell shape), and can thus be used to analyze any type of objects. Furthermore, ilastik also works for low-resolution images. Ilastik involves a user-supervised training process, during which the software is trained to reliably recognize the objects of interest. This process creates an "Object Prediction Map" that is used to identify objects in images that are not part of the training process. The training process is computationally intensive. We thus recommend the use of a computer with a multi-core CPU (ilastik supports hyper-threading) and at least 16 GB of RAM. Alternatively, a computing center or cloud computing service could be used to speed up the training process.
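In the workflow above, the single-cell measurements are done in R after segmentation with ilastik and Fiji. As an illustrative stand-in only - not the authors' scripts; the file names are assumptions - the same kind of per-cell features (size, relative fluorescence) can be read out from a label mask with scikit-image:

```python
# Hedged sketch: measure per-cell area and mean fluorescence from a label mask
# produced by a segmentation step. File names are illustrative assumptions.
import pandas as pd
from skimage import io, measure

labels = io.imread("cell_labels.tif")   # integer label mask (0 = background)
fluor = io.imread("gfp_channel.tif")    # matching fluorescence image

table = measure.regionprops_table(
    labels,
    intensity_image=fluor,
    properties=("label", "area", "mean_intensity"),
)
cells = pd.DataFrame(table)
# Express fluorescence relative to the population median, one common convention.
cells["relative_intensity"] = cells["mean_intensity"] / cells["mean_intensity"].median()
print(cells.head())
```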
  • LabelRS: An Automated Toolbox to Make Deep Learning Samples from Remote Sensing Images
    Remote Sensing article: LabelRS: An Automated Toolbox to Make Deep Learning Samples from Remote Sensing Images. Junjie Li, Lingkui Meng, Beibei Yang, Chongxin Tao, Linyi Li and Wen Zhang * (School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China); [email protected] (J.L.); [email protected] (L.M.); [email protected] (B.Y.); [email protected] (C.T.); [email protected] (L.L.); * Correspondence: [email protected]; Tel.: +86-027-68770771. Abstract: Deep learning technology has achieved great success in the field of remote sensing processing. However, the lack of tools for making deep learning samples from remote sensing images is a problem, so researchers have to rely on a small number of existing public data sets, which may limit the learning effect. Therefore, we developed an add-in (LabelRS) based on ArcGIS to help researchers make their own deep learning samples in a simple way. In this work, we proposed a feature merging strategy that enables LabelRS to automatically adapt to both sparsely distributed and densely distributed scenarios. LabelRS solves the problem of size diversity of the targets in remote sensing images through sliding windows. We have designed and built in multiple band stretching, image resampling, and gray level transformation algorithms for LabelRS to deal with high spectral remote sensing images. In addition, the attached geographic information helps to achieve seamless conversion between natural samples and geographic samples. To evaluate the reliability of LabelRS, we used its three sub-tools to make semantic segmentation, object detection and image classification samples, respectively.
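LabelRS itself is an ArcGIS add-in and its code is not shown in this snippet. The sliding-window idea it uses to cut large scenes into fixed-size samples can be sketched generically in Python; the window size and stride below are assumed values, not LabelRS parameters.

```python
# Hedged sketch of sliding-window tiling over a large image array, the generic
# idea behind cutting remote sensing scenes into fixed-size training samples.
# Window size and stride are assumptions, not LabelRS parameters.
import numpy as np


def sliding_windows(image: np.ndarray, size: int = 256, stride: int = 128):
    """Yield (row, col, tile) for overlapping square tiles covering the image."""
    rows, cols = image.shape[:2]
    for r in range(0, max(rows - size, 0) + 1, stride):
        for c in range(0, max(cols - size, 0) + 1, stride):
            yield r, c, image[r:r + size, c:c + size]


# Example on a random "scene": count the tiles a 1024 x 1024 image produces.
scene = np.random.randint(0, 255, size=(1024, 1024, 3), dtype=np.uint8)
tiles = list(sliding_windows(scene))
print(len(tiles), "tiles of shape", tiles[0][2].shape)
```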
  • Machine Learning on Real and Artificial SEM Images
    A Workflow for Characterizing Nanoparticle Monolayers for Biosensors: Machine Learning on Real and Artificial SEM Images. Adam Hughes, Zhaowen Liu, Mayam Raftari, and M. E. Reeves (The George Washington University, USA). Abstract: A persistent challenge in materials science is the characterization of a large ensemble of heterogeneous nanostructures in a set of images. This often leads to practices such as manual particle counting and sampling bias toward a favorable region of the "best" image. Herein, we present the open-source software, imaging criteria and workflow necessary to fully characterize an ensemble of SEM nanoparticle images. Such characterization is critical to nanoparticle biosensors, whose performance and characteristics are determined by the distribution of the underlying nanoparticle film. We utilize novel artificial SEM images to objectively compare commonly-found image processing methods through each stage of the workflow: acquisition, preprocessing, segmentation, labeling and object classification. Using the semi-supervised machine learning application Ilastik, we demonstrate the decomposition of a nanoparticle image into particle subtypes relevant to our application: singles, dimers, flat aggregates and piles. We outline a workflow for characterizing and classifying nanoscale features on low-magnification images with thousands of nanoparticles. This work is accompanied by a repository of supplementary materials, including videos, a bank of real and artificial SEM images, and ten IPython Notebook tutorials to reproduce and extend the presented results. Keywords: Image Processing, Gold Nanoparticles, Biosensor, Plasmonics, Electron Microscopy, Microscopy, Ilastik, Segmentation, Bioengineering, Reproducible Research, IPython Notebook. 1 Introduction: Metallic colloids, especially gold nanoparticles (AuNPs), continue to be of high interdisciplinary interest, ranging in application from biosensing (Sai et al., 2009; Nath and Chilkoti, 2002) to photovoltaics (Shahin et al., 2012), and even to heating water (McKenna, 2012).