Other Departments and Institutes Courses
Recommended publications
Bioimage Analysis Tools
Kota Miura, Sébastien Tosi, Christoph Möhl, Chong Zhang, Perrine Paul-Gilloteaux, Ulrike Schulze, Simon F. Nørrelykke, Christian Tischer, and Thomas Pengo. "Bioimage Analysis Tools." In: Kota Miura (ed.), Bioimage Data Analysis. Wiley-VCH, 2016. ISBN 978-3-527-80092-6. HAL Id: hal-02910986, https://hal.archives-ouvertes.fr/hal-02910986, submitted on 3 Aug 2020.
Author affiliations include the European Molecular Biology Laboratory (Meyerhofstraße 1, 69117 Heidelberg, Germany); the National Institute of Basic Biology (Okazaki, 444-8585, Japan); the Institute for Research in Biomedicine (IRB Barcelona), Advanced Digital Microscopy, Parc Científic de Barcelona (Baldiri Reixac 10, 08028 Barcelona, Spain); and the German Center for Neurodegenerative Diseases (DZNE).
A Dissertation Entitled Big and Small Data for Value Creation And
A dissertation by Blaine David Stout, submitted to the Graduate Faculty in partial fulfillment of the requirements for the Doctor of Philosophy Degree in Manufacturing and Technology Management, The University of Toledo, December 2018. Major advisor: Dr. Paul C. Hong; committee members: Dr. An Chung Cheng, Dr. Thomas S. Sharkey, Dr. Steven A. Wallace; Dean, College of Graduate Studies: Dr. Amanda C. Bryant-Friedrich. Copyright 2018 © Blaine David Stout; under copyright law, no part of this document may be reproduced without the expressed permission of the author.
Abstract: Today's small-market and mid-market manufacturers face increasing competitive pressure to capture, integrate, operationalize, and manage diverse sources of digitized data. Many have made significant investments in data technologies with the objective of improving organizational performance, yet not all ...
SimpleITK Tutorial: Image Processing for Mere Mortals
Insight Software Consortium, September 23, 2011 (presented at MICCAI 2011). This presentation is copyrighted by the Insight Software Consortium and distributed under the Creative Commons Attribution License 3.0 (http://creativecommons.org/licenses/by/3.0).

What this tutorial is about: providing working knowledge of the SimpleITK platform.

Program: virtual machines preparation (10 min); introduction (15 min); basic tutorials I (45 min); short break (10 min); basic tutorials II (45 min); coffee break (30 min); intermediate tutorials (45 min); short break (10 min); advanced topics (40 min); wrap-up (10 min).

Tutorial goals: a gentle introduction to ITK; introduce SimpleITK; provide hands-on experience; problem solving, not direction following ... but please follow directions!

ITK overview: how many are familiar with ITK? Ever seen code like this?

    // Set up image types.
    typedef float InputPixelType;
    typedef float OutputPixelType;
    typedef itk::Image<InputPixelType, 2>  InputImageType;
    typedef itk::Image<OutputPixelType, 2> OutputImageType;
    // Filter type
    typedef itk::DiscreteGaussianImageFilter<
        InputImageType, OutputImageType> FilterType;
    // Create a filter
    FilterType::Pointer filter = FilterType::New();
    // Create the pipeline
    filter->SetInput( reader->GetOutput() );
    filter->SetVariance( 1.0 );
    filter->SetMaximumKernelWidth( 5 );
    filter->Update();
    OutputImageType::Pointer blurred = filter->GetOutput();

We are here to tell you that you can ...
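For comparison, SimpleITK collapses that pipeline into a few procedural calls. The following is a minimal sketch in SimpleITK's Python binding, not code from the tutorial itself; the file names are illustrative and SimpleITK is assumed to be installed.

    # Roughly the same Gaussian-smoothing pipeline as the ITK C++
    # fragment above, written against SimpleITK's procedural interface.
    import SimpleITK as sitk

    # Read the image as float, matching the pixel type in the C++ example.
    image = sitk.ReadImage("input.png", sitk.sitkFloat32)

    # Variance 1.0 and maximum kernel width 5, as in the C++ filter.
    blurred = sitk.DiscreteGaussian(image, 1.0, 5)

    # Cast back to 8-bit for writing; "blurred.png" is a placeholder name.
    sitk.WriteImage(sitk.Cast(blurred, sitk.sitkUInt8), "blurred.png")

The type declarations, smart pointers, and explicit pipeline wiring all disappear, which is the point of the tutorial's juxtaposition.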
Bioimage Informatics
Lecture 22, Spring 2012 (April 23, 2012). Topics: empirical performance evaluation of bioimage informatics algorithms; image databases and content-based image retrieval; emerging applications in molecular imaging.

Overview of algorithm evaluation: two sets of data are required, the input data and the corresponding correct output (the ground truth). Input data come from two sources: actual data, usually from experiments, and synthetic data, usually from computer simulation. Actual experimental data are essential for performance evaluation, but they may be costly to acquire, may not be representative, and their ground truth is often unknown.

What exactly is ground truth? "Ground truth is a term used in cartography, meteorology, analysis of aerial photographs, satellite imagery and a range of other remote sensing techniques in which data are gathered at a distance. Ground truth refers to information that is collected 'on location.'" (http://en.wikipedia.org/wiki/Ground_truth)

Synthetic (simulated) data have the advantages that the ground truth is known and the cost is usually low, but the disadvantage that they are difficult to make fully representative of the original data. Realistic synthetic data therefore aim to use as much information from real experimental data as possible. When manual analysis serves as the reference, it requires quality control of its own. A sketch of this evaluation protocol appears below.

Test protocol development: implementation (source code is not always available and is often on different platforms) and parameter setting (one of the most challenging issues).
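To make the protocol concrete, here is a minimal, self-contained sketch: simulate an image whose ground truth is known by construction, run a simple detector, and score it. The detector and every parameter value are illustrative assumptions, not material from the lecture.

    # Evaluate a spot detector against synthetic data with known ground truth.
    import numpy as np
    from scipy.ndimage import maximum_filter

    rng = np.random.default_rng(0)
    size, n_spots, sigma, tol = 128, 20, 2.0, 3.0

    # Synthetic data: the spot centres are the ground truth.
    truth = rng.uniform(10, size - 10, (n_spots, 2))
    yy, xx = np.mgrid[0:size, 0:size]
    image = sum(np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
                for cy, cx in truth)
    image += rng.normal(0, 0.05, image.shape)      # measurement noise

    # A deliberately simple detector: bright local maxima.
    peaks = (image == maximum_filter(image, size=5)) & (image > 0.5)
    detections = np.argwhere(peaks)

    # A detection is a true positive if it falls within a tolerance
    # radius of a ground-truth spot that has not already been matched.
    matched, tp = set(), 0
    for d in detections:
        dists = np.linalg.norm(truth - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= tol and j not in matched:
            matched.add(j)
            tp += 1
    print(f"precision={tp / max(len(detections), 1):.2f}  recall={tp / n_spots:.2f}")

Because the truth is known by construction, precision and recall can be computed exactly; the trade-off, as the lecture notes, is that such data may not fully represent real experiments.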
Rethinking the Internet of Things
(From the back cover, Apress.) Over the next decade, most devices connected to the Internet will not be used by people in the familiar way that personal computers, tablets and smart phones are. Billions of interconnected devices will be monitoring the environment, transportation systems, factories, farms, forests, utilities, soil and weather conditions, oceans and resources. Many of these sensors and actuators will be networked into autonomous sets, with much of the information being exchanged machine-to-machine directly and without human involvement. Machine-to-machine communications are typically terse. Most sensors and actuators will report or act upon small pieces of information: "chirps." Burdening these devices with current network protocol stacks is inefficient and unnecessary, and it unduly increases their cost of ownership. This must change. The architecture of the Internet of Things must evolve now by incorporating simpler protocols at the edges of the network, or remain forever inefficient. Rethinking the Internet of Things describes why we must rethink current approaches to the Internet of Things, and it describes in detail appropriate architectures that will coexist with existing networking protocols. An architecture comprised of integrator functions, propagator nodes, and end devices, along with their interactions, is explored. This book:
• Discusses the difference between the "normal" Internet and the Internet of Things
• Describes a new architecture and its components in the "chirp" context
• Explains the shortcomings of IP for the Internet of Things
• Describes the anatomy of the Internet of Things
• Describes how to build a suitable network to maximize the amazing potential of the Internet of Things
(Apress; user level: beginning to advanced. ISBN 978-1-4302-5740-0.)
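To see why terse "chirps" matter, compare the size of a fixed-layout sensor report with a conventional self-describing encoding. The byte layout below is a hypothetical illustration, not a format defined by the book.

    # Contrast a terse chirp-style payload with a verbose encoding.
    import json
    import struct

    device_id, temperature_c = 0x2A, 21.5

    # Hypothetical chirp: 4 bytes total. One byte of device id, one byte
    # of message type, two bytes of temperature in tenths of a degree.
    chirp = struct.pack(">BBH", device_id, 0x01, int(temperature_c * 10))

    # The same information as self-describing JSON.
    verbose = json.dumps(
        {"device": device_id, "type": "temp", "c": temperature_c}).encode()

    print(len(chirp), len(verbose))   # 4 bytes versus roughly 40

At these sizes the payload is dwarfed by the headers of a full TCP/IP stack, which is the book's argument for lighter protocols at the network's edge.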
Demystifying Big Data: Value of Data Analysis Skills for Research Librarians
Tammy Ann Syrek-Marshall, MLS. Abstract: There was a time when librarians learned statistics solely as a management tool, a way of measuring library usage and materials circulation. More advanced users applied statistics in research papers as a metric for in-depth study of patron behavior and bibliographic analysis. At one time, Excel spreadsheets' primary use was in managing library budgets or producing attractive graphs for library reports and publications. As for data, it is no longer enough to simply provide access; the value now lies in understanding and interpreting that data. This begins with knowing the difference between small data and big data: small data is static, a snapshot of a single moment in time, whereas businesses and other organizations are now eagerly tapping into the predictive power of big data. To fully realize the potential of big data, people who possess certain skill sets are in high demand, making data analytics and data science two of the fastest-growing career fields. Depending on the size of the organization, librarians will find themselves called upon to work closely with data analysts. Librarians and researchers who possess even a basic understanding of data analysis are now positioned to collaborate competently with these data professionals. Research librarians with a strong grasp of the methodology and processes of data analysis stand the best chance of expanding their roles within their organizations. To this end, for those who currently have had limited exposure to big data and analytics, it is time to become acquainted with this growing field and to begin expanding research methodology with an understanding of predictive data analysis.
BioImage Informatics 2019
Co-hosted by the Allen Institute. October 2-4, 2019, 615 Westlake Avenue N, Seattle, WA 98109. alleninstitute.org/bioimage-informatics; Twitter: #bioimage19. Thanks to our sponsors.

Wednesday, October 2
8:00-9:00am: Registration and continental breakfast.
9:00-9:15am: Welcome and introductory remarks, BioImage Informatics Local Planning Committee: Michael Hawrylycz, Ph.D., Allen Institute for Brain Science (co-chair); Winfried Wiegraebe, Ph.D., Allen Institute for Cell Science (co-chair); Forrest Collman, Ph.D., Allen Institute for Brain Science; Greg Johnson, Ph.D., Allen Institute for Cell Science; Lydia Ng, Ph.D., Allen Institute for Brain Science; Kaitlyn Casimo, Ph.D., Allen Institute.
9:15-10:15am: Opening keynote. Gaudenz Danuser, UT Southwestern Medical Center, "Inference of causality in molecular pathways from live cell images." Defining cause-effect relations among molecular components is the holy grail of every cell biological study. It turns out that for many molecular pathways this task is far from trivial. Many pathways are characterized by functional overlap and nonlinear interactions between components. In this configuration, perturbation of one component may result in a measurable shift of the pathway output, commonly referred to as a phenotype, but it is strictly impossible to interpret the phenotype in terms of the roles the targeted component plays in the unperturbed system. This caveat of perturbation approaches applies equally to genetic perturbations, which lead to long-term adaptation, and to more acute pharmacological and optogenetic approaches, which often induce short-term adaptation. For nearly two decades my lab has been devoted to circumventing this problem by exploiting basal fluctuations of unperturbed systems to establish cause-and-effect cascades between pathway components.
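The keynote's core idea, reading causal ordering out of the basal fluctuations of an unperturbed system, can be illustrated with a time-lagged correlation between two activity traces. The sketch below uses synthetic signals; every parameter, and the use of lagged correlation itself as a stand-in for the lab's actual methods, is an illustrative assumption.

    # Infer a lead-lag relation between two fluctuating signals.
    import numpy as np

    rng = np.random.default_rng(1)
    n, true_lag = 500, 5

    # Component A fluctuates on its own; B follows A after a delay.
    a = rng.normal(size=n).cumsum()
    a -= a.mean()
    b = np.roll(a, true_lag) + rng.normal(scale=0.5, size=n)
    b -= b.mean()

    # Scan candidate lags; the peak correlation suggests the ordering.
    lags = range(-20, 21)
    corr = [np.corrcoef(a[max(0, -k):n - max(0, k)],
                        b[max(0, k):n + min(0, k)])[0, 1] for k in lags]
    best = list(lags)[int(np.argmax(corr))]
    print(f"estimated lag: {best} frames (positive means A leads B)")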
A Bioimage Informatics Platform for High-Throughput Embryo Phenotyping
James M. Brown, Neil R. Horner, Thomas N. Lawson, Tanja Fiegel, Simon Greenaway, Hugh Morgan, Natalie Ring, Luis Santos, Duncan Sneddon, Lydia Teboul, Jennifer Vibert, Gagarine Yaikhom, Henrik Westerberg and Ann-Marie Mallon. Briefings in Bioinformatics, 19(1), 2018, 41-51. doi:10.1093/bib/bbw101. Advance access publication date: 14 October 2016. Corresponding author: James Brown, MRC Harwell Institute, Harwell Campus, Oxfordshire, OX11 0RD; tel. +44 (0)1235 841237; fax +44 (0)1235 841172; e-mail [email protected].

Abstract: High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC.
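The standardization step the abstract mentions, reconciling modalities, software and metadata across institutions, usually amounts to mapping heterogeneous per-site records onto one schema. A minimal sketch of that idea follows; the field names, site codes and unit conventions are hypothetical and not taken from the IMPC.

    # Harmonize per-site image metadata to a single schema.
    from dataclasses import dataclass

    @dataclass
    class StandardRecord:
        centre: str
        modality: str          # e.g. "micro-CT" or "OPT"
        voxel_size_um: float   # isotropic voxel size in micrometres

    # Each contributing site exports different key names and units.
    SITE_KEY_MAPS = {
        "site_a": {"centre": "center", "modality": "imaging_mode", "voxel": "vox_mm"},
        "site_b": {"centre": "institute", "modality": "modality", "voxel": "voxel_um"},
    }

    def standardize(site: str, raw: dict) -> StandardRecord:
        keys = SITE_KEY_MAPS[site]
        voxel = float(raw[keys["voxel"]])
        if keys["voxel"].endswith("_mm"):   # normalize units to micrometres
            voxel *= 1000.0
        return StandardRecord(raw[keys["centre"]], raw[keys["modality"]], voxel)

    print(standardize("site_a",
                      {"center": "A", "imaging_mode": "micro-CT", "vox_mm": "0.014"}))
    print(standardize("site_b",
                      {"institute": "B", "modality": "OPT", "voxel_um": "14"}))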
BioIMAX: A Web 2.0 Approach for Easy Exploratory and Collaborative Access to Multivariate Bioimage Data
Christian Loyek, Nasir M. Rajpoot, Michael Khan and Tim W. Nattkemper. BMC Bioinformatics 2011, 12:297. http://www.biomedcentral.com/1471-2105/12/297. Open access software article.

Abstract. Background: Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results: In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data; it combines the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, a combination recently called a rich internet application. Conclusions: BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.

Background: In the last two decades, innovative biological and biomedical imaging technologies have greatly improved our understanding of the structures of molecules, cells and tissue. A key challenge of bioimage informatics is to present the large amount of image data in a way that enables it to be browsed, analyzed, queried or compared with other resources [2].
Principles of Bioimage Informatics: Focus on Machine Learning of Cell Patterns
Luis Pedro Coelho, Estelle Glory-Afshar, Joshua Kangas, Shannon Quinn, Aabid Shariff, and Robert F. Murphy. Affiliations: Joint Carnegie Mellon University-University of Pittsburgh Ph.D. Program in Computational Biology; Lane Center for Computational Biology, Carnegie Mellon University; Center for Bioimage Informatics, Carnegie Mellon University; Department of Biological Sciences, Carnegie Mellon University; Machine Learning Department, Carnegie Mellon University; Department of Biomedical Engineering, Carnegie Mellon University.

Abstract: The field of bioimage informatics concerns the development and use of methods for computational analysis of biological images. Traditionally, analysis of such images has been done manually. Manual annotation is, however, slow, expensive, and often highly variable from one expert to another. Furthermore, with modern automated microscopes, hundreds to thousands of images can be collected per hour, making manual analysis infeasible. This field borrows from the pattern recognition and computer vision literature (which contain many techniques for image processing and recognition), but has its own unique challenges and tradeoffs. Fluorescence microscopy images represent perhaps the largest class of biological images for which automation is needed. For this modality, typical problems include cell segmentation, classification of phenotypical response, or decisions regarding differentiated responses (treatment vs. control setting). This overview focuses on the problem of subcellular location determination as a running example, but the techniques discussed are often applicable to other problems.

1 Introduction. Bioimage informatics employs computational and statistical techniques to analyze images and related metadata.
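The segment, extract features, classify pipeline that runs through this overview can be sketched in a few lines with scikit-image and scikit-learn. The synthetic images, random labels and three toy features below are illustrative assumptions, standing in for the much richer subcellular-location feature sets the authors describe.

    # A toy subcellular-pattern classifier: segmentation -> features -> model.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops
    from sklearn.ensemble import RandomForestClassifier

    def cell_features(image):
        """Threshold one fluorescence image and summarize its objects."""
        mask = image > threshold_otsu(image)
        regions = regionprops(label(mask), intensity_image=image)
        if not regions:
            return [0.0, 0.0, 0.0]
        # Object count, mean object area, mean object intensity.
        return [float(len(regions)),
                float(np.mean([r.area for r in regions])),
                float(np.mean([r.mean_intensity for r in regions]))]

    # Hypothetical training set: images with known location-class labels.
    rng = np.random.default_rng(0)
    images = rng.random((40, 64, 64))
    y = rng.integers(0, 2, 40)        # e.g. 0 = nuclear, 1 = mitochondrial

    X = np.array([cell_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))

In a real study the features would be pattern-specific (texture, morphology, co-localization) and evaluation would use held-out images, but the structure of the pipeline is the same.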
A Reference Guide to the Internet of Things
A collaboration by Bridgera LLC and RIoT. Contributors: Greg Dunko, Joydeep Misra, Josh Robertson, Tom Snyder. Copyright © 2017 Bridgera LLC, RIoT. Published by Bridgera LLC, 500 West Peace Street, Raleigh, NC 27603; (919) 230-9951; [email protected]. Content editors: Kayla Little and Ron Pascuzzi. Cover and interior designer: Arpita Kirtaniya. Compositor: Makana Dumlao. First edition, March 2017; first release 2017-03-01; see https://bridgera.com/ebook for release details. Digital editions are also available. The RIoT logo is a registered trademark of the Wireless Research Center of North Carolina; the Bridgera logo is a trademark of Bridgera LLC.
Introduction to Bio(medical) Informatics
Kun Huang, PhD, and Raghu Machiraju, PhD. Department of Biomedical Informatics and Department of Computer Science and Engineering, The Ohio State University.

Outline:
• Overview of bioinformatics
• Other people's (public) data
• Basic biology review
• Basic bioinformatics techniques
• Logistics

Technical areas of bioinformatics include databases and query (e.g., IMMPORT, https://immport.niaid.nih.gov/immportWeb/home/home.do?loginType=full#), visualization and visual analytics (e.g., http://supramap.osu.edu; visualizing genome data annotation), and network mining (Hopkins 2007, Nature Biotechnology: network pharmacology).

Role of bioinformatics: bioinformatics supports both hypothesis generation and hypothesis testing.

Training for bioinformatics draws on:
• Biology and medicine
• Computing: programming, algorithms, pattern recognition, machine learning, data mining, network analysis, visualization, software engineering, databases, etc.
• Statistics: hypothesis testing, permutation methods, multivariate analysis, Bayesian methods, etc.

Beyond bioinformatics: translational bioinformatics, biomedical informatics, medical informatics, nursing informatics, imaging informatics, bioimage informatics, clinical research informatics, pathology informatics, health analytics, legal informatics, business informatics, and more. [Diagram: biomedical informatics theories and methods, spanning basic science and clinical research informatics.] (Source: Department of Biomedical Informatics.)