Open Source Bioimage Informatics for Cell Biology


Opinion: Special Issue – Imaging Cell Biology

Open source bioimage informatics for cell biology

Jason R. Swedlow (1) and Kevin W. Eliceiri (2)
(1) Wellcome Trust Centre for Gene Regulation and Expression, College of Life Sciences, University of Dundee, Dundee, Scotland DD1 5EH, UK
(2) Laboratory for Optical and Computational Instrumentation, Departments of Molecular Biology and Biomedical Engineering, Graduate School, University of Wisconsin at Madison, Madison, WI 53706, USA
Corresponding author: Swedlow, J.R. ([email protected])
Open access under CC BY license. Trends in Cell Biology, Vol. 19, No. 11. 0962-8924 © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.tcb.2009.08.007. Available online 14 October 2009.

Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

Bioimage informatics as a discovery tool in cell biology

Imaging is used as a tool for discovery throughout basic life science, and biomedical and clinical research. In these domains, advances in light and electron microscopy have transformed biological discovery, enabling visualization of mechanism and dynamics across scales of nanometers to millimeters and picoseconds to many days. Fluorescent protein (FP)-tagged fusions can be used as reporters of biomolecular interactions in cultured living cells [1], and the same reporter can reveal the localization and growth of a tumor in a living animal [2,3]. In short, the last 20 years have provided us with a wealth of sophisticated biological reporters and image data acquisition tools for biomedical research. Many of these imaging and instrumentation developments have been driven by partnerships between academic laboratories that invent and prototype new technology and commercial entities that develop and market them as commercial products. This development and delivery pipeline of commercial imaging instrumentation and software has been quite successful, having delivered the laser scanning confocal [4,5], spinning disc confocal [6,7], wide-field deconvolution [8,9] and multiphoton microscopes [10] that are engines of discovery in cell and developmental biology.

All of these methodologies produce complex, multi-dimensional data sets that must be transformed into reduced representations that scientists can manipulate, analyze, share with colleagues, and ultimately understand. Despite the diversity of applications of imaging in biology, there are common unifying challenges such as displaying a multi-gigabyte time-lapse movie on a laptop screen, or identifying, tracking, and measuring the objects in that movie and presenting the resulting measurements in a graph that reveals the mechanisms that drive their movements. These requirements have spawned the new field of bioimage informatics [11], which aims to deliver tools for data visualization, management, storage, and analysis. While still a relatively young field, bioimage informatics has already had a major impact in cell biology, particularly in the area of quantitative cell imaging, where advanced feature recognition, segmentation, annotation and data mining approaches are used regularly [12–20].

Almost all commercially provided image acquisition systems include software tools that provide sophisticated image visualization and analysis functions for the images recorded by the instrument they control. However, in recent years, many non-commercial projects have appeared, almost always based in research laboratories that require functionality not available in commercial products. Here, we discuss the application of bioimage informatics in cell biology and focus specifically on the development of open source solutions for bioimage informatics that have emerged over the last few years.

What are the informatics challenges in quantitative cell biology imaging?

Given the rapid development in image acquisition systems in the last 20 years, it is worth considering why a corresponding rapid development of informatics tools has occurred only recently. Certainly, one of the barriers to providing universal tools for bioimage informatics is the diversity of data structures and experimental applications that produce imaging data. In optical microscopy alone there are a substantial number of different types of imaging modalities and, indeed, a method like fluorescence microscopy encapsulates a huge and rapidly growing field of image acquisition approaches [21]. Informatics tools that support this range of methods must be capable of capturing the raw data (the individual pixels) and the metadata around the acquisition methodology itself, including instrument settings, exposure details etc. This diversity of data structures makes delivering common informatics solutions difficult, and this complexity is multiplied by the large number of commercial imaging systems that use individually specified, and often proprietary, file formats for data storage. Our current estimates are that there are approximately 80 proprietary file formats for optical microscopy alone (and not including other common imaging techniques) that must be supported by any bioimage informatics tool that aims to provide a generalizable solution. In short, the lack of standardized access to data makes the generation of informatics tools quite difficult.

A deeper challenge resides in each individual laboratory that uses imaging as part of its experimental repertoire. The sheer size of the raw data sets and the rate of production mean that individual researchers can easily generate many tens of gigabytes of data per day. This means that large laboratories or departmental imaging facilities generate many hundreds of gigabytes to terabytes per week, and are now enterprise-level data production facilities. However, the expertise for developing enterprise software tools or even simply running the hardware necessary for this scale of data management and analysis rarely exists in individual laboratories. In short, the sophisticated systems and development expertise that are used to deliver genomics databases and applications are required in individual imaging laboratories and facilities. The delivery of tools that provide access to a broad range of data types, manage and analyze large sets of data, and help run the systems that store and process this data is the challenge that bioimage informatics seeks to address.

Why are open source approaches essential?

A critical development in the field of bioimage informatics has been the introduction of many open source projects in the last few years [11,22–30]. These projects range from being open source distributions where the code is available […] works may be distributed. Table 1 gives some examples of commonly used open source software licenses and summarizes their terms. For any users or developers, these details are important and must be understood given the great implications for development and deployment.

The ability to see and make changes to the work of another developer is a critical component of open source software. The attractive aspect of this approach for science is that users and developers can directly see, evaluate, and use another's work (really, their intellectual property) and, if necessary, build upon it. This is a key and often overlooked part of open source software. Successful open source software development projects are dynamic, evolving enterprises allowing input, feedback, and often contributions from their community.

This evolving, adaptable aspect makes open source software particularly useful for scientific discovery and, more specifically, for the rapidly evolving and diverse set of imaging applications used in biological research. Commercial and closed source applications have certainly supported many significant advances in imaging. However, an essential part of bioimaging data analysis is the ability to easily try new methodology and approaches or even to combine existing ones to generate a derivative result based on the combination of two approaches. Open source approaches make this possible. As such, there is a natural fit between open source software and the process of scientific discovery. In addition, a consequence of the growth of the open source community is a de facto establishment of standardized documentation methods (http://java.sun.com/j2se/javadoc/) and software specifications (http://java.sun.com/products/ejb/docs.html). These specifications ensure that developers can understand and use each other's code and, most importantly, that two independent software packages can use a specified, common interface. This software 'interoperability', enforced by the community either formally or informally, is a general hallmark of open source software, and perhaps […]
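The excerpt above argues that a general bioimage informatics tool must capture both the raw pixels and the acquisition metadata, and that interoperability comes from specified, common interfaces. As a purely illustrative sketch of those two ideas (this is not code from the article and not the API of any existing library; every class and function name below is invented), a minimal pixels-plus-metadata record and a shared reader interface might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import Protocol
import numpy as np


@dataclass
class AcquisitionMetadata:
    """Acquisition metadata of the kind the article describes:
    instrument settings, exposure details, pixel calibration, etc."""
    instrument: str
    objective_na: float
    exposure_ms: dict[str, float] = field(default_factory=dict)  # per channel
    pixel_size_um: tuple[float, float] = (1.0, 1.0)


@dataclass
class ImageRecord:
    """A reduced, analysis-ready representation: raw pixels plus metadata."""
    pixels: np.ndarray            # e.g. shaped (t, c, z, y, x)
    metadata: AcquisitionMetadata


class ImageReader(Protocol):
    """A specified, common interface that independent packages could share.
    Each proprietary file format would get its own implementation."""
    def can_read(self, path: str) -> bool: ...
    def read(self, path: str) -> ImageRecord: ...


class FakeTiffReader:
    """Toy reader used only to illustrate the interface; a real reader
    would parse an actual vendor file format."""
    def can_read(self, path: str) -> bool:
        return path.endswith((".tif", ".tiff"))

    def read(self, path: str) -> ImageRecord:
        pixels = np.zeros((1, 2, 5, 64, 64), dtype=np.uint16)  # placeholder data
        meta = AcquisitionMetadata("demo scope", 1.4,
                                   {"GFP": 100.0, "RFP": 250.0}, (0.1, 0.1))
        return ImageRecord(pixels, meta)


def open_image(path: str, readers: list[ImageReader]) -> ImageRecord:
    """Dispatch to the first reader that claims the file."""
    for reader in readers:
        if reader.can_read(path):
            return reader.read(path)
    raise ValueError(f"No reader available for {path}")


if __name__ == "__main__":
    record = open_image("example.tif", [FakeTiffReader()])
    print(record.pixels.shape, record.metadata.exposure_ms)
```

The point of the sketch is only that format-specific readers hidden behind one agreed interface are what let two independent packages exchange data, which is the "interoperability" the authors describe.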
Recommended publications
  • Bioimage Analysis Tools
    Bioimage Analysis Tools. Kota Miura, Sébastien Tosi, Christoph Möhl, Chong Zhang, Perrine Paul-Gilloteaux, Ulrike Schulze, Simon F. Nørrelykke, Christian Tischer, Thomas Pengo. To cite this version: Kota Miura, Sébastien Tosi, Christoph Möhl, Chong Zhang, Perrine Paul-Gilloteaux, et al. Bioimage Analysis Tools. In: Kota Miura (ed.), Bioimage Data Analysis, Wiley-VCH, 2016, ISBN 978-3-527-80092-6. HAL Id: hal-02910986, https://hal.archives-ouvertes.fr/hal-02910986, submitted on 3 Aug 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not; the documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Author affiliations include the European Molecular Biology Laboratory (Meyerhofstraße 1, 69117 Heidelberg, Germany), the National Institute of Basic Biology (Okazaki 444-8585, Japan), the Institute for Research in Biomedicine (IRB Barcelona), Advanced Digital Microscopy, Parc Científic de Barcelona (Baldiri Reixac 10, 08028 Barcelona, Spain), and the German Center for Neurodegenerative Diseases.
  • Other Departments and Institutes Courses
    Other Departments and Institutes Courses. About Course Numbers: Each Carnegie Mellon course number begins with a two-digit prefix that designates the department offering the course (i.e., 76-xxx courses are offered by the Department of English). Although each department maintains its own course numbering practices, typically, the first digit after the prefix indicates the class level: xx-1xx courses are freshmen-level, xx-2xx courses are sophomore level, etc. Depending on the department, xx-6xx courses may be either undergraduate senior-level or graduate-level, and xx-7xx courses and higher are graduate-level. Consult the Schedule of Classes (https://enr-apps.as.cmu.edu/open/SOC/SOCServlet/) each semester for course offerings and for any necessary pre-requisites or co-requisites. Computational Biology Courses. 02-250 Introduction to Computational Biology. Spring: 12 units. This class provides a general introduction to computational tools for biology. The course is divided into two halves. The first half covers computational molecular biology and genomics. It examines important sources of biological data, how they are archived and made available to researchers, and what computational tools are available to use them effectively in research. In the process, it covers basic concepts in statistics, mathematics, and computer science needed to effectively use these resources and understand their results. Specific topics covered include sequence data, searching and alignment, structural data, genome sequencing, genome analysis, genetic variation, gene and protein expression, and biological networks and pathways. The second half covers computational cell biology, including biological modeling and image analysis. It includes homework requiring modification of scripts to perform computational analyses.
  • Bioimage Informatics
    Bioimage Informatics, Lecture 22, Spring 2012 (April 23, 2012). Topics: empirical performance evaluation of bioimage informatics algorithms; image databases and content-based image retrieval; emerging applications: molecular imaging. Overview of algorithm evaluation: two sets of data are required - input data and the corresponding correct output (ground truth). Sources of input data are actual/experimental data, often from experiments, and synthetic data, often from computer simulation. Actual/experimental data are essential for performance evaluation but may be costly to acquire, may not be representative, and their ground truth is often unknown. What exactly is ground truth? Ground truth is a term used in cartography, meteorology, analysis of aerial photographs, satellite imagery and a range of other remote sensing techniques in which data are gathered at a distance; it refers to information that is collected "on location" (http://en.wikipedia.org/wiki/Ground_truth). Synthetic (simulated) data have the advantages that ground truth is known and generation is usually low-cost, but the disadvantage that they are difficult to make fully representative of the original data; realistic synthetic data therefore aim to use as much information from real experimental data as possible. Quality control of manual analysis is also needed. Test protocol development: implementation (source codes are not always available and are often on different platforms) and parameter setting (one of the most challenging issues).
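The evaluation workflow sketched in this lecture outline, scoring an algorithm's output against known ground truth, often from synthetic data, can be made concrete in a few lines. The following is a minimal sketch of my own, not material from the lecture: it assumes a binary segmentation task and uses the Jaccard (intersection-over-union) score; the synthetic discs and the metric choice are illustrative only.

```python
import numpy as np


def jaccard_index(segmentation: np.ndarray, ground_truth: np.ndarray) -> float:
    """Overlap between a binary segmentation and its ground truth:
    |A intersect B| / |A union B|. Returns 1.0 when both masks are empty."""
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)
    union = np.logical_or(seg, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(seg, gt).sum() / union


# Synthetic example: a disc as ground truth, a shifted disc as the "algorithm output".
yy, xx = np.mgrid[0:100, 0:100]
ground_truth = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2
segmentation = (yy - 52) ** 2 + (xx - 55) ** 2 <= 20 ** 2

print(f"Jaccard index: {jaccard_index(segmentation, ground_truth):.3f}")
```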
  • Bioimage Informatics 2019
    BioImage Informatics 2019. Co-hosted by: […]. October 2-4, 2019. Allen Institute, 615 Westlake Avenue N, Seattle, WA 98109. alleninstitute.org/bioimage-informatics; Twitter: #bioimage19. Thanks to our sponsors. Wednesday, October 2. 8:00-9:00am: Registration and continental breakfast. 9:00-9:15am: Welcome and introductory remarks, BioImage Informatics Local Planning Committee: Michael Hawrylycz, Ph.D., Allen Institute for Brain Science (co-chair); Winfried Wiegraebe, Ph.D., Allen Institute for Cell Science (co-chair); Forrest Collman, Ph.D., Allen Institute for Brain Science; Greg Johnson, Ph.D., Allen Institute for Cell Science; Lydia Ng, Ph.D., Allen Institute for Brain Science; Kaitlyn Casimo, Ph.D., Allen Institute. 9:15-10:15am: Opening keynote, Gaudenz Danuser, UT Southwestern Medical Center, "Inference of causality in molecular pathways from live cell images". Defining cause-effect relations among molecular components is the holy grail of every cell biological study. It turns out that for many of the molecular pathways this task is far from trivial. Many of the pathways are characterized by functional overlap and nonlinear interactions between components. In this configuration, perturbation of one component may result in a measurable shift of the pathway output, commonly referred to as a phenotype, but it is strictly impossible to interpret the phenotype in terms of the roles the targeted component plays in the unperturbed system. This caveat of perturbation approaches applies equally to genetic perturbations, which lead to long-term adaptation, and more acute pharmacological and optogenetic approaches, which often induce short-term adaptation. For nearly two decades my lab has been devoted to circumventing this problem by exploiting basal fluctuations of unperturbed systems to establish cause-and-effect cascades between pathway components.
  • A Bioimage Informatics Platform for High-Throughput Embryo Phenotyping
    Briefings in Bioinformatics, 19(1), 2018, 41-51. doi: 10.1093/bib/bbw101. Advance Access Publication Date: 14 October 2016. Paper. A bioimage informatics platform for high-throughput embryo phenotyping. James M. Brown, Neil R. Horner, Thomas N. Lawson, Tanja Fiegel, Simon Greenaway, Hugh Morgan, Natalie Ring, Luis Santos, Duncan Sneddon, Lydia Teboul, Jennifer Vibert, Gagarine Yaikhom, Henrik Westerberg and Ann-Marie Mallon. Corresponding author: James Brown, MRC Harwell Institute, Harwell Campus, Oxfordshire, OX11 0RD. Tel: +44-0-1235-841237; Fax: +44-0-1235-841172; E-mail: [email protected]. Abstract: High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC.
  • BioIMAX: A Web 2.0 Approach for Easy Exploratory and Collaborative Access to Multivariate Bioimage Data
    Loyek et al. BMC Bioinformatics 2011, 12:297. http://www.biomedcentral.com/1471-2105/12/297. Software, Open Access. BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data. Christian Loyek (1*), Nasir M Rajpoot (2), Michael Khan (3) and Tim W Nattkemper (1). Abstract. Background: Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results: In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data by combining the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, recently called rich internet application. Conclusions: BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes. Background: In the last two decades, innovative biological and biomedical imaging technologies greatly improved our understanding of structures of molecules, cells and tissue and […] key challenge of bioimage informatics is to present the large amount of image data in a way that enables them to be browsed, analyzed, queried or compared with other resources [2].
  • Principles of Bioimage Informatics: Focus on Machine Learning of Cell Patterns
    Principles of Bioimage Informatics: Focus on Machine Learning of Cell Patterns. Luis Pedro Coelho (1,2,3), Estelle Glory-Afshar (3,6), Joshua Kangas (1,2,3), Shannon Quinn (2,3,4), Aabid Shariff (1,2,3), and Robert F. Murphy (1,2,3,4,5,6). (1) Joint Carnegie Mellon University-University of Pittsburgh Ph.D. Program in Computational Biology; (2) Lane Center for Computational Biology, Carnegie Mellon University; (3) Center for Bioimage Informatics, Carnegie Mellon University; (4) Department of Biological Sciences, Carnegie Mellon University; (5) Machine Learning Department, Carnegie Mellon University; (6) Department of Biomedical Engineering, Carnegie Mellon University. Abstract: The field of bioimage informatics concerns the development and use of methods for computational analysis of biological images. Traditionally, analysis of such images has been done manually. Manual annotation is, however, slow, expensive, and often highly variable from one expert to another. Furthermore, with modern automated microscopes, hundreds to thousands of images can be collected per hour, making manual analysis infeasible. This field borrows from the pattern recognition and computer vision literature (which contain many techniques for image processing and recognition), but has its own unique challenges and tradeoffs. Fluorescence microscopy images represent perhaps the largest class of biological images for which automation is needed. For this modality, typical problems include cell segmentation, classification of phenotypical response, or decisions regarding differentiated responses (treatment vs. control setting). This overview focuses on the problem of subcellular location determination as a running example, but the techniques discussed are often applicable to other problems. 1 Introduction. Bioimage informatics employs computational and statistical techniques to analyze images and related metadata.
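The overview above describes classifying phenotypic responses and subcellular location patterns from image-derived features. Purely as a toy illustration of that classification step (this is not the authors' pipeline, and the "features" below are simulated numbers rather than measurements from real images), a sketch assuming NumPy and scikit-learn are available might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-cell feature vectors (standing in for texture, intensity and
# shape features computed from fluorescence images); simulated for illustration.
n_cells_per_class = 100
nuclear_like = rng.normal(loc=[0.2, 0.8, 10.0], scale=0.1, size=(n_cells_per_class, 3))
mito_like = rng.normal(loc=[0.6, 0.3, 25.0], scale=0.1, size=(n_cells_per_class, 3))

features = np.vstack([nuclear_like, mito_like])
labels = np.array(["nuclear"] * n_cells_per_class + ["mitochondrial"] * n_cells_per_class)

# Cross-validated accuracy of a standard classifier on the simulated features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```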
  • Introduction to Bio(medical) Informatics
    Introduction to Bio(medical) Informatics. Kun Huang, PhD; Raghu Machiraju, PhD. Department of Biomedical Informatics; Department of Computer Science and Engineering, The Ohio State University. Outline: overview of bioinformatics; other people's (public) data; basic biology review; basic bioinformatics techniques; logistics. Technical areas of bioinformatics: databases and query (e.g. IMMPORT, https://immport.niaid.nih.gov/immportWeb/home/home.do?loginType=full#); visualization (e.g. http://supramap.osu.edu); visualization and visual analytics (e.g. visualizing genome data annotation); network mining (Hopkins 2007, Nature Biotechnology, network pharmacology). Role of bioinformatics - hypothesis generation: bioinformatics sits between hypothesis generation and hypothesis testing. Training for bioinformatics: biology/medicine; computing (programming, algorithms, pattern recognition, machine learning, data mining, network analysis, visualization, software engineering, databases, etc.); statistics (hypothesis testing, permutation methods, multivariate analysis, Bayesian methods, etc.). Beyond bioinformatics: translational bioinformatics, biomedical informatics, medical informatics, nursing informatics, imaging informatics, bioimage informatics, clinical research informatics, pathology informatics, health analytics, legal informatics, business informatics (source: Department of Biomedical Informatics); basic science, clinical research informatics, biomedical informatics, theories and methods.
  • BioImageXD: An Open, General-Purpose and High-Throughput Image-Processing Platform
    Focus on Bioimage Informatics: Perspective. BioImageXD: an open, general-purpose and high-throughput image-processing platform. Pasi Kankaanpää (1), Lassi Paavolainen (2), Silja Tiitta (1), Mikko Karjalainen (2), Joacim Päivärinne (1), Jonna Nieminen (1), Varpu Marjomäki (2), Jyrki Heino (1) & Daniel J White (2,3). BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors. Bioimaging has emerged as one of the key tools in biomedical research. New imaging devices are appearing rapidly, producing large multidimensional images with several data channels in medium-to-high throughput. […] BioImageXD in practice with user-friendly, automated, multiparametric image analyses of the movement, clustering and internalization of integrins. Criteria in BioImageXD development. Open: Our goal was to develop software that is transparent, with open and documented source code and no undisclosed algorithms or hidden processing the user is unaware of or cannot control. It is our view that software for scientific data analysis, in contrast to engineering, is best when it is not a 'black box'. Different implementations of the same method can often give different results, the reasons for which only become clear by comparing the source codes. Increasingly (and correctly in our view), disclosure of source code for image- […]
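The entry above stresses simple construction of processing pipelines and publishable, modifiable workflows. As a generic illustration of that idea only (it is not BioImageXD's architecture or API, and all step names are invented), a pipeline can be represented as an ordered list of named steps applied to an image; this sketch assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy import ndimage

# A processing pipeline as an ordered list of named steps, so the whole
# workflow can be printed, published and modified.
pipeline = [
    ("gaussian_smooth", lambda img: ndimage.gaussian_filter(img, sigma=2.0)),
    ("threshold", lambda img: img > img.mean()),
    ("fill_holes", lambda img: ndimage.binary_fill_holes(img)),
    ("label_objects", lambda img: ndimage.label(img)[0]),
]


def run_pipeline(image, steps):
    """Apply each step in order, reporting what ran; returns the final result."""
    result = image
    for name, step in steps:
        result = step(result)
        print(f"ran {name}: output dtype={np.asarray(result).dtype}")
    return result


# Synthetic input: random noise with a brighter square "object".
rng = np.random.default_rng(3)
image = rng.random((128, 128))
image[40:80, 40:80] += 1.0

labels = run_pipeline(image, pipeline)
print("number of objects found:", labels.max())
```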
  • Bioimage Informatics for Plant Sciences Blobs and Curves: Object-Based Colocalisation for Plant Cells
    ORE Open Research Exeter. Title: Blobs and curves: object-based colocalisation for plant cells. Authors: Nelson, CJ; Duckney, P; Hawkins, TJ; et al. Journal: Functional Plant Biology. Deposited in ORE 10 March 2017. This version available at http://hdl.handle.net/10871/26430. Copyright and reuse: Open Research Exeter makes this work available in accordance with publisher policies. A note on versions: the version presented here may differ from the published version; if citing, you are advised to consult the published version for pagination, volume/issue and date of publication. Bioimage Informatics for Plant Sciences. Blobs and Curves: Object-Based Colocalisation for Plant Cells. Carl J. Nelson (A), Patrick Duckney (B), Timothy J. Hawkins (B), Michael J. Deeks (C), Philippe Laissue (D), Patrick J. Hussey (B) and Boguslaw Obara (A)(*). (A) School of Engineering and Computing Sciences, Durham University, Durham, UK; (B) School of Biological and Biomedical Sciences, Durham University, Durham, UK; (C) College of Life and Environmental Sciences, University of Exeter, Exeter, UK; (D) School of Biological Sciences, University of Essex, Colchester, UK. (*) Corresponding author (email: [email protected]). Abstract: Quantifying the colocalisation of labels is a major application of fluorescence microscopy in plant biology. Pixel-based quantification of colocalisation, i.e. Pearson's coefficient, is a limited tool that gives limited information for further analysis. We show how applying bioimage informatics tools to a commonplace experiment allows further quantifiable results to be extracted. We use our object-based colocalisation technique to extract distance information, show temporal changes and demonstrate the advantages, and pitfalls, of using bioimage informatics for plant science.
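The abstract contrasts pixel-based colocalisation (Pearson's coefficient) with object-based measures that yield distances between labelled structures. The sketch below illustrates both kinds of quantity on synthetic data; it is a minimal example of my own, not the authors' published method, and the function names and test values are invented:

```python
from typing import Optional
import numpy as np


def pearson_colocalisation(channel_a: np.ndarray, channel_b: np.ndarray,
                           mask: Optional[np.ndarray] = None) -> float:
    """Pixel-based colocalisation: Pearson's correlation of the two
    channels' intensities, optionally restricted to a mask (e.g. the cell)."""
    a = channel_a.astype(float)
    b = channel_b.astype(float)
    if mask is not None:
        a, b = a[mask], b[mask]
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def nearest_object_distances(centroids_a: np.ndarray, centroids_b: np.ndarray) -> np.ndarray:
    """Object-based view: for each object in A, the distance to the nearest
    object in B (centroids given as (n, 2) arrays of y, x coordinates)."""
    diffs = centroids_a[:, None, :] - centroids_b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)


# Tiny synthetic example: correlated channels and two sets of spot centroids.
rng = np.random.default_rng(1)
ch_a = rng.random((64, 64))
ch_b = 0.7 * ch_a + 0.3 * rng.random((64, 64))
print(f"Pearson coefficient: {pearson_colocalisation(ch_a, ch_b):.2f}")

spots_a = np.array([[10.0, 12.0], [40.0, 41.0]])
spots_b = np.array([[11.0, 14.0], [55.0, 20.0]])
print("Nearest-object distances (pixels):", nearest_object_distances(spots_a, spots_b))
```

The object-based distances are the kind of extra, per-structure information that a single correlation coefficient cannot provide, which is the point the abstract makes.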
  • A Short Course on Bioimage Informatics, Lecture 1: Basic Concepts and Techniques
    A Short Course on Bioimage Informatics, Lecture 1: Basic Concepts and Techniques. Ge Yang (杨戈), Department of Biomedical Engineering & Department of Computational Biology, Carnegie Mellon University. August 10, 2015. Acknowledgements: Dr. Chao Tang; Dr. Luhua Lai; students, faculty, and staff of the Center for Quantitative Biology, Peking University; Dr. Jing Yuan (袁静); Youjun Xu (徐优俊); 111 Visiting Scholar Program, Ministry of Education; College of Chemistry and Molecular Engineering, Peking University; School of Engineering & School of Computer Science, Carnegie Mellon University. Outline: background (what is bioimage informatics? where does it come from? where is it going?); basic concepts of image formation and collection; overview of bioimage informatics techniques; summary. What information can we extract from images? More examples of biological images. Typical readouts from images: from static images, e.g. immunofluorescence images - location; shape/morphology; spatial distribution, e.g. spatial patterns; spatial relation, e.g. protein colocalization. From dynamic images, e.g. live cell time-lapse images - all the readouts above, which now may be time-dependent; in other words, spatiotemporal dynamics/processes. Computational analysis techniques are required to extract these quantitative readouts from images. Relations of imaging and image analysis with other biological research tools: none of these readouts can be directly provided by, e.g., genetics or biochemistry; however, understanding these readouts often requires close integration of computational image analysis with experimental analysis using genetics and/or biochemistry techniques.
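The "typical readouts" listed in this lecture (location, shape/morphology, spatial distribution) are the kind of per-object measurements that region analysis provides. The following sketch, assuming scikit-image and NumPy are installed and using invented synthetic blobs in place of a real immunofluorescence image, shows how such readouts can be extracted; it is an illustration, not the course's own code:

```python
import numpy as np
from skimage import measure

# Synthetic "image": two bright blobs on a dark background, standing in for
# segmented cells or organelles in an immunofluorescence image.
image = np.zeros((100, 100), dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
image[(yy - 30) ** 2 + (xx - 30) ** 2 <= 10 ** 2] = 1          # round blob
image[((yy - 70) / 15) ** 2 + ((xx - 60) / 6) ** 2 <= 1] = 1   # elongated blob

# Label connected components and extract per-object readouts:
# location (centroid), size (area) and shape (eccentricity).
labels = measure.label(image)
for region in measure.regionprops(labels):
    cy, cx = region.centroid
    print(f"object {region.label}: centroid=({cy:.1f}, {cx:.1f}), "
          f"area={region.area}, eccentricity={region.eccentricity:.2f}")
```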
  • Phenotypic Image Analysis Software Tools for Exploring and Understanding Big Image Data from Cell-Based Assays
    Cell Systems, Review. Phenotypic Image Analysis Software Tools for Exploring and Understanding Big Image Data from Cell-Based Assays. Kevin Smith (1,2), Filippo Piccinini (3), Tamas Balassa (4), Krisztian Koos (4), Tivadar Danka (4), Hossein Azizpour (1,2), and Peter Horvath (4,5,*). (1) KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science, Lindstedtsvägen 3, 10044 Stockholm, Sweden; (2) Science for Life Laboratory, Tomtebodavägen 23A, 17165 Solna, Sweden; (3) Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) IRCCS, Via P. Maroncelli 40, Meldola, FC 47014, Italy; (4) Synthetic and Systems Biology Unit, Hungarian Academy of Sciences, Biological Research Center (BRC), Temesvári krt. 62, 6726 Szeged, Hungary; (5) Institute for Molecular Medicine Finland, University of Helsinki, Tukholmankatu 8, 00014 Helsinki, Finland. *Correspondence: [email protected]. https://doi.org/10.1016/j.cels.2018.06.001. Phenotypic image analysis is the task of recognizing variations in cell properties using microscopic image data. These variations, produced through a complex web of interactions between genes and the environment, may hold the key to uncover important biological phenomena or to understand the response to a drug candidate. Today, phenotypic analysis is rarely performed completely by hand. The abundance of high-dimensional image data produced by modern high-throughput microscopes necessitates computational solutions. Over the past decade, a number of software tools have been developed to address this need. They use statistical learning methods to infer relationships between a cell's phenotype and data from the image. In this review, we examine the strengths and weaknesses of non-commercial phenotypic image analysis software, cover recent developments in the field, identify challenges, and give a perspective on future possibilities.
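The review above concerns software that relates image-derived cell phenotypes to experimental conditions such as drug treatment. As a minimal, hypothetical end-point of such an analysis (not an example from the review, and using simulated per-cell measurements rather than real image data), one might compare a single phenotypic feature between control and treated populations; this sketch assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-cell measurements (e.g. mean marker intensity) extracted
# from images of control and drug-treated wells; simulated here.
control = rng.normal(loc=100.0, scale=15.0, size=500)
treated = rng.normal(loc=110.0, scale=15.0, size=500)

# A simple phenotypic comparison: difference in means, a Welch t-test and an effect size.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2.0
)
print(f"mean control={control.mean():.1f}, mean treated={treated.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2e}, Cohen's d = {cohens_d:.2f}")
```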