Multiple Usage of KNIME in a Screening Laboratory Environment


Multiple Usage of KNIME in a Screening Laboratory Environment
KNIME UGM Zürich, 02.02.2012
Marc Bickle, HT-TDS, MPI-CBG

Outline
• Presentation of the TDS
• Our problem: large, complex datasets
• KNIME as a data mining tool for screening (Community nodes)
• HCS Tools (Community nodes)
• R, Python, Matlab, Groovy integration
• Some examples of other uses of KNIME

The High Throughput Technology Development Studio (HT-TDS)
Mission: provide cell-based screening services through automated microscopy and automated image analysis
• High spatio-temporal resolution on a cell-by-cell basis
• Quantitative measurement of many cellular parameters (intensity, subcellular localization), allowing finely resolved phenotypic classification
• Systems biology readouts of chemogenomic screens (genome-wide RNAi screens + chemical screens)
• Clustering of RNAi and chemical phenotypes for mode-of-action identification (170,000 compounds, genome-wide libraries)

Automated Confocal Microscopy
• high resolution
• high throughput

Automated Image Analysis
• multiple parameters
• high definition of phenotypes
(Measured parameters include shape of cells, number of nuclei, intensity, distribution/distance and subcellular localization.)

Profiling/Clustering
• advanced statistics
• identify targets of compounds

Identifying MOA by Integrating Chemical and Genetic Screens
[Figure: per-parameter profiles of a compound screen and an RNAi screen (CHML, all oligos, run 5)]
What genes influence the same parameters as the compounds? Devise assays to test predictions about pathways.

Image Screen Dataflow
• 1-10 million images (TIFF), 2-10 TB
• Image analysis: 1-10 × 10⁴ wells, 1-10 × 10⁶ fields, 1-10 × 10⁸ cells, 1-10 × 10¹⁰ objects
• CSV files/database: 2-100 GB, plus 1-10 million images (PNG)
• Data mining: 1-50 plots (PNG/SVG), 1-10 result files (CSV/xls/pdf), 1-10 MB
(C. Collinet et al., Nature (2010), doi:10.1038/nature08779)

Available Software Solutions
Few software packages can handle n-dimensional data structures several GB in size.
1. Scripting languages: R, S, Matlab, Python, Java, C
Issues:
• Biologists are rarely at ease with scripting languages
• No overview of the data and the analysis flow (not graphical)
2. Commercial software: Genedata, Spotfire, Pipeline Pilot
Issues:
• Very expensive
• Not flexible, no possibility to extend the code
• No or only a small community to share problems and solutions with (but there are field scientists to help out)
3. Graphical open-source software: KNIME, RapidMiner
Issues:
• None?

KNIME
1. KNIME can handle very large datasets on normal desktop computers
2. The workspace allows analysis pipelines to be assembled easily
• Good overview of the analysis path and operations (annotation of nodes)
3. Many useful data manipulation nodes, powerful clustering methods and cheminformatics nodes already existed
4. The possibility of integrating scripting languages (R, Java) offered great flexibility

HCS Tools
KNIME did not have any screening-specific tools implemented ➜ we created a set of KNIME nodes for analyzing screening data:
1. Instrument output readers
2. Well annotation tools, barcode tools
3. Typical QC tools: Z' factor, SSMD, CV
4. Typical normalization tools: Z score, Percent of Control, Normalized Percent Inhibition, B score
5. Typical visualization tools: heatmap
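The deck only names these QC and normalization measures. As a minimal sketch of the standard formulas behind them — the actual HCS Tools nodes operate on KNIME tables, and the control values, group sizes and variable names below are hypothetical — the computations look like this in Python:

```python
import numpy as np

def z_prime_factor(pos, neg):
    """Z' factor: assay quality from the separation of positive and
    negative controls. Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

def ssmd(pos, neg):
    """Strictly standardized mean difference between the control groups."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return (pos.mean() - neg.mean()) / np.sqrt(pos.var(ddof=1) + neg.var(ddof=1))

def z_score(samples):
    """Plate-wise Z score normalization of sample wells."""
    x = np.asarray(samples, float)
    return (x - x.mean()) / x.std(ddof=1)

def percent_of_control(samples, neg):
    """Signal as a percentage of the negative-control mean."""
    return 100.0 * np.asarray(samples, float) / np.mean(neg)

def normalized_percent_inhibition(samples, pos, neg):
    """NPI: 0% at the negative-control mean, 100% at the positive-control mean."""
    x = np.asarray(samples, float)
    return 100.0 * (np.mean(neg) - x) / (np.mean(neg) - np.mean(pos))

# Hypothetical control wells from one plate
rng = np.random.default_rng(1)
neg = rng.normal(100.0, 5.0, size=16)  # e.g. scrambled siRNA controls
pos = rng.normal(20.0, 4.0, size=16)   # e.g. known-inhibitor controls
print(f"Z' = {z_prime_factor(pos, neg):.2f}, SSMD = {ssmd(pos, neg):.1f}")
```

Z' > 0.5 is conventionally taken as a robust assay window. The B score additionally removes row and column effects via median polish, which is omitted here.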
Scripting Integration
Some methods were not available as KNIME nodes ➜ we integrated the R, Python, Groovy and Matlab (requires a licensed server) scripting languages via RGG:
• Hides the script behind a GUI
• Choose from a set of templates for methods or plots
• Parametrization with buttons or drop-down boxes
(http://idisk-srv1.mpi-cbg.de/knime/scripting-templates_tds/Matlab/TDS_figure-templates.txt)

Workflow
Read Data ➜ Annotate Data ➜ Normalize Data ➜ QC Screen
Snippets to test for normality, transform to normality (Box-Cox), and calculate Mahalanobis distances and Pearson's correlations.

Other Applications I
• Create a loop to open many CSV files, calculate something, then close and save the files
• Example: merge a measurement column from many files into many other files, as sketched below
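In KNIME this is a loop over a file list; as a hedged sketch of the same pattern in plain Python, assuming pairs of CSV files matched by file name and sharing a well-identifier column (the directory, file and column names are all assumptions for illustration):

```python
import pandas as pd
from pathlib import Path

# Hypothetical layout: each file in measurement/ carries one extra
# measurement column to be merged into the file of the same name in screen/.
measurement_dir = Path("measurement")
screen_dir = Path("screen")

for meas_file in sorted(measurement_dir.glob("*.csv")):
    target_file = screen_dir / meas_file.name
    if not target_file.exists():
        continue  # no matching plate file to merge into
    meas = pd.read_csv(meas_file)
    screen = pd.read_csv(target_file)
    # Left join on the well identifier, appending the measurement column
    merged = screen.merge(meas[["well", "intensity"]], on="well", how="left")
    merged.to_csv(target_file, index=False)  # save and close the file
```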
Other Applications II
• Standardization of libraries (compound and siRNA libraries)
• Different providers supply different datasheets. To integrate all libraries into a common database for screen annotation, they need to be standardized and rearrayed to 384-well format.

Other Applications III
• Hit picking and rearraying. After a screen, we need to reconfirm hits, cherry-pick them from the library and transfer them to a new 384-well plate. The workflow takes the work logic of the robot into account to obtain the final plate layout.

Other Applications IV
• Use the generic XML reader to read files that are otherwise often not accessible
• Example: from the OPERA database, read microscope-specific data such as focus height, sublayout and dichroic mirror settings, and combine them with a QC image analysis script running on the fly
• Plot the intensity of a channel per column and per plate; verify the parameter profile of the controls

Workshop: Multifactorial Optimization
The HT-TDS offers a one-week workshop to learn:
1. How to optimize siRNA transfection and antibody staining in 96-well format
2. How to use the Perkin Elmer OPERETTA (widefield microscope)
3. How to perform image analysis with the open-source software CellProfiler
4. How to perform multiparametric analysis with KNIME
Please contact Marc Bickle: [email protected]

Summary
• The HT-TDS is a screening facility specialized in automated imaging (HCS), open to all users
• We have created a set of nodes and templates for analyzing screening data in KNIME
• KNIME can be used for many other common tasks
• The HT-TDS offers a one-week workshop for learning automated microscopy, image processing and multivariate analysis using KNIME tools

Acknowledgements
MPI-CBG: Holger Brandl, Antje Niederlein, Martin Stoeter, Felix Meyenhofer