The Bioinformatics and Mathematical Bioscience Lab (Bmbl) Biweekly Science Report


Reporting time: due at 5 pm on Mar. 25th
Institution: South Dakota State University
Report prepared by: Jinyu Yang
Advisor: Qin Ma
Project name: How to do differential expression analysis from FASTQ-format data on HPC (required packages or software: R, ShortRead, BBMap, Bowtie2, TopHat2, SAMtools, HTSeq, and DESeq)

SIGNIFICANT SCIENCE ACCOMPLISHMENTS:
(Examples: major achievement in meeting a milestone, new collaborations, publication in a high-impact journal.)

1. Summary of Science Activities (<1 page):

raw data (.fq / .fasta file) --- [ ShortRead (QA, filtering, and trimming), then BBMap (QC, filtering, and trimming) ] --- preprocessed data (.fq / .fasta file) --- [ Bowtie + TopHat ] --- accepted_hits.bam --- [ SAMtools ] --- XXX.sam --- [ HTSeq ] --- XXX.count --- [ DESeq ] --- result files (e.g., a p-value table)

2. Future Work Plans (brief summary of the tasks/milestones to work on next month):
3. Issues to Resolve (issues that need input from another partner, FA Lead, Science Coordinator, or BESC Director for resolution, etc.):
4. Publications:
5. Presentations:
6. News / Awards:
7. Personnel changes (new, reassigned, or departed):
8. Intellectual Property:
9. Quality Assurance:
10. Environment, Safety and Health:

Please complete and return to Qin Ma ([email protected]). If there is no activity, please indicate "N/A".

Supplementary Tables and Figures (to support the above accomplishments)

Here we use several real datasets in .fq format to do differential expression analysis. In this example, the datasets come from two different conditions (gu and ye), and each condition has a biological replicate. (Please install the required software following the instructions in "How to use DESeq do differential expression analysis" beforehand.)
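The raw inputs to the whole workflow are FASTQ files, in which every read occupies exactly four lines. Before any processing, it is worth confirming that the two mate files of each paired-end sample contain the same number of reads; the equal per-pair counts in the readCounts table later in this report show what to expect. A minimal sketch using only awk (the demo file names are made up for illustration):

```shell
# A FASTQ record is exactly four lines, so reads = lines / 4
count_reads() {
    awk 'END { print NR / 4 }' "$1"
}

# Tiny stand-in files: two mates with two reads each
printf '@r1/1\nACGT\n+\nIIII\n@r2/1\nGGCC\n+\nIIII\n' > demo_read1.fq
printf '@r1/2\nTTTT\n+\nIIII\n@r2/2\nAAAA\n+\nIIII\n' > demo_read2.fq

count_reads demo_read1.fq   # -> 2
count_reads demo_read2.fq   # -> 2 (must equal the mate file's count)
```

ShortRead's qa() reports the same totals later on, but this check needs nothing beyond a POSIX shell.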
1. Prepare your datasets

Upload your raw datasets (.fq or .fastq files) and related files (e.g., the reference genome and gene model annotation) to the HPC or another server. Here, GL.fa is the reference genome and GL.gb.gff is the gene model annotation. Then extract the compressed files into the current directory:

> gunzip *.fq.gz

Next you will use ShortRead for QA (quality assessment), filtering, and trimming, so make sure R is installed. If it is not, don't worry — follow the steps below; otherwise, you can skip them.

First, download the latest R source package ( https://cran.r-project.org/ ) for Linux, e.g., R-3.2.3.tar.gz, upload it to the HPC, and execute:

> tar -zxvf R-3.2.3.tar.gz
> cd R-3.2.3
> ./configure
> make
> make install
> vim ~/.bash_profile

Now add the following two lines to ~/.bash_profile:

PATH=/home/yangj/R-3.2.3/bin:$PATH
export PATH

Then save and quit (/home/yangj/R-3.2.3 is the absolute path of the R installation on my machine; remember to substitute yours). Finally, execute:

> source ~/.bash_profile

2. ShortRead (QA (quality assessment), filtering, and trimming)

a) QA

First, enter the R environment:

> R

Then install ShortRead and DESeq, if you have not installed them before:

> source("https://bioconductor.org/biocLite.R")
> biocLite(c("ShortRead", "DESeq"))

Then use setwd() to set the working directory to where the FQ files are located:

> setwd("/home/yangj/DESeq-SDU/Sample")

Load the ShortRead library:

> library("ShortRead")

Now we can run quality assessment with ShortRead:

> fls = dir("./", "*fq$", full=TRUE)
> qaSummary = qa(fls, type="fastq")
> report(qaSummary, type="html", dest="fqQAreport")

However, the HPC does not support X11, so the report() call fails with an error. We therefore need to download qaSummary to a local machine and work with it in RStudio.
> save(qaSummary, file="qaSummary")

Download qaSummary to your local machine, then start RStudio:

> load("C:/BaiduYunDownload/qaSummary")
> report(qaSummary, type="html", dest="fqQAreport")

You will get a file named "fqQAreport" containing the QA report. Now, back on the HPC:

> qaSummary[["readCounts"]]
                 read filter aligned
gu2_read1.fq 35006259     NA      NA
gu2_read2.fq 35006259     NA      NA
gu3_read1.fq 30748467     NA      NA
gu3_read2.fq 30748467     NA      NA
ye1_read1.fq 31187333     NA      NA
ye1_read2.fq 31187333     NA      NA
ye3_read1.fq 33479655     NA      NA
ye3_read2.fq 33479655     NA      NA

As you can see, this command reports, for each lane, the number of reads, the number of reads surviving the Solexa filtering criteria, and the number of reads aligned to the reference genome (the last two columns are NA because filtering and alignment have not been done yet). You can also inspect the base-call details of each FQ file:

> qaSummary[["baseCalls"]]
                    A        C        G        T    N
gu2_read1.fq 21685857 28412307 28130219 21767568 4049
gu2_read2.fq 21729895 28722063 27816884 21730591  567
gu3_read1.fq 21723444 28346939 28174527 21751407 3683
gu3_read2.fq 21734048 28697947 27824570 21742736  699
ye1_read1.fq 21675483 28443112 28095517 21781660 4228
ye1_read2.fq 21702486 28762839 27785294 21748745  636
ye3_read1.fq 21795076 28360347 27968237 21872354 3986
ye3_read2.fq 21807964 28695030 27618484 21877864  658

b) Filtering and trimming

Construct a function for filtering and trimming. Here we use nFilter() to guarantee that reads contain no 'N'. The sliding window (trimTailw(object, k, a, halfwidth)) starts at the left-most nucleotide and, for each position, tabulates the number of cycles in a window of 2 * halfwidth + 1 surrounding the current nucleotide whose quality scores fall at or below a. The read is trimmed at the first nucleotide for which this count >= k. Finally, we drop reads shorter than 35 nt.
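Stated procedurally, the sliding-window rule can be sketched outside R. The helper below is an illustration only — the function name and Phred+33 example string are invented, and ShortRead's exact handling of the window at read ends may differ in detail — but it trims at the first position whose surrounding window contains at least k calls at or below the quality threshold:

```shell
# trimTailw-style rule: window of width 2*h + 1 around position i;
# report the first i whose window holds >= k qualities <= threshold a.
trim_point() {  # usage: trim_point QUALSTRING k a h  -> 1-based trim position, or 0
    awk -v q="$1" -v k="$2" -v a="$3" -v h="$4" 'BEGIN {
        for (c = 33; c < 127; c++) ord[sprintf("%c", c)] = c
        n = length(q); thr = ord[a]
        for (i = 1; i <= n; i++) {
            bad = 0
            for (j = i - h; j <= i + h; j++)
                if (j >= 1 && j <= n && ord[substr(q, j, 1)] <= thr) bad++
            if (bad >= k) { print i; exit }
        }
        print 0
    }'
}

trim_point 'IIII####' 2 '#' 2   # -> 4
```

With k = 2, threshold '#', and halfwidth 2, the example string is cut at position 4, the first position where two of the window's qualities fall at or below the threshold; dropping reads shorter than 35 nt then happens as a separate step.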
myFilterAndTrim <- function(fl, destination=sprintf("%s_SR.fq.gz", substr(fl, 1, nchar(fl)-6))) {
    stream <- open(FastqStreamer(fl))
    on.exit(close(stream))
    repeat {
        fq <- yield(stream)
        if (length(fq) == 0) break
        fq <- fq[nFilter()(fq)]          # drop reads containing 'N'
        fq <- trimTailw(fq, 2, "4", 2)   # sliding-window quality trimming
        fq <- fq[width(fq) >= 35]        # drop reads shorter than 35 nt
        writeFastq(fq, destination, "a")
    }
}

Execute the function in a for loop:

dataSets = c("gu2_read1.fq.gz", "gu2_read2.fq.gz", "gu3_read1.fq.gz", "gu3_read2.fq.gz",
             "ye1_read1.fq.gz", "ye1_read2.fq.gz", "ye3_read1.fq.gz", "ye3_read2.fq.gz")
for (i in 1:length(dataSets)) { myFilterAndTrim(dataSets[i]) }

3. fastqc (QC (quality control))

We also use fastqc to generate a quality control report. First, create a new folder "fastqcReport" and a file fastqc.pbs, then put the following commands in fastqc.pbs:

#!/bin/bash
# File fastqc.pbs
# fastqc script for blackjack
# ~April 2015 [email protected]

# Job name
#PBS -N fastqc

# To request 10 hours of wall clock time
#PBS -l walltime=10:00:00

# To request a single node with 1 core
#PBS -l nodes=1:ppn=1

# The environment variable $PBS_O_WORKDIR specifies the directory from which you submitted the job
cd $PBS_O_WORKDIR

# Modify input and output files below to match your run!!
# You will also need correct ancillary files (parameters, etc.) in the
# working directory to get your run to work.

# Load the module
. /usr/share/modules/init/sh
module load bio/FastQC/0.11.3
fastqc -o ./fastqcReport-1 *fq
# fastqc --help

Then submit it with:

qsub fastqc.pbs

ShortRead drops the reads containing 'N', but low-quality bases still remain, so we decided to filter and trim the ShortRead results again with BBMap.
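The fastqc report is also where the quality-score encoding shows up: the BBMap commands in the next section pass qin=64 and qout=64, i.e., they treat the reads as Phred+64 ("Illumina 1.3+") encoded. A small conversion sketch (awk; the helper name and example string are made up for illustration) shows what the 64-versus-33 offset means:

```shell
# Convert a Phred+64 quality string to the more common Phred+33 (Sanger)
# scale by shifting every character down 64 - 33 = 31 code points.
q64_to_q33() {
    awk -v q="$1" 'BEGIN {
        for (c = 33; c < 127; c++) { chr[c] = sprintf("%c", c); ord[chr[c]] = c }
        out = ""
        for (i = 1; i <= length(q); i++)
            out = out chr[ord[substr(q, i, 1)] - 31]
        print out
    }'
}

q64_to_q33 'hhhU'   # -> III6
```

'h' (ASCII 104) is Q40 under Phred+64; the same Q40 under Phred+33 is 'I' (ASCII 73), 31 code points lower. Feeding Phred+64 data to a tool expecting Phred+33 silently inflates every quality by 31, which is why the encoding flags matter.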
4. BBMap (filtering and trimming)

(Note: in this section, all operations are executed on zcluster, a server at UGA.)

First, create bbmap.sh with the following commands:

#!/bin/bash
cd ~/Jinyu/DESeq-SDU/shortReadRes
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=gu2_read1_SR.fq out=gu2_read1_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=gu2_read2_SR.fq out=gu2_read2_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=gu3_read1_SR.fq out=gu3_read1_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=gu3_read2_SR.fq out=gu3_read2_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=ye1_read1_SR.fq out=ye1_read1_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=ye1_read2_SR.fq out=ye1_read2_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=ye3_read1_SR.fq out=ye3_read1_BBM.fq qin=64 qout=64 qtrim=r trimq=4
time /usr/local/bbmap/latest/bbmap.sh ref=GL.fa in=ye3_read2_SR.fq out=ye3_read2_BBM.fq qin=64 qout=64 qtrim=r trimq=4

Submit it with:

qsub -q rcc-30d bbmap.sh

Then run fastqc again to check whether the filtered and trimmed reads look reasonable. Create fastqc.sh with the following commands:

#!/bin/bash
cd ~/Jinyu/DESeq-SDU/shortReadRes
export PATH=${PATH}:/usr/local/fastqc/latest/
time /usr/local/fastqc/latest/fastqc -o ./fastqcReport-2 *BBM.fq

Submit it with:

qsub -q rcc-30d fastqc.sh

You will see that the quality control results are now quite good. You can also rerun the ShortRead quality assessment.

5. Align the reads to the reference genome with Bowtie2 and TopHat2

First, put all the *BBM.fq files, the reference genome GL.fa, and the gene model annotation GL.gb.gff in the same folder.
Second, create the folders tophat_out_gu2, tophat_out_gu3, tophat_out_ye1, and tophat_out_ye3. Then create a tophat.pbs file and feed it the following commands:

#!/bin/bash
# File tophat.pbs
# tophat script for bigjack
# ~Jan. 2015 [email protected]

# Job name
#PBS -N tophat

# To request 3 days of wall clock time
#PBS -l walltime=3:00:00:00

# To request a single node with 12 cores
#PBS -l nodes=1:ppn=12

# The environment variable $PBS_O_WORKDIR specifies the directory from which you submitted the job
cd $PBS_O_WORKDIR

# Modify input and output files below to match your run!!
# You will also need correct ancillary files (parameters, etc.) in the
# working directory to get your run to work.
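The report breaks off inside tophat.pbs. Based on the workflow in the science summary (Bowtie2/TopHat2 → SAMtools → HTSeq → DESeq), the remaining stages might be wired together roughly as sketched below. Every command, option, and file name here is an assumption drawn from the tools' standard usage, not the author's actual script, and the sketch is an outline rather than a runnable job:

```shell
# Sketch only — assumed commands for one sample (gu2); repeat per sample.
# Build a Bowtie2 index, then align the read pair with TopHat2
bowtie2-build GL.fa GL
tophat2 -p 12 -G GL.gb.gff -o tophat_out_gu2 GL gu2_read1_BBM.fq gu2_read2_BBM.fq

# Convert the TopHat BAM output to SAM for HTSeq
samtools view -h tophat_out_gu2/accepted_hits.bam > gu2.sam

# Count reads per gene (the feature/attribute options may need
# adjusting to match GL.gb.gff)
htseq-count -s no gu2.sam GL.gb.gff > gu2.count

# Differential expression with (legacy) DESeq, producing a p-value table
Rscript - <<'EOF'
library(DESeq)
samples <- c("gu2", "gu3", "ye1", "ye3")
counts  <- sapply(samples, function(s)
    read.table(paste0(s, ".count"), row.names = 1)[, 1])
cds <- newCountDataSet(counts, factor(c("gu", "gu", "ye", "ye")))
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)
res <- nbinomTest(cds, "gu", "ye")      # per-gene p-values
write.csv(res, "gu_vs_ye_pvalues.csv")
EOF
```

The earlier companion document "How to use DESeq do differential expression analysis" presumably covers these steps in full; this outline only shows how the flowchart's remaining arrows connect.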