Requesting Resources on an HPC Facility

Using the Sun Grid Engine Job Scheduler
Deniz Savas, dsavas.staff.shef.ac.uk/teaching
June 2017

Outline
1. Using the Job Scheduler
2. Interactive Jobs
3. Batch Jobs
4. Task Arrays
5. Running Parallel Jobs
6. GPUs and Remote Visualisation
7. Beyond Iceberg: accessing the N8 Tier 2 facility

Running Jobs: a note on interactive jobs
• Software that requires intensive computing should be run on the worker nodes, not the head node.
• Run compute-intensive interactive jobs on the worker nodes by using the qsh or qrsh command.
• The maximum (and also default) time limit for interactive jobs is 8 hours.

Sun Grid Engine
• The two iceberg or sharc headnodes are gateways to the cluster of worker nodes.
• The headnodes' main purpose is to give logged-in users access to the worker nodes.
• All CPU-intensive computations must be performed on the worker nodes. This is achieved by using one of the following two commands on the headnode:
  – qsh or qrsh : start an interactive session on a worker node.
  – qsub : submit a batch job to the cluster.
• Once you log into iceberg, you are recommended to start working interactively on a worker node by simply typing qsh and working in the new shell window that opens. The following sections assume that you are already working on one of the worker nodes (a qsh session).

Practice Session 1: Running Applications on Iceberg (Problem 1)
• Case study: analysis of patient inflammation data.
• Running an R application: how to submit jobs and run R interactively.
• List available and loaded modules; load the module for the R package.
• Start the R application and plot the inflammation data.

Other Methods of Submitting Batch Jobs on the Sheffield HPC Clusters
Iceberg has a number of home-grown commands for submitting jobs to the batch system for some of the most popular applications and packages. These commands create suitable scripts and submit the user's job to the cluster automatically. They are:

  runfluent, runansys, runmatlab, runabaqus

To get information on how to use one of these commands, simply issue the command name on a worker node without any parameters.

Exercise 1: Submit a Job via qsub
• Create a script file (named example.sh) by using a text editor such as gedit, vi or emacs, containing the following lines:

  #!/bin/sh
  #
  echo "This code is running on"
  /bin/hostname
  /bin/date

• Now submit this script to SGE using the qsub command:

  qsub example.sh
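On submission, qsub prints the ID that SGE has allocated to the job; when the job finishes, its standard output and standard error are written to files named after the script and the job ID in the directory from which it was submitted. The following sketch of a typical session is purely illustrative: the job ID, hostname and date will differ on your system.

  $ qsub example.sh
  Your job 201534 ("example.sh") has been submitted
  $ cat example.sh.o201534
  This code is running on
  node042
  Wed Jun  7 10:15:02 BST 2017

By default the job's standard output goes to <jobname>.o<jobid> and its standard error to <jobname>.e<jobid>; the -o and -j options described later can be used to tidy this up.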
Tutorials
On iceberg, copy the contents of the tutorial directory into a directory named sge in your user area:

  cp -r /usr/local/courses/sge sge
  cd sge

In this directory the file readme.txt contains all the instructions necessary to perform the exercises.

Managing Your Jobs: Sun Grid Engine Overview
SGE is the resource management, job scheduling and batch control system (others available include PBS, Torque/Maui and Platform LSF). It:
• starts up interactive jobs on available workers,
• schedules all batch-oriented (i.e. non-interactive) jobs,
• attempts to create a fair-share environment,
• optimizes resource utilization.

[Diagram: scheduling 'qsub' batch jobs on the cluster. Worker nodes provide job slots organised into queues (Queue-A, Queue-B, Queue-C); the SGE MASTER node holds the queues, policies, priorities, share/tickets, resources and users/projects information, and places submitted jobs (JOB X, JOB Y, ...) into free slots.]

Working with SGE as a user
Although the SGE system contains many commands and utilities, most of them are for the administration of the scheduling system only. The following SGE commands will be sufficient for most users:
  – qsub : submits a batch job
  – qsh or qrsh : starts an interactive session
  – qstat : queries the progress of jobs
  – qdel : removes unwanted jobs

Running interactive jobs on the cluster
1. The user asks to run an interactive job (qsh, qrsh).
2. SGE checks whether there are resources available to start the job immediately (i.e. a free worker).
   – If so, the interactive session is started on that worker under the control/monitoring of SGE.
   – If not, the request is simply rejected and the user notified, because by its very nature an interactive session cannot wait for resources to become free.
3. The user terminates the job by typing exit or logout, or the job is terminated when the queue limits are reached (currently after 8 hours of wall-clock time usage).

Demonstration 1: batch job example
Using the R package to analyse patient data.

qsub example:

  qsub -l h_rt=10:00:00 -o myoutputfile -j y myjob

OR alternatively, the first few lines of the submit script myjob contain:

  #!/bin/bash
  #$ -l h_rt=10:00:00
  #$ -o myoutputfile
  #$ -j y

and you simply type:

  qsub myjob

Summary table of useful SGE commands

  Command(s)                        Description                        User/System
  qsub, qresub, qmon                Submit batch jobs                  USER
  qsh, qrsh                         Submit interactive jobs            USER
  qstat, qhost, qdel, qmon          Status of queues and jobs in       USER
                                    queues, list of execute nodes,
                                    remove jobs from queues
  qacct, qmon, qalter, qdel, qmod   Monitor/manage accounts, queues,   SYSTEM ADMIN
                                    jobs etc.

Using the qsub command to submit batch jobs
In its simplest form, any script file can be submitted to the SGE batch queue by simply typing qsub scriptfile. The script file is then queued for execution by SGE under default conditions and with a default amount of resources. This is not always desirable, as the defaults may not be appropriate for the job; also, providing a good estimate of the amount of resources needed helps SGE to schedule tasks more efficiently. There are two alternative mechanisms for specifying the environment and resources:
1) via parameters to the qsub command;
2) via special SGE comments (#$) in the script file that is submitted.
The meaning of the parameters is the same for both methods. They control such things as:
  – the CPU time required,
  – the number of processors needed (for multi-processor jobs),
  – output file names,
  – notification of job activity.

Method 1: using qsub command-line parameters
Format:

  qsub [qsub_params] script_file [-- script_arguments]

Examples:

  qsub myjob
  qsub -cwd $HOME/myfolder1
  qsub -l h_rt=00:05:00 myjob -- test1 -large

Note that the last example passes parameters to the script file following the -- token.
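Inside the script, the parameters passed after the -- token arrive as ordinary positional parameters. A minimal sketch follows; the script name myjob and its echo lines are purely illustrative:

  #!/bin/sh
  # Submitted as:  qsub -l h_rt=00:05:00 myjob -- test1 -large
  # so that $1 receives "test1" and $2 receives "-large".
  echo "running with dataset $1 and option $2"

This lets one script serve several related jobs, with the per-job details supplied at submission time rather than hard-coded.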
Method 2: special comments in script files
A script file is a file containing a set of Unix commands written in a scripting language, usually Bourne/Bash or C-Shell. When the job runs, the script file is executed as if its contents were typed at the keyboard. In a script file, any line beginning with # is normally treated as a comment and ignored. However, SGE treats comment lines in the submitted script which start with the special sequence #$ in a special way: it expects to find declarations of qsub options in these comment lines, and determines the job resources from them at the time of job submission. If there is any conflict between the actual qsub command-line parameters and the special comment (#$) options, the command-line parameters always override the #$ options specified in the script.

An example script containing SGE options:

  #!/bin/sh
  # A simple job script for Sun Grid Engine.
  #
  #$ -l h_rt=01:00:00
  #$ -m be
  #$ -M [email protected]
  benchtest < inputfile > myresults

More examples of #$ options in a script file:

  #!/bin/csh
  # Force the shell to be the C-shell;
  # on iceberg the default shell is the bash-shell.
  #$ -S /bin/csh
  # Request 8 GBytes of virtual memory.
  #$ -l mem=8G
  # Specify myresults as the output file.
  #$ -o myresults
  # Compile the program.
  pgf90 test.for -o mytestprog
  # Run the program, reading the data that the program would
  # have read from the keyboard from the file mydata.
  mytestprog < mydata

Running Jobs: qsub and qsh options

  -l h_rt=hh:mm:ss    The wall-clock time. This parameter must be specified;
                      failure to include it results in the error message
                      "Error: no suitable queues". The current default is 8 hours.
  -l arch=intel*      Force SGE to select either Intel or AMD architecture nodes.
  -l arch=amd*        There is no need to use this parameter unless the code has
                      a processor dependency.
  -l mem=memory       Sets the virtual-memory limit, e.g. -l mem=10G (for parallel
                      jobs this is per processor, not in total). The current
                      default, if not specified, is 6 GB.
  -l rmem=memory      Sets the limit of real memory required. The current default
                      is 2 GB. Note: the rmem parameter must always be less than mem.
  -help               Prints a list of options.
  -pe ompigige np     Specifies the parallel environment to be used; np is the
  -pe openmpi-ib np   number of processors required for the parallel job.
  -pe openmp np

Running Jobs: qsub and qsh options (continued)

  -N jobname      By default a job's name is constructed from the job script file
                  name and the job ID allocated to the job by SGE. This option
                  defines the job name; make sure it is unique, because the job
                  output files are constructed from it.
  -o output_file  Output is directed to a named file. Make sure not to overwrite
                  important files by accident.
  -j y            Join the standard output and standard error output streams
                  (recommended).
  -m [bea]        Sends emails about the progress of the job (at beginning, end
                  and/or abort) to the specified email address.
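Putting these options together, a complete submission script might look like the following sketch. The program name, file names and resource amounts are illustrative; the #$ lines use only the options documented above:

  #!/bin/bash
  # Wall-clock time (must be specified).
  #$ -l h_rt=04:00:00
  # Virtual and real memory limits (rmem must be less than mem).
  #$ -l mem=8G
  #$ -l rmem=2G
  # Unique job name; the output files are constructed from it.
  #$ -N mysim01
  # Merge standard output and standard error (recommended).
  #$ -j y
  # Email when the job begins and ends.
  #$ -m be
  #$ -M [email protected]
  ./mysim < input.dat > results.out

Submit it with qsub as usual; any of the #$ options can still be overridden on the qsub command line, which always takes precedence.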