Run Time Comparison of MATLAB, Scilab and GNU Octave on Various Benchmark Programs


Roland Baudin, <[email protected]>, July 2016
Version 0.2 – 07/26/2016

1. Introduction

This document presents a comparison of the run times of MATLAB, Scilab and GNU Octave (abbreviated as "Octave" in the sequel) on 50 benchmark programs found on the Internet or developed by the author.

2. Hardware and software used

2.1. Hardware

The computer used for the benchmarks is an HP Z820 workstation equipped with an Intel Xeon E5-2687W processor @ 3.10 GHz, with 16 cores, 32 threads and 128 GB of RAM.

2.2. Software

The operating system is Ubuntu Linux, more precisely Ubuntu 16.04 Xenial Xerus on the amd64 (64-bit) architecture. The MATLAB, Scilab and Octave versions used are given in the following table. It has to be noted that Octave was compiled with the experimental Just-In-Time (JIT) compiler enabled (see Appendix 7.1 for details). There is no such feature in Scilab, while in MATLAB the JIT feature is available by default.

Software   Version
MATLAB     R2014a (8.3.0.532)
Scilab     5.5.2
Octave     4.0.2 (JIT enabled)

Table 2.1: Software versions

The three packages use the same linear algebra library, LAPACK. This library, in turn, relies on the BLAS (Basic Linear Algebra Subprograms) library for its internal computations. There are several implementations of the BLAS library that can be used within math packages like MATLAB, Scilab and Octave (and others, like Python SciPy). The four major implementations are Reference BLAS (RefBLAS), Atlas BLAS (Atlas), OpenBLAS and the Intel Math Kernel Library (MKL).

RefBLAS, Atlas and OpenBLAS are free software and are directly available through the package system of the Ubuntu Linux distribution. MKL is a commercial library developed by Intel; it is free for non-commercial use, and Intel provides a specific bundle that has to be installed manually on Ubuntu.

RefBLAS is a reference single-threaded implementation of BLAS, but it is slow on some complex linear algebra operations. Atlas and OpenBLAS are highly optimized multithreaded BLAS implementations and are usually faster. MKL is a multithreaded implementation of BLAS highly optimized for Intel processors and is usually very fast on such hardware.

Scilab and Octave are free software and can be compiled under Ubuntu Linux in such a way that it is easy to switch between the four implementations of the BLAS library (see Appendix 7.2 for the details of the procedure). MATLAB, as commercial software, ships with a given version of the MKL, so it is not possible to switch to another BLAS implementation. The following table gives the versions of the various BLAS libraries used in this comparison.

BLAS library    Version      Package name in Ubuntu
RefBLAS         3.6.0        libblas3
Atlas           3.10.2       libatlas3-base
OpenBLAS        0.2.18       libopenblas-base
MKL             11.3.3.210   N/A
MKL (MATLAB)    11.0.5       N/A

Table 2.2: BLAS versions
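A quick way to see what the choice of BLAS does (and does not) affect is to time a BLAS-bound operation against an interpreter-bound loop. The sketch below is illustrative only and is not part of the benchmark suite; it uses MATLAB/Octave syntax and would need minor adaptations (e.g. rand(n, n), mprintf) to run in Scilab. Only the first timing changes when the underlying BLAS implementation is switched.

    % Illustrative only, not part of the benchmark suite (MATLAB/Octave syntax).
    n = 2000;
    A = rand(n);                % n-by-n random matrices
    B = rand(n);

    tic;
    C = A * B;                  % dense matrix product: delegated to the BLAS (dgemm)
    t_blas = toc;

    tic;
    s = 0;
    for k = 1:1e6               % plain interpreted loop: the BLAS is never called
        s = s + 1/k;
    end
    t_loop = toc;

    fprintf('matrix product: %.2f s, loop: %.2f s\n', t_blas, t_loop);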
3. Description of the benchmarks

We have used four different sets of benchmark programs. The first three were found on the Internet and only slightly modified for the purpose of this comparison; the fourth was written specifically by the author. The four benchmark sets are named after their directory names in the benchmark package. They are described below.

3.1. pincon

This benchmark set was written by Bruno Pinçon (hence the benchmark name) and is extensively described in [1]. It consists of 31 different programs that target various operations: loops, random generations, recursive calls, quick sorts, extractions and insertions, matrix operations, permutations, comparisons, cell reads and writes. This benchmark set does not make intensive use of the BLAS routines, because its aim is mainly to test the speed of the interpreter and of general matrix operations.

3.2. poisson

This benchmark set contains only two programs. It was developed by Neeraj Sharma and Matthias K. Gobbert and is fully described in [2]. Its purpose is to provide a complex problem of the kind encountered in real life. The implemented problem results from a finite difference discretization of the Poisson equation (hence the name) in two spatial dimensions. The problem is solved using two algorithms: Gaussian elimination (GE) and conjugate gradient (CG).

3.3. ncrunch

This benchmark set consists of 15 programs. It was designed by Stefan Steinhaus and is described in [3]. Its purpose is to be a general "number crunching" benchmark (hence the name). It tests loops, random generation, sorting, FFT, determinants, matrix inverses, eigenvalues, Cholesky decomposition, principal components, special functions and linear regression. This benchmark makes intensive use of the BLAS routines and is therefore very sensitive to the BLAS library implementation.

3.4. optim

This benchmark set contains two programs and was developed specifically by the author of this comparison. These programs implement two popular optimization metaheuristics, Particle Swarm Optimization (PSO) and Differential Evolution (DE), described in [4] and [5]. Since there exist many variants of these methods, we chose to implement the "standard" versions, which are the most popular and simplest ones. The PSO method is used to find the global minimum of the Rosenbrock function in 30 dimensions; the DE method is used to find the global minimum of the Rastrigin function in 10 dimensions. The Rosenbrock and Rastrigin functions are standard test functions used for the evaluation of global optimization methods. Both methods are loop intensive by nature. However, the standard PSO method can easily be vectorized, while only some parts of the DE method can be vectorized; thus, the DE method is slower than the PSO method. This benchmark set does not make much use of the BLAS routines and is rather insensitive to the choice of the BLAS library.
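For reference, the two test functions and the kind of fully vectorized update that makes the standard PSO fast in these environments can be sketched as follows (MATLAB/Octave syntax). This is only an illustration under assumed settings (swarm size, inertia and acceleration coefficients); it is not the code of the optim benchmark itself.

    % Illustrative sketch, not the optim benchmark source (MATLAB/Octave syntax).
    rosenbrock = @(x) sum(100*(x(2:end) - x(1:end-1).^2).^2 + (1 - x(1:end-1)).^2);
    rastrigin  = @(x) 10*numel(x) + sum(x.^2 - 10*cos(2*pi*x));

    n  = 30;                         % problem dimension (Rosenbrock case)
    np = 40;                         % swarm size (assumed)
    w = 0.7; c1 = 1.5; c2 = 1.5;     % inertia and acceleration weights (assumed)

    X = -2 + 4*rand(np, n);          % particle positions
    V = zeros(np, n);                % particle velocities
    P = X;                           % personal best positions
    pbest = zeros(np, 1);
    for i = 1:np, pbest(i) = rosenbrock(X(i, :)); end
    [~, g] = min(pbest);             % index of the current global best

    % One fully vectorized PSO iteration: all particles are updated at once,
    % without any loop over the swarm.
    R1 = rand(np, n);
    R2 = rand(np, n);
    V = w*V + c1*R1.*(P - X) + c2*R2.*(repmat(P(g, :), np, 1) - X);
    X = X + V;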
4. Results

4.1. Detailed results

The detailed results (run times in seconds, lower is better) of the four benchmark sets are given in Tables 4.1 to 4.4. We comment on these results below for each benchmark set.

– pincon

The first observation is that the MKL gives by far the worst total results for Scilab and Octave on this benchmark set. Indeed, looking at the individual benchmark results, we see that some programs have nearly the same performance with all BLAS implementations (fibonacci, random generation, quick sort, fannkuch, etc.), while others (subsets, crible, loop calls, etc.) have much higher run times with the MKL. We can also see such a phenomenon in MATLAB (though there is no comparison point with another BLAS implementation in MATLAB): for example, it is quite strange that the hmeans benchmark (a simple class partitioning algorithm) has a run time of ~3 s in MATLAB, while it is only ~0.4 s in Scilab and ~0.1 s in Octave. The same observation can be made for the make_perm (permutation algorithm), loop calls, crible and simplexe programs. A possible explanation could be that the MKL is well optimized for complex linear algebra operations (as will be seen later) but that this optimization has some negative impact on simpler matrix operations. However, this is only a conjecture, since we do not know the internals of the MKL library. It has to be noted that Scilab gives bad results on the cell test (construction and reading), but this should be fixed in the next generation of the program (Scilab 6).

If we disregard the Scilab and Octave results with the MKL, we see that the three packages give comparable results on the pincon benchmark set, whatever the BLAS library. Overall, MATLAB is slightly faster, followed by Scilab, with Octave last.

Benchmark name             MATLAB   Scilab      Scilab    Scilab       Scilab   Octave      Octave    Octave       Octave
                           (MKL)    (RefBLAS)   (Atlas)   (OpenBLAS)   (MKL)    (RefBLAS)   (Atlas)   (OpenBLAS)   (MKL)
fib(24)                    1.11     0.52        0.54      0.59         0.67     2.04        2.06      2.03         1.99
subsets(1:20,7)            0.76     0.51        0.57      0.57         3.31     2.24        2.37      2.25         2.22
irn55_rand(1.1e6)          0.78     0.44        0.44      0.46         0.48     2.28        2.24      2.26         2.25
merge_sort, 10000 elts     0.29     0.42        0.45      0.46         0.47     2.48        2.48      2.44         2.41
qSort, 40000 elts          1.28     0.62        0.66      0.68         0.70     2.83        2.85      2.83         2.73
25 times harmvect(1e6)     0.46     0.22        0.23      0.21         0.22     0.22        0.23      0.21         0.22
harmloop(1e6)              0.02     0.47        0.51      0.51         0.50     0.01        0.01      0.02         0.01
3 times fannkuch(7)        0.46     0.29        0.30      0.31         0.30     2.37        2.39      2.34         2.30
my_lu 400x400              2.04     0.15        0.15      0.14         1.90     0.10        0.10      0.10         1.21
crible(3e6)                1.19     0.12        0.11      0.11         1.46     0.09        0.09      0.09         1.19
p = make_perm(20000)       3.80     0.44        0.44      0.49         2.28     2.21        2.24      2.21         4.50
4 times q = inv_perm(p)    0.01     0.26        0.30      0.28         0.29     0.05        0.06      0.05         0.05
fftx 2^15 elts             1.79     2.76        2.86      2.89         2.92     4.10        4.13      4.08         4.02
30 times pascal(1029)      0.20     0.34        0.36      0.36         0.36     0.50        0.50      0.51         0.50
hmeans.
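The harmvect/harmloop pair in the table above illustrates the split between vectorized and loop-bound code: the vectorized harmonic sum runs at essentially the same speed everywhere, while the explicit loop depends entirely on the interpreter (or, in Octave's case, the JIT). A minimal sketch of such a pair of measurements, in MATLAB/Octave syntax and not taken from the pincon sources, looks like this:

    % Illustrative sketch in the spirit of harmvect/harmloop; not the pincon source.
    n = 1e6;

    tic;
    s_vect = sum(1 ./ (1:n));        % vectorized harmonic sum: one array expression
    t_vect = toc;

    tic;
    s_loop = 0;
    for k = 1:n                      % explicit loop: interpreter (or JIT) bound
        s_loop = s_loop + 1/k;
    end
    t_loop = toc;

    fprintf('harmvect: %.3f s   harmloop: %.3f s\n', t_vect, t_loop);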