Parallel and Distributed Gröbner Bases Computation in JAS

Total Pages: 16

File Type: PDF, Size: 1020 KB

Parallel and distributed Gröbner bases computation in JAS

Heinz Kredel
IT-Center, University of Mannheim
[email protected]

18. July 2010

arXiv:1008.0011v1 [cs.DC] 30 Jul 2010

Abstract

This paper considers parallel Gröbner bases algorithms on distributed memory parallel computers with multi-core compute nodes. We summarize three different Gröbner bases implementations: shared memory parallel, pure distributed memory parallel, and distributed memory combined with shared memory parallelism. The last algorithm, called distributed hybrid, uses only one control communication channel between the master node and the worker nodes and keeps polynomials in shared memory on a node. The polynomials are transported asynchronously to the control flow of the algorithm in a separate distributed data structure. The implementation is generic and works for all implemented (exact) fields. We present new performance measurements and discuss the performance of the algorithms.

1 Introduction

We summarize parallel algorithms for computing Gröbner bases on today's cluster parallel computing environments in a Java computer algebra system (JAS), which we have developed in the last years [28, 29]. Our target hardware are distributed memory parallel computers with multi-core compute nodes. Such computing infrastructure is predominant in today's high performance computing (HPC) clusters. The implementation of Gröbner bases algorithms is one of the essential building blocks for any computation in algebraic geometry. Our aim is an implementation in a modern object oriented programming language with generic data types, as provided by the Java programming language.

Besides the sequential algorithm, we consider three Gröbner bases implementations: multiple threads using shared memory, pure distributed memory with communication of polynomials between compute nodes, and distributed memory combined with multiple threads on the nodes. The last algorithm, called distributed hybrid, uses only one control communication channel between the master node and the worker nodes and keeps polynomials in shared memory on a node. The polynomials are transported asynchronously to the control flow of the algorithm in a separate distributed data structure. In this paper we present new performance measurements on a grid-cluster [6] and discuss the performance of the algorithms.

An object oriented design of a Java computer algebra system (called JAS) as a type safe and thread safe approach to computer algebra is presented in [31, 23, 24, 26]. JAS provides a well designed software library using generic types for algebraic computations, implemented in the Java programming language. The library can be used as any other Java software package, or it can be used interactively or interpreted through a Jython (Java Python) front-end. The focus of JAS is at the moment on commutative and solvable polynomials, Gröbner bases, greatest common divisors and applications. JAS contains interfaces and classes for basic arithmetic of integers, rational numbers and multivariate polynomials with integer or rational number coefficients.

1.1 Parallel Gröbner bases

The computation of Gröbner bases (via the Buchberger algorithm) solves an important problem for computer algebra [5]. These bases play the same role for the solution of systems of algebraic equations as the LU-decomposition, obtained by Gaussian elimination, does for systems of linear equations. Unfortunately the computation of such polynomial bases is notoriously hard, both with sequential and parallel algorithms, so any improvement of this algorithm is of great importance. For a discussion of the problems with parallel versions of this algorithm, see the introduction in [28].

For convenience, this paper contains summaries and revised parts of [28, 29] to explain the new performance measurements. Performance figures and tables are presented throughout the paper; explanations are in section 4.2.

1.2 Related work

In this section we briefly summarize the related work. Related work on computer algebra libraries and an evaluation of the JAS library in comparison to other systems can be found in [24, 25].

Theoretical studies on parallel computer algebra focus on parallel factoring and on problems which can exploit parallel linear algebra [32, 41]. Most reports on experiences and results of parallel computer algebra are from systems written from scratch or where the system source code was available. A newer approach, a multi-threaded polynomial library implemented in C, is for example [9]. Among the commercial systems there are reports about Maple [8] (workstation clusters) and Reduce [38] (automatic compilation, vector processors). Multiprocessing support for Aldor (the programming language of Axiom) is presented in [36]. Grid-aware computer algebra systems are, for example, [35]. The SCIEnce project works on Grid facilities for the symbolic computation systems GAP, KANT, Maple and MuPAD [40, 44]. Java grid middle-ware systems and parallel computing platforms are presented in [18, 16, 7, 20, 2, 4]. For further overviews see section 2.18 in the report [15] and the tutorial [39].

For the parallel Buchberger algorithm the idea of parallel reduction of S-polynomials seems to have originated with Buchberger and was in the folklore for a while. First implementations have been reported, for example, by Hawley [17] and others [33, 43, 19]. For triangular systems, multi-threaded parallel approaches have been reported by [34, 37].

1.3 Outline

Due to limited space we must assume that you are familiar with Java, object oriented programming and the mathematics of Gröbner bases [5]. Section 2 introduces the expected and developed infrastructure to implement parallel and distributed Gröbner bases. The Gröbner base algorithms are summarized in section 3. Section 4 evaluates several aspects of the design, namely termination detection, performance, the 'workload paradox', and selection strategies. Finally, section 5 draws some conclusions and shows possible future work.

2 Hard- and middle-ware

In this section we summarize the computing hardware and middle-ware components required for the implementation of the presented algorithms. The suitability of the Java computing platform for parallel computer algebra has been discussed, for example, in [28, 29].

2.1 Hardware

Common grid computing infrastructure consists of nodes of multi-core CPUs connected by a high-performance network. We have access to the bwGRiD infrastructure [6]. It consists of 2×140 8-core CPU nodes at 2.83 GHz with 16 GB main memory, connected by a 10 Gbit InfiniBand and a 1 Gbit Ethernet network. The operating system is Scientific Linux 5.0 with shared Lustre home directories and a PBS batch system with the Maui scheduler.

The performance of the distributed algorithms depends on the fast InfiniBand networking hardware. We have also done performance tests with normal Ethernet networking hardware. The Ethernet connection of the nodes in the bwGRiD cluster is 1 Gbit to a switch in a blade center containing 14 nodes, with a 1 Gbit connection of the blade centers to a central switch. We could not obtain any speedup for the distributed algorithm over the Ethernet connection; only with the InfiniBand connection can a speedup be reported.

The InfiniBand connection is used with the TCP/IP protocol. Support for the direct InfiniBand protocol, by-passing the TCP/IP stack, will eventually be available in JDK 1.7 in 2010/11. The evaluation of direct InfiniBand access will be future work.

2.2 Execution middle-ware

In this section we summarize the execution and communication middle-ware used in our implementation; for details see [28, 29]. The execution middle-ware is general purpose and independent of a particular application like Gröbner bases and can thus be used for many kinds of algebraic algorithms.

[Figure 1: Middleware overview. Shown are a master node with GBDist, Distributed ThreadPool, Reducer Server, GB(), a DHT Client and the DHT Server, and a client node with ExecutableServer, Distributed Thread, Reducer Client, clientPart() and a DHT Client; the nodes are connected via InfiniBand.]

The infrastructure for the distributed partner processes uses a daemon process, which has to be set up via the normal cluster computing tools or some other means. The cluster tools available at Mannheim use PBS (portable batch system). PBS maintains a list of nodes allocated for a cluster job, which is used in a loop with ssh calls to start the daemon on the available compute nodes. The lowest level class ExecutableServer implements the daemon processes, see figure 1.
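To illustrate this start-up scheme, the following sketch (not part of JAS) loops over a PBS node list and starts a daemon on each allocated node with an ssh call. The PBS_NODEFILE environment variable is the usual PBS convention; the jar name, the remote command line for the daemon class and the port number are placeholders, not the actual JAS invocation.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Sketch only: start a daemon process on every node allocated by PBS.
    // Assumptions: PBS_NODEFILE lists one host name per line (hosts may be
    // repeated); the daemon is started by running a JAS class in a remote
    // JVM via ssh (jar name, class name and port are placeholders).
    public class DaemonStarter {
        public static void main(String[] args) throws IOException {
            String nodeFile = System.getenv("PBS_NODEFILE");
            List<String> nodes = Files.readAllLines(Paths.get(nodeFile));
            String daemonCmd = "java -cp jas.jar ExecutableServer 4711";
            for (String node : nodes) {
                // one ssh call per allocated node, as described above
                new ProcessBuilder("ssh", node, daemonCmd).inheritIO().start();
            }
        }
    }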
To be able to distinguish messages between specific threads on both sides, we use tagged message channels. Each message is sent together with a tag (a unique identifier), and the receiving side can then wait only for messages with specific tags. For details on the implementation and alternatives see [29].
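The following class is a simplified, purely in-memory stand-in for such a tagged message channel, given only to illustrate the blocking per-tag semantics; the actual JAS middle-ware transports the messages over a socket connection between the nodes, and its class and method names differ (see [29]).

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative model of a tagged message channel: each message is stored
    // under its tag, and receive(tag) blocks until a message with exactly
    // that tag has arrived.
    public class TaggedChannel<T> {
        private final ConcurrentMap<Object, BlockingQueue<T>> queues =
            new ConcurrentHashMap<Object, BlockingQueue<T>>();

        private BlockingQueue<T> queue(Object tag) {
            queues.putIfAbsent(tag, new LinkedBlockingQueue<T>());
            return queues.get(tag);
        }

        // send a message together with its tag (a unique identifier)
        public void send(Object tag, T message) {
            queue(tag).add(message);
        }

        // wait only for a message with the given tag
        public T receive(Object tag) throws InterruptedException {
            return queue(tag).take();
        }
    }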
2.3 Data structure middle-ware

We try to reduce communication cost by employing a distributed data structure with asynchronous communication, which can be overlapped with computation. Using marshalled objects for transport, the object serialization overhead is minimized. This data structure middle-ware is independent of a particular application like Gröbner bases and can be used for many kinds of applications.

The distributed data structure is implemented by class DistHashTable, called 'DHT client' in figure 1, see also figure 7. It implements the Map interface and extends AbstractMap from the java.util package with type parameters, and so can be used in a type safe way. In the current program version we use a centrally controlled distributed hash table; a decentralized version will be future work. For the usage of the data structure the clients only need the network node name and port of the master. In addition there are methods like getWait(), which expose the different semantics of a distributed data structure: it blocks until an element for the key has arrived.

3 Gröbner bases

In this section we summarize the sequential, the shared memory parallel and the distributed versions of algorithms to compute Gröbner bases as described
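To make the blocking semantics of the DistHashTable described in section 2.3 concrete, the following hypothetical usage sketch may help. The class name, the master host/port constructor arguments and the blocking getWait() method are taken from the description in section 2.3; the package name, the generic signature, the port number and the terminate() call are assumptions and may differ from the actual JAS API.

    import edu.jas.util.DistHashTable;   // assumed package location

    // Hypothetical client-side usage of the distributed hash table of
    // section 2.3; constructor and method signatures are assumptions.
    public class DhtClientExample {
        public static void main(String[] args) throws Exception {
            // a client only needs the network node name and port of the master
            DistHashTable<Integer, String> dht =
                new DistHashTable<Integer, String>("masterNode", 4712);

            // Map-style put: the entry becomes visible to all participating nodes
            dht.put(1, "polynomial p1");

            // getWait() blocks until an element for the key has arrived,
            // possibly put there by another compute node
            String p2 = dht.getWait(2);
            System.out.println("received: " + p2);

            dht.terminate();   // assumed clean-up call
        }
    }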