Margaret Jacobs PhD Thesis

Total Pages: 16

File Type: pdf, Size: 1020 KB

TRANSLATION OF APL TO OTHER HIGH-LEVEL LANGUAGES

Margaret M. Jacobs

A Thesis Submitted for the Degree of PhD at the University of St Andrews, 1975

Full metadata for this item is available in the St Andrews Research Repository at: http://research-repository.st-andrews.ac.uk/

Please use this identifier to cite or link to this item: http://hdl.handle.net/10023/13417

This item is protected by original copyright.

ABSTRACT

The research work required to produce this thesis was carried out in the Department of Computational Science, University of St. Andrews. Financial assistance was provided by the Science Research Council.

The thesis describes a method of translating the computer language APL to other high-level languages. Particular reference is made to FORTRAN, a language widely available to computer users. Although gaining in popularity, APL is not at present so readily available, and the main aim of the translation process is to enable the more desirable features of APL to be at the disposal of a far greater number of users. The translation process should also speed up the running of routines, since compilation in general leads to greater efficiency than interpretive techniques. Some inefficiencies of the APL language have been removed by the translation process. The above reasons for translating APL to other high-level languages are discussed in the introduction to the thesis.

A description of the method of translation forms the main part of the thesis. The APL input code is first lexically scanned, a process whereby the subsequent phases are greatly simplified. An intermediate code form is produced in which bracketing is used to group operators and operands together, and to assign priorities to operators such that sub-expressions will be handled in the correct order. By scanning the intermediate code form, information is stacked until required later. This information is used to make possible a process of macro expansion. Each of the above processes is discussed in the main text of the thesis. The format of all information which can or must be supplied at translation time is clearly outlined in the text.

The code generated corresponding to a particular APL routine will not at first be very efficient. However, methods of optimising the generated code are discussed at length in the thesis. A brief comparison is made with other possible methods of conversion. There are certain restrictions on the types of APL statements able to be handled by the translation method. These restrictions are listed in an accompanying appendix. Throughout the text, several examples are given of the code which will be generated from particular APL statements or expressions. Some more lengthy examples of conversion of APL routines to FORTRAN are provided as an appendix.

To MY HUSBAND AND PARENTS

A description of a method of translating APL into other high-level languages, with particular reference to FORTRAN. The work for this project was carried out in the Department of Computational Science, University of St. Andrews, for the degree of Ph.D. The project commenced on the 10th October, 1971.

The research for the subject matter of this thesis has been carried out by myself, and the thesis has been composed by myself. The thesis has not been accepted in fulfilment of the requirements of any other degree or professional qualification.

ACKNOWLEDGEMENTS
I would like to thank Professor A.J. Cole for his excellent supervision of my work throughout the course of my studies in St. Andrews and for his advice on the preparation of my thesis. I am extremely grateful to all the members of staff of the John Honey Building at the University of St. Andrews for their willing assistance and encouragement, and to my fellow research students for their helpful comments. I also must thank Mrs. V. Butterworth for typing the text of my thesis, and the Science Research Council for their financial assistance.

CONTENTS

INTRODUCTION  1
Chapter I     INPUT PHASE AND METHOD OF STORAGE ALLOCATION  5
Chapter II    LEXICAL SCANNING PHASE  31
Chapter III   RIGHT-TO-LEFT SCAN AND PRODUCTION OF INTERMEDIATE CODE  59
Chapter IV    LEFT-TO-RIGHT SCAN, PRODUCTION OF STACKED INFORMATION AND ORGANISATION OF MACRO EXPANSIONS  76
Chapter V     THE MACRO METHOD  109
Chapter VI    LABELS AND JUMPS  154
Chapter VII   PROCESSING OF INITIAL INFORMATION  161
Chapter VIII  CODE OPTIMISATION  171
Appendix 1    SYMBOLS USED FOR INPUT OF APL DURING TESTING
Appendix 2    TABLE OF USEFUL INFORMATION FOR APL OPERATORS
Appendix 3    LIST OF MACRO INSTRUCTIONS AND THEIR FUNCTIONS
Appendix 4    LIST OF MACRO BODIES
Appendix 5    COMPARISON OF BRACKETING METHOD AND REVERSE POLISH METHOD
Appendix 6    RESTRICTIONS
Appendix 7    EASE OF CONVERSION TO OTHER LANGUAGES
Appendix 8    FUNCTION OF GLOBAL VARIABLES (RUN-TIME)
Appendix 9    EXAMPLES OF CONVERTED ROUTINES
Appendix 10   DESCRIPTION OF FINITE-STATE AUTOMATON FOR REMOVAL OF UNNECESSARY "FIND" CALLS

INTRODUCTION

The following text describes a method of translation of APL to other high-level languages. The version of APL able to be translated is that described in the IBM APL 360-OS and APL 360-DOS User's Manual, with a few restrictions. These restrictions are listed in Appendix 6. Throughout the text, the target language is assumed to be FORTRAN, but similar techniques can be applied to translate from APL to ALGOL or to PL/1. In generating the target language code, only a subset of the permissible FORTRAN statements has been used. The subset was chosen such that its members (as far as possible) have counterparts in ALGOL and PL/1. This facilitates conversion to either of these languages instead of FORTRAN. The ease of conversion to ALGOL or to PL/1 is discussed in Appendix 7.

The translation was intended in the first place to handle conversion of APL subroutines and functions, but main programs may also be translated. The APL routines are not intended to remain interactive after conversion, but to be run under a batch-processing system. There have previously been some attempts to produce a batch-processor for APL. One such attempt was made by H. Van Hedel, who implemented an APL batch-processor in PL/1 for the IBM/360. The only restriction he imposed was that function names and local variable names should be distinct. (This restriction, among others, has been placed on the types of APL statements able to be converted to FORTRAN.) In Van Hedel the following example is quoted:

    V r <-fi x R <~@XV V&1 ; F1 H V V G2 hV Yh Z <- F1 -A V

Such an example creates ambiguity in the source text, for H returns different values in G1 and G2. For an interactive interpreter it is important that each operation is executed as soon as sufficient information is gathered. For a batch-processor, as much as possible of the analysis has to be done before the execution takes place. The above example is ambiguous to a compiler, but not to an interpreter. It is intended that only working routines be converted to other languages. Thus, the amount of checking required during conversion is greatly reduced.
It can be assumed, for example, that all dimensions are conformable in matrix operations.

The reasons behind the translation (see Sayers) are as follows:

(i) It is intended to provide a more easily transportable system. There are at present more FORTRAN compilers than APL interpreters. Since APL is highly suited to the development of algorithms (Smillie), it would be very convenient to be able to use these algorithms on a larger scale. This would be possible if the APL routines were translated to FORTRAN. (The same argument can be applied to the translation of APL to ALGOL or PL/1.) To make transport of the converted routines as convenient as possible, the user is provided with an option to specify the output medium for the converted routines.

(ii) A secondary aim was to improve run-time efficiency by using compilation rather than interpretation. The amount of code to be interpreted is reduced if the user supplies some information about non-scalar variables. The more information supplied, the greater the amount of compilation possible. In many cases, the types and dimensions of variables will not change, and such examples readily lend themselves to the improvement of run-time efficiency. The method of supplying extra information to improve run-time efficiency is described in Chapter I.

(iii) It is hoped that code can be optimised during the course of translation by the removal of some of the inefficiencies of APL. An example of an inefficient APL expression is 4↑A+B, where A and B are non-scalar. This is obviously inefficient, as all the elements of A and B are summed, whereas only four summations are essential. The method of removing the above inefficiency is outlined in the following text; a FORTRAN sketch of the idea is given at the end of this introduction.

In December 1971, V. L. Moruzzi gave a set of simple rules for translating from APL to FORTRAN by hand. He estimated that mechanical APL/FORTRAN translations could achieve a 3.5-fold reduction in CPU time. This is discussed in Moruzzi. At a private meeting, Dr. J. L. Alty of Liverpool University remarked that, after visiting various APL installations in the U.S.A. and Canada, he found APL three to four times faster than other languages for program development, but one hundred times slower for execution. This result, he stated, emphasized the need for interchangeability between APL and other languages.

The translation from APL to FORTRAN is effected by a series of macro expansions. The order of expansion of macros is determined by the order of the operators in the APL source text. A sketch of the kind of FORTRAN such an expansion might produce is given below.
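The thesis's own specimens of generated code appear in its main text and in Appendix 9. Purely as an illustrative sketch written for this summary, and not the thesis's actual macro output, the fragment below shows the general kind of FORTRAN a macro expansion might produce for a simple element-by-element APL statement; the subroutine name, calling sequence and the assumption that the vector length is known at translation time are all hypothetical.

    C     ILLUSTRATIVE SKETCH ONLY - NOT THE THESIS'S ACTUAL MACRO OUTPUT.
    C     APL SOURCE STATEMENT:  C <- A + B
    C     A, B AND C ARE VECTORS WHOSE LENGTH N IS KNOWN AT TRANSLATION
    C     TIME, SO THE SCALAR OPERATOR + EXPANDS TO AN ELEMENT LOOP.
          SUBROUTINE VADD(A, B, C, N)
          INTEGER N, I
          REAL A(N), B(N), C(N)
          DO 10 I = 1, N
             C(I) = A(I) + B(I)
       10 CONTINUE
          RETURN
          END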
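As a companion sketch, again written for this summary and not taken from the thesis, the pair of routines below contrasts a naive rendering of the expression 4↑A+B with one in which the take bound limits the addition loop, the kind of saving discussed under code optimisation. The routine names, the temporary workspace limit of 1000 elements and the assumption that A and B hold at least four elements are hypothetical.

    C     ILLUSTRATIVE SKETCH ONLY - NOT TAKEN FROM THE THESIS.
    C     APL SOURCE EXPRESSION:  4 TAKE (A+B), WITH A AND B VECTORS.
    C
    C     NAIVE TRANSLATION: ALL N ELEMENTS OF A+B ARE FORMED IN A
    C     TEMPORARY, THEN THE FIRST FOUR ARE COPIED OUT.
    C     (ASSUMES N .GE. 4 AND N .LE. 1000.)
          SUBROUTINE TNAIVE(A, B, R, N)
          INTEGER N, I
          REAL A(N), B(N), R(4), T(1000)
          DO 10 I = 1, N
             T(I) = A(I) + B(I)
       10 CONTINUE
          DO 20 I = 1, 4
             R(I) = T(I)
       20 CONTINUE
          RETURN
          END
    C
    C     OPTIMISED TRANSLATION: THE TAKE BOUND LIMITS THE ADDITION LOOP,
    C     SO ONLY FOUR ADDITIONS ARE PERFORMED AND NO TEMPORARY IS NEEDED.
          SUBROUTINE TOPT(A, B, R, N)
          INTEGER N, I
          REAL A(N), B(N), R(4)
          DO 30 I = 1, 4
             R(I) = A(I) + B(I)
       30 CONTINUE
          RETURN
          END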