Computer Architecture


А.М. Sergiyenko
Computer Architecture. Part 2: Parallel Architectures
Kyiv, 2016

CONTENTS

CONTENTS
ABBREVIATIONS
3 PARALLELISM OF SINGLE-PROCESSOR COMPUTERS
3.1 Pipelined and vector data processing
3.2 Vector pipelined computers
3.3 Superscalar processors
3.4 Methods of increasing superscalar processor performance
3.5 Multithreading
3.6 Memory access parallelism
4 MULTITASKING AND DATA PROTECTION
4.1 Virtual memory
4.2 Computer multitasking
4.3 Memory protection
4.4 Memory management in the IA-32 architecture
4.5 Virtual memory of the IA-64 architecture
4.6 Problems
5 MULTIPROCESSOR ARCHITECTURE
5.1 Fundamentals of multiprocessor architectures
5.2 Processors with SIMD parallelism
5.3 Multiprocessor architectures
5.4 Graphics accelerator architecture
6 APPLICATION-SPECIFIC PROCESSORS
6.1 Introduction
6.2 Microcontrollers
6.3 DSP microprocessors
6.4 Processors with hardware control. Configurable computers
6.5 System on chip in FPGA
7 COMPUTER ARCHITECTURE PROSPECTS
LIST OF RECOMMENDED LITERATURE
ANNEX 1

ABBREVIATIONS

ALU — Arithmetic and Logic Unit
APIC — Advanced Programmable Interrupt Controller
ARM — Acorn (Advanced) RISC Machines
AS — Architecture State
BHT — Branch History Table
BTB — Branch Target Buffer
CISC — Complex Instruction Set Computer
COMA — Cache-Only Memory Architecture
CPL — Current Privilege Level
CPU — Central Processing Unit
CRC — Cyclic Redundancy Check
CUDA — Compute Unified Device Architecture
DMA — Direct Memory Access
DPL — Descriptor Privilege Level
DRAM — Dynamic Random Access Memory
DRIS — Deferred-scheduling, Register-renaming Instruction Shelf
DSMA — Distributed Shared Memory Architecture
DSP — Digital Signal Processing
GDT — Global Descriptor Table
GPIO — General-Purpose Input-Output
GPU — Graphics Processing Unit
HDD — Hard Disk Drive
HSA — Heterogeneous System Architecture
IA-32 — 32-bit Intel Architecture
IDT — Interrupt Descriptor Table
IoT — Internet of Things
IP — Instruction Pointer
ITLB — Instruction Translation Lookaside Buffer
I2C — Inter-Integrated Circuit
JTAG — Joint Test Action Group
LDT — Local Descriptor Table
LP — Logical Processor
MCM — MultiChip Module
MIMD — Multiple Instruction flows, Multiple Data flows
MIPS — million instructions per second; also Microprocessor without Interlocked Pipeline Stages
MFLOPS — millions of floating-point operations per second
MMU — Memory Management Unit
MMX — MultiMedia eXtension
MPI — Message Passing Interface
MRMW — Multiple Readers, Multiple Writers
MRSW — Multiple Readers, Single Writer
MT mode — Multi-Task mode
MTA — Multi-Threaded Architecture
NUMA — Non-Uniform Memory Access architecture
OpenCL — Open Computing Language
OS — Operating System
PC — Program Counter; also Personal Computer
PKR — Protection Key Register
PRAM — Parallel Random Access Machine
PU — Processing Unit
PVM — Parallel Virtual Machine
RAM — Random Access Memory
RAS — Row Address Strobe; also Return Address Stack
RAT — Register Alias Table
RID — Region Identifier
RISC — Reduced Instruction Set Computer
ROM — Read-Only Memory
RPL — Requested Privilege Level
SAXPY — Single-precision A times X Plus Y
SIMD — Single Instruction flow, Multiple Data flows
SIMT — Single Instruction, Multiple Threads
SM — Streaming Multiprocessor
SMP — Symmetric MultiProcessor
SMT — Simultaneous MultiThreading
SoC — System on a Chip
SPI — Serial Peripheral Interface
SPMD — Single Program, Multiple Data
ST mode — Single-Task mode
T-cache — Trace cache
TLB — Translation Lookaside Buffer
TR — Task Register
TSS — Task State Segment
UART — Universal Asynchronous Receiver-Transmitter
UMA — Uniform Memory Access architecture
VLIW — Very Long Instruction Word
VPN — Virtual Page Number
VSMA — Virtual Shared Memory Architecture
µop — micro-operation

3 PARALLELISM OF SINGLE-PROCESSOR COMPUTERS

3.1 Pipelined and vector data processing

3.1.1 Basic types of parallelism in computers

The main feature of the von Neumann computer architecture reviewed above is the inseparable execution of all actions.
This means that all instructions and microinstructions are indivisible and are executed strictly in sequence. Instruction inseparability, or atomicity, means that the execution of an instruction cannot be suspended so that some other action is performed on elements of the memory in the meantime. For example, the next instruction cannot start running before the previous one has finished; a datum is read from RAM by the running instruction strictly after the previous instruction has written it there; and a set of instructions cannot be executed in several ALUs, i.e., in parallel. Therefore, the speed limit of the von Neumann processor is determined by the instruction execution period, which is bounded by the sum of the delays of the instruction decoder, the ALU, the program RAM, and the data RAM.

This section considers the architectural techniques that relax this atomicity of actions and speed up computation through parallelization. First, some principles of computer design are recalled.

The parallel processing principle. To achieve high performance and (or) high reliability of computations, independent control or calculation steps are distributed among several operating and control machines (or CPUs), which are connected through some switching system.

The pipeline processing principle means that a complex sequence of operations is divided into several successively performed steps (micro-steps) so that, for a flow of such operations, these steps can be performed in parallel (a toy model of this effect is sketched below).

A few definitions are added. Parallelism is the ability to execute independent arithmetic, logic, or service operations simultaneously. There are three main forms of parallelism:
— natural or vector parallelism;
— parallelism of independent branches;
— parallelism of related operations, or scalar parallelism.

The essence of the parallelism of independent branches is that, in a program solving a large problem, independent program branches can be identified and processed in parallel.

Under natural parallelism, above all, the ability to process independent data by the same algorithm is understood. This is, for example, the ability to process the elements of data vectors simultaneously (vector parallelism), or to perform a number of identical or similar tasks for different sets of input data.

A program for the von Neumann machine is a list of operations performed in series. Scalar parallelism means that subsets of these operations can be performed in parallel if there are no dependencies between them (the SAXPY sketch below contrasts the vector and scalar forms).

A synchronization act is a moment in time when a portion of the calculations transfers control to other portions, such as the moment when an
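To make the pipeline principle tangible, the following toy, cycle-by-cycle model in C (an illustration written for this text, not a model of any particular machine) lets STAGES stages work on ITEMS operations: at step t, stage s handles item t - s, so once the pipeline has filled, one finished operation leaves it on every step.

```c
#include <stdio.h>

#define ITEMS  5   /* operations flowing through the pipeline */
#define STAGES 3   /* steps each operation is divided into    */

int main(void) {
    /* At clock step t, stage s is busy with item (t - s): items march
     * through the stages one step per clock, several items in flight. */
    for (int t = 0; t < ITEMS + STAGES - 1; t++) {
        printf("step %d:", t);
        for (int s = 0; s < STAGES; s++) {
            int item = t - s;
            if (item >= 0 && item < ITEMS)
                printf("  stage %d <- item %d", s, item);
        }
        printf("\n");
    }
    /* 5 items of 3 steps each would take 15 steps sequentially;
     * the pipeline finishes them in 5 + 3 - 1 = 7 steps. */
    return 0;
}
```

The total is ITEMS + STAGES - 1 steps instead of ITEMS * STAGES, which is exactly the gain the pipeline principle promises for a flow of identical operations.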
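The difference between vector and scalar parallelism can be illustrated with the SAXPY kernel from the abbreviation list (single-precision a*x + y). The C sketch below is a minimal example written for this text: every iteration of the loop is independent of the others, which is precisely the property a vector machine exploits, while the two independent products at the end show scalar parallelism hidden inside ordinary sequential code.

```c
#include <stdio.h>

#define N 8

/* SAXPY: y[i] = a*x[i] + y[i].  Each iteration is independent of all
 * the others (natural/vector parallelism), so a vector unit may process
 * many elements at once; a compiler is free to vectorize this loop. */
static void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[N] = {8, 7, 6, 5, 4, 3, 2, 1};

    saxpy(2.0f, x, y, N);

    /* Scalar parallelism: the two products below do not depend on each
     * other, so a superscalar processor may compute them in parallel;
     * only the final addition must wait for both. */
    float p1 = y[0] * y[1];
    float p2 = y[2] * y[3];
    float s  = p1 + p2;

    for (int i = 0; i < N; i++)
        printf("y[%d] = %g\n", i, y[i]);
    printf("s = %g\n", s);
    return 0;
}
```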