Matrox Imaging Library (MIL) 9.0 Update 58

-------------------------------------------------------------------------------
Matrox Imaging Library (MIL) 9.0 Update 58.
Release Notes (Whatsnew)
September 2012
(c) Copyright Matrox Electronic Systems Ltd., 1992-2012.
-------------------------------------------------------------------------------

For more information and what's new in processing, display, drivers, Linux,
ActiveMIL, and all MIL 9 updates, consult their respective readme files.

Main table of contents

Section 1 : What's new in MIL 9.0 Update 58
Section 2 : What's new in MIL 9.0 Release 2
Section 3 : What's new in MIL 9.0
Section 4 : Differences between MIL Lite 8.0 and 7.5
Section 5 : Differences between MIL Lite 7.5 and 7.1
Section 6 : Differences between MIL Lite 7.1 and 7.0

-------------------------------------------------------------------------------

Section 1: What's new in MIL 9.0 Update 58

Table of contents for Section 1

1. Overview
2. Mseq API function definition
   2.1 MseqAlloc
   2.2 MseqControl
   2.3 MseqDefine
   2.4 MseqFeed
   2.5 MseqFree
   2.6 MseqGetHookInfo
   2.7 MseqHookFunction
   2.8 MseqInquire
   2.9 MseqProcess
3. Examples
4. Operating system information

1. Overview

The main goal of MIL 9.0 Update 58 is to add a new module called Mseq, which
offers a user-friendly interface for H.264 compression.

2. Mseq API function definition

2.1 MseqAlloc

- Synopsis: Allocate a sequence context.

- Syntax:
     MIL_ID MseqAlloc(
        MIL_ID     SystemID,
        MIL_INT64  SequenceType,
        MIL_INT64  Operation,
        MIL_UINT32 OutputFormat,
        MIL_INT64  InitFlag,
        MIL_ID*    ContextSeqIdPtr)

- Parameters:

  * SystemID: Specifies the identifier of the system on which to allocate
    the sequence context. This parameter must be given a valid system
    identifier.

  * SequenceType: Specifies the type of sequence to allocate.
    Values:
      M_DEFAULT - Specifies the sequence as a context in which the related
                  operation should be performed.

  * Operation: Specifies the operation to be performed by the context.
    Values:
      M_SEQ_COMPRESS   - Specifies that the context should encode the
                         buffers as they are fed.
      M_SEQ_DECOMPRESS - Specifies that the context should decode the
                         source and notify, through a hook function, that
                         the buffer is ready.

  * OutputFormat: Specifies the data format to be generated.
    Values:
      M_DEFAULT - Specifies the type of data to be generated by the context.

  * InitFlag: Specifies the engine to be used for the operation.
    Values:
      M_DEFAULT - Specifies to use the default engine (M_QSV).
      M_QSV +   - Specifies that the engine to be used is the Intel hardware
                  acceleration built into 2nd- and 3rd-generation Intel®
                  Core™ processors. Can be combined with either:
                  M_HARDWARE - Forces the engine to be hardware accelerated.
                  M_SOFTWARE - Forces the engine to use the software
                               implementation.

  * ContextSeqIdPtr: Specifies the address of the variable in which to write
    the sequence context identifier. Since MseqAlloc() also returns the
    sequence context identifier, you can set this parameter to M_NULL. If
    allocation fails, M_NULL is written as the identifier.
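For illustration, here is a minimal allocation sketch in C (not part of the
original release note). It allocates an H.264 compression context on the
default Host system; the MappAlloc()/MsysAlloc() setup calls are the
standard MIL allocation functions, and the variable names are placeholders.

     /* Minimal sketch: allocate an Mseq context for H.264 compression. */
     MIL_ID MilApplication = M_NULL;
     MIL_ID MilSystem      = M_NULL;
     MIL_ID MilSeqContext  = M_NULL;

     MappAlloc(M_DEFAULT, &MilApplication);
     MsysAlloc(M_SYSTEM_HOST, M_DEFAULT, M_DEFAULT, &MilSystem);

     /* Compression context using the default engine (M_QSV). */
     MseqAlloc(MilSystem, M_DEFAULT, M_SEQ_COMPRESS, M_DEFAULT, M_DEFAULT,
               &MilSeqContext);

     if (MilSeqContext == M_NULL)
        {
        /* Allocation failed: M_NULL was written as the identifier. */
        }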
2.2 MseqControl

- Synopsis: Control the context operation.

- Syntax:
     void MseqControl(
        MIL_ID     ContextSeqId,
        MIL_INT    SequenceIndex,
        MIL_INT64  ControlType,
        MIL_DOUBLE ControlValue)

- Parameters:

  * ContextSeqId: Specifies the identifier of the sequence context.

  * SequenceIndex: Specifies the context, or the index of the container.
    Values:
      M_SEQ_SRC(n)  - Specifies the source container at index 'n', where n
                      must be between 0 and 32.
      M_SEQ_DEST(n) - Specifies the destination container at index 'n',
                      where n must be between 0 and 32.
      M_ALL         - Specifies to control all sources and destinations at
                      once.
      M_CONTEXT     - Specifies that the control affects the context itself.

  * ControlType: Specifies the sequence context setting to control. See the
    Parameter associations section for possible values.

  * ControlValue: Specifies the value to assign to the sequence context
    setting. See the Parameter associations section for possible values.

- Parameter associations:

  * M_BUFFER_SAMPLE: Sets the information concerning the input buffer
    format, used to validate the operation parameters.
      - MIL buffer identifier: Specifies the identifier of an M_IMAGE
        buffer used as input for the compression.

  * M_STREAM_BIT_RATE: Sets the expected encoded stream bit rate.
      - Value (MIL_INT): Specifies the expected encoded stream bit rate
        (in kbps).

  * M_STREAM_BIT_RATE_MAX: Sets the maximum allowed encoded stream bit rate.
      - Value (MIL_INT): Specifies the maximum encoded stream bit rate
        (in kbps).

  * M_STREAM_BIT_RATE_MODE: Sets the bit rate control method.
      - M_VARIABLE: Specifies that the engine uses a variable bit rate
        control algorithm.
      - M_CONSTANT: Specifies that the engine uses a constant bit rate
        control algorithm.

  * M_STREAM_FRAME_RATE +: Sets the frame rate of the generated output
    stream.
      - Value (MIL_DOUBLE): Specifies the output stream frame rate. For
        example, for the exact NTSC film frame rate (23.976 fps), use the
        equation 24.0 * (1000.0/1001.0).

  * M_STREAM_GROUP_OF_PICTURE_SIZE: Sets the I-frame interval, also known
    as the group of pictures (GOP) size.
      - Value (MIL_INT): Specifies the distance between 2 I-frames in the
        encoded stream.

  * M_STREAM_LEVEL: Sets the H.264 encoding level.
      - M_LEVEL_1, M_LEVEL_1B, M_LEVEL_1_1, M_LEVEL_1_2, M_LEVEL_1_3,
        M_LEVEL_2, M_LEVEL_2_1, M_LEVEL_2_2, M_LEVEL_3, M_LEVEL_3_1,
        M_LEVEL_3_2, M_LEVEL_4, M_LEVEL_4_1, M_LEVEL_4_2, M_LEVEL_5,
        M_LEVEL_5_1: Specifies the level that the encoding engine should
        use.

  * M_STREAM_PROFILE: Sets the H.264 encoding profile.
      - M_PROFILE_BASELINE, M_PROFILE_MAIN, M_PROFILE_HIGH: Specifies the
        profile that the encoding engine should use.

  * M_STREAM_QUALITY: Sets the H.264 encoding engine speed/quality priority.
      - 0 to 100: Specifies the priority, where 0 is for best speed and 100
        is for best quality.

  * M_SETTING_AUTO_ADJUSTMENT: Sets whether to ignore an error when the
    engine manages to fix a conflict between parameters.
      - M_ENABLE: The conflict is solved automatically and no error is
        raised.
      - M_DISABLE: An error is raised if a conflict occurs between
        parameters.

  Combination constant for M_STREAM_FRAME_RATE:

  * M_VARIABLE: Specifies that the timestamp used for encoding is inquired
    on each call to MseqFeed(). In addition, when closing the file, the
    average frame rate is written (AVI only).

2.3 MseqDefine

- Synopsis: Associate an input or output container with the specified
  sequence context, or control the parameters of an input or output
  container inside a specified sequence context.

- Syntax:
     void MseqDefine(
        MIL_ID      ContextSeqId,
        MIL_INT     SequenceIndex,
        MIL_INT64   SequenceType,
        const void* Param1Ptr,
        MIL_DOUBLE  Param2)

- Parameters:

  * ContextSeqId: Specifies the identifier of the sequence context.

  * SequenceIndex: Specifies the container to access.
    Values:
      M_SEQ_SRC(n)  - Specifies the source container at index 'n', where n
                      must be between 0 and 32.
      M_SEQ_DEST(n) - Specifies the destination container at index 'n',
                      where n must be between 0 and 32.

  * SequenceType:
      M_FILE - Specifies that the container is a file.
               (Param1Ptr data type: MIL_CONST_TEXT_PTR)

  * Param1Ptr: When SequenceType is M_FILE:
      - Value: Specifies the path of the file to be used.

  * Param2:
      - M_FILE_FORMAT_H264: Generates a raw H.264 elementary stream.
      - M_FILE_FORMAT_AVI:  Generates an AVI file containing the encoded
                            stream.
      - M_FILE_FORMAT_MP4:  Generates an MP4 file containing the encoded
                            stream.
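Continuing the sketch above (illustrative only): the context is pointed at a
sample input buffer, given typical stream controls, and an MP4 output file
is defined at destination index 0. 'MilImage', the file name, and the chosen
bit rate and frame rate values are placeholder assumptions.

     /* Describe the input format using a sample M_IMAGE buffer. */
     MseqControl(MilSeqContext, M_CONTEXT, M_BUFFER_SAMPLE, MilImage);

     /* Target a 5000 kbps variable bit rate stream at 30 fps. */
     MseqControl(MilSeqContext, M_CONTEXT, M_STREAM_BIT_RATE, 5000);
     MseqControl(MilSeqContext, M_CONTEXT, M_STREAM_BIT_RATE_MODE,
                 M_VARIABLE);
     MseqControl(MilSeqContext, M_CONTEXT, M_STREAM_FRAME_RATE, 30.0);

     /* Write the encoded stream to an MP4 container. */
     MseqDefine(MilSeqContext, M_SEQ_DEST(0), M_FILE,
                MIL_TEXT("output.mp4"), M_FILE_FORMAT_MP4);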
2.4 MseqFeed

- Synopsis: Enqueue a buffer for the context operation. The call can wait
  for the buffer to be queued if the operation context's FIFO is full.

- Syntax:
     void MseqFeed(
        MIL_ID    ContextSeqId,
        MIL_ID    ImageBufId,
        MIL_INT64 InitFlag)

- Parameters:

  * ContextSeqId: Specifies the identifier of the sequence context.

  * ImageBufId: Specifies the identifier of the buffer to add to the queue
    for encoding.

  * InitFlag: Reserved for future use. Should be set to M_DEFAULT.

2.5 MseqFree

- Synopsis: Free a sequence context.

- Syntax:
     void MseqFree(
        MIL_ID ContextSeqId)

- Parameters:

  * ContextSeqId: Specifies the identifier of the sequence context to free.

2.6 MseqGetHookInfo

- Synopsis: Retrieve information about a sequence hook event.

- Syntax:
     void MseqGetHookInfo(
        MIL_ID  EventID,
        MIL_INT InfoType,
        void*   UserVarPtr)

- Parameters:

  * EventID: Specifies the sequence event identifier received by the
    hook-handler function. MseqGetHookInfo() should only be called within
    the scope of a sequence hook-handler function.

  * InfoType: Specifies a combination of two values: the event type and the
    type of information about the event to return. See the Parameter
    associations section for possible values.

  * UserVarPtr: Specifies the address of the variable in which to write the
    requested information. See the Parameter associations section for the
    expected data type.

- Parameter associations:

  * M_MODIFIED_BUFFER +: Retrieves information about a modified-buffer type
    of event. This event only occurs if the hook-handler function was
    hooked using MseqProcess().
      - UserVarPtr data type: MIL_ID

    Combination constants for M_MODIFIED_BUFFER:
    You must add one of the following values to the above-mentioned value
    to specify the type of information to return.
      - M_BUFFER_ID: Returns the MIL identifier of the buffer modified by
        MseqProcess().
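To close the sketch (illustrative only): frames are fed to the context and a
hook handler retrieves the identifier of each encoded buffer via the
documented MseqGetHookInfo() call. MseqProcess() and MseqHookFunction() are
listed in sections 2.7 and 2.9 but their definitions fall outside this
excerpt, so the M_START/M_STOP/M_ASYNCHRONOUS flags and the hook prototype
shown here are assumptions based on usual MIL hook conventions; FrameBuf and
NumFrames are placeholders for frames grabbed elsewhere.

     /* Hook handler: called when a buffer has been processed. The MIL_INT
        MFTYPE (HookType, EventId, UserDataPtr) prototype is the usual MIL
        hook convention, assumed here. */
     MIL_INT MFTYPE EncodedFrameHook(MIL_INT HookType, MIL_ID EventId,
                                     void* UserDataPtr)
        {
        MIL_ID ModifiedBufId = M_NULL;

        /* Retrieve the MIL identifier of the modified buffer. */
        MseqGetHookInfo(EventId, M_MODIFIED_BUFFER + M_BUFFER_ID,
                        &ModifiedBufId);
        return 0;
        }

     /* The handler is attached through the hook mechanism defined in
        sections 2.7/2.9, which are not included in this excerpt. */
     MIL_INT i;
     MseqProcess(MilSeqContext, M_START, M_ASYNCHRONOUS);  /* assumed flags */
     for (i = 0; i < NumFrames; i++)
        MseqFeed(MilSeqContext, FrameBuf[i], M_DEFAULT);   /* may block if
                                                              the FIFO is
                                                              full */
     MseqProcess(MilSeqContext, M_STOP, M_NULL);           /* assumed flags */

     MseqFree(MilSeqContext);
     MsysFree(MilSystem);
     MappFree(MilApplication);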