
Parallel Programming with Microsoft Visual C++®
Design Patterns for Decomposition and Coordination on Multicore Architectures

Colin Campbell
Ade Miller

Forewords by Tony Hey and Herb Sutter

Your CPU meter shows a problem. One core is running at 100 percent, but all the other cores are idle. Your application is CPU-bound, but you are using only a fraction of the computing power of your multicore system. Is there a way to get better performance?

The answer, in a nutshell, is parallel programming. Where you once would have written the kind of sequential code that is familiar to all programmers, you now find that this no longer meets your performance goals. To use your system's CPU resources efficiently, you need to split your application into pieces that can run at the same time.

Of course, this is easier said than done. Parallel programming has a reputation for being the domain of experts and a minefield of subtle, hard-to-reproduce software defects. Everyone seems to have a favorite story about a parallel program that did not behave as expected because of a mysterious bug.

These stories should inspire a healthy respect for the difficulty of the problems you will face in writing your own parallel programs. Fortunately, help has arrived. The Parallel Patterns Library (PPL) and the Asynchronous Agents Library introduce a new programming model for parallelism that significantly simplifies the job. Behind the scenes are sophisticated algorithms that dynamically distribute computations on multicore architectures. In addition, the Microsoft® Visual Studio® 2010 development system includes debugging and analysis tools to support the new parallel programming model.

Proven design patterns are another source of help. This guide introduces you to the most important and frequently used patterns of parallel programming and provides executable code samples for them, using PPL. When thinking about where to begin, a good place to start is to review the patterns in this book. See if your problem has any attributes that match the six patterns presented in the following chapters. If it does, delve more deeply into the relevant pattern or patterns and study the sample code.

patterns & practices
Proven practices for predictable results

Save time and reduce risk on your software development projects by incorporating patterns & practices, Microsoft's applied engineering guidance that includes both production quality source code and documentation.

The guidance is designed to help software development teams:

• Make critical design and technology selection decisions by highlighting the appropriate solution architectures, technologies, and Microsoft products for common scenarios

• Understand the most important concepts needed for success by explaining the relevant patterns and prescribing the important practices

• Get started with a proven code base by providing thoroughly tested software and source that embodies Microsoft's recommendations

The patterns & practices team consists of experienced architects, developers, writers, and testers. We work openly with the developer community and industry experts, on every project, to ensure that some of the best minds in the industry have contributed to and reviewed the guidance as it is being developed.

We also love our role as the bridge between the real world needs of our customers and the wide range of products and technologies that Microsoft provides.

For more information explore: msdn.microsoft.com/practices
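As a taste of the programming model the cover describes, here is a minimal sketch of a PPL parallel loop. It is not drawn from the book's samples: the vector-of-squares computation is an invented placeholder. It uses only concurrency::parallel_for from <ppl.h>, which ships with Visual Studio 2010 and later.

// Minimal PPL sketch (assumes Visual Studio 2010 or later).
// The squares computation is an illustrative placeholder.
#include <ppl.h>      // concurrency::parallel_for
#include <vector>
#include <cstdio>

int main()
{
    const int n = 8;
    std::vector<int> squares(n);

    // parallel_for splits the index range [0, n) into pieces that the
    // Concurrency Runtime distributes across the available cores.
    concurrency::parallel_for(0, n, [&squares](int i)
    {
        squares[i] = i * i;   // each index is written by exactly one task
    });

    for (int i = 0; i < n; ++i)
        std::printf("%d squared is %d\n", i, squares[i]);

    return 0;
}

Note that the loop body writes to a distinct element per index, so no locking is needed; the following chapters discuss what happens when iterations are not independent.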
Parallel Programming with Microsoft Visual C++®
Design Patterns for Decomposition and Coordination on Multicore Architectures

Colin Campbell
Ade Miller

ISBN 978-0-7356-5175-3

This document is provided "as-is." Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it. Unless otherwise noted, the companies, organizations, products, domain names, email addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, email address, logo, person, place, or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2011 Microsoft Corporation. All rights reserved.

Microsoft, MSDN, Visual Basic, Visual C++, Visual C#, Visual Studio, Windows, Windows Live, Windows Server, and Windows Vista are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.
Contents

Foreword by Tony Hey xi
Foreword by Herb Sutter xiii

Preface xv
  Who This Book Is For xv
  Why This Book Is Pertinent Now xvi
  What You Need to Use the Code xvi
  How to Use This Book xvii
    Introduction xviii
    Parallelism with Control Dependencies Only xviii
    Parallelism with Control and Data Dependencies xviii
    Dynamic Task Parallelism and Pipelines xviii
    Supporting Material xix
  What Is Not Covered xx
  Goals xx

Acknowledgments xxi

1 Introduction 1
  The Importance of Potential Parallelism 2
  Decomposition, Coordination, and Scalable Sharing 3
    Understanding Tasks 3
    Coordinating Tasks 4
    Scalable Sharing of Data 5
    Design Approaches 6
  Selecting the Right Pattern 7
  A Word about Terminology 8
  The Limits of Parallelism 8
  A Few Tips 10
  Exercises 11
  For More Information 11

2 Parallel Loops 13
  The Basics 14
    Parallel for Loops 14
    parallel_for_each 15
    What to Expect 16
  An Example 17
    Sequential Credit Review Example 18
    Credit Review Example Using parallel_for_each 18
    Performance Comparison 19
  Variations 19
    Breaking out of Loops Early 19
    Exception Handling 20
    Special Handling of Small Loop Bodies 21
    Controlling the Degree of Parallelism 22
  Anti-Patterns 23
    Hidden Loop Body Dependencies 23
    Small Loop Bodies with Few Iterations 23
    Duplicates in the Input Enumeration 23
    Scheduling Interactions with Cooperative Blocking 24
  Related Patterns 24
  Exercises 24
  Further Reading 25

3 Parallel Tasks 27
  The Basics 28
  An Example 29
  Variations 31
    Coordinating Tasks with Cooperative Blocking 31
    Canceling a Task Group 33
    Handling Exceptions 35
    Speculative Execution 36
  Anti-Patterns 37
    Variables Captured by Closures 37
    Unintended Propagation of Cancellation Requests 38
    The Cost of Synchronization 39
  Design Notes 39
    Task Group Calling Conventions 39
    Tasks and Threads 40
    How Tasks Are Scheduled 40
    Structured Task Groups and Task Handles 41
    Lightweight Tasks 41
  Exercises 42
  Further Reading 42

4 Parallel Aggregation 45
  The Basics 46
  An Example 49
  Variations 55
    Considerations for Small Loop Bodies 55
    Other Uses for Combinable Objects 55
  Design Notes 55
  Related Patterns 57
  Exercises 58
  Further Reading 58

5 Futures 61
  The Basics 62
    Futures 63
  Example: The Adatum Financial Dashboard 65
    The Business Objects 66
    The Analysis Engine 67
  Variations 70
    Canceling Futures 70
    Removing Bottlenecks 70
    Modifying the Graph at Run Time 71
  Design Notes 72
    Decomposition into Futures 72
    Functional Style 72
  Related Patterns 72
    Pipeline Pattern 73
    Master/Worker Pattern 73
    Dynamic Task Parallelism Pattern 73
    Discrete Event Pattern 73
  Exercises 73

6 Dynamic Task Parallelism 75
  The Basics 75
  An Example 77
  Variations 80
    Parallel While-Not-Empty 80
    Adding Tasks to a Pending Wait Context 81
  Exercises 83
  Further Reading 83

7 Pipelines 85
  Types of Messaging Blocks 86
  The Basics 86
  An Example 92
    Sequential Image Processing 92
    The Image Pipeline 94
    Performance Characteristics 96
  Variations 97
    Asynchronous Pipelines 97
    Canceling a Pipeline 101
    Handling Pipeline Exceptions 102
    Load Balancing Using Multiple Producers 104
    Pipelines and Streams 106
  Anti-Patterns 107
    Copying Large Amounts of Data between Pipeline Stages 107
    Pipeline Stages that Are Too Small 107
    Forgetting to Use Message Passing for Isolation 107
    Infinite Waits 107
    Unbounded Queue Growth 107
  More Information 107
  Design Notes 108
  Related Patterns 109
  Exercises 109
  Further Reading 109

Appendix A: The Task Scheduler and Resource Manager 111
  Resource Manager 113
    Why It's Needed 113
    How Resource Management Works 113
    Dynamic Resource Management 115
    Oversubscribing Cores 116
    Querying the Environment 116
  Kinds of Tasks 116
    Lightweight Tasks 117
    Tasks Created Using PPL 117
  Task Schedulers 118
    Managing Task Schedulers 118
    Creating and Attaching a Task Scheduler 119
    Detaching a Task Scheduler 120
    Destroying a Task Scheduler 120
    Scenarios for Using Multiple Task Schedulers 120
    Implementing a Custom Scheduling Component 121
  The Scheduling Algorithm 121
    Schedule Groups 121
    Adding Tasks 122
    Running Tasks 123
    Enhanced Locality Mode 124
    Forward Progress Mode