Finding and Tolerating Concurrency Bugs

Finding and Tolerating Concurrency Bugs

by

Jie Yu

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science and Engineering) in The University of Michigan, 2013

Doctoral Committee:
    Assistant Professor Satish Narayanasamy, Chair
    Associate Professor Robert Dick
    Associate Professor Jason Nelson Flinn
    Professor Scott Mahlke
    Cristiano Pereira

DEDICATION

To my family.

ACKNOWLEDGEMENTS

First and foremost, I would like to thank my advisor, Professor Satish Narayanasamy, for his endless support during the past five years. It has been an honor to work with him and to be one of his first batch of PhD students. From him, I have learnt how to write papers, make presentations, and, most importantly, be confident and strong. Satish has given me the freedom to pursue various projects that match my interests. He has also provided countless insightful discussions about my research during our weekly meetings. I appreciate all his contributions of time, ideas, and funding to make my PhD experience productive and stimulating.

I also would like to thank the other members of my PhD dissertation committee, Professor Robert Dick, Professor Jason N. Flinn, Professor Scott Mahlke and Dr. Cristiano Pereira. Reviewing a thesis is no easy task, and I am grateful for their invaluable comments and constructive criticisms that greatly improved my dissertation.

This dissertation is partially funded by the National Science Foundation (NSF) and Intel Corporation, and I would like to thank both organizations for their generous support. I would like to thank Dr. Cristiano Pereira and Dr. Gilles Pokam from Intel for their valuable discussions and insights from industry perspectives.

I also would like to thank all my labmates. I would like to thank Dongyoon Lee for sharing ideas and brainstorming with me throughout my PhD studies, Abhayendra Singh for clarifying my uncertainties about memory consistency issues on many occasions, and Chun-Hung Hsiao for working with me on the smartphone project and offering me unremitting assistance. Thanks also go to other members of the lab, including Shaizeen Aga and Gaurav Chadha, for making the lab such a comfortable home for me.

My time at Michigan was made enjoyable in large part due to the many friends and groups that became a part of my life. I would like to thank Lujun Fang and Yunjing Xu for always hanging out with me, sharing their visions and big ideas. Special thanks go to Yunjing for taking me to the hospital at midnight when I was bleeding badly. I would like to thank Lujun Fang and Yudong Gao for being my roommates for years, making a sweet home for me. Special thanks go to Yudong for playing basketball with me all the time. Thanks also go to my friends who have not been mentioned yet, including Junxian Huang, Yi Li, Feng Qian, Li Qian, Zhiyun Qian, Zhaoguang Wang, Qiang Xu and Xinyu Zhang, for providing the support and friendship that I needed.

Lastly, I would like to thank my family for their unconditional love and support. My hard-working parents, Jianzu Yu and Shuyan Li, have sacrificed their lives for me and provided unconditional love and care. I love them so much, and I would not have made it this far without them. And most of all, I would like to thank my loving, supportive and encouraging wife, Panmei Chen. For this PhD, we have been apart for almost three years. This dissertation would not be possible without the love and support from her.

TABLE OF CONTENTS

DEDICATION .......... ii
ACKNOWLEDGEMENTS .......... iii
LIST OF FIGURES .......... viii
LIST OF TABLES .......... x
ABSTRACT .......... xi

CHAPTER

I. Introduction .......... 1
    1.1 Two Hypotheses and Interleaving Idioms .......... 3
    1.2 Finding Concurrency Bugs by Exposing Untested Interleavings .......... 5
    1.3 Tolerating Concurrency Bugs by Avoiding Untested Interleavings .......... 6
    1.4 Contributions .......... 9
    1.5 Structure .......... 11

II. Background and Related Work .......... 12
    2.1 Detecting Concurrency Bugs .......... 12
        2.1.1 Data Race Detection .......... 13
        2.1.2 Atomicity Violation Detection .......... 14
    2.2 Exposing Concurrency Bugs .......... 14
        2.2.1 Coverage-Driven Testing .......... 14
        2.2.2 Stress Testing and Random Testing .......... 15
        2.2.3 Systematic Testing .......... 15
        2.2.4 Active Testing .......... 16
        2.2.5 Test Input Generation .......... 17
    2.3 Tolerating Concurrency Bugs .......... 17

III. Encoding Tested Interleavings Using Interleaving Idioms .......... 20
    3.1 Two Hypotheses about Concurrency Bugs .......... 21
    3.2 Interleaving Idiom .......... 22
    3.3 Canonical Idioms .......... 24
    3.4 Relation with Concurrency Bugs .......... 25
    3.5 Empirical Analysis .......... 27

IV. Exposing Untested Interleavings: Maple .......... 30
    4.1 Overview .......... 30
    4.2 Online Profiling for Predicting iRoots .......... 34
        4.2.1 Notations and Terminology .......... 34
        4.2.2 Naive Approach .......... 35
        4.2.3 Non-Mutex Happens-Before Analysis .......... 35
        4.2.4 Mutual Exclusion Analysis .......... 36
        4.2.5 Online Profiling Algorithm .......... 38
        4.2.6 Predicting iRoots for Compound Idioms .......... 41
    4.3 Actively Testing Predicted iRoots .......... 43
        4.3.1 A Naive Approach .......... 43
        4.3.2 Non-preemptive and Strict Priority Scheduler .......... 44
        4.3.3 Complementary Schedules .......... 46
        4.3.4 Watch Mode Optimization .......... 47
        4.3.5 Candidate Arbitration .......... 49
        4.3.6 Dealing with Asynchronous External Events .......... 50
        4.3.7 Compound Idioms .......... 52
        4.3.8 Exposing Pre-conditions .......... 53
    4.4 Memoization of iRoots .......... 54
    4.5 Evaluation .......... 55
        4.5.1 Maple Configuration .......... 55
        4.5.2 Usage Scenario 1: Exposing Bugs with Bug-Triggering Inputs .......... 56
        4.5.3 Usage Scenario 2: Coverage-Driven Testing .......... 61
        4.5.4 Characteristics of Maple .......... 66
    4.6 Summary .......... 68

V. Avoiding Untested Interleavings I: PSet .......... 69
    5.1 Overview .......... 69
    5.2 Encoding Tested Interleavings .......... 73
        5.2.1 Predecessor Sets (PSets) .......... 73
        5.2.2 Effectiveness of PSets in Avoiding Concurrency Bugs .......... 75
        5.2.3 Deriving and Encoding PSet Constraints .......... 78
        5.2.4 Limitations .......... 79
    5.3 Enforcing Tested Interleavings .......... 79
        5.3.1 Detecting and Enforcing PSet Constraints .......... 80
        5.3.2 Architectural Support .......... 81
    5.4 Evaluation .......... 83
        5.4.1 Bug Avoidance Capability .......... 84
        5.4.2 Learning PSet Constraints .......... 85
        5.4.3 PSet Constraint Violations in Bug-Free Executions .......... 88
        5.4.4 Memory Space Overhead .......... 91
    5.5 Summary .......... 92

VI. Avoiding Untested Interleavings II: LifeTx .......... 94
    6.1 Overview .......... 94
    6.2 Algorithm for Determining LifeTxes .......... 98
        6.2.1 Lifeguard Transactions (LifeTxes) and Profiling Algorithm Overview .......... 98
        6.2.2 Checking Conflict Serializability for LifeTxes .......... 100
        6.2.3 Splitting LifeTxes on a Conflict .......... 102
        6.2.4 Practical Issues .......... 104
        6.2.5 Discussion and Limitations .......... 106
    6.3 Runtime Support for LifeTxes .......... 107
        6.3.1 LifeTx-Stall Design .......... 108
        6.3.2 LifeTx-CS Design .......... 112
    6.4 Evaluation .......... 113
        6.4.1 Experimental Setup .......... 113
        6.4.2 Learning LifeTxes .......... 116
        6.4.3 Characteristics of LifeTxes .......... 116
        6.4.4 Bug Avoidance Capability .......... 119
        6.4.5 Performance Study .......... 121
    6.5 Summary .......... 124

VII. Future Work .......... 126

VIII. Conclusion .......... 131

BIBLIOGRAPHY .......... 134

LIST OF FIGURES

3.1 The canonical idioms for two inter-thread dependencies and two threads. .......... 25
3.2 An idiom1 concurrency bug. .......... 26
3.3 A real idiom4 concurrency bug from MySQL. .......... 27
4.1 Overview of the framework. .......... 32
4.2 Infeasible iRoots due to non-mutex happens-before relations. .......... 35
4.3 Infeasible iRoots due to mutual exclusion. .......... 37
4.4 Predicting iRoots for compound idioms. .......... 42
4.5 The ideal situation for exposing an idiom1 iRoot A ⇒ B. .......... 44
4.6 The naive approach could deadlock when exposing an idiom1 iRoot A ⇒ B. .......... 44
4.7 The situation in which the watch mode is turned on for exposing an idiom1 iRoot A ⇒ B. .......... 48
4.8 Problem with asynchronous external events. .......... 51
4.9 Exposing a compound idiom iRoot A ⇒ B ... C ⇒ D. .......... 51
4.10 A pre-condition exists when trying to expose iRoot A ⇒ B. .......... 51
4.11 Comparison with different testing methods using the same amount of time. M stands for Maple, P stands for PCT, L stands for PCTLarge, D stands for RandDelay, and C stands for CHESS. .......... 63
4.12 Comparison with different testing methods. X-axis is the number of test runs, and Y-axis is the total number of iRoots exposed. .......... 65
4.13 Memoization saves testing time. Y-axis is normalized to the execution time without memoization. .......... 66
5.1 PSet constraints for an interleaving. .......... 74
5.2 A