Software testing

Contents

Articles

Introduction
    Software testing

Black-box testing
    Black-box testing
    Exploratory testing
    San Francisco depot
    Session-based testing
    Scenario testing
    Equivalence partitioning
    Boundary-value analysis
    All-pairs testing
    Fuzz testing
    Cause-effect graph
    Model-based testing
    Web testing
    Installation testing

White-box testing
    White-box testing
    Code coverage
    Modified Condition/Decision Coverage
    Fault injection
    Bebugging
    Mutation testing

Testing of non functional software aspects
    Non-functional testing
    Software performance testing
    Stress testing
    Load testing
    Volume testing
    Scalability testing
    Compatibility testing
    Portability testing
    Security testing
    Attack patterns
    Localization testing
    Pseudolocalization
    Recovery testing
    Soak testing
    Characterization test

Unit testing
    Unit testing
    Self-testing code
    Test fixture
    Method stub
    Mock object
    Lazy systematic unit testing
    Test Anything Protocol
    xUnit
    List of unit testing frameworks
    SUnit
    JUnit
    CppUnit
    Test::More
    NUnit
    NUnitAsp
    csUnit
    HtmlUnit

Test automation
    Test automation
    Test bench
    Test execution engine
    Test stubs
    Testware
    Test automation framework
    Data-driven testing
    Modularity-driven testing
    Keyword-driven testing
    Hybrid testing
    Lightweight software test automation

Testing process
    Software testing controversies
    Test-driven development
    Agile testing
    Bug bash
    Pair Testing
    Manual testing
    Regression testing
    Ad hoc testing
    Sanity testing
    Integration testing
    System testing
    System integration testing
    Acceptance testing
    Risk-based testing
    Software testing outsourcing
    Tester driven development
    Test effort

Testing artefacts
    IEEE 829
    Test strategy
    Test plan
    Traceability matrix
    Test case
    Test data
    Test suite
    Test script
    Test harness

Static testing
    Static testing
    Software review
    Software peer review
    Software audit review
    Software technical review
    Management review
    Software inspection
    Fagan inspection
    Software walkthrough
    Code review
    Automated code review
    Code reviewing software
    Static code analysis
    List of tools for static code analysis

GUI testing and review
    GUI software testing
    Usability testing
    Think aloud protocol
    Usability inspection
    Cognitive walkthrough
    Heuristic evaluation
    Pluralistic walkthrough
    Comparison of usability evaluation methods

References
    Article Sources and Contributors
    Image Sources, Licenses and Contributors

Article Licenses
    License

Introduction

Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a software program/application/product:

1. meets the requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

Overview

Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison of the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from the audience for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[4]

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[5] Although his attention was on breakage testing ("a successful test is one that finds a bug"[5][6]), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. In 1988, Dave Gelperin and William C. Hetzel classified the phases and goals of software testing into the following stages:[7]

• Until 1956 - Debugging oriented[8]
• 1957–1978 - Demonstration oriented[9]
• 1979–1982 - Destruction oriented[10]
• 1983–1987 - Evaluation oriented[11]
• 1988–2000 - Prevention oriented[12]

Software testing topics

Scope

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions.[13] The scope of software testing often includes the examination of code, the execution of that code in various environments and conditions, and the examination of the aspects of the code: does it do what it is supposed to do, and does it do what it needs to do?
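To make the idea of an oracle more concrete, the following minimal sketch (written in Python; the leap-year routine and the function names are invented for illustration and are not drawn from any particular product) checks a small piece of code against two of the oracles listed above: the specification itself and a comparable, trusted implementation.

```python
import calendar

def is_leap_year(year):
    # Hypothetical function under test, implementing the Gregorian leap-year rule.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year_against_oracles():
    # Oracle 1: expected results stated directly by the specification (the Gregorian rule).
    assert is_leap_year(2000) is True    # divisible by 400
    assert is_leap_year(1900) is False   # divisible by 100 but not by 400
    assert is_leap_year(2012) is True    # divisible by 4 only
    assert is_leap_year(2011) is False   # not divisible by 4
    # Oracle 2: agreement with a comparable, independent implementation.
    for year in range(1900, 2101):
        assert is_leap_year(year) == calendar.isleap(year)

if __name__ == "__main__":
    test_leap_year_against_oracles()
    print("all oracle checks passed")
```

In both cases the oracle only supplies a way to recognize a problem; the test reports a discrepancy, it does not explain or repair it.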
In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[14]

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work".

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer.[15] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[16] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interacting with different software.[16] A single defect may result in a wide range of failure symptoms.

Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[17] The following table shows the cost of fixing a defect depending on the stage at which it was found.[18] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review.

Cost to fix a defect

                     Time detected
Time introduced      Requirements | Architecture | Construction | System test | Post-release
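Returning to the chain from error to defect to failure described under "Defects and failures": the sketch below (in Python, with invented names, purely for illustration) contains a latent defect that produces a failure only when one specific condition is executed; for typical inputs the fault remains invisible, much like a defect in dead code.

```python
def average(values):
    # Hypothetical routine with a defect: it does not guard against an empty
    # list, so the fault stays latent until that particular input is executed.
    return sum(values) / len(values)   # raises ZeroDivisionError for []

def test_typical_input():
    # The defect is present, but this path never executes it: no failure is observed.
    assert average([2, 4, 6]) == 4

def test_empty_input():
    # Executing the defective path turns the latent defect into an observable failure.
    try:
        average([])
    except ZeroDivisionError:
        return                          # the failure the defect can cause
    raise AssertionError("expected average([]) to fail")

if __name__ == "__main__":
    test_typical_input()
    test_empty_input()
    print("defect present in both runs, but only the empty-input condition failed")
```

A change in environment or source data, such as a new upstream system that starts delivering empty data sets, is exactly the kind of change that can expose such a defect as a failure.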