Software Testing Techniques in Software Development Life Cycle
D.Meenakshi et al, / (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 5 (3), 2014, 3729-3731

D.Meenakshi, J. Swami Naik, M. Raghavendra Reddy
Computer Science Department, G. Pulla Reddy Engineering College, Kurnool, Andhra Pradesh, India

Abstract: Software testing provides a means to minimize errors, cut maintenance and decrease overall software costs. Different software development and testing methodologies, tools, and techniques have emerged to enhance software quality. At every phase of the software development life cycle, testing is performed with different tools and techniques. One of the major problems within the software testing area is how to get a suitable set of cases to test a software system. The software should assure maximum effectiveness with the least possible number of test cases. At present there are many testing techniques available for generating test cases.

Keywords: Software Testing, Level of Testing, Testing Technique, Testing Process

I. INTRODUCTION
Software testing is as old as the hills in the history of digital computers. Testing the software is the only means for assessing the quality of software. In view of the fact that testing typically consumes 40-50% of development effort, and requires more effort for systems that need more reliability, it is an important part of software engineering. Contemporary software systems must be more reliable and correct. Automatic methods are used for ensuring software correctness, ranging from static techniques, such as (software) model checking or static analysis, to dynamic techniques, such as testing. All these techniques have strengths and weaknesses: model checking (with abstraction) is automatic and exhaustive, but may suffer from scalability issues. Static analysis, on the other hand, scales to very large programs but may give too many false warnings, while testing alone may miss significant errors, since it is intrinsically incomplete.

Software Testing:
Software testing is not only error detection; testing software means operating the programs under controlled conditions, to (1) check whether they behave "as specified", (2) detect errors, and (3) validate that what has been specified is what the user actually wants.

Activities in software testing:
1. Verification is defined as checking or testing of items, including software, for conformance and reliability by evaluating the results against requirements specified by the user. [Verification: Are we building the product right?]
2. Error Detection: Testing makes an attempt to make things go wrong and to determine whether things happen when they shouldn't, or don't happen when they should.
3. Validation looks at system correctness, i.e. it is defined as checking that what has been specified is what the user actually wanted. [Validation: Are we building the right product?]

The purpose of testing is verification, validation and error detection in order to find problems, and the purpose of finding those problems is to get them fixed. The most generic software problems are: inadequate software performance; data searches that yield incorrect results; incorrect data entry and unsuccessful data edits; wrong coding or implementation of business rules; incorrect calculations; ineffective data edits; incorrect processing of data relationships; incorrect or inadequate interfaces with other programs; inadequate performance and security controls; incorrect file handling; insufficient support of business needs; unreliable results or performance; confusing or misleading data; poor software usability by end users; obsolete software; inconsistent processing.

Terminology:
Mistake – a manual action that produces an incorrect result.
Fault [or Defect] – an incorrect step, process, or data definition in a program.
Failure – the inability of a system or component to execute its required function within the specified performance requirement.
Error – the variation between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
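The terminology above can be made concrete with a small sketch. The `average` function below is a hypothetical example, not from the paper: the fault is the incorrect step in the code, the failure is the wrong observable behaviour when the unit runs, and the error is the variation between the computed and the correct value.

```python
# Hypothetical buggy function, used only to illustrate the terminology.
def average(xs):
    # Fault (defect): an incorrect step in the program; the programmer
    # wrote len(xs) - 1 where len(xs) was intended.
    return sum(xs) / (len(xs) - 1)

correct = sum([2, 4, 6]) / 3    # true, specified value: 4.0
computed = average([2, 4, 6])   # failure: the unit returns 6.0 instead
error = computed - correct      # error: variation from the correct value, 2.0
print(computed, correct, error)  # prints: 6.0 4.0 2.0
```

Note that the single fault stays in the code whether or not it is executed; the failure and the error only become observable when a test actually exercises the faulty step.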
Specification – a document that specifies, in a complete, accurate, verifiable manner, the requirements, design, performance, or other features of a system or component, and often the procedures for determining whether these provisions have been satisfied.

We observe errors, which can often be connected with failures; but the decisive cause of the fault is often very hard to find.

[Figure: the three activities in software testing – verification, error detection and validation.]

II. OBJECTIVES:
To verify that the system is "fit for function".
To confirm that the system meets the requirements and can be executed successfully in the intended environment.
Executing a program with the intention of finding an error.
To check whether the system does what it is expected to do.
A good test case is one that has a high probability of detecting an error.
A superior test is not redundant.
A successful test is one that uncovers an as yet undiscovered error.
A good test should be "best of breed".

Testing type | Specification | Scope | Opacity | Who does it?
Unit | Low-level design, actual code | Classes | White box | Programmer
Integration | Low-level and high-level design | Multiple classes | White and black box | Programmer
Function | High-level design | Whole product | Black box | Independent test group
System | Requirement analysis | Whole product in environment | Black box | Independent test group
Acceptance | Requirement analysis | Whole product in environment | Black box | Customer
Beta | Ad hoc | Whole product in environment | Black box | Customer
Regression | Changed software | Whole product in environment | White and black box | Independent test group, programmer
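The objective that a good test case should have a high probability of detecting an error can be sketched as follows. The `can_vote` function and its age-18 boundary are hypothetical, not from the paper; the point is that faults tend to hide at condition boundaries, so a boundary-probing case is a stronger test than a redundant one far away.

```python
# Hypothetical unit under test: an eligibility check with a boundary at 18.
def can_vote(age):
    return age >= 18

# Strong test cases probe the boundary, where a typical fault
# (writing '>' instead of '>=') would be detected.
assert can_vote(18) is True     # exact boundary value
assert can_vote(17) is False    # just below the boundary

# A case far from the boundary is near-redundant: it would still pass
# with the '>' fault, so it has a low probability of detecting it.
assert can_vote(40) is True
```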
III. SOFTWARE TESTING LIFE CYCLE – PHASES

Phase | Activities
Requirement study | The testing cycle starts with the study of the user's requirements; understanding the requirements is very essential for testing the product.
Design | Component identification; test specification design; test specification review.
Test execution | Construction; code review; test execution and evaluation; performance and simulation testing.
Test closure | Deployment; test summary report; project de-brief; project documentation.
Test process analysis | Analysis done on the reports, and improving the application's performance by implementing new technology and additional …

IV. LEVELS OF TESTING:

Unit testing:
The initial level of testing. Tests are done on particular methods or components, which requires information about the internal program design and code. Tests are done by programmers (not by testers). Objectives: to test the function of a program or unit of code such as a program or module; to test internal logic; to verify internal design; to test path and condition coverage; to test exception conditions and error handling.

When | After modules are coded
Who | Developer
Input | Internal application design; master test plan; unit test plan
Output | Unit test report

Acceptance testing:
To check whether the software meets the customer's requirements or not.

Beta testing:
Testing the software in the customer's environment.

Regression testing:
Testing done on the software after changes or enhancements are made.

V. TEST PLAN
Purpose of preparing a test plan:
To validate the acceptability of a software product.
To help people outside the test group understand the 'why' and 'how' of product validation.
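The unit testing objectives listed in section IV (internal logic, path and condition coverage, exception conditions and error handling) can be sketched with Python's unittest framework. The `safe_divide` function is a hypothetical unit, not from the paper.

```python
import unittest

# Hypothetical unit under test: two normal paths and one exception condition.
def safe_divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class SafeDivideTest(unittest.TestCase):
    def test_normal_path(self):
        # Exercises the main path through the unit's internal logic.
        self.assertEqual(safe_divide(10, 2), 5)

    def test_negative_operands(self):
        # A further condition on the same path: sign handling.
        self.assertEqual(safe_divide(-9, 3), -3)

    def test_exception_condition(self):
        # Exercises the error-handling path.
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

One test method per path keeps failures localized: when a single method fails, it names the exact condition or error-handling branch that broke.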
Methods | White box testing techniques; test coverage techniques
Tools | Debug; re-structure; code analyzers; path/statement coverage tools

A test plan should be thorough enough to describe the overall coverage of the tests to be conducted.

Scope:
The areas to be tested by the QA team.
Specify the areas which are out of scope (screens, database, mainframe processes, etc.).

Test approach:
Details on how the testing is to be performed.
Any specific strategy to be followed for testing (including configuration management).

Incremental integration testing:
It is done based on design. Continuous testing of an application as and when new functionality is added. The application's functional aspects are required to be independent enough to work separately before completion of development. Done by programmers or testers.

Top-down integration: develop the skeleton of the system and integrate components into it.
Bottom-up integration: integrate infrastructure components, then add functional components.
To simplify error localization, systems should be integrated incrementally; integration testing also serves to technically verify proper interfaces.

VI. CONCLUSION:
Testing can show the presence of faults in a system; it cannot prove that there are no remaining faults. Component developers are responsible for component testing; system testing is the responsibility of a separate team. Integration testing tests increments of the system; release testing involves testing a system that is to be released to a customer. Use experience and guidelines to design test cases in defect testing. Interface testing is designed to discover defects in the interfaces of composite components. Equivalence partitioning is a way of discovering test cases: all cases in a partition should be processed in the same way.
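The equivalence-partitioning idea mentioned in the conclusion can be sketched as follows. The exam-score `grade` function and its partitions are a hypothetical illustration, not from the paper: the input domain is split into partitions whose members the program should process in the same way, so one representative per partition (plus its boundary values) is enough.

```python
# Hypothetical function: map an exam score in 0..100 to pass/fail/invalid.
def grade(score):
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 50 else "fail"

# Three equivalence partitions of the input domain. Every member of a
# partition should be processed the same way, so representatives and
# boundary values stand in for the whole partition.
partitions = {
    "invalid": [-1, 101],       # outside 0..100
    "fail":    [0, 30, 49],     # 0..49
    "pass":    [50, 75, 100],   # 50..100
}
for expected, representatives in partitions.items():
    for score in representatives:
        assert grade(score) == expected
print("all partition representatives behave as specified")
```

Choosing representatives at partition boundaries (49, 50, 100, 101) combines equivalence partitioning with the boundary-probing idea from the objectives section, since both techniques aim at the inputs most likely to reveal faults.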