Different models (Waterfall, V-Model, Spiral, etc.); maturity models.

1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is usually performed by the customer.

2. Accessibility Testing: Type of testing which determines the usability of a product for people with disabilities (deaf, blind, mentally disabled, etc.). The evaluation process is conducted by persons having disabilities.

3. Active Testing: Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing teams.

4. Agile Testing: Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. It is usually performed by the QA teams.

5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The evaluation process is conducted by testing teams.

6. Ad-hoc Testing: Testing performed without planning and documentation - the tester tries to 'break' the system by randomly trying the system's functionality. It is performed by the testing teams.

7. Alpha Testing: Type of testing of a software product or system conducted at the developer's site. It is usually performed by end users.

8. Assertion Testing: Type of testing that consists of verifying whether the conditions confirm the product requirements. It is performed by the testing teams.

9. API Testing: Testing technique similar to unit testing in that it targets the code level. API testing differs from unit testing in that it is typically a QA task and not a developer task.

10. All-pairs Testing: Combinatorial testing method that tests all possible discrete combinations of input parameters. It is performed by the testing teams.
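
To see why pairwise testing pays off, here is a minimal Python sketch (parameter names and values are hypothetical): the full cartesian product of three parameters is compared with the much smaller set of value pairs that actually needs to be covered.

    from itertools import combinations, product

    # Hypothetical input parameters for an application under test.
    params = {
        "browser": ["Chrome", "Firefox", "Edge"],
        "os": ["Windows", "Linux"],
        "locale": ["en", "de", "fr"],
    }

    # Exhaustive testing: every combination of all three parameters.
    all_combos = list(product(*params.values()))
    print(len(all_combos))  # 3 * 2 * 3 = 18 full combinations

    # All-pairs testing only requires every PAIR of values to appear
    # together in at least one test case.
    required_pairs = set()
    for (n1, v1), (n2, v2) in combinations(params.items(), 2):
        for a, b in product(v1, v2):
            required_pairs.add(((n1, a), (n2, b)))
    print(len(required_pairs))  # 6 + 9 + 6 = 21 pairs, coverable by ~9 tests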

11. Automated Testing: Testing technique that uses automation testing tools to control the environment set-up, test execution and results reporting. It is performed by a computer and is used inside the testing teams.

12. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a procedural design and uses it as a guide for defining a basic set of execution paths. It is used by testing teams when defining test cases.

13. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by testing teams.

14. Beta Testing: Final testing before releasing application for commercial purpose. It is typically done by end-users or others.

15. Benchmark Testing: Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.

16. Big Bang Integration Testing: Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.

17. Binary Portability Testing: Technique that tests an executable application for portability across system platforms and environments, usually for conformation to an ABI specification. It is performed by the testing teams.

18. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.
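
As a minimal sketch, assuming a hypothetical validator that accepts ages 18 to 65 inclusive, boundary value tests sit at each boundary and one step either side:

    def is_valid_age(age):
        # Hypothetical system under test: accepts ages 18..65 inclusive.
        return 18 <= age <= 65

    # Boundary value tests: at each boundary and one step either side.
    assert is_valid_age(18) is True    # lower boundary
    assert is_valid_age(17) is False   # just below lower boundary
    assert is_valid_age(19) is True    # just above lower boundary
    assert is_valid_age(65) is True    # upper boundary
    assert is_valid_age(66) is False   # just above upper boundary
    assert is_valid_age(64) is True    # just below upper boundary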


19. Bottom Up Integration Testing: In bottom up integration testing, modules at the lowest level are developed first, and other modules going towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.

20. Branch Testing: Testing technique in which all branches in the program source code are tested at least once. This is done by the developer.

21. Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is performed by testing teams.

22. Black box Testing: A method of software testing that verifies the functionality of an application without having specific knowledge of the application's code/internal structure. Tests are based on requirements and functionality. It is performed by QA teams.

23. Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams.
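
A minimal xUnit-style sketch using Python's built-in unittest framework (the unit under test is hypothetical):

    import unittest

    def divide(a, b):
        # Illustrative unit under test.
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class DivideTests(unittest.TestCase):
        def test_normal_division(self):
            self.assertEqual(divide(10, 2), 5)

        def test_division_by_zero_raises(self):
            with self.assertRaises(ValueError):
                divide(1, 0)

    if __name__ == "__main__":
        unittest.main()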

24. Compatibility Testing: Testing technique that validates how well a software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.

25. Comparison Testing: Testing technique which compares the product strengths and weaknesses with previous versions or other similar products. Can be performed by tester, developers, product managers or product owners.

26. Component Testing: Testing technique similar to unit testing but with a higher level of integration - testing is done in the context of the application instead of just directly testing a specific method. Can be performed by testing or development teams.

27. Configuration Testing: Testing technique which determines minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives and CPU. Usually it is performed by the performance testing engineers.

28. Condition Coverage Testing: Type of software testing where each condition is exercised by making it both true and false at least once. It is typically done by the automation testing teams.

29. Compliance Testing: Type of testing which checks whether the system was developed in accordance with standards, procedures and guidelines. It is usually performed by external companies which offer "Certified OGC Compliant" brand.

30. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It is usually done by performance engineers.

31. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.

32. Context Driven Testing: An Agile Testing technique that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization at a specific moment. It is usually performed by Agile testing teams.

33. Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. It is usually performed by the QA teams.

34. Decision Coverage Testing: Type of software testing where each decision is exercised by setting it to both true and false. It is typically done by the automation testing teams.
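
To illustrate the difference between entries 28 and 34, consider a hypothetical function with one decision built from two conditions: decision coverage only needs the whole if to evaluate both true and false, while condition coverage also needs each individual condition to take both values.

    def can_checkout(logged_in, cart_nonempty):
        # One decision made of two conditions.
        if logged_in and cart_nonempty:
            return True
        return False

    # Decision coverage: the whole decision is both true and false.
    assert can_checkout(True, True) is True     # decision -> true
    assert can_checkout(False, True) is False   # decision -> false

    # Condition coverage additionally flips each condition on its own.
    assert can_checkout(True, False) is False   # cart_nonempty -> false
    assert can_checkout(False, False) is False  # both conditions false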

35. Destructive Testing: Type of testing in which the tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behaviour under different loads. It is usually performed by QA teams.

36. Dependency Testing: Testing type which examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality. It is usually performed by testing teams.

37. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.


38. Domain Testing: White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.

39. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.

40. End-to-end Testing: Similar to system testing, it involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.

41. Endurance Testing: Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.

42. Exploratory Testing: Black box testing technique performed without planning and documentation. It is usually performed by manual testers.

43. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. It is usually performed by the QA teams.
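
A minimal sketch, assuming a hypothetical unit that accepts values 1 to 100: each equivalence partition contributes one representative test case.

    def accepts(value):
        # Hypothetical unit under test: valid range is 1..100.
        return 1 <= value <= 100

    # One representative per equivalence partition.
    partitions = {
        "below range (invalid)": (-5, False),
        "in range (valid)":      (50, True),
        "above range (invalid)": (150, False),
    }
    for name, (value, expected) in partitions.items():
        assert accepts(value) is expected, name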

44. Fault injection Testing: Element of a comprehensive test strategy that enables the tester to concentrate on the manner in which the application under test is able to handle exceptions. It is performed by QA teams.

45. Formal verification Testing: The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.

46. Functional Testing: Type of black box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.

47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the inputs of a program. Fuzz testing is performed by testing teams.
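
A toy fuzzing sketch (the parser under test is hypothetical; real fuzzers are far more sophisticated): random strings are thrown at the input, and anything other than the expected rejection would indicate a defect.

    import random
    import string

    def parse_record(text):
        # Hypothetical unit under test: expects "key=value".
        key, value = text.split("=", 1)
        return {key: value}

    # Feed random strings and watch for unexpected crashes.
    random.seed(0)  # reproducible fuzz run
    for _ in range(1000):
        data = "".join(random.choices(string.printable,
                                      k=random.randint(0, 20)))
        try:
            parse_record(data)
        except ValueError:
            pass  # expected rejection of malformed input
        # Any other exception type would indicate a defect worth reporting.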

48. Gorilla Testing: Software testing technique which focuses on heavy testing of one particular module. It is performed by quality assurance teams, usually when running full testing.

49. Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams.

50. Glass box Testing: Similar to white box testing, based on knowledge of the internal logic of an application's code. It is performed by development teams.

51. GUI software Testing: The process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done by the testing teams.

52. Globalization Testing: Testing method that checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. It is performed by the testing team.

53. Hybrid Integration Testing: Testing technique which combines top-down and bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.

54. Integration Testing: The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.

55. Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.

56. Install/uninstall Testing: Quality assurance work that focuses on what customers will need to do to install and set up the new software successfully. It may involve full, partial, or upgrade install/uninstall processes and is typically done by the software testing engineer in conjunction with the configuration manager.


57. Internationalization Testing: The process which ensures that the product's functionality is not broken and all the messages are properly externalized when used in different languages and locales. It is usually performed by the testing teams.

58. Inter-Systems Testing: Testing technique that focuses on testing the application to ensure that the interconnection between applications functions correctly. It is usually done by the testing teams.

59. Keyword-driven Testing: Also known as table-driven testing or action word testing, this is a software testing methodology for automated testing that separates the test creation process into two distinct stages: a Planning Stage and an Implementation Stage. It can be used by either manual or automation testing teams.
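
A minimal sketch of the two stages (the keywords and the test table are hypothetical): the planning stage produces a table of action words, and the implementation stage maps each keyword to executable code.

    # Implementation stage: each keyword maps to a small executable action.
    state = {}

    def open_app(name):
        state["app"] = name

    def enter_text(field, text):
        state[field] = text

    def verify(field, expected):
        assert state.get(field) == expected, field

    keywords = {"open_app": open_app, "enter_text": enter_text,
                "verify": verify}

    # Planning stage: the test is just a table of action words and arguments.
    test_table = [
        ("open_app", "calculator"),
        ("enter_text", "display", "2+2"),
        ("verify", "display", "2+2"),
    ]

    for keyword, *args in test_table:
        keywords[keyword](*args)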

60. Load Testing: Testing technique that puts demand on a system or device and measures its response. It is usually conducted by the performance engineers.

61. Localization Testing: Part of software testing process focused on adapting a globalized application to a particular culture/locale. It is normally done by the testing teams.

62. Loop Testing: A white box testing technique that exercises program loops. It is performed by the development teams.

63. Manual Scripted Testing: Testing method in which the test cases are designed and reviewed by the team before executing them. It is done by testing teams.

64. Manual-Support Testing: Testing technique that involves testing of all the functions performed by people while preparing data and using that data in an automated system. It is conducted by testing teams.

65. Model-Based Testing: The application of Model based design for designing and executing the necessary artifacts to perform software testing. It is usually performed by testing teams.

66. Mutation Testing: Method of software testing which involves modifying a program's source code or byte code in small ways in order to test sections of the code that are seldom or never exercised during normal test execution. It is normally conducted by testers.
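
The idea in miniature, with a hand-made mutant for illustration (real tools generate mutants automatically): a strong test suite should "kill" the mutant, i.e. fail when run against it.

    def max_of(a, b):
        return a if a > b else b        # original code

    def max_of_mutant(a, b):
        return a if a < b else b        # mutant: ">" flipped to "<"

    # A weak test that does NOT kill the mutant (both return 5):
    assert max_of(5, 5) == 5
    assert max_of_mutant(5, 5) == 5

    # A stronger test that kills the mutant: it fails on the mutant,
    # showing the suite actually exercises this branch.
    assert max_of(3, 1) == 3
    # assert max_of_mutant(3, 1) == 3  # would raise AssertionError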

67. Modularity-driven Testing: Software testing technique which requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. It is usually performed by the testing team.

68. Non-functional Testing: Testing technique which focuses on testing of a software application for its non-functional requirements. Can be conducted by the performance engineers or by manual testing teams.

69. Negative Testing: Also known as "test to fail" - testing method where the tests' aim is showing that a component or system does not work. It is performed by manual or automation testers.

70. Operational Testing: Testing technique conducted to evaluate a system or component in its operational environment. Usually it is performed by testing teams.

71. Orthogonal Array Testing: Systematic, statistical way of testing which can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing. It is performed by the testing team.

72. Pair Testing: Software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one tester and a developer or business analyst, or between two testers with both participants taking turns at driving the keyboard.

73. Passive Testing: Testing technique that consists of monitoring the results of a running system without introducing any special test data. It is performed by the testing team.

74. Parallel Testing: Testing technique whose purpose is to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.

75. Path Testing: Typical white box testing with the goal of satisfying coverage criteria for each logical path through the program. It is usually performed by the development team.

76. Penetration Testing: Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. It is usually conducted by specialized penetration testing companies.

77. Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.

78. Qualification Testing: Testing against the specifications of the previous release, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

79. Ramp Testing: Type of testing that consists of continuously raising an input signal until the system breaks down. It may be conducted by the testing team or the performance engineer.

80. Regression Testing: Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is performed by the testing teams.

81. Recovery Testing: Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.

82. Requirements Testing: Testing technique which validates that the requirements are correct, complete, unambiguous, and logically consistent and allows designing a necessary and sufficient set of test cases from those requirements. It is performed by QA teams.

83. : A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies.

84. Sanity Testing: Testing technique which determines if a new software version is performing well enough to accept it for a major testing effort. It is performed by the testing teams.

85. Scenario Testing: Testing activity that uses scenarios based on a hypothetical story to help a person think through a complex problem or system for a testing environment. It is performed by the testing teams.

86. Scalability Testing: Part of the battery of non-functional tests which tests a software application for measuring its capability to scale up - be it the user load supported, the number of transactions, the data volume etc. It is conducted by the performance engineer.

87. Statement Testing: White box testing which satisfies the criterion that each statement in a program is executed at least once during program testing. It is usually performed by the development team.

88. Static Testing: A form of software testing where the software isn't actually executed; it checks mainly the sanity of the code, algorithm, or documents. It is usually done by the developer who wrote the code.

89. Stability Testing: Testing technique which attempts to determine if an application will crash. It is usually conducted by the performance engineer.

90. Smoke Testing: Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team, immediately after a software build is made.

91. Storage Testing: Testing type that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. It is usually performed by the testing team.

92. Stress Testing: Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.

93. Structural Testing: White box testing technique which takes into account the internal structure of a system or component and ensures that each program statement performs its intended function. It is usually performed by the software developers.

94. System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both the development and target environments.

95. System integration Testing: Testing process that exercises a software system's coexistence with others. It is usually performed by the testing teams.


96. Top Down Integration Testing: Testing technique that involves starting at the top of the system hierarchy at the user interface, and using stubs to test from the top down until the entire system has been implemented. It is conducted by the testing teams.

97. Thread Testing: A variation of top-down testing technique where the progressive integration of components follows the implementation of subsets of the requirements. It is usually performed by the testing teams.

98. Upgrade Testing: Testing technique that verifies if assets created with older versions can be used properly and that user's learning is not challenged. It is performed by the testing teams.

99. Unit Testing: Software verification and validation method in which a programmer tests if individual units of source code are fit for use. It is usually conducted by the development team.

100. User Interface Testing: Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.

101. Usability Testing: Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.

102. Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.

103. Vulnerability Testing: Type of testing which regards application security and has the purpose to prevent problems which may affect the application integrity and stability. It can be performed by the internal testing teams or outsourced to specialized companies.

104. White box Testing: Testing technique based on knowledge of the internal logic of an application's code. It includes tests like coverage of code statements, branches, paths and conditions. It is performed by software developers.

105. Workflow Testing: Scripted end-to-end testing technique which duplicates specific workflows which are expected to be utilized by the end-user. It is usually conducted by testing teams.

Incremental integration testing – Bottom-up approach in which the application is tested continuously as new functionality is added; application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirement or not. Black-box type testing geared to the functional requirements of an application.

System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing, and the build or application is assigned back to be fixed.

Regression testing – Testing the application as a whole after modification of any module or functionality. Since it is difficult to cover the entire system in regression testing, automation tools are typically used for these testing types.

Acceptance testing – Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.


Load testing – It is performance testing to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in numbers beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing – Term often used interchangeably with 'stress' and 'load' testing. Checks whether the system meets performance requirements. Different performance and load tools are used for this.

Usability testing – User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented for whenever the user gets stuck at any point? Basically, system navigation is checked in this testing.

Install/uninstall testing – Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Compatibility testing – Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

In Black Box Testing we focus only on the inputs and outputs of the software system, without bothering about internal knowledge of the software program.

The black box can be any software system you want to test: for example, an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you can test these applications by focusing only on the inputs and outputs, without knowing their internal code implementation.

Black box testing - Steps

Here are the generic steps followed to carry out any type of Black Box Testing.

 Initially, the requirements and specifications of the system are examined.
 The tester chooses valid inputs (positive test scenario) to check whether the SUT processes them correctly. Some invalid inputs (negative test scenario) are also chosen to verify that the SUT is able to detect them.
 The tester determines the expected outputs for all those inputs.
 The software tester constructs test cases with the selected inputs.
 The test cases are executed.
 The software tester compares the actual outputs with the expected outputs.
 Defects, if any, are fixed and re-tested. (A worked sketch of these steps follows below.)
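
Applying those steps to a trivial, hypothetical SUT: inputs and expected outputs are chosen from the specification alone, and actual outputs are compared against them.

    def sut_square_root(x):
        # System under test, treated as a black box: the spec says it
        # returns the square root for x >= 0 and None for negative input.
        return x ** 0.5 if x >= 0 else None

    # Test cases: (input, expected output) chosen from the spec only.
    test_cases = [
        (4, 2.0),     # valid input (positive scenario)
        (0, 0.0),     # valid edge input
        (-1, None),   # invalid input (negative scenario)
    ]

    for value, expected in test_cases:
        actual = sut_square_root(value)
        status = "PASS" if actual == expected else "FAIL"
        print(f"input={value!r} expected={expected!r} actual={actual!r} {status}")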

Types of Black Box Testing

There are many types of Black Box Testing but following are the prominent ones -

 Functional testing – This black box testing type is related to the functional requirements of a system; it is done by software testers.
 Non-functional testing – This type of black box testing is not related to testing of specific functionality, but to non-functional requirements such as performance, scalability and usability.
 Regression testing – Regression testing is done after code fixes, upgrades or any other system maintenance to check that the new code has not affected the existing code.

Tools used for Black Box Testing:

Tools used for black box testing largely depend on the type of black box testing you are doing.

For functional/regression tests you can use QTP.

For non-functional tests you can use LoadRunner.

Black box testing strategy:

Following are the prominent test strategies among the many used in black box testing:

 Equivalence Class Testing: Used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
 Boundary Value Testing: Focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases, and is most suitable for systems where the input falls within certain ranges.
 Decision Table Testing: A decision table puts causes and their effects in a matrix; each column contains a unique combination. (See the sketch below.)
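
A minimal decision table sketch in Python (the login rules are hypothetical): each column of the classic matrix becomes one entry here, a unique combination of causes mapped to its effect.

    # Causes: (valid_username, valid_password) -> Effect
    decision_table = {
        (True,  True):  "login succeeds",
        (True,  False): "show password error",
        (False, True):  "show username error",
        (False, False): "show username error",
    }

    def login_effect(valid_username, valid_password):
        # Hypothetical SUT behaviour being checked against the table.
        if not valid_username:
            return "show username error"
        if not valid_password:
            return "show password error"
        return "login succeeds"

    # One test per column/combination of the decision table.
    for causes, expected in decision_table.items():
        assert login_effect(*causes) == expected, causes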

Comparison of Black Box and White Box Testing:

While White Box Testing (Unit Testing) validates internal structure and working of your software code, the main focus of black box testing is on the validation of your functional requirements.

To conduct white box testing, knowledge of the underlying programming language is essential. Current-day software systems use a variety of programming languages and technologies, and it's not possible to know all of them. Black box testing gives abstraction from code and focuses the testing effort on the software system's behaviour.

Also, software systems are not developed in a single chunk; development is broken down into different modules. Black box testing facilitates testing communication amongst modules (Integration Testing).

When you push code fixes into your live software system, a complete system check (black box regression tests) becomes essential.

White box testing nonetheless has its own merits and helps detect many internal errors which may degrade system performance.

Black Box Testing and Software Development Life Cycle (SDLC)

Black box testing has its own life cycle, called the Software Test Life Cycle (STLC), and it is relevant to every stage of the Software Development Life Cycle.

 Requirement – This is the initial stage of the SDLC, in which requirements are gathered. Software testers also take part in this stage.
 Test Planning & Analysis – The testing types applicable to the project are determined. A Test Plan is created which identifies possible project risks and their mitigation.
 Design – In this stage, test cases/scripts are created on the basis of the software requirement documents.
 Test Execution – In this stage, the prepared test cases are executed. Bugs, if any, are fixed and re-tested.

***************STLC************************

The different stages in the Software Test Life Cycle are: Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution, and Test Cycle Closure.

Each of these stages has definite Entry and Exit Criteria, Activities and Deliverables associated with it.

In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but practically this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages in the STLC. Let's look into them in detail.

Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements could be either Functional (defining what the software must do) or Non-Functional (defining system performance, security, availability). An automation feasibility analysis for the given testing project is also done in this stage.

Activities

 Identify the types of tests to be performed.
 Gather details about testing priorities and focus.
 Prepare the Requirement Traceability Matrix (RTM).
 Identify test environment details where the testing is to be carried out.
 Perform automation feasibility analysis (if required).

Deliverables

 RTM (a toy example follows below)
 Automation feasibility report (if applicable)
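
For illustration only (the requirement and test case IDs are made up), an RTM is essentially a mapping from each requirement to the test cases that cover it, so uncovered requirements stand out:

    # Hypothetical Requirement Traceability Matrix.
    rtm = {
        "REQ-001 user can log in":    ["TC-01", "TC-02"],
        "REQ-002 password is masked": ["TC-03"],
        "REQ-003 account lockout":    [],        # gap: no coverage yet
    }

    for requirement, test_cases in rtm.items():
        coverage = ", ".join(test_cases) if test_cases else "NOT COVERED"
        print(f"{requirement}: {coverage}")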

Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines effort and cost estimates for the project and prepares and finalizes the Test Plan.

Activities

 Preparation of test plan/strategy document for various types of testing
 Test tool selection
 Test effort estimation
 Resource planning and determining roles and responsibilities
 Training requirements

Deliverables

 Test plan/strategy document
 Effort estimation document

Test Case Development

This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.

Activities

 Create test cases and automation scripts (if applicable)
 Review and baseline test cases and scripts
 Create test data (if the Test Environment is available)


Deliverables

 Test cases/scripts
 Test data

Test Environment Setup

The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Development Stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities

 Understand the required architecture and environment set-up, and prepare a hardware and software requirement list for the Test Environment.
 Set up the test environment and test data.
 Perform a smoke test on the build.

Deliverables

 Environment ready with test data set up
 Smoke test results

Test Execution

During this phase, the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities

 Execute tests as per plan
 Document test results and log defects for failed cases
 Map defects to test cases in the RTM
 Retest the defect fixes
 Track the defects to closure

Deliverables

 Completed RTM with execution status
 Test cases updated with results
 Defect reports

Test Cycle Closure

The testing team will meet, discuss and analyze testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and share best practices for any similar projects in the future.

Activities

 Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives and quality.
 Prepare test metrics based on the above parameters.
 Document the learning from the project.
 Prepare the Test Closure report.
 Provide qualitative and quantitative reporting of the quality of the work product to the customer.