Overview

Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison of the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.

A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[4]

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[5] Although his attention was on breakage testing ("a successful test is one that finds a bug"[5][6]), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals of software testing into the following stages:[7]

. Until 1956 - Debugging oriented[8]
. 1957–1978 - Demonstration oriented[9]
. 1979–1982 - Destruction oriented[10]
. 1983–1987 - Evaluation oriented[11]
. 1988–2000 - Prevention oriented[12]

Software testing topics

Scope

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions.[13] The scope of software testing often includes examination of the code as well as execution of that code in various environments and conditions, and examining the aspects of the code: does it do what it is supposed to do and do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[14]

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work".

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer.[15] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[16] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interaction with different software.[16] A single defect may result in a wide range of failure symptoms.
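The error-defect-failure chain above can be illustrated with a small sketch. The function and scenario are invented for this example; the point is that the same defect stays dormant on typical inputs and only becomes a visible failure when the environment (here, the source data) changes.

```python
def average(values):
    # Defect (error of omission): the empty-list case was never considered.
    return sum(values) / len(values)

# With typical data, the defect never executes a failing path:
print(average([2, 4, 6]))  # 4.0 - no failure observed

# A change in the environment (empty source data) turns the dormant
# defect into an observable failure:
try:
    average([])
except ZeroDivisionError:
    print("failure: ZeroDivisionError")
```

Note that a test suite exercising only "typical" inputs would never surface this defect, which is why defects that occur infrequently are hard to find in testing.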

Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[17] The following table shows the cost of fixing a defect depending on the stage at which it was found.[18] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review.

Cost to fix a defect, by time introduced (rows) and time detected (columns):

Time introduced   Requirements  Architecture  Construction  System test  Post-release
Requirements      1×            3×            5–10×         10×          10–100×
Architecture      -             1×            10×           15×          25–100×
Construction      -             -             1×            10×          10–25×

Compatibility

A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Input combinations and preconditions

A very fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[13][19] This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do)—usability, scalability, performance, compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is used for the first time (which is generally considered the beginning of the testing stage). Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.

Software verification and validation

Software testing is used in association with verification and validation:[20]

. Verification: Have we built the software right? (i.e., does it match the specification?)
. Validation: Have we built the right software? (i.e., is this what the customer wants?)

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

The software testing team

Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing,[21] different roles have been established: test manager, test lead, test designer, tester, automation developer, and test administrator.

Software quality assurance (SQA)

Though controversial, software testing is a part of the software quality assurance (SQA) process.[13] In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate.

What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane.

Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

Testing methods

The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

White box testing
Main article: White box testing

White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements them.

Types of white box testing

The following types of white box testing exist:

. API testing (application programming interface) - testing of the application using public and private APIs
. Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
. Fault injection methods - improving the coverage of a test by introducing faults to test code paths
. Mutation testing methods
. Static testing - White box testing includes all static testing
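Fault injection can be sketched very simply: an error path that ordinary inputs rarely reach is exercised by substituting a dependency with one that always fails. All names (`load_config`, `failing_reader`) are invented for this illustration.

```python
def load_config(reader):
    """Loads configuration, falling back to defaults on I/O failure."""
    try:
        return reader()
    except OSError:
        return {"mode": "default"}  # error-handling path under test


def failing_reader():
    # Injected fault: simulates the file system being unavailable.
    raise OSError("injected fault")


# Injecting the fault drives execution down the error-handling branch,
# covering a code path that normal test data would never reach:
assert load_config(failing_reader) == {"mode": "default"}
print("fallback path covered")
```

The same idea scales up to mocking libraries and dedicated fault-injection tools, which can inject failures at the API, network, or memory-allocation level.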

Test coverage

White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[22] Two common forms of code coverage are:

. Function coverage, which reports on functions executed
. Statement coverage, which reports on the number of lines executed to complete the test

They both return a code coverage metric, measured as a percentage.
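A rough sketch of how a statement-coverage percentage can be computed: record which lines of a function execute during a test, then divide by the function's executable lines. Real tools (such as coverage.py) are far more precise; here line numbers are gathered with Python's `sys.settrace` hook, and the function under test is invented for the example.

```python
import sys

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # Record line events only for the function under measurement.
    if event == "line" and frame.f_code is classify.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)          # this test input never takes the n < 0 branch
sys.settrace(None)

# The three executable lines of classify (offsets from its def line):
total = {classify.__code__.co_firstlineno + i for i in (1, 2, 3)}
pct = 100 * len(executed & total) / len(total)
print(f"statement coverage: {pct:.0f}%")   # prints "statement coverage: 67%"
```

The 67% result flags the untested `n < 0` branch, exactly the kind of rarely-tested code path that coverage analysis is meant to surface.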

Black box testing
Main article: Black box testing

Black box testing treats the software as a "black box"—without any knowledge of internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, exploratory testing and specification-based testing.
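Boundary value analysis, one of the methods listed above, can be sketched for a hypothetical requirement such as "ages 18–65 inclusive may register": test inputs are chosen at and just around each boundary, with no knowledge of the implementation.

```python
def may_register(age):
    """Hypothetical system under test: registration is open to ages 18-65."""
    return 18 <= age <= 65

# Boundary value analysis: values at, just below, and just above each boundary.
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    64: True,   # just below upper boundary
    65: True,   # upper boundary
    66: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    assert may_register(age) == expected, f"boundary case failed for age {age}"
print("all boundary cases pass")
```

Equivalence partitioning works the same way at one remove: each contiguous range (under 18, 18–65, over 65) is a partition, and one representative value per partition suffices in addition to the boundary probes.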

Specification-based testing: Specification-based testing aims to test the functionality of software according to the applicable requirements.[23] Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Specification-based testing is necessary, but it is insufficient to guard against certain risks.[24]

Advantages and disadvantages: The black box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle "ask and you shall receive," black box testers find bugs where programmers do not. On the other hand, black box testing has been said to be "like a walk in a dark labyrinth without a flashlight," because the tester doesn't know how the software being tested was actually constructed. As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all.

Therefore, black box testing has the advantage of "an unaffiliated opinion", on the one hand, and the disadvantage of "blind exploring", on the other.[25]

Grey box testing

Grey box testing (American spelling: gray box testing) involves having knowledge of internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Testing levels

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process, as defined by the SWEBOK guide, are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model.[26] Other test levels are classified by the testing objective.[27]

Test target

Unit testing
Main article: Unit testing

Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[28]

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.

Unit testing is also called component testing.
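A minimal developer-written unit test in the style described above, using Python's standard `unittest` framework. The function under test and its test names are invented for illustration; note the extra test for a corner case, as mentioned in the text.

```python
import unittest

def slugify(title):
    """Lower-cases a title and replaces spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_corner_case_whitespace(self):
        # A corner-case test: interior double spaces are preserved as
        # double hyphens, documenting (and pinning down) current behavior.
        self.assertEqual(slugify("  Trimmed  Title "), "trimmed--title")

if __name__ == "__main__":
    unittest.main(argv=["slugify_test"], exit=False)
```

Each test exercises one small behavior in isolation; a failing corner-case test would localize the defect to `slugify` itself rather than to any code that calls it.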

Integration testing
Main article: Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be localised more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[29]
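Iterative integration often has to exercise a component before its collaborators exist, using the stubs mentioned earlier for the missing pieces. A minimal sketch, with all names (`convert`, `fetch_rate`) invented:

```python
def convert(amount, fetch_rate):
    """Component under test: converts an amount via an exchange-rate provider."""
    return round(amount * fetch_rate("USD", "EUR"), 2)

# The real rate service is not integrated yet, so a stub returns a
# canned value, letting the interface between the two be tested now:
def stub_fetch_rate(src, dst):
    return 0.9

print(convert(100, stub_fetch_rate))  # 90.0
```

When the real rate service is later integrated, the same test can be re-run against it; a difference in behavior points at the interface between the two components, which is exactly what integration testing is meant to localize.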

System testing
Main article: System testing

System testing tests a completely integrated system to verify that it meets its requirements.[30]

System integration testing
Main article: System integration testing

System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.[citation needed]

Objectives of testing

Regression testing
Main article: Regression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Regression tests can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Acceptance testing
Main article: Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test used as an acceptance test prior to introducing a new build to the main testing process, i.e., before integration or regression testing.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[31]

Beta testing

Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to gather feedback from as many future users as possible.[citation needed]

Non-functional testing

Special methods exist to test non-functional aspects of software. In contrast to functional testing, which establishes the correct operation of the software (correct in that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an example of non-functional testing. Non-functional testing, especially for software, is designed to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform non-functional testing.
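A toy fuzzing sketch of the idea above: throw random, often invalid, inputs at a routine and check only that it fails gracefully (returns a defined value, never raises an unhandled error). The `parse_port` function is invented for this example; real fuzzers generate far more adversarial inputs and run for much longer.

```python
import random
import string

def parse_port(text):
    """Routine under test: returns a port number 0-65535, or None if invalid."""
    try:
        port = int(text)
    except ValueError:
        return None
    return port if 0 <= port <= 65535 else None

random.seed(0)  # reproducible fuzz run
for _ in range(1000):
    length = random.randint(0, 8)
    blob = "".join(random.choice(string.printable) for _ in range(length))
    result = parse_port(blob)            # must never raise
    assert result is None or 0 <= result <= 65535
print("fuzz run finished without unhandled errors")
```

The assertion encodes the robustness property being tested: for arbitrary input, the routine must either produce a valid port or reject the input cleanly.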

Software performance testing and load testing

Performance testing is executed to determine how fast a system or sub-system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the software can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. Load testing performed over an extended period as a non-functional activity is often referred to as endurance testing.

Volume testing is a way to test functionality. Stress testing is a way to test reliability. Load testing is a way to test performance. There is little agreement on what the specific goals of load testing are. The terms load testing, performance testing, reliability testing, and volume testing, are often used interchangeably.

Stability testing

Stability testing checks whether the software can continuously function well over an acceptable period or longer. This activity of non-functional software testing is often referred to as load (or endurance) testing.

Usability testing

Usability testing is needed to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Security testing

Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

Internationalization and localization

The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[32]
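Pseudolocalization can be sketched very simply: each UI string is widened and wrapped in markers so that hard-coded (untranslated) text and layout overflow become visible without a real translation. The accent mapping and expansion ratio below are illustrative choices, not a standard.

```python
# Map vowels to accented look-alikes so pseudolocalized text stays readable
# while clearly differing from the source strings.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöûÀÉÎÖÛ")

def pseudolocalize(s):
    # Widen by roughly a third to simulate longer translations, and add
    # bracket delimiters so truncated strings are easy to spot in the UI.
    padding = "~" * max(1, len(s) // 3)
    return "[" + s.translate(ACCENTED) + padding + "]"

print(pseudolocalize("Save file"))   # [Sàvé fîlé~~~]
```

Running the application with every string passed through such a function makes two classes of localization failures jump out: any text still shown unbracketed was never routed through the string table, and any bracket cut off on screen marks a layout that cannot absorb translation growth.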

Actual translation to human languages must be tested, too. Possible localization failures include:

. Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
. Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
. Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
. Untranslated messages in the original language may be left hard coded in the source code.
. Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
. Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
. Software may lack support for the character encoding of the target language.
. Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
. A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
. Software may lack proper support for reading or writing bi-directional text.
. Software may display images with text that wasn't localized.
. Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.

To avoid these and other localization problems, a tester who knows the target language must run the program with all the possible use cases for translation to see if the messages are readable, translated correctly in context and don't cause failures.

Destructive testing
Main article: Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail, in order to test its robustness.

The testing process

Traditional CMMI or waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer.[33] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[34]

Another practice is to start software testing at the same moment the project starts, making it a continuous process until the project finishes.[35]

Further information: Capability Maturity Model Integration and Waterfall model

Agile or Extreme development model

In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the software engineers (often with pair programming in the extreme programming methodology). These tests fail initially, as they are expected to; then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous deployment, where software updates can be published to the public frequently.[36][37]

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing.[38] The sample below is common among organizations employing the Waterfall development model.

. Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and with what parameters those tests will work.
. Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
. Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software.
. Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
. Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
. Test result analysis: Also called defect analysis, this is done by the development team, usually along with the client, in order to decide which defects should be treated, fixed, rejected (i.e., the software is found to be working properly), or deferred to be dealt with later.
. Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
. Regression testing: It is common to have a small test program built of a subset of tests for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
. Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing

Main article: Test automation

Many programming groups rely more and more on automated testing, especially groups that use test-driven development. There are many frameworks in which to write tests, and continuous integration software will run tests automatically every time code is checked into a version control system.

While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed suite of testing scripts in order to be truly useful.

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:

. Program monitors, permitting full or partial monitoring of program code, including:
. Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
. Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
. Code coverage reports
. Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
. Automated functional GUI testing tools, used to repeat system-level tests through the GUI
. Benchmarks, allowing run-time performance comparisons to be made
. Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage

Some of these features may be incorporated into an Integrated Development Environment (IDE).

. A regression testing technique is to have a standard set of tests, covering existing functionality that results in persistent tabular data, and to compare pre-change data to post-change data, where there should not be differences, using a tool like diffkit. Differences detected indicate unexpected functionality changes, or "regressions".
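The table-diff technique above can be sketched in a few lines: run the same query before and after a change, then compare the rows. The CSV snippets stand in for the "persistent tabular data"; any difference flags a potential regression.

```python
import csv
import io

# Captured output of the same query before and after a code change
# (inline here for the sketch; in practice these would be exported files).
pre_change = "id,total\n1,10\n2,20\n3,30\n"
post_change = "id,total\n1,10\n2,21\n3,30\n"   # row 2 drifted

def rows(text):
    return list(csv.reader(io.StringIO(text)))

# Pair up corresponding rows and keep only the ones that differ.
diff = [(a, b) for a, b in zip(rows(pre_change), rows(post_change)) if a != b]
if diff:
    print("regression suspected in rows:", diff)
```

A dedicated tool like diffkit adds keyed matching, tolerance for expected differences, and reporting, but the core operation is this row-by-row comparison.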

Measurement in software testing

Usually, quality is constrained to such topics as correctness, completeness, security,[citation needed] but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.

There are a number of frequently-used software measures, often called metrics, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts

Software testing process can produce several artifacts. Test plan A test specification is called a test plan. The developers are well aware what test plans will be executed and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy. Traceability matrix A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when the source documents are changed, or to verify that the test results are correct. Test case A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined a test case is an input and an expected result.[39] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases described in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. 
These past results would usually be stored in a separate table.

Test script
A test script is a procedure, or programming code, that replicates user actions. The term was originally derived from the work products created by automated regression test tools. A test case serves as the baseline for creating test scripts with a tool or a program.

Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and always contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.

Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
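The test data idea above, multiple sets of values exercising the same functionality, is the basis of data-driven testing. A minimal sketch in Java; the function under test and all names are hypothetical, not taken from any particular tool:

```java
// Sketch of data-driven testing: one check exercised with several rows of
// test data kept separate from the test logic. All names are hypothetical.
public class DataDrivenSketch {
    // Hypothetical function under test: clamps a value into the range [0, 100].
    static int clampPercent(int v) {
        return Math.max(0, Math.min(100, v));
    }

    // Each row of test data: { input, expected result }.
    static final int[][] TEST_DATA = {
        { -5, 0 }, { 0, 0 }, { 42, 42 }, { 100, 100 }, { 250, 100 },
    };

    // Runs every row against the function under test; returns true if all pass.
    static boolean runAll() {
        boolean allPassed = true;
        for (int[] row : TEST_DATA) {
            int actual = clampPercent(row[0]);
            boolean pass = actual == row[1];
            System.out.printf("input=%d expected=%d actual=%d -> %s%n",
                    row[0], row[1], actual, pass ? "PASS" : "FAIL");
            allPassed &= pass;
        }
        return allPassed;
    }

    public static void main(String[] args) {
        System.out.println(runAll() ? "suite PASS" : "suite FAIL");
    }
}
```

Keeping the data rows apart from the checking loop is what lets the same functionality be re-tested with new values by editing only the data file.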

Software Testing Types:

Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing – Based on knowledge of the internal logic of an application's code; also known as glass box testing. The internal workings of the software and code must be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing – Testing of individual software components or modules. Typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.

Incremental integration testing – A bottom-up approach: continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – Ignores the internal parts and focuses on whether the output matches the requirements. Black-box testing geared to the functional requirements of an application.

System testing – The entire system is tested against the requirements. Black-box testing based on the overall requirements specification that covers all combined parts of a system.

End-to-end testing – Similar to system testing; involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems as appropriate.

Sanity testing – Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes in initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Regression testing – Testing the application as a whole after a modification to any module or functionality. Because it is difficult to cover the entire system in regression testing, automation tools are typically used for it.

Acceptance testing – Normally done to verify that the system meets the customer-specified requirements. The user or customer performs this testing to determine whether to accept the application.

Load testing – Performance testing that checks system behavior under load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example by exceeding storage capacity, running complex database queries, or supplying continuous input to the system or database.

Performance testing – A term often used interchangeably with stress and load testing; checks whether the system meets its performance requirements. Various performance and load tools are used for this.

Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Essentially, system navigation is checked in this testing.

Install/uninstall testing – Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing – Tests how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing – Checks whether the system can be penetrated by hacking, how well it protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

Compatibility testing – Tests how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing – Comparison of a product's strengths and weaknesses with previous versions or other similar products.

Alpha testing – An in-house virtual user environment can be created for this type of testing. It is done near the end of development; minor design changes may still be made as a result.

Beta testing – Typically done by end users or others; the final testing before releasing the application commercially.
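Unit testing as described above can be done without any framework at all: a test is just a call to the unit under test plus a comparison against the expected result. A minimal sketch in Java; the method under test and the assertion helper are hypothetical illustrations, not part of any library:

```java
// Minimal frameworkless unit test, illustrating the "unit testing" type above.
// The business rule under test (simple interest) is a hypothetical example.
public class UnitTestSketch {
    // Unit under test: simple interest = principal * rate * years.
    static double simpleInterest(double principal, double ratePercent, int years) {
        return principal * (ratePercent / 100.0) * years;
    }

    // Hand-rolled assertion: fail loudly when expected and actual diverge.
    static void assertClose(double expected, double actual, String name) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError(name + ": expected " + expected + " but got " + actual);
        }
        System.out.println(name + ": PASS");
    }

    public static void main(String[] args) {
        assertClose(100.0, simpleInterest(1000, 5, 2), "two-year interest");
        assertClose(0.0, simpleInterest(1000, 5, 0), "zero-year interest");
    }
}
```

Real projects would use JUnit or a similar xUnit framework for the assertion and reporting machinery, but the shape of every unit test is exactly this: arrange inputs, act, compare expected against actual.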

Testing Tools - Presentation Transcript

1. Ajax Testing Tool Review: when to test, what to test, how to test Ajax applications. Wednesday, October 1st, 1:00 – 2:30p. Ted Husted. In this session, we explore when to test, what to test, and how to test Ajax components; creating automatic tests with various tools, including YUI Test and OpenQA Selenium; and how to use Ajax testing tools with IDEs and continuous integration systems.
2. Ajax Testing Tool Review: when to test, what to test, how to test Ajax applications. Square One University Series.
3. Ajax Testing Tool Review. For the latest version of this presentation, visit http://slideshare.com/ted.husted. For the latest version of the source code, visit http://code.google.com/p/yazaar/.
4. Abstract. Not long ago, testing Ajax components meant play-testing a page by hand. Today, there are a growing number of tools we can use to simplify and automate Ajax testing. During the session, we will cover when, what, and how to test Ajax applications; creating automatic tests with various tools; and testing with IDEs and continuous integration systems.
5. Ajax Testing Tool Review. Tool Review: JsUnit and YUI Test; Selenium; CruiseControl and Hudson. Ajax Testing in Action: live coding demonstration of YUI Test + Selenium + Hudson + Eclipse.
6. JsUnit. JsUnit is a unit testing framework for client-side (in-browser) JavaScript; essentially a port of JUnit to JavaScript. A platform for automating the execution of tests on multiple browsers and multiple machines running different OSs. Development began in January 2001.
7. JsUnit – Key Features. Create test cases with JavaScript code. Modes: warn, info, and debug. Group related cases using test suites. A server component provides integration with other test platforms, test logging, and running tests on multiple target platforms.
8. JsUnit – Key Features. Browser support: Internet Explorer 5.0+, Firefox or Mozilla 0.9+, Netscape 6.0+, and Konqueror 5+.
Release: 2.2 (Alpha, 2006 March); 2.2.0 (tagged 2008 Jan). Since 2001. License: GPL, LGPL, MPL. One team member.
9. http://jsunit.net/
10.–11. http://www.jsunit.net/runner/testRunner.html?testpage=/runner/tests/jsUnitTestSuite.html
12. http://www.jsunit.net/documentation/runnerExample.html
13. http://jsunit.net/documentation/
14. http://tech.groups.yahoo.com/group/jsunit/
15.–16. http://www.cafepress.com/agilestuff
17. JsUnit. No form support. No asynchronous support. Server support: Java. IDE support: Eclipse, IDEA. CruiseControl support.
18. JsUnit Strengths: established; xUnit model; active community. Weaknesses: sole developer; conservative license; irregular release schedule; several known limitations.
19. JsUnit Bottom Line. Use when team members are already experienced with JsUnit (and licensing is not an issue). Consider YUI Test to test asynchronous code if starting fresh (or willing to try something new). For acceptance tests, add Selenium to the mix.
20. JsUnit Resources. "AJAX and Unit Testing - it's time to mingle", Jim Plush (2006 Feb), http://www.litfuel.net/plush/?postid=117. "Ajax and Unit Testing Part Two, The Wrath of Mock", Jim Plush (2006 Nov), http://www.litfuel.net/plush/?postid=154.
21. YUI Test. A testing framework for browser-based JavaScript solutions. Adds unit testing to JavaScript solutions. Derives characteristics from nUnit and jUnit.
22. YUI Test – Key Features. Create test cases through simple syntax. Failure detection for methods that throw errors. Group related cases using test suites. Asynchronous tests for testing events and Ajax communication. Cross-browser DOM event simulation.
23. YUI Test – Key Features. Support for "A-Grade" browsers. Release 2.5.2 (2008 May). Since July 2007 (YUI 2.3.0). License: BSD. ~16 team members: Yahoo! employees and contributors. Maintained by Nicholas C. Zakas, http://www.nczonline.net/.
24.
http://developer.yahoo.com/yui/yuitest/
25. http://developer.yahoo.com/yui/yuitest/#start
26. http://developer.yahoo.com/yui/examples/yuitest/yt-advanced-test-options.html
27. http://developer.yahoo.com/yui/docs/module_yuitest.html
28. http://developer.yahoo.com/yui/docs/YAHOO.util.DateAssert.html#method_datesAreEqual
29. http://yuiblog.com/assets/pdf/cheatsheets/yuitest.pdf
30. http://tech.groups.yahoo.com/group/ydn-javascript/
31. http://tech.groups.yahoo.com/group/ydn-javascript/msearch?query=yuitest
32. YUI Test. Form support. Asynchronous support. No server support. No IDE support. No CI support.
33. YUI Test Strengths: bundled with the YUI Library; large, well-funded team; regular releases; active community; well documented. Weaknesses: bundled with the YUI Library; lacks server support.
34. YUI Test Bottom Line. Use when coding JavaScript or Ajax applications (and Test-Driven Development). Good for simple event/form tests. Needs better automation tools. For true acceptance tests, add Selenium to the mix.
35. YUI Test Resources. "Test Driven Development with YUI Test", Nicholas C. Zakas (2008 September), http://ajaxexperience.techtarget.com/assets/documents/Nicholas_Zakas_Test_Driven_Development.pdf (presentation). "Writing Your First YUI Application", Eric Miraglia (2008 May), http://www.insideria.com/2008/05/writing-your-first-yui-applica.html.
36. OpenQA Selenium. Selenium is a suite of tools to automate web app testing across many platforms. Selenium IDE records and runs tests as a Firefox plugin. Selenium Remote Control runs tests across multiple platforms. Selenium Grid distributes test running across multiple machines.
37. Selenium – Key Features. Create test scripts using Selenium commands. Run tests against live applications. Compile test scripts in native languages, such as Java, C#, or Ruby. Integrate scripts with other test suites and continuous integration systems.
38.
Selenium – Key Features. Support for major browsers: Firefox 2+ (RC and Core), IE7, Safari 2+, Opera 8+; Windows, OS X, Linux, Solaris. Current releases: IDE, RC, Grid, 2008; Core, 2007. Since 2005. License: Apache. ~11 team members. Originated as a ThoughtWorks project.
39. http://selenium.openqa.org/
40. http://selenium.openqa.org/documentation/
41. Selenese table (command | target | value):

open | Welcome.action |
assertTitle | MailReader |
clickAndWait | link=Register with MailReader |
assertTitle | MailReader - Register |
type | Register_save_username | trillian
type | Register_save_password | astra
type | Register_save_password2 | astra
type | Register_save_fullName | Tricia McMillian
type | Register_save_fromAddress | tricia@magrathea.
clickAndWait | Register_save_Save |
assertTitle | MailReader - Menu |
assertTextPresent | Tricia McMillian |

42. RegisterTrillianTest.java

public class RegisterTrillianTest extends SeleneseTestCase {
    public void testRegisterTrillian() throws Exception {
        selenium.open("/menu/Welcome.action");
        assertEquals("MailReader", selenium.getTitle());
        selenium.click("link=Register with MailReader");
        selenium.waitForPageToLoad("30000");
        assertEquals("MailReader - Register", selenium.getTitle());
        selenium.type("Register_save_username", "trillian");
        selenium.type("Register_save_password", "astra");
        selenium.type("Register_save_password2", "astra");
        selenium.type("Register_save_fullName", "Tricia McMillian");
        selenium.type("Register_save_fromAddress", "[email protected]");
        selenium.click("Register_save_Save");
        selenium.waitForPageToLoad("30000");
        assertEquals("MailReader - Menu", selenium.getTitle());
        checkForVerificationErrors();
    }
}

43. http://clearspace.openqa.org/index.jspa
44. OpenQA Selenium. Form support. Asynchronous support. Server support. IDE support. CI support.
45.
Firefox 3 and Selenium RC. The current Remote Control beta release (2007) is not compatible with FF3: a minor configuration issue with version numbering in FF3. A hot patch is available. Best advice: install FF2 in the default location, and FF3 in an alternate spot.
46. OpenQA Selenium Strengths: granular toolset; large, dedicated team; steady releases; active community. Weaknesses: complex setup; superficial suite support; choppy documentation; perpetual beta.
47. OpenQA Selenium Bottom Line. Use to create acceptance tests; complements unit tests.
48. CruiseControl. A continuous build process framework. Plugins for email notification, Ant, and source control tools. A web interface to view build details.
49. CruiseControl – Key Features. Build loop: on a trigger event, runs tasks and notifies listeners. Legacy reporting: browse results of the build loop and access artefacts. Dashboard: a visual representation of project status.
50. CruiseControl – Key Features. Java, Ruby, and .NET projects. Several third-party tools and plugins. Releases in 2008: Java 2.7.3, .NET 1.4 (July), Ruby 1.3.0 (April). Since 2001. License: BSD style. Team members: ~8, ~12+, ~6.
51. http://cruisecontrol.sourceforge.net/
52. http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
53. http://cruisecontrolrb.thoughtworks.com/
54. http://cruisecontrol.sourceforge.net/overview.html
55. http://cruisecontrol.sourceforge.net/dashboard.html
56. http://confluence.public.thoughtworks.org/display/CC/Home
57. http://sourceforge.net/mailarchive/forum.php?forum_name=cruisecontrol-user
58. CruiseControl Strengths: mature; multi-platform; regular releases; active community. Weaknesses: XML configuration; choppy documentation.
59. Hudson. A continuous build process framework. Runs as a Java web application: bring your own container, or standalone mode.
60.
Hudson – Key Features. RSS/e-mail/IM integration. JUnit/TestNG test reporting. Permanent links. Change set support. After-the-fact tagging. History trends, distributed builds, file fingerprinting, plugins.
61. Hudson – Key Features. Quick install, free-style setup: runs standalone, instant project checkout, automatic build configuration. Visual configuration: no XML required. Friendly dashboard: project status at a glance.
62. Hudson – Key Features. Regular releases (daily/weekly milestones). License: MIT / Creative Commons.
63. http://cruisecontrol.sourceforge.net/
64. http://cruisecontrol.sourceforge.net/dashboard.html
65. Hudson Strengths: simple setup; slick UI; well documented; regular releases; active community. Weaknesses: Java container; committers?
66. http://cruisecontrol.sourceforge.net/overview.html
67. Let's Code It!
68. Ajax Testing Tool Review. During the session, we covered when, what, and how to test Ajax applications; creating automatic tests with various tools; and testing with IDEs and continuous integration systems.



Software Testing: theory, ISTQB, interview-based theory, manual and automation test process, QTP, LoadRunner, Quality Center, the software testing life cycle, and terminology.

Software testing - Complete Theory. A very detailed study and use of software testing from the 1990s to 2008 has been digested by several major software companies and is evident in modern testing methodology. The following Concise Testing Theory (CTT) course should be of sufficient help to clear an IT interview for software testing engineers, QA, automation and manual test management teams, and performance testers alike. From manual testing to QTP, LoadRunner, and test management, the ever-evolving testing field is drawing more and more attention lately. It is a high-paying field in IT with a lot of opportunity. Let's discuss everything that makes software testing such an interesting topic. Contents

Software Testing Theory Index
- Software Testing?
- Who can do testing & things to understand (prerequisites to understanding software testing effectively)
- Software Development Life Cycle (SDLC)
- Software Testing Methods & Levels of Testing
- Types of Testing



Let me start with a short intro: I am a self-employed musician and web developer, and I have spent a lot of time understanding the software testing field and the stages of evolution in it. Here I bring you a very effective small dose of testing theory. (Thanks to Mr Suresh Reddy, my senior mentor at NRSTT Hyderabad, for all the guidance and coaching.) I have divided the Concise Testing Theory (CTT) into the following sections.

Validation and verification of any task can be called testing.

Sections

What is Software Testing?

Who can do it? & Terminology.

Software Development Life Cycle (SDLC)

Testing methodology & Levels of testing.

Types of Testing.

Software Testing life cycle

Bug life cycle (BLC)

Automation Testing

Performance Testing.

ISTQB & Software Certification

Each section is explained in detail and will be enhanced as and when needed. Good luck, and enjoy the syllabus. I will shortly create concise knols on SQL Server & QTP.

I strongly believe that a good understanding of CTT can get you a job in IT (at least in India, the US, the UK & AU) very soon. The minimum you need is to be a graduate from any field (15 years of formal education).

Software Testing? Testing, in general, is a process carried out by individuals or groups across all domains where requirements exist.

In more software-specific language, the comparison of the expected value (EV) and the actual value (AV) is known as testing.

To get a better understanding of the sentence above, let's see a small example (example 1).

I have a bulb, and my requirement is that when I switch it on, it should glow. So my next step is to identify three things:

Action to be performed: turn the switch on. Expected value: the bulb should glow. Actual value: the present status of the bulb after performing the action (on or off).

Now it is time for our test to generate a result based on a simple logic:

If EV = AV, then the result is PASS; any deviation makes the result FAIL.

Ideally, based on the margin of difference between the EV and the AV, the severity of the defect is decided, and the defect is subjected to further rectification.
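The pass/fail rule above can be written as a tiny function. A minimal sketch in Java, using the bulb example; all class and variable names are hypothetical:

```java
// The EV/AV comparison described above: PASS when the expected value equals
// the actual value; any deviation is a FAIL. All names are hypothetical.
public class BulbTest {
    enum Result { PASS, FAIL }

    // Generic comparison of expected value (EV) against actual value (AV).
    static <T> Result compare(T expectedValue, T actualValue) {
        return expectedValue.equals(actualValue) ? Result.PASS : Result.FAIL;
    }

    public static void main(String[] args) {
        // Action: turn the switch on. EV: the bulb glows ("on").
        String expected = "on";
        String actualAfterAction = "on";  // observed state of the bulb
        System.out.println(compare(expected, actualAfterAction)); // PASS
        System.out.println(compare("on", "off"));                 // FAIL
    }
}
```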

Conclusive Definition of Testing:

Testing can be defined as the process in which defects are identified and isolated, then subjected to rectification, and the end result is re-checked to ensure it is defect free (highest quality), in order to achieve maximum customer satisfaction. (An interview definition.)

Who can do testing & things to understand (prerequisites to understanding software testing effectively). As we have understood so far, testing is a very general task and can be performed by anybody. But we have to understand the importance of software testing first. In the earlier days of software development, there was very little support for developing programs; commercially, only a few rich companies used advanced software solutions. After three decades of research and development, the IT sector has become capable of developing advanced software even for space exploration.

Programming languages in the beginning were very complex and were not feasible for larger calculations; hardware was costly and very limited in its capacity to perform continuous tasks; and networking bandwidth was a next-generation topic. Now, object-oriented programming languages, terabytes of memory, and bandwidths of hundreds of Mbps are the foundations for the future's interactive web generation. Sites like Google, MSN, and Yahoo make life much easier with chat, conferencing, data management, and more.

So, coming back to our goal (testing): we need to realize that software is here to stay and has become a part of mankind. It is the reality of tomorrow and a necessity to learn for healthy survival in the coming technological era. More and more companies are spending billions to push the limits, and in all of this, one thing is attaining prominence: QUALITY.

I used so much background to stress quality because it is the final and direct goal of quality assurance people, or test engineers.

• Quality can be defined as the justification of all of the user's requirements in a product, with the presence of value and safety. Quality is a much wider requirement of the masses when it comes to consumption of any service, so the job of a tester is to act like a user and verify that the product is fit for use (defect free = quality).

• Defect can be defined as a deviation from a user's requirement in a particular product or service. The same product may have different levels of defects, or no defects, based on the tastes and requirements of different users. As more and more business solutions move from man to machine, both development and testing are headed for an unprecedented boom. Developers are under constant pressure to learn and update themselves with the latest programming languages to keep pace with users' requirements.

As a result, testing is becoming an integral part of software companies, producing quality results to beat the competition. Previously, developers used to test applications alongside coding, but this had disadvantages, like time consumption, an emotional block against finding defects in one's own creation, and post-delivery maintenance, which is a common requirement now. So testers are appointed to test applications simultaneously with development, in order to identify defects as soon as possible, reduce time loss, and improve the efficiency of the developers.

Who can do it? Well, anyone can do it. You need to be a graduate to enter software testing and have a comfortable relationship with the PC. All you have to keep in mind is that your job is to act like a third person or end user, taste (test) the food before they eat it, and report to the developer.

We are not correcting the mistakes here; we are only identifying the defects, reporting them, and waiting for the next release to check whether the defects have been rectified and whether new defects have arisen.

Some important terminologies:

• Project: If something is developed based on a particular user's (or users') requirements and is used exclusively by them, it is known as a project. The finance is arranged by the client to complete the project successfully.

• Product: If something is developed based on the company's specification (after a general survey of the market requirements) and can be used by multiple sets of users, it is known as a product. The company has to fund the entire development and usually expects to break even after a successful market launch.

• Defect v/s Defective: If the product satisfies only partial requirements of a particular customer but is functionally usable, we say the product has a defect. But if the product is functionally unusable, then even if some requirements are satisfied, the product will be tagged as defective.

• Quality Assurance: A process of monitoring and guiding each and every role in the organization in order to make them perform their tasks according to the company's process guidelines.

• Quality Control or Validation: Checking whether the end result is the right product; in other words, whether the developed service meets all the requirements or not.

• Quality Assurance Verification: The method of checking whether the developed product or project has followed the right process or guidelines through all the phases of development.

• NCR: Whenever a role does not follow the process in performing an assigned task, the penalty given is known as an NCR (Non-Conformance Raised).

• Inspection: A process of checking conducted by a group of members on a role or a department suddenly, without any prior intimation.

• Audit: A process of checking conducted on roles or a department with prior notice, well in advance.

• SCM (Software Configuration Management): A process carried out by a team to attain version control and change control. In other terms, the SCM team is responsible for updating all the common documents used across various domains to maintain uniformity, and also for naming the project and updating its version numbers by gauging the amount of change in the application or service after development.

• Common Repository: A server accessible by authorized users, to store and retrieve information safely and securely.

• Baselining v/s Publishing: The process of finalizing the documents v/s making them available to all the relevant resources.

• Release: The process of sending the application from the development department to the testing department, or from the company to the market.

• SRN (Software Release Note): A note prepared by the development department and sent to the testing department during the release. It contains information about the path of the build, installation information, test data, the list of known issues, the version number, the date, credentials, etc.

• SDN (Software Delivery Note): A note prepared by a team under the guidance of a project manager, submitted to the customer during delivery. It contains a carefully crafted user manual and a list of known issues and workarounds.

• Slippage: The extra time taken to accomplish a task.

• Metrics v/s Matrix: A clear measurement of any task is defined as a metric, whereas a tabular format with linking information, used for tracing information back through references, is called a matrix.

• Template v/s Document: A template is a pre-defined questionnaire or a professional fill-in-the-blanks setup, used to prepare and finalize a document. The advantage of templates is uniformity and easier comprehension throughout all the documentation in a project, group, department, company, or even larger masses.

• Change Request v/s Impact Analysis: A change request is the customer's proposal to bring changes into the project by filling in a CRT (change request template). Impact analysis is a study carried out by the business analysts to gauge how much impact will fall on the already developed part of the application and how feasible it is to go ahead with the change or demands of the customer.
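The "matrix" idea above, a tabular format used to trace information back through references, is exactly how a traceability matrix links requirements to the tests that cover them. A minimal sketch in Java; the class and all requirement/test identifiers are hypothetical:

```java
import java.util.*;

// Minimal sketch of a traceability matrix: requirement IDs mapped to the
// test-case IDs that cover them. All identifiers are hypothetical.
public class TraceabilityMatrix {
    private final Map<String, Set<String>> reqToTests = new LinkedHashMap<>();

    // Record that a test case covers a requirement.
    public void link(String requirementId, String testCaseId) {
        reqToTests.computeIfAbsent(requirementId, k -> new LinkedHashSet<>())
                  .add(testCaseId);
    }

    // When a requirement changes, these are the tests to revisit.
    public Set<String> testsFor(String requirementId) {
        return reqToTests.getOrDefault(requirementId, Collections.emptySet());
    }

    // Requirements with no covering test: candidates for new test cases.
    public Set<String> uncovered(Collection<String> allRequirements) {
        Set<String> missing = new LinkedHashSet<>(allRequirements);
        missing.removeAll(reqToTests.keySet());
        return missing;
    }

    public static void main(String[] args) {
        TraceabilityMatrix m = new TraceabilityMatrix();
        m.link("REQ-1", "TC-101");
        m.link("REQ-1", "TC-102");
        m.link("REQ-2", "TC-201");
        System.out.println(m.testsFor("REQ-1"));
        System.out.println(m.uncovered(List.of("REQ-1", "REQ-2", "REQ-3")));
    }
}
```

Tracing in both directions, from a changed requirement to its tests, and from the full requirement list to uncovered gaps, is the practical payoff of maintaining such a table.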

Software Development Life Cycle (SDLC): This is nothing but a model used to achieve efficient results in software companies, and it consists of six main phases. We will discuss each phase in a descriptive fashion, dissected into four parts: tasks, roles, process, and proof of the phase's completion. Remember, SDLC is similar to the waterfall model, where the output of one phase acts as the input for the next phase of the cycle.

Initial phase

Analysis phase

Design phase

Coding phase

Testing phase

Delivery and Maintenance

Initial Phase / Requirements Phase: Task: Interacting with the customer and gathering the requirements.

Roles : Business Analyst and Engagement Manager (EM).

Process: First, the BA takes an appointment with the customer, collects the requirements template, meets the customer, gathers the requirements, and comes back to the company with the requirements document. Then the EM goes through the requirements document, tries to find additional requirements, gets a prototype developed (a dummy, similar to the end product) in order to pin down exact details in the case of unclear requirements or confused customers, and also deals with any excess cost of the project.

Proof: The requirements document is the proof of completion of the first phase of SDLC.

Alternate names of the Requirements Document :

(Various companies and environments use different terminologies, but the logic is the same.)

FRS: Functional Requirements Specification; CRS: Customer/Client Requirements Specification; URS: User Requirements Specification; BRS: Business Requirements Specification; BDD: Business Design Document; BD: Business Document.

Analysis Phase: Tasks:

Feasibility study

Tentative planning

Technology selection

Requirements analysis

Roles : System Analyst (SA)

Project Manager (PM)

Technical manager (TM)

Process: A detailed study of the requirements, judging their possibilities and scope, is known as a feasibility study. It is usually done by manager-level teams. In the next step we move to tentative scheduling of staff to initiate the project, and we select a suitable technology to develop the project effectively (the customer's choice is given first preference, if it is feasible). Finally, the hardware, software, and human resources required are listed out in a document to baseline the project.

Proof: The proof document of the analysis phase is the SRS (System Requirements Specification).

Design Phase : Tasks : High level Designing (H.L.D)

: Low level Designing (L.L.D)

Roles : Chief Architect ( handle HLD )

: Technical lead ( involved in LLD)

Process: The chief architect divides the whole project into modules by drawing graphical layouts using the Unified Modeling Language (UML). The technical lead further divides those modules into sub-modules, also using UML. Both are responsible for visioning the GUI (the graphical user interface, the screen where the user interacts with the application) and for developing the pseudo code (a dummy code, usually a set of English instructions, to help the developers in coding the application).

Coding Phase : Task : Developing the programs

Roles: Developers/programmers

Process: The developers take the support of the technical design document and follow coding standards while the actual source code is developed. Some industry-standard coding practices include indentation, color coding, commenting, etc.

Proof : The proof document of the coding phase is Source Code Document (SCD).

Testing Phase : Task : Testing

Roles : Test engineers, Quality Assurance team.

Process : Since this is the core of our article, we will look at the testing process in an IT environment in a descriptive fashion.

• First, the requirements document is received by the testing department.
• The test engineers review the requirements in order to understand them.
• While reviewing, if any doubts arise, the testers list all the unclear requirements in a document named the Requirements Clarification Note (RCN).
• They then send the RCN to the author of the requirements document (i.e., the Business Analyst) in order to get clarifications.
• Once the clarifications are received, the testing team takes the test case template and writes the test cases (test cases like Example 1 above).
• Once the first build is released, they execute the test cases.
• While executing the test cases, if they find any defects, they report them in the Defect Profile Document (DPD).
• Since the DPD is in a common repository, the developers are notified of the status of each defect.
• Once the defects are sorted out, the development team releases the next build for testing and updates the status of the defects in the DPD.
• The testers then check for the previous defects, related defects, and new defects, and update the DPD.

Proof : The last two steps are carried out until the product is defect free, so a quality-assured product is the proof of the testing phase (and that is why it is such an important stage of the SDLC in modern times).
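The defect-reporting loop above can be sketched in code. This is a minimal, hypothetical model of a DPD record and its status updates, not any real tool's API; the field names and statuses are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """One row of a hypothetical Defect Profile Document (DPD)."""
    defect_id: int
    summary: str
    status: str = "New"   # New -> Fixed -> Closed (or Reopened)

dpd = []  # the common repository visible to developers and testers

def report_defect(defect_id, summary):
    """Tester reports a defect into the DPD."""
    dpd.append(Defect(defect_id, summary))

def update_status(defect_id, status):
    """Developer or tester updates the defect's status in the DPD."""
    for defect in dpd:
        if defect.defect_id == defect_id:
            defect.status = status

report_defect(101, "Deposit screen rejects a valid amount")
update_status(101, "Fixed")   # developer fixes it in the next build
update_status(101, "Closed")  # tester verifies the fix and closes it
print(dpd[0].status)  # Closed
```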

Delivery & Maintenance Phase : Tasks : Delivery, post-delivery maintenance

Roles : Deployment engineers

Process : Delivery : The deployment engineers go to the customer environment, install the application there, and submit the application along with the appropriate release notes to the customer. Maintenance : Once the application is delivered, the customer starts using it. If any problem occurs during use, it becomes a new task, and based on the severity of the issue the corresponding roles and process are formulated. Some customers may expect continuous maintenance; in that case a team of software engineers takes care of the application regularly.

Software Testing Methods & Levels of Testing. There are three methods of testing: black box, white box, and grey box. Let's see each.

Black Box testing : If one performs testing only on the functional part of an application (where end users can perform actions), without knowledge of its structure, then that method of testing is known as black box testing. Test engineers usually fall into this category.

White Box testing : If one performs testing on the structural part of the application, then that method of testing is known as white box testing. Usually developers or dedicated white box testers are the ones who do it successfully.

Grey Box testing : If one performs testing on both the functional and the structural parts of an application, then that method of testing is known as grey box testing. It is an older way of testing, is not as effective as the previous two methods, and has been losing popularity recently.

Levels of Testing : There are five levels of testing in a software environment. They are as follows.

[Image: Systematic & simultaneous levels of testing. The ticked options are the ones where black box testers are required; the others are usually performed by the developers.]

Unit level testing : A unit is defined as the smallest part of an application. In this level of testing, each and every program is tested in order to confirm whether its conditions, functions, loops, etc. are working fine. Usually the white box testers or developers are the performers at this level.
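A unit test at this level might look like the following sketch, using Python's standard `unittest` module. The `grade` function and its boundaries are hypothetical, chosen only to show conditions being exercised one by one:

```python
import unittest

def grade(score):
    """Unit under test: map a numeric score to a letter grade."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

class GradeTest(unittest.TestCase):
    def test_boundaries(self):
        # Boundary-value checks on each condition inside the unit.
        self.assertEqual(grade(90), "A")
        self.assertEqual(grade(89), "B")
        self.assertEqual(grade(75), "B")
        self.assertEqual(grade(74), "C")

# Run the unit tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(GradeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```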

Module level testing : A module is defined as a group of related features that perform a major task. At this level the modules are sent to the testing department and the test engineers validate the functional part of each module.

Integration level testing : At this level the developers build the interfaces needed to integrate the tested modules. While integrating the modules, they test whether the interfaces that connect the modules are working functionally.

Usually, the developers opt to integrate the modules using one of the following methods.

• Top-down approach : The parent modules are developed first and then the corresponding child modules. While integrating the modules, if any mandatory module is missing it is replaced with a temporary program called a stub, to facilitate testing.
• Bottom-up approach : The child modules are developed first and integrated back into the parent modules. While integrating the modules this way, if any mandatory module is missing it is replaced with a temporary program known as a driver, to facilitate testing.
• Hybrid or sandwich approach : Both the top-down and bottom-up approaches are mixed, for various reasons.
• Big bang approach : One waits until all the modules are developed and integrates them all at once at the end.

System Level testing : Arguably, the core of testing happens at this level. It is a major phase because, depending on the requirements and the affordability of the company, automation testing, load, performance, and stress testing, etc. are carried out here, which demands additional skills from a tester.
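The stub idea from the top-down approach above can be sketched in code. This is a minimal, hypothetical example: the "interest calculator" child module is assumed to be unfinished, so a stub with a fixed return value stands in for it while the parent's integration point is tested (a driver in bottom-up integration is the mirror image, a throwaway caller for a finished child):

```python
def interest_stub(balance):
    """Stub: stands in for the missing child module, returning a fixed value."""
    return 10.0

def monthly_statement(balance, interest_fn):
    """Parent module: integrates with the child through interest_fn."""
    return balance + interest_fn(balance)

# Exercise the parent's integration point using the stub.
print(monthly_statement(500.0, interest_stub))  # 510.0
```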

System : Once the stable, complete application is installed into an environment, the whole (environment + application) can be called a system.

At the system level, the test engineers perform many different types of testing, but the most important one is the following.

System Integration Test : In this type of testing one performs actions on the modules that were integrated by the developers in the previous phase, and simultaneously checks whether the effects are reflected properly in the related, connected modules.

Example 2 : Let's take an ATM machine application with the following modules:

Welcome screen, Balance inquiry screen, Withdrawal screen & Deposit screen.

Assuming that these four modules were integrated by the developers, black box testers can perform system integration testing like this:

Test case 1:

Check Balance: Let's say amount is X

Deposit amount: Let's say amount is Y

Check balance: Expected Value

If the actual value is X + Y, then it equals the expected value. And because EV = AV, the result is PASS.
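Test case 1 above can be scripted against a stand-in for the integrated modules. The `ATM` class and its starting balance are hypothetical, a sketch of the balance-enquiry and deposit screens only:

```python
class ATM:
    """Minimal stand-in for the integrated ATM modules."""
    def __init__(self, balance):
        self.balance = balance

    def check_balance(self):
        return self.balance

    def deposit(self, amount):
        self.balance += amount

atm = ATM(balance=100)     # X = 100
x = atm.check_balance()    # step 1: check balance
atm.deposit(50)            # step 2: deposit Y = 50
expected = x + 50          # EV = X + Y
actual = atm.check_balance()  # step 3: check balance again
print("PASS" if actual == expected else "FAIL")  # PASS
```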

User Acceptance Level Testing:

At this level the user is invited and testing is carried out in his presence. It is the final testing before the user signs off and accepts the application. Whatever the user desires, the corresponding features need to be tested functionally by the black box testers or senior testers on the project.

Types of Testing : Broadly, there are two categories into which all the available types of software testing can be classified: static testing and dynamic testing. Static testing is testing in which no actions are performed on the application; features like the GUI and appearance-related checks come under this category. Dynamic testing is testing in which the user needs to perform some actions on the application, such as functionality checking and link checking.

Initially there were very few popular types of testing, and they were lengthy and manual. But as applications become more and more complex, it is inevitable that not only features, functionality, and appearance but also performance and stability are major areas to concentrate on.

The following types are the ones most in use, and we will discuss each of them one by one.

Build Acceptance Test / Build Verification / Sanity testing (BAT) : A type of testing in which one performs an overall check of the application to validate whether it is fit for further detailed testing. The testing team usually conducts this on high-risk applications before accepting a build from the development department.

The two terms smoke testing and sanity testing have long been debated without a fixed definition. The majority feel that when developers test the overall application before releasing it to the testing team, that is smoke testing, and the further checks performed by black box test engineers are sanity testing. But the definitions are also cited the other way around.

Regression Testing : A type of testing in which one tests already-tested functionality again and again. As the name suggests, regression is revisiting the same functionality of a feature, and it happens in the following two cases. Case 1 : The test engineers find some defects and report them to the developers; the development team fixes the issues and releases the next build. Once the next build is released, the testers check, as per the requirements, whether the defect is fixed, and also whether the related features, which might have been affected while fixing the defect, are still working fine.

Example 3 : Let's say you wanted a bike with 17" tyres instead of 15" and sent the bike to the service center to change them. Before sending it, you tested all the other features and were satisfied with them. Once the bike is returned with the new tyres, you check the overall look and feel, whether the brakes still work as expected, whether the mileage maintains its expected value, and all the related features that could be affected by this change. Testing the tyres in this case is new testing, and all the other checks fall under regression testing.

Case 2 : A new feature is added to the application in the middle of development and released for testing. The test engineers may additionally need to check all the features related to the newly added feature. In this case too, everything except the new functionality comes under regression testing.
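Case 1 above can be sketched as a tiny suite: one new test verifies the fix, and the existing checks on related features are re-run as regression tests. The `login` and `transfer` functions and their rules are hypothetical:

```python
def login(user, password):
    """Feature that was just fixed: a hypothetical login check."""
    return user == "alice" and password == "s3cret"

def transfer(amount, balance):
    """Related feature that must still work after the login fix."""
    return balance - amount if 0 < amount <= balance else balance

# New test: verifies the fixed defect in the new build.
assert login("alice", "s3cret") is True

# Regression tests: re-run the existing checks on related features.
assert login("alice", "wrong") is False
assert transfer(30, 100) == 70     # normal transfer still works
assert transfer(500, 100) == 100   # overdraft still rejected

print("regression suite passed")
```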

Retesting : A type of testing in which one tests the same functionality again and again with multiple sets of data in order to conclude whether it is working fine or not.

Example 4 : Let's say the customer wants a bank application whose login screen must have a password field that accepts only alphanumeric data (e.g. abc123, 56df) and no special characters (*, $, #, etc.). In this case one needs to test the field with all possible combinations and different sets of data to conclude that the password field is working fine. Such repeated testing of a single feature is called retesting.
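Example 4 can be scripted as retesting with many data sets against one rule. The validation function is a hypothetical stand-in for the password field, assuming "alphanumeric only" means letters and digits with nothing else:

```python
def is_valid_password(value):
    """Hypothetical password rule: alphanumeric only, no special characters."""
    return value.isalnum()

# Retesting: the same functionality exercised with multiple sets of data.
accept = ["abc123", "56df", "XYZ9"]
reject = ["ab*c", "pa$$word", "a#1", ""]

for value in accept:
    assert is_valid_password(value), f"should accept {value!r}"
for value in reject:
    assert not is_valid_password(value), f"should reject {value!r}"

print("retesting complete: all data sets behaved as expected")
```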

Alpha Testing & Beta Testing : Both are performed in the user acceptance testing phase. If the customer visits the company and the company's own test engineers perform the testing on the company's premises, it is referred to as alpha testing.

If the testing is carried out in the client's environment by the end users or third-party test experts, it is known as beta testing. (Remember that both types happen before the actual implementation of the software in the client's environment, hence the term user acceptance.)

Installation Testing : A type of testing in which one installs the application into an environment by following the guidelines given in the deployment document / installation guide. If the installation is successful, one concludes that the installation guide and its instructions are correct and appropriate for installing the application; otherwise, the problems are reported against the deployment document.

One main point to note: in this type of testing we are checking the user manual, not the product (i.e., the installation/setup guide, not the application's capability to install).

Compatibility testing : A type of testing in which one installs the application into multiple environments, prepared with different combinations of environmental components, in order to check whether the application is suitable for those environments. This type of testing is usually carried out on products rather than projects.

Monkey Testing : A type of testing in which abnormal actions are performed intentionally on the application in order to check its stability. Remember it is different from stress testing or load testing: here we concentrate on the stability of the features and functionality under the extreme range of actions that different kinds of users might perform on the application.
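A crude monkey test can be sketched by throwing random junk at a feature and checking that it never crashes. The `parse_amount` function is a hypothetical stand-in for a deposit-amount field; the seed is fixed so the random "monkey" is reproducible:

```python
import random
import string

def parse_amount(text):
    """Feature under test: parse a deposit amount; must never crash."""
    try:
        value = float(text)
        return value if value > 0 else None
    except ValueError:
        return None

random.seed(42)  # reproducible monkey
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    parse_amount(junk)  # abnormal input: must not raise, whatever it is

print("survived 1000 random inputs")
```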

Usability Testing : In this kind of testing we check the user-friendliness of the application. Depending on the complexity of the application, one needs to test whether information about all the features is easily understandable and whether the navigation of the application is easy to follow.

Exploratory Testing : A type of testing in which domain experts (with knowledge of the business and its functions) test the application without knowledge of the requirements, exploring the functionality as they go. By the simple definition of exploring, it means having a minimal idea about something and then doing something related to it in order to learn more about it.

End to End Testing : A type of testing in which we test various end-to-end scenarios of an application; that is, performing sequences of operations on the application the way different users would in a real-time scenario.

Example 5 : Let's take a bank application and consider the different end-to-end scenarios that exist within it. Scenario 1 : Login, balance enquiry, logout.

Scenario 2 : Login, Deposit, Withdrawal & logout.

Scenario 3 : Login, Balance enquiry, Deposit, Withdrawal, Logout. etc.
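Scenario 3 above can be scripted as a single end-to-end run. The `BankApp` class is a hypothetical stand-in for the application, with an assumed starting balance; the scenario passes only if the whole sequence behaves consistently:

```python
class BankApp:
    """Minimal stand-in for the bank application under test."""
    def __init__(self):
        self.balance = 100
        self.logged_in = False

    def login(self):
        self.logged_in = True

    def logout(self):
        self.logged_in = False

    def balance_enquiry(self):
        assert self.logged_in, "must be logged in"
        return self.balance

    def deposit(self, amount):
        assert self.logged_in, "must be logged in"
        self.balance += amount

    def withdraw(self, amount):
        assert self.logged_in, "must be logged in"
        self.balance -= amount

def scenario_3(app):
    """Scenario 3: login, balance enquiry, deposit, withdrawal, logout."""
    app.login()
    before = app.balance_enquiry()
    app.deposit(50)
    app.withdraw(20)
    after = app.balance_enquiry()
    app.logout()
    return after == before + 50 - 20

print(scenario_3(BankApp()))  # True
```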

Security Testing : A type of testing in which one tests whether the application is secure. To do so, a black box test engineer concentrates on the following types of testing.

• Authentication Testing : One tries to enter the application with different combinations of usernames and passwords in order to check whether the application allows only authorized users.
• Direct URL testing : One enters the direct URLs (Uniform Resource Locators) of secured pages and checks whether the application allows access to those areas.
• Firewall Leakage testing : "Firewall" is a widely misunderstood word; in general it means a barrier between two levels. (The term is said to come from an old African story in which people put fire around themselves while sleeping to keep red ants away.) In this type of testing, one enters the application as one level of user (e.g. member) and tries to access the pages of a higher level of user (e.g. admin), in order to check whether the firewalls are working fine.

Port testing : A type of testing in which one installs the application into the client's environment and checks whether it is compatible with that environment. (According to the requirements, one needs to find what kind of environment needs to be tested, whether it is a product or a project, etc.)

Soak Testing / Reliability testing : A type of testing in which one tests the application continuously for a long period of time in order to check its stability.
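The direct-URL and firewall-leakage checks described above can be sketched as an authorization test. The page-to-role table and the URLs are hypothetical, a stand-in for a web application's access rules:

```python
# Hypothetical page-level access table for a web application.
PAGE_ROLES = {
    "/home": {"member", "admin"},
    "/admin/users": {"admin"},
}

def can_access(role, url):
    """Authorization check: may this role open this URL directly?"""
    return role in PAGE_ROLES.get(url, set())

# Direct-URL test: a member typing an admin URL must be refused.
assert can_access("member", "/home")
assert not can_access("member", "/admin/users")
# Firewall-leakage test: the higher-level user still gets through.
assert can_access("admin", "/admin/users")

print("direct URL and firewall leakage checks passed")
```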

Mutation testing : A type of testing in which one tests the application or its related factors by making some changes to the logic or to the layers of the environment. (Refer to the environments knol for details.) Anything from the functionality of a feature to the application's overall flow can be tested with different combinations of environment.

Adhoc testing : A type of testing in which the test engineers test in their own style after understanding the requirements clearly. (Note that in exploratory testing the domain experts do not have knowledge of the requirements, whereas here the test engineers have the expected values set.)

Continuation :

The remaining chapters are indexed below and are published as separate knols. This is to keep each knol under 15-20 pages for easy management. If you are unable to find the knols, please click on the author name and visit the list of knols I have published.

Sub Index : Environments

Software testing life cycle

Bug life cycle

ISTQB Certification syllabi and papers

Automation testing. (Under revision ETA 02/28/09)

Performance testing

QuickTest Professional VBScript. (Under revision, ETA 02/25/09)

Quality Center. (unpublished draft proofreading due. ETA ..02/09)

References

1. Analysis_speedo.png image, Netsavvy.com, 2008. Retrieved 29/12/2008.
2. SDLC image (JPG format), [email protected], OIW.com / bp2.blogger.com. Retrieved 18/12/2008.
3. Heartfelt thanks to Mr Suresh Reddy at NRSTT Hyderabad, my testing mentor, without whose support this article would not have been possible. 2009.
4. Beizer, B. (1990). Software Testing Techniques, 2nd edition. Van Nostrand Reinhold: Boston. (ISTQB 1, 4, 5)
5. Copeland, L. (2003). A Practitioner's Guide to Software Test Design. Artech House. (ISTQB 2, 6)
6. Gilb, T. & Graham, D. (1993). Software Inspection. Addison-Wesley: London; Hatton, L. (1997). 'Reexamining the Fault Density-Component Size Connection', IEEE Software, vol. 14, issue 2, March 1997, pp. 89-97. (ISTQB 3)
7. Van Veenendaal, E. (2004). The Testing Practitioner, 2nd edition. UTN Publishing; van Veenendaal, E. and van der Zwan, M. (2000). 'GQM Based Inspections', Proceedings of the 11th European Software Control and Metrics Conference (ESCOM), Munich, May 2000. (ISTQB 3)
8. Broekman, B. and Notenboom, E. (2003). Testing Embedded Software. Addison-Wesley. (ISTQB 4)
9. Black, R. (2001). Managing the Testing Process, 2nd edition. John Wiley & Sons: New York. (ISTQB 5)
10. Buwalda, H., Janssen, D. & Pinkster, I. (2001). Integrated Test Design and Automation. Addison-Wesley. (ISTQB 6)


Author: Vinayak Rao

Last edited: Mar 14, 2009.
