
→ Different models (Waterfall, V-Model, Spiral, etc.)
→ Maturity models

1. Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, and to enable the customer to decide whether or not to accept the system. It is usually performed by the customer.
2. Accessibility Testing: Type of testing which determines the usability of a product for people with disabilities (deaf, blind, mentally disabled, etc.). The evaluation process is conducted by persons with disabilities.
3. Active Testing: Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing teams.
4. Agile Testing: Software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of the customers who will use the system. It is usually performed by the QA teams.
5. Age Testing: Type of testing which evaluates a system's ability to perform in the future. The evaluation process is conducted by testing teams.
6. Ad-hoc Testing: Testing performed without planning or documentation; the tester tries to 'break' the system by randomly exercising its functionality. It is performed by the testing teams.
7. Alpha Testing: Testing of a software product or system conducted at the developer's site. It is usually performed by the end user.
8. Assertion Testing: Type of testing that consists of verifying whether the conditions conform to the product requirements. It is performed by the testing teams (a minimal sketch appears after this list).
9. API Testing: Testing technique similar to unit testing in that it targets the code level. API testing differs from unit testing in that it is typically a QA task rather than a developer task (see the sketch after this list).
10. All-pairs Testing: Combinatorial testing method that covers every possible pair of input parameter values rather than every full combination. It is performed by the testing teams (see the sketch after this list).
11. Automated Testing: Testing technique that uses automation tools to control environment set-up, test execution, and results reporting. It is performed by a computer and is used within the testing teams.
12. Basis Path Testing: A testing mechanism which derives a logical complexity measure of a procedural design and uses it as a guide for defining a basic set of execution paths. It is used by testing teams when defining test cases.
13. Backward Compatibility Testing: Testing method which verifies the behavior of the developed software against older versions of the test environment. It is performed by testing teams.
14. Beta Testing: Final testing before releasing an application for commercial use. It is typically done by end users or others.
15. Benchmark Testing: Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.
16. Big Bang Integration Testing: Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.
17. Binary Portability Testing: Technique that tests an executable application for portability across system platforms and environments, usually for conformance to an ABI (application binary interface) specification. It is performed by the testing teams.
18. Boundary Value Testing: Software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams (see the sketch after this list).
19. Bottom-Up Integration Testing: In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules which go towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.
20. Branch Testing: Testing technique in which all branches in the program source code are tested at least once. This is done by the developer (see the sketch after this list).
21. Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail. It is performed by testing teams.
22. Black-box Testing: A method of software testing that verifies the functionality of an application without specific knowledge of the application's code or internal structure. Tests are based on requirements and functionality. It is performed by QA teams.
23. Code-driven Testing: Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams (see the sketch after this list).
24. Compatibility Testing: Testing technique that validates how well software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.
25. Comparison Testing: Testing technique which compares the product's strengths and weaknesses with previous versions or other similar products. It can be performed by testers, developers, product managers, or product owners.
26. Component Testing: Testing technique similar to unit testing but with a higher level of integration; testing is done in the context of the application instead of just directly testing a specific method. It can be performed by testing or development teams.
27. Configuration Testing: Testing technique which determines the minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives, and CPU. It is usually performed by performance testing engineers.
28. Condition Coverage Testing: Type of software testing where each condition is exercised as both true and false at least once. It is typically performed by the automation testing teams.
29. Compliance Testing: Type of testing which checks whether the system was developed in accordance with standards, procedures, and guidelines. It is usually performed by external companies which offer the "Certified OGC Compliant" brand.
30. Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module, or database records. It is usually done by performance engineers.
31. Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. It is usually performed by testing teams.
32. Context-Driven Testing: An agile testing technique that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization at a specific moment. It is usually performed by agile testing teams.
33. Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. It is usually performed by the QA teams.
34. Decision Coverage Testing: Type of software testing where each decision is exercised as both true and false at least once. It is typically performed by the automation testing teams.
35. Destructive Testing: Type of testing in which the tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behaviour under different loads. It is usually performed by QA teams.
36. Dependency Testing: Testing type which examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality. It is usually performed by testing teams.
37. Dynamic Testing: Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.
38. Domain Testing: White-box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.
39. Error-Handling Testing: Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.
40. End-to-End Testing: Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.
41. Endurance Testing: Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.
42. Exploratory Testing: Black-box testing technique performed without planning and documentation. It is usually performed by manual testers.
43. Equivalence Partitioning Testing: Software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived. It is usually performed by the QA teams (see the sketch after this list).
44. Fault Injection Testing: Element of a comprehensive test strategy that enables the tester to concentrate on the manner in which the application under test is able to handle exceptions. It is performed by QA teams (see the sketch after this list).
45. Formal Verification Testing: The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.
46. Functional Testing: Type of black-box testing that bases its test cases on the specifications of the software component under test. It is performed by testing teams.
47. Fuzz Testing: Software testing technique that provides invalid, unexpected, or random data to the inputs of a program (see the sketch after this list) - a special area of
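
Item 8 (assertion testing): the idea is to encode each product requirement as an executable check. A minimal Python sketch; the `apply_discount` function is a hypothetical stand-in for the product under test.

```python
# Minimal assertion-testing sketch: each assert checks one product
# requirement against the observed behavior of a hypothetical function.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Requirement: a 0% discount leaves the price unchanged.
assert apply_discount(100.0, 0) == 100.0
# Requirement: a 100% discount makes the item free.
assert apply_discount(100.0, 100) == 0.0
# Requirement: out-of-range input is rejected.
try:
    apply_discount(100.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for percent > 100")

print("all assertions passed")
```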
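Item 9 (API testing): an API test exercises a service's code-level contract rather than its UI. A minimal Python sketch using the `requests` library; the base URL and the `/users/{id}` endpoint are assumptions for illustration, not a real service.

```python
# Minimal API-testing sketch using the `requests` library.
import requests

BASE_URL = "https://api.example.com"  # assumption: the service under test


def test_get_user_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    # Status code, content type, and body shape are all part of the contract.
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    body = resp.json()
    assert body["id"] == 42
    assert "email" in body
```

A runner such as pytest would collect and execute a `test_`-prefixed function like this one against the running service.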
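Item 10 (all-pairs testing): instead of running the full Cartesian product, a pairwise suite keeps only enough rows to cover every pair of parameter values. A minimal greedy Python sketch with a made-up configuration space; dedicated pairwise tools are used in practice.

```python
# Minimal all-pairs sketch: greedily pick rows from the full Cartesian
# product until every pair of parameter values is covered.
from itertools import combinations, product

params = {                      # hypothetical configuration space
    "os":      ["linux", "windows", "macos"],
    "browser": ["chrome", "firefox"],
    "locale":  ["en", "de", "ja"],
}

names = list(params)
all_rows = list(product(*params.values()))

def pairs(row):
    """All (parameter, value) pairs covered by one row."""
    labeled = list(zip(names, row))
    return set(combinations(labeled, 2))

uncovered = set().union(*(pairs(r) for r in all_rows))
suite = []
while uncovered:
    # Pick the row that covers the most still-uncovered pairs.
    best = max(all_rows, key=lambda r: len(pairs(r) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} tests instead of {len(all_rows)}")
for row in suite:
    print(dict(zip(names, row)))
```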
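Item 18 (boundary value testing): tests sit on and immediately around the edges of a valid range, where off-by-one defects cluster. A minimal Python sketch with a hypothetical validator that accepts ages 1..100.

```python
# Minimal boundary-value sketch: for a field accepting 1..100, test
# just inside, on, and just outside each boundary.

def boundary_values(lo: int, hi: int):
    """Classic boundary set for an inclusive integer range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_age(age: int) -> bool:
    """Hypothetical validator under test: accepts 1..100 inclusive."""
    return 1 <= age <= 100

for value in boundary_values(1, 100):
    expected = 1 <= value <= 100
    assert accepts_age(value) == expected, f"failed at boundary {value}"
print("boundary cases:", boundary_values(1, 100))
```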
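Item 20 (branch testing): at least one test is needed for each outcome of every decision point. A minimal Python sketch; the comment notes how branch coverage could be measured with coverage.py's `--branch` flag.

```python
# Minimal branch-testing sketch: the two asserts below force both the
# taken and the fall-through branch of the `if`. Branch coverage can be
# measured with coverage.py:  coverage run --branch tests.py

def classify(n: int) -> str:
    if n < 0:          # decision point with two branches
        return "negative"
    return "non-negative"

assert classify(-1) == "negative"      # exercises the taken branch
assert classify(0) == "non-negative"   # exercises the fall-through branch
```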
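Item 23 (code-driven testing): the xUnit family mentioned above includes Python's built-in `unittest`. A minimal sketch with a hypothetical `slugify` function as the code under test.

```python
# Minimal code-driven-testing sketch using Python's xUnit-style
# framework, unittest. Each test method checks one expectation.
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase_is_unchanged(self):
        self.assertEqual(slugify("ready"), "ready")

if __name__ == "__main__":
    unittest.main()
```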
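Item 43 (equivalence partitioning): one representative value per partition stands in for the whole class, on the assumption that all members of a partition behave alike; it complements the boundary-value sketch above. A minimal Python sketch reusing the hypothetical 1..100 age validator.

```python
# Minimal equivalence-partitioning sketch: the input domain of a
# hypothetical age validator splits into three partitions.

def accepts_age(age: int) -> bool:
    return 1 <= age <= 100   # valid partition: 1..100

partitions = {
    "below range (invalid)": -5,   # representative of age < 1
    "in range (valid)":      37,   # representative of 1..100
    "above range (invalid)": 250,  # representative of age > 100
}

for name, representative in partitions.items():
    expected = 1 <= representative <= 100
    assert accepts_age(representative) == expected, name
    print(f"{name}: tested with {representative}")
```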
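Item 44 (fault injection testing): a dependency is replaced with one that deliberately fails, so the exception-handling path can be tested directly. A minimal Python sketch; the rate-service functions are hypothetical.

```python
# Minimal fault-injection sketch: a failing dependency is injected to
# verify the caller's exception handling.

def fetch_rate(currency: str) -> float:
    raise NotImplementedError  # real version would call a remote service

def price_in(currency: str, amount: float, fetch=fetch_rate) -> float:
    """Falls back to 1.0 (no conversion) if the rate service fails."""
    try:
        rate = fetch(currency)
    except Exception:
        rate = 1.0
    return amount * rate

def always_fails(currency: str) -> float:
    raise TimeoutError("injected fault: rate service down")

# Inject the fault and check the degraded-but-defined behavior.
assert price_in("EUR", 10.0, fetch=always_fails) == 10.0
print("fault handled: fell back to identity rate")
```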
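Item 47 (fuzz testing): a fuzzer feeds a program large volumes of random or malformed input and watches for crashes rather than wrong answers. A minimal Python sketch with a hypothetical record parser as the target.

```python
# Minimal fuzzing sketch: throw random bytes at a parser and flag
# anything other than a clean accept or a clean ValueError.
import random

def parse_record(data: bytes) -> tuple:
    """Hypothetical parser under test: expects b'key=value'."""
    key, sep, value = data.partition(b"=")
    if not sep or not key:
        raise ValueError("malformed record")
    return key, value

random.seed(0)  # reproducible fuzzing run
for i in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(32)))
    try:
        parse_record(blob)
    except ValueError:
        pass                     # expected rejection of bad input
    except Exception as exc:     # anything else is a finding
        print(f"input {blob!r} crashed the parser: {exc!r}")
        raise
```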