8. Testing New Functionalities. Regression Testing. How to Test New Functionality?


8. Testing new functionalities. Regression testing
How to test new functionality?
Astea Solutions QA Team

Questions
● What is a Test Plan?
● What is regression testing?
● What are severity and priority?
● What is the Software Testing Lifecycle?
● What is smoke testing?

Why do we need a Test Plan?
● Writing a test plan guides our thinking
● Forces us to confront the challenges that await us
● Focuses our thinking on important topics
● Serves as a vehicle for communicating with other members of the team
● A test plan is not required:
○ for simple projects/features
○ if time is insufficient

Why do we need a Test Plan? (2)
● Repeatability
○ All testers should be able to execute the tests
○ Assures all critical elements are tested correctly
● Controllability
○ Knowledge of test data requirements, expected results, and what to run
● Coverage
○ Ensures adequate coverage

Elements of a Test Plan
● Test Scope
○ Defines what will be tested
● Test Objectives
○ Description of the expected (measurable) test result, priority
● Assumptions
○ Include skill level of testers, budget, starting state of the application, tools & equipment availability, etc.
● Risk Analysis
○ Things that could impact the ability to test

Elements of a Test Plan (2)
● Test Design
○ Identifies tests to run, stages to test
● Roles & Responsibilities
● Test Schedule & Resources
○ Major test activities, sequence of tests, estimates, dependence on other activities, people, tools
● Test Data Management
○ Methods for preparing test data, backup/rollback procedures

Elements of a Test Plan (3)
● Test Environment
○ Version control, hardware and software configurations, defect tracking tool, environment for each kind of testing
● Communication Approach
○ Meetings, processes, contact lists
● Test Tools
○ Automation, performance, etc.

Test Prioritization
Criteria for prioritizing test cases may include:
● Usage frequency of a function
● Probability of failure
● Visibility of a failure
● Priority of the requirements
● Customer priorities
● Project risk

Entry Criteria
Test entry criteria define when to start testing
● E.g., at the beginning of a test level or when a set of tests is ready for execution
Entry criteria may cover the following:
● Test environment availability and readiness
● Test tool readiness in the test environment
● Testable code availability
● Test data availability

Test Estimation
The test schedule depends on the release schedule
● In order to build a test schedule, we need test estimates
Test estimation includes:
● Time for writing test cases and preparing test data
● Time for executing test cases
● During test case creation, new unestimated test conditions may appear
● Test estimations are usually made by senior QAs

Estimation techniques
● If tasks are big, break them into smaller tasks
● Time for re-testing should be considered

Test Execution
Test execution consists of:
● Testing new functionalities
● Regression testing

Smoke test
● Is done before test execution starts
● Is a fast test
● Verifies that the system works
● Concentrates on finding blocking issues
● (See the minimal pytest sketch after the next slide)

Testing new functionalities
● A new test set is created for the feature, corresponding to that cycle of that release
● Regression testing is included
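As an illustration of the smoke-test idea above, here is a minimal pytest sketch. It assumes the build under test exposes an HTTP health endpoint and a login endpoint; the URL, endpoints, and credentials are hypothetical and not part of the original slides.

    import requests  # assumes the 'requests' package is installed

    BASE_URL = "http://localhost:8080"  # hypothetical address of the build under test

    def test_application_is_reachable():
        # Blocking-issue check: if the application does not respond at all, reject the build.
        response = requests.get(BASE_URL + "/health")
        assert response.status_code == 200

    def test_user_can_log_in():
        # Crucial functionality only; no deep verification of the login flow.
        response = requests.post(BASE_URL + "/login", json={"user": "qa", "password": "qa"})
        assert response.status_code == 200

Such a file would be run (e.g., with "pytest smoke_test.py") on every new build; if any check fails, the build is rejected before detailed testing begins.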
How to Effectively Test a Functionality?
● Read the requirements or specification corresponding to that feature thoroughly
● Develop the test cases specifically to test that functionality
● Know how the functionality has been implemented; this gives clarity on the impacted areas
● Start testing the feature early in the cycle, report the defects, and repeat the same process throughout the release builds

Regression testing
Regression:
● Happens after system changes
● It can be functional or non-functional
What causes regression?
● Common code changed incorrectly
● Incorrect version control (merge conflicts)
● Incorrect bug fixes

Regression testing (2)
When should you do regression testing?
● At all levels: unit testing, integration testing, system testing, etc.
● At least one complete regression test before system deployment to production
Which test cases do you execute in regression testing?
● Select only test cases that cover the impacted components of the system (if time is short)

Regression testing suites
Fixed regression suite, covering all functionalities:
● Fixed time to execute
● All functionalities are tested
● Changed functionalities might not be sufficiently tested
Variable suite, covering changed functionalities:
● Estimates are needed for every execution
● Can miss bugs if the analysis of affected functionalities is incorrect
● (See the small selection sketch after the Summary slide)

Summary
● What are the elements of a Test Plan?
● When do we start testing?
● How do we test new functionality?
● Regression testing suites?

QUESTIONS.
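To make the variable-suite idea concrete, below is a small Python sketch that selects only the test cases covering changed components. The test-case records and component names are hypothetical, purely for illustration; in practice the changed components would come from impact analysis of the release.

    # Hypothetical inventory of test cases and the components each one covers.
    test_cases = [
        {"id": "TC-101", "covers": {"login"}},
        {"id": "TC-205", "covers": {"checkout", "cart"}},
        {"id": "TC-310", "covers": {"payments"}},
    ]

    # Components touched by the current change set.
    changed_components = {"payments", "checkout"}

    # Variable suite: only test cases whose coverage intersects the changed components.
    variable_suite = [tc["id"] for tc in test_cases if tc["covers"] & changed_components]

    print(variable_suite)  # ['TC-205', 'TC-310']; a fixed suite would run all test cases

The trade-off mirrors the slides: the fixed suite has a predictable cost and covers everything, while the variable suite is cheaper per execution but depends on the accuracy of the affected-functionality analysis.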