Improving Software Testing in an Agile Environment

16 pages · PDF, 1020 KB

Degree project in Computer Science and Engineering, second cycle, 30 credits
Stockholm, Sweden, 2020
Jérôme de Chauveron
KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science

Abstract

Software development has evolved at an ever-increasing pace over the past years; one of the forces behind this acceleration is the move from on-premise applications to cloud-based software: Software as a Service (SaaS). Cloud computing has changed the way applications are deployed, used, and tested, and time-to-market for software has decreased from the order of months to the order of days. Furthermore, source code management based on Git (introduced in 2005) has changed the way software is developed, enabling collaborative work thanks to automatic merging and versioning. In addition, continuous integration (CI) tools built on top of Git facilitate regular testing, building, and deployment. Nevertheless, although integration tools are necessary, they require an extensive amount of cloud resources, which can stretch the duration of code integration. The goal of this thesis is to optimize the speed of the CI pipeline and to improve the software tests while optimizing the load on the cloud resources.

As a result of the work in this thesis, the completion time of the software tests decreased by 21%, and the total continuous integration completion time decreased by 18%. Furthermore, new bugs and anomalies were detected thanks to the improved software tests and new approaches for emulating extreme scenarios. The bugs were corrected, making the system more resilient and improving the user experience.

Sammanfattning (Swedish summary, translated to English)

Software development has accelerated at an ever-increasing pace in recent years. One of the forces behind this acceleration is the transition from on-premise applications to cloud-based software: Software as a Service (SaaS). Cloud computing has changed the way applications are distributed, used, and tested. The time for software to reach the market has decreased from months to days. In addition, source code management based on Git (introduced in 2005) has changed how software is developed, enabling collaboration thanks to automatic merging and version control. Furthermore, integration tools (CI) built on top of Git facilitate regular testing, building, and deployment. Although these integration tools are necessary, they require an extensive amount of cloud resources that can stretch the duration of integration. The goal of this thesis is to optimize the speed of the CI pipeline and improve the software tests while optimizing the load on the cloud resources. As a result of the work in this thesis, the completion time of the software tests was reduced by 21%, and the total completion time of continuous integration by 18%. In addition, new bugs and anomalies were discovered thanks to improved software tests and new approaches to emulating extreme scenarios. The bugs were corrected, making the system more resilient with an improved user experience.

Acknowledgements

I would like to thank my supervisor Shatha Jaradat for her hard work, time, and guidance throughout this thesis. I would also like to thank my manager and colleagues at Dassault Systèmes for their cooperation, advice, and friendship. It has been a very challenging experience. I would like to thank my examiner Mihhail Matskin.
February 9, 2020
Jérôme de Chauveron

Contents

1 Introduction
  1.1 Motivation
  1.2 Background
  1.3 Problem
  1.4 Purpose
  1.5 Goal
  1.6 Research Methodology
  1.7 Delimitation
  1.8 Ethics and Sustainability
  1.9 Outline
2 Background and Related Work
  2.1 Background on Software Testing
  2.2 Pipeline Description
  2.3 Code and Testing Metrics
  2.4 Integration Testing & Functional Testing
  2.5 Fuzz Testing Background
  2.6 Load Testing Background
  2.7 Literature Review and Related Work
3 Methods
  3.1 Code Coverage for End-to-End Testing
  3.2 Automating Coverage for End-to-End Tests
  3.3 Background and Set-up Description
  3.4 Implementation Description
  3.5 Load Testing Tools Comparison
4 Experiments
  4.1 Functional Test Traces Analysis
  4.2 Parameters Influencing Failure Rate
  4.3 GitLab Runner Usage Optimization
  4.4 Troubleshooting and Fuzz Testing Results and Analysis
5 Conclusion and Future Work

List of Figures

1 Software testing pyramid [1]
2 Git practices description [3]
3 CI/CD pipeline description
4 Representation of process arrival in GitLab [6]
5 Example of cyclomatic number computation
6 Code example for Halstead complexity computation
7 Representation of an end-to-end test scenario [5]
8 Code example of a path unreachable via black-box testing
9 SAGE algorithm description [14]
10 Flow chart of the AFL algorithm
11 GAN architecture [21]
12 Instrumenting code: an example
13 HTML code coverage report
14 Troubleshooting setup description
15 Evolution of Katalon test duration
16 Pipeline execution time distribution
17 Job triggering pipeline failure distribution
18 Pipeline success as a function of the number of .ts files modified
19 Merge pipeline total execution time evolution
20 Example of an asynchronous request failure
21 Example of a linear size increase of factor 2 on a JSON of depth 2
22 Example of a recursive size increase of factor 2 on a JSON of depth 2

List of Tables

1 Comparison of different end-to-end testing tools
2 Comparison of the different tools used for troubleshooting
3 Comparison of the different inter-process communication tools
4 Comparison of the different tools for load testing
5 Correlation between code metrics and pipeline success rate

List of Acronyms

API    Application Programming Interface
AWS    Amazon Web Services
CI/CD  Continuous Integration / Continuous Deployment
HTTPS  Hypertext Transfer Protocol Secure
IDE    Integrated Development Environment
JSON   JavaScript Object Notation
OS     Operating System
QA     Quality Assurance
REST   REpresentational State Transfer
SSL    Secure Sockets Layer
TCP    Transmission Control Protocol
UI     User Interface
VM     Virtual Machine

1 Introduction

1.1 Motivation

The DevOps approach and the Agile methodology for software development have had a profound impact on the way software is developed, shipped, and deployed. The era of mainframes (1970-1980) was defined by technologies such as COBOL and Multiple Virtual Storage (MVS), with release cycles of 1-5 years and extremely high risk of failing to meet customer needs. Then came the client/server era (1990s), still with high risk and relatively slow release cycles of 3-12 months. The current cloud era is pushing release cycles down to the order of days.
This new approach pushes for faster release cycles and thus drives quality analysis and software deployment toward more automation, since manual testing is tedious, lengthy, and expensive. This thesis was conducted during the development of the UI of a web application, referred to as the web-app, with the purpose of improving software testing and the efficiency of the CI pipeline.

1.2 Background

Software quality is a leading concern for companies, as a bug can be extremely expensive in terms of the time spent correcting it and its impact on the client. To maintain a certain level of quality, tests are used to certify that the system can handle a typical user scenario (with different network configurations). Moreover, the system needs to remain functional even when given servers are unreachable, the network is highly congested, or an API returns an unexpected response. Those conditions may occur in a real-world scenario despite being very unlikely to arise in a testing environment.

The objective is to have the most relevant tests possible and to have them automated while using the minimum of resources in terms of VMs. Relevant means that the tests need to exercise as much of the system as possible and remain close to real-world usage. Moreover, the tests should assess that the system is functional in various scenarios.

The studied system contains multiple types of tests; the ones detailed in this thesis are the following:

• Functional tests
• Load tests
• Unit tests
• Chaos testing

To evaluate and improve the relevance of those tests we use the following tools (a sketch of the response-tampering proxy is given after Section 1.3 below):

• Code coverage: a metric indicating how much of the code is executed by a test (it can be applied to any type of test, but in our case it is designed for functional testing). It is the percentage of lines executed during the test, together with the line-by-line execution detail. The objective is to maximize the percentage of executed code and to complement the functional tests with unit tests for the parts of the code not executed during the functional tests.

• Mitmproxy: a proxy used to modify, on the fly, the responses returned by the server in order to test the resilience of the UI.

• Netem: a network emulation tool that adds delay, loss, and packet re-ordering, and breaks connections.

The code management platform used is GitLab, which enables us to extract traces of the activity and the pipeline history; these traces will be used to optimize the continuous integration pipeline.

1.3 Problem

A continuous integration pipeline enables software to be built, tested, and integrated into the master branch continuously, enabling more collaborative work and a faster release cycle. However, a CI pipeline is expensive to set up and maintain because of the different jobs required: deployment of the code, end-to-end tests, unit tests, and the build. All those jobs require VMs, and maintaining them is a tedious process. Moreover, the CI pipeline contains E2E tests that check the entire system, but these tests are lengthy in terms of execution time and hard to maintain. On top of this, the tests need to remain relevant and evolve with the system; otherwise a bug introduced by pushed code will not be detected, making it harder to locate and correct.
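As promised above, here is a minimal sketch of the kind of response-tampering Mitmproxy addon described in Section 1.2. The /api/ path filter, the failure rates, and the injected payloads are illustrative assumptions, not the actual rules used in the thesis.

```python
# inject_failures.py -- minimal Mitmproxy addon sketch.
# Assumptions: the /api/ path filter, the 10%/10% failure rates, and the
# injected payloads are illustrative only, not the thesis's configuration.
# Run with: mitmdump -s inject_failures.py
import random

from mitmproxy import http


class InjectFailures:
    def response(self, flow: http.HTTPFlow) -> None:
        # Only tamper with backend API calls; leave static assets untouched.
        if "/api/" not in flow.request.path:
            return
        roll = random.random()
        if roll < 0.1:
            # Simulate an unreachable or failing server.
            flow.response.status_code = 503
            flow.response.text = "service unavailable"
        elif roll < 0.2:
            # Simulate a malformed JSON payload that the UI must survive.
            flow.response.text = '{"truncated": '


addons = [InjectFailures()]
```

Routing the web-app's API traffic through mitmdump while the functional tests run then exercises the UI's error handling under server failures and malformed payloads.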
Recommended publications
  • Core Elements of Continuous Testing
    WHITE PAPER: CORE ELEMENTS OF CONTINUOUS TESTING

    Today's modern development disciplines, whether Agile, Continuous Integration (CI) or Continuous Delivery (CD), have completely transformed how teams develop and deliver applications. Companies that need to compete in today's fast-paced digital economy must also transform how they test. Successful teams know the secret sauce to delivering high-quality digital experiences fast is continuous testing. This paper defines continuous testing, explains why it is important, and describes the core elements and tactical changes development and QA teams need to make in order to succeed at this emerging practice.

    Table of contents: What is Continuous Testing? · Why Continuous Testing? · Core Elements of Continuous Testing · Tactical Engineering Considerations · Benefits of Continuous Testing

    WHAT IS CONTINUOUS TESTING? Continuous testing is the practice of executing automated tests throughout the software development cycle. It is more than just automated testing; it is applying the right level of automation at each stage in the development process. Unlike legacy testing methods that occur at the end of the development cycle, continuous testing occurs at multiple stages, including development, integration, pre-release, and in production. Continuous testing ensures that bugs are caught and fixed far earlier in the development process, improving overall quality while saving significant time and money.

    WHY CONTINUOUS TESTING? Continuous testing is a critical requirement for organizations that are shifting left towards CI or CD, both modern development practices that ensure faster time to market. When automated testing is coupled with a CI server, tests can instantly be kicked off with every build, and alerts with passing or failing test results can be delivered directly to the development team in real time.
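    As a loose illustration of the paper's point that tests are kicked off with every build and results are pushed to the team in real time, the sketch below runs a test suite and posts the outcome to a notification webhook. pytest and the NOTIFY_URL endpoint are assumptions made for this sketch; the white paper does not prescribe specific tools.

```python
# ci_test_gate.py -- illustrative only: pytest and the webhook URL below are
# hypothetical choices, not tools named in the white paper.
import json
import subprocess
import urllib.request

NOTIFY_URL = "https://chat.example.invalid/hooks/ci"  # hypothetical endpoint


def run_tests() -> int:
    # A CI server would invoke this step on every build.
    return subprocess.run(["pytest", "-q"]).returncode


def notify(returncode: int) -> None:
    # Deliver the pass/fail result to the team in real time.
    payload = json.dumps({"status": "passed" if returncode == 0 else "failed"})
    req = urllib.request.Request(
        NOTIFY_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)


if __name__ == "__main__":
    code = run_tests()
    notify(code)
    raise SystemExit(code)
```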
  • Kickstart Your Gatling Performance Testing
    Kickstart Your Gatling Performance Testing
    Siegfried Goeschl, Version 1.0.0, 2018-11-04

    Introduction: Siegfried Goeschl
    • Senior Software Engineer writing server-side code
    • Java Meetup Vienna co-organizer
    • Apache Software Foundation member
    • Currently working at Erste Bank Austria

    Introducing Gatling
    • Performance testing framework; tests are written in Scala
    • Developer-centric test tool; development started in 2010
    • Gatling 3.0.1 has just been released
    • Since v3 there are two license models: free & commercial
    • As you know, some guys have a strong opinion about OSS

    What Linus Says
    • Having said that, a commercial license could generate more revenue, keeping the Open Source version alive

    Money Makes the World Go Round

    Gatling vs. FrontLine
    • Gatling Open Source is under ASL 2.0
    • Gatling FrontLine is the enterprise edition: annual license or "pay as you go", web-based, more bells & whistles, real-time reporting

    Under the Hood
    • Supports HTTP 1.1/2.0 & JMS protocols
    • Response validation: regular expressions, XPath & JSONPath, CSS selectors
    • Provides a Domain Specific Language (DSL)
    • Uses an asynchronous non-blocking HTTP client
    • Integrates with Maven, SBT & Gradle
    • Test data feeders: CSV, JSON, JDBC, Redis
    • Management-friendly HTML reports
    • No more 1:1 mapping between virtual users and worker threads

    When to Use Gatling?
    • Want to write test code in your IDE?
    • Need some integration & performance tests?
    • Want to run those tests on your CI server?
    • Do you care about reviews and version control?

    Getting Started
    • JDK 1.8, Apache Maven 3.5.x, IntelliJ Community Edition, IntelliJ Scala Plugin
    • https://github.com/gatling/gatling-maven-plugin-demo
    • Import the Maven project into your IDE, then write and debug Scala code there
    • Execute Gatling tests on the command line; simple CI integration using Maven
    • The official Gatling distributable is not suited for development
  • Leading Practice: Test Strategy and Approach in Agile Projects
    CA SERVICES | LEADING PRACTICE
    Leading Practice: Test Strategy and Approach in Agile Projects

    Abstract: This document provides best practices on how to strategize testing CA Project and Portfolio Management (CA PPM) in an agile project. The document does not include specific test cases; the list of test cases and the steps for each test case are provided in a separate document. This document should be used by the agile project team that is planning the testing activities, and by end users who perform user acceptance testing (UAT).

    Concepts
    • Test Approach: defines the testing strategy, the roles and responsibilities of the various team members, and the test types.
    • Testing Environments: outlines which testing is carried out in which environment.
    • Testing Automation and Tools: addresses the test management and automation tools required for test execution.
    • Risk Analysis: defines the approach for risk identification and the plans to mitigate risks, as well as a contingency plan.
    • Test Planning and Execution: defines the approach to plan the test cases, test scripts, and execution.
    • Review and Approval: lists the individuals who should review, approve, and sign off on the test results.

    Test Approach: The test approach defines the testing strategy, the roles and responsibilities of the various team members, and the test types. The first step is to define the testing strategy. It should describe how and when the testing will be conducted, who will do the testing, the type of testing being conducted, the features being tested, the environment(s) where the testing takes place, the testing tools used, and how defects are tracked and managed. The testing strategy should be prepared by the agile core team.
  • Tero Suominen: Performance Testing REST APIs
    Information Technology, 2017

    PERFORMANCE TESTING REST APIS. Suominen, Tero. Satakunta University of Applied Sciences, Degree Programme in Computer Science, December 2017. Number of pages: 39. Appendices: 0. Keywords: REST, performance testing, automation.

    The subject of this thesis was performance testing REST APIs that had been implemented in a Java application. The purpose of this research was to come up with a method for measuring and testing the performance and functionality of the REST APIs within Profit Software. The research consisted of two parts. First, I searched for existing software capable of being used for testing REST APIs. After selecting the tool that would be used to create the performance tests, a local test environment was set up that allowed us to estimate the capability of the software and of the testing method itself. The local environment consisted of the same components and software that could be used in the already existing test environments within the company. This way, moving the tests from the local environment into the actual test environment went smoothly. With the help of this research we were able to detect issues with the functionality of some APIs when they were under load. We were able to fix these issues in the implementation during the development phase, and after changing the implementation we could verify that the APIs functioned correctly by using these same tests.

    (Finnish abstract, translated:) Performance testing of REST APIs. Suominen, Tero. Satakunta University of Applied Sciences, Degree Programme in Information Technology, December 2017. Pages: 39. Appendices: 0. Keywords: REST, performance testing, automation. The subject of this thesis was the performance testing of REST APIs implemented in a Java application.
  • LoadRunner Professional and LoadRunner Enterprise 2021 License Bundles
    Product Flyer: Application Delivery Management
    LoadRunner Professional and LoadRunner Enterprise 2021 License Bundles

    Community bundles and license bundles available for LoadRunner Professional and LoadRunner Enterprise 2021. Learn more at https://software.microfocus.com/products

    Community Bundles (Free of Charge)
    • Community: 50 Vusers for all protocols, except for Templates and GUI
    • Mobile UI: unlimited Vusers for the TruClient—Native Mobile protocol. Note: this bundle requires a Micro Focus Mobile Center license. For more details, see: https://software.microfocus.com/software/customer-technical-support-services

    License Bundles (bundle name: protocols)
    • .NET Record/Replay: Microsoft .NET
    • Database: ODBC; Oracle—2 Tier
    • Development: SDK; Unit Test (NUnit, JUnit, Selenium, LeanFT*)
    • GUI: Unified Functional Testing (UFT)
    • Java: Java Record Replay; Java over HTTP
    • Mobile and IoT: DevWeb; JMeter; Gatling; Web—HTTP/HTML; TruClient—Mobile Web; SMP (SAP Mobile Platform); MQTT (Internet of Things); CoAP (Internet of Things)
    • Network: DNS (Domain Name Resolution); FTP (File Transfer Protocol); IMAP (Internet Message Access Protocol); LDAP (Lightweight Directory Access Protocol); MAPI (Microsoft Exchange); POP3 (Post Office Protocol); SMTP (Simple Mail Transfer Protocol); Tuxedo; Windows Sockets (Winsock)
    • Oracle E-Business: DevWeb; JMeter; Gatling; Oracle NCA; Oracle—Web; Siebel—Web; Web—HTTP/HTML
    • Remote Access: Citrix ICA

    * LeanFT support is for LoadRunner Enterprise only. (Table continued on next page.) Contact us at: www.microfocus.com
  • Continuous Quality and Testing to Accelerate Application Development
    Continuous Quality and Testing to Accelerate Application Development
    How to assess your current testing maturity level and practice continuous testing for DevOps

    Table of contents: Introduction · Why Is Continuous Quality and Testing Maturity Important to DevOps? · Continuous Testing Engineers Quality into DevOps · Best Practices for Well-Engineered Continuous Testing · Continuous Testing Maturity Levels (Level 1: Chaos; Level 2: Continuous Integration; Level 3: Continuous Flow; Level 4: Continuous Feedback; Level 5: Continuous Improvement) · Continuous Testing Maturity Assessment · How to Get Started with DevOps Testing? · Continuous Testing in the Cloud (Choosing the Right Tools for Continuous Testing; On-Demand Development and Testing Environments with Infrastructure as Code; The Right Tests at the Right Time) · Get Started · Conclusion · About AWS Marketplace and DevOps Institute · Contributors

    Introduction: A successful DevOps implementation reduces the bottlenecks related to testing. These bottlenecks include finding and setting up test environments, test configurations, and implementing test results. These issues are not industry specific: they can be experienced in manufacturing, service businesses, and governments alike. They can be reduced by a thorough understanding and a disciplined, mature implementation of Continuous Testing and the related recommended engineering practices. The best place to start addressing these challenges is a good understanding of what Continuous Testing is. Marc Hornbeek, the author of Engineering DevOps, describes it as: "A quality assessment strategy in which most tests are automated and integrated as a core and essential part of DevOps. Continuous Testing is much more than simply 'automating tests.'" In this whitepaper, we'll address the best practices you can adopt for implementing Continuous Quality and Testing on the AWS Cloud environment in the context of the DevOps model.
  • Chap 3. Test Models and Strategies: 3.5 Test Integration and Automation
    Chap 3. Test Models and Strategies
    3.5 Test Integration and Automation
    Outline: 1. Introduction, 2. Integration Testing, 3. System Testing, 4. Test Automation, Appendix: JUnit Overview

    1. Introduction
    A system is composed of components; a system of software components can be defined at any physical scope. Typical scopes of integration, and the intercomponent interfaces where integration faults reside, are:
    • Method (focus of integration) into Class (scope of integration): instance variables, intraclass messages
    • Class into Cluster: intraclass messages
    • Cluster into Subsystem: interclass messages, interpackage messages
    • Subsystem into System: interprocess communication, remote procedure calls, ORB services, OS services

    Integration test design is concerned with several primary questions:
    1. Which components and interfaces should be exercised?
    2. In what sequence will component interfaces be exercised?
    3. Which test design technique should be used to exercise each interface?

    Integration testing is a search for component faults that cause intercomponent failures. System-scope testing is a search for faults that lead to a failure to meet a system-scope responsibility; it cannot be done unless the components interoperate well enough to exercise system-scope responsibilities. Effective testing at system scope requires a concrete and testable system-level specification: system test cases must be derived from some kind of functional specification. Traditionally, user documentation, product literature, line-item narrative requirements, and system-scope models have been used.

    2. Integration Testing
    Unit testing focuses on individual components. Once faults in each component have been removed and the test cases no longer reveal any new fault, components are ready to be integrated into larger subsystems. Integration testing detects faults that were not detected during unit testing, by focusing on small groups of components.
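    As a toy illustration of the chapter's focus on exercising intercomponent interfaces rather than individual units, the sketch below wires two hypothetical components together and tests them through their interface. The chapter's appendix covers JUnit; Python's unittest is used here only to keep all the sketches on this page in a single language.

```python
# integration_test_sketch.py -- hypothetical components; the point is that the
# test exercises the Service/Repository interface, not either unit in isolation.
import unittest


class Repository:
    def __init__(self):
        self._items = {}

    def save(self, key, value):
        self._items[key] = value

    def load(self, key):
        return self._items[key]


class OrderService:
    def __init__(self, repository):
        self.repository = repository  # intercomponent interface under test

    def place_order(self, order_id, amount):
        self.repository.save(order_id, {"amount": amount, "status": "placed"})
        return self.repository.load(order_id)


class OrderServiceIntegrationTest(unittest.TestCase):
    def test_place_order_round_trips_through_repository(self):
        # A failure here points at the interface between the two components,
        # not at either component's internal logic.
        service = OrderService(Repository())
        order = service.place_order("A-1", 42)
        self.assertEqual(order["status"], "placed")


if __name__ == "__main__":
    unittest.main()
```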
  • Integration Testing of Object-Oriented Software
    POLITECNICO DI MILANO, DOTTORATO DI RICERCA IN INGEGNERIA INFORMATICA E AUTOMATICA
    Integration Testing of Object-Oriented Software
    Ph.D. Thesis of: Alessandro Orso. Advisor: Prof. Mauro Pezzè. Tutor: Prof. Carlo Ghezzi. Supervisor of the Ph.D. Program: Prof. Carlo Ghezzi. XI ciclo.

    To my family

    Acknowledgments
    Finding the right words and the right way for expressing acknowledgments is a difficult task. I hope the following will not sound as a set of ritual formulas, since I mean every single word. First of all I wish to thank Professor Mauro Pezzè, for his guidance, his support, and his patience during my work. I know that "taking care" of me has been a hard work, but he only has himself to blame for my starting a Ph.D. program. A very special thank to Professor Carlo Ghezzi for his teachings, for his willingness to help me, and for allowing me to restlessly "steal" books and journals from his office. Now, I can bring them back (at least the ones I remember...). Then, I wish to thank my family. I owe them a lot (and even if I don't show this very often, I know this very well). All my love goes to them. Special thanks are due to all my long time and not-so-long time friends. They are (strictly in alphabetical order): Alessandro "Pari" Parimbelli, Ambrogio "Bobo" Usuelli, Andrea "Maken" Machini, Antonio "the Awesome" Carzaniga, Dario "Pitone" Galbiati, Federico "Fede" Clonfero, Flavio "Spadone" Spada, Gianpaolo "the Red One" Cugola, Giovanni "Negroni" Denaro, Giovanni "Muscle Man" Vigna, Lorenzo "the Diver" Riva, Matteo "Prada" Pradella, Mattia "il Monga" Monga, Niels "l'è semper chi" Kierkegaard, Pierluigi "San Peter" Sanpietro, Sergio "Que viva Mexico" Silva.
  • Roman Udka Software Test Automation Engineer
    Roman Udka, Software Test Automation Engineer
    Location: Ukraine, Kharkiv. Telegram/Cell phone: +380994365885. E-Mail: [email protected]

    Summary:
    ● 7 years of experience.
    ● Performing test execution and analysis of test results.
    ● Implementation of QA education processes.
    ● Developing an automation framework from scratch based on the Java stack.
    ● Organizing automation QA processes.
    ● Setting up CI/CD processes.
    ● Preparing the approach for performance testing.
    ● Team leading / management of at least 5 people.

    Skills:
    ● Test automation: Selenium (Selenide), TestNG, Selenium Grid (Selenoid).
    ● Performance tools: Locust, Gatling.
    ● Build automation / continuous integration: Docker, Gradle, GitLab CI, Jenkins.
    ● Cloud: Azure, Linode.
    ● Bug tracking / test management: Atlassian JIRA.
    ● Web debugging tools: Chrome dev tools.
    ● Programming: Java, Python.
    ● Version control system: Git.
    ● Agile: Scrum, Kanban.

    Work Experience:
    January 2019 - till now: Senior Software Test Engineer, GlobalLogic. Project role: Lead Automation Tester.
    Tasks and accomplishments:
    ● Automation test creation and execution on 5 sub-projects.
    ● Performance testing.
    ● Creation of a test automation framework from scratch and its maintenance.
    ● Creation of a test plan and strategy.
    ● Leading and supporting the work of the QA team.
    Environment: Selenide, TestNG, Git, Gradle, GitLab CI, IntelliJ IDEA, Azure, Report Portal, Java, Python, Selenium Grid, Docker, JIRA, DataDog, SendGrid, Locust, Gatling.

    January 2015 - January 2019: Software Test Engineer, EPAM Systems. Project role: Automation Tester.
    Tasks and accomplishments:
  • A Confused Tester in Agile World … QA: a Liability or an Asset?
    A CONFUSED TESTER IN AGILE WORLD … QA: A LIABILITY OR AN ASSET?
    THIS IS A WORK OF FACTS & FINDINGS BASED ON TRUE STORIES OF ONE & MANY TESTERS !!
    Presented by Ashish Kumar

    WHAT'S AHEAD
    • A STORY OF TESTING. • FROM THE MIND OF A CONFUSED TESTER. • FEW CASE STUDIES. • CHALLENGES IDENTIFIED. • SURVEY STUDIES. • GLOBAL RESPONSES. • SOLUTION APPROACH. • PRINCIPLES AND PRACTICES. • CONCLUSION & RECAP. • Q & A.

    A STORY OF TESTING IN AGILE… HAVE YOU HEARD ANY OF THESE ??
    • YOU DON'T NEED A DEDICATED SOFTWARE TESTING TEAM ON YOUR AGILE TEAMS
    • IF WE HAVE BDD, ATDD, TDD, UI AUTOMATION, UNIT TESTS >> WHAT IS THE NEED OF MANUAL TESTING ??
    • WE WANT 100% AUTOMATION IN THIS PROJECT
    • TESTING IS BECOMING A BOTTLENECK AND A REASON OF SPRINT FAILURE
    • REPEATING REGRESSION IS A BIG TASK AND AN OVERHEAD
    • MICROSOFT HAS NO TESTERS, NOT EVEN GOOGLE, FACEBOOK AND CISCO

    MICROSOFT
    • IN A "MOBILE-FIRST AND CLOUD-FIRST WORLD," THE EFFORT, KNOWN AS AGILE SOFTWARE DEVELOPMENT, IS DESIGNED TO LOWER COSTS AND HONE OPERATIONS AS THE COMPANY FOCUSES ON BUILDING CLOUD AND MOBILE SOFTWARE, SAY ANALYSTS
    • MR. NADELLA TOLD BLOOMBERG THAT IT MAKES MORE SENSE TO HAVE DEVELOPERS TEST & FIX BUGS INSTEAD OF A SEPARATE TEAM OF TESTERS TO BUILD CLOUD SOFTWARE
    • SUCH AN APPROACH IS A DEPARTURE FROM THE COMPANY'S TRADITIONAL PRACTICE OF DIVIDING

    GOOGLE
    • 15K+ DEVELOPERS / 4K+ PROJECTS UNDER ACTIVE DEVELOPMENT / 50% CODE CHANGES PER MONTH
    • 5500+ SUBMISSIONS PER DAY ON AVERAGE
    • 20+ SUSTAINED CODE CHANGES/MIN WITH 60+ PEAKS
    • 75+ MILLION TEST CASES RUN PER DAY
    • DEVELOPERS OWN TESTING AND DEVELOPERS OWN QUALITY
    • GOOGLE HAVE PEOPLE WHO COULD CODE AND WANTED TO APPLY THAT SKILL TO THE DEVELOPMENT OF TOOLS, INFRASTRUCTURE, AND TEST AUTOMATION
  • LoadRunner Cloud: an Essential Component of the DevOps Pipeline
    Product Flyer: Application Delivery Management
    LoadRunner Cloud: an Essential Component of the DevOps Pipeline

    Enable faster release and deployment cycles by taking advantage of agile development methodologies to achieve automated continuous delivery. Micro Focus® LoadRunner Cloud makes it simple to integrate testing into the development process and to test early and often.

    LoadRunner Cloud at a glance:
    ■ Scalability: a cloud-based solution that eliminates the dependency on hardware, as infrastructure is provisioned in the cloud.
    ■ Automation: built-in integration with Git, so scripts are managed in the repository for automatic updates prior to runs.
    ■ Collaboration: project management support, allowing users to share assets such as scripts, licenses, and load generators.

    Applying an effective testing process within a high-speed delivery cycle can be a major challenge, and traditional non-automated testing methods may become a bottleneck. Continuous testing helps overcome this problem by fully automating testing throughout the delivery cycle. Micro Focus LoadRunner Cloud is the ideal solution to help overcome the potential challenges of applying continuous load testing. LoadRunner Cloud is the market-leading, cloud-based performance testing solution, designed to support agile teams and DevOps.

    Scalability in a continuous testing environment: Managing hardware for performance testing tools is always a challenge. It becomes an even bigger challenge in advanced DevOps environments that have multiple teams and multiple pipelines executing performance tests, with the need to manage a larger pool of load generators and controllers. LoadRunner Cloud eliminates the dependency on hardware, allowing multiple users and teams to connect to a single cloud solution.
  • ISO 26262 Software Compliance with Parasoft C++Test
    ISO 26262 Software Compliance with Parasoft: Achieving Functional Safety in the Automotive Industry

    Some modern automobiles have more lines of code than a jet fighter. Even moderately sophisticated cars ship with larger and more complex codebases than the same line from just a few years ago. The inclusion of multi-featured infotainment systems, driver-assist technologies, and electronically controlled safety features as standard components, even in economy models, has fueled the growth of software in the automotive industry. Additionally, the emergence of driverless technology and "connected" cars that function as IoT systems on wheels will mean even larger and more complex codebases.

    All of the innovation taking place in the automotive industry, though, raises concerns over the safety, security, and reliability of automotive electronic systems. The concerns are appropriate given that the automotive software supply chain is a long, convoluted system of third-party providers spanning several tiers. Consider, for example, that software developed for a specific microcontroller unit (MCU) may be integrated by a third-tier provider into a component they're shipping to a second-tier provider, and so on, until a composite component is delivered for final integration by the automaker.

    While not all automotive software is critical to the safe operation of the vehicle, code that carries out functional safety operations must be safe, secure, and reliable. Organizations must implement strong software quality process controls around the development of safety-critical software in accordance with ISO 26262, a functional safety standard for automotive software. ISO 26262 provides guidance on processes associated with software development for electrical and/or electronic (E/E) systems in automobiles.