Automated Testing and Verification Web Applications Testing
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Human Performance Regression Testing
Human Performance Regression Testing. Amanda Swearngin, Myra B. Cohen (Dept. of Computer Science & Eng., University of Nebraska-Lincoln, Lincoln, NE 68588-0115, USA); Bonnie E. John, Rachel K. E. Bellamy (IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, USA).
Abstract: As software systems evolve, new interface features such as keyboard shortcuts and toolbars are introduced. While it is common to regression test the new features for functional correctness, there has been less focus on systematic regression testing for usability, due to the effort and time involved in human studies. Cognitive modeling tools such as CogTool provide some help by computing predictions of user performance, but they still require manual effort to describe the user interface and tasks, limiting regression testing efforts. In recent work, we developed CogTool-Helper to reduce the effort required to generate human performance models of existing systems. We build on this work by providing task specific test case generation and present our vision for human performance regression testing (HPRT) that [...]
[...] common tasks, necessitating longer mouse movements and, in fact, decrease efficiency. In addition, many toolbars with small icons may add screen clutter and may decrease a new user's ability to discover how to accomplish a task over a simpler UI design. Usability testing has traditionally been empirical, bringing end-users in to a testing facility, asking them to perform tasks on the system (or prototype), and measuring such things as the time taken to perform the task, the percentage of end-users who can complete the task in a fixed amount of time, and the number and type of errors made by the end-users.
Testing Java EE 6 Applications: Tools and Techniques
Testing Java EE 6 Applications: Tools and Techniques. Reza Rahman, Expert Group Member, Java EE 6 and EJB 3.1; Resin EJB 3.1 Lite Container Developer; Author, EJB 3 in Action; [email protected]
Testing and Java EE: Testing is critical in enterprise development, and a pain point that was not addressed well by J2EE. Java EE 6 helps enormously by providing a number of enabling features geared towards testing, though it is still somewhat a patchwork of evolving solutions. The focus here is on developer (unit and integration) testing, on JUnit, and more on new tools than on new techniques.
Testing Servlet 3: JUnit, HttpUnit and HtmlUnit are great choices. See if your container can be embedded into a unit test. Cactus and Selenium can be good choices too. Simulated Servlet containers like ServletUnit, or mocking Servlet objects with EasyMock or Mockito, are options for very simple cases. Servlet 3 testing demo.
Testing JSF 2: JSF 2 project stages are invaluable for debugging, test configuration and component development. For simple cases, generic Servlet testing tools could be used, especially Selenium. JSFUnit is ideal for more complete JSF testing; it uses Cactus, JUnit, HttpUnit and HtmlUnit under the hood. JSF 2 testing demo.
Testing EJB 3.1/CDI: CDI increases Java EE middle-tier testability by leaps and bounds. EJB 3.1 embedded containers, generic dependency injection, @Alternative, portable extensions and XML deployment descriptors are the key enablers. Arquillian/ShrinkWrap are ideal cross-vendor CDI/EJB 3.1 testing tools; Apache MyFaces CODI Testing is another option. Testing EJB 3.1/CDI: Check to [...]
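The embedded-container style mentioned for Servlet 3 and EJB 3.1 can be illustrated with the EJB 3.1 embeddable API and JUnit 4. This is only a minimal sketch: the TaxCalculator bean, its tax rate, and the JNDI module name are assumptions made for illustration, not details from the slides.

```java
// --- TaxCalculator.java --- (hypothetical bean under test; its own file in practice)
import javax.ejb.Stateless;

@Stateless
public class TaxCalculator {
    public double addTax(double amount) {
        return amount * 1.20; // flat 20% tax, hard-coded for the example
    }
}

// --- TaxCalculatorTest.java --- (plain JUnit 4 test, no server start needed)
import javax.ejb.embeddable.EJBContainer;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TaxCalculatorTest {
    @Test
    public void addsTaxInsideEmbeddedContainer() throws Exception {
        EJBContainer container = EJBContainer.createEJBContainer();
        try {
            // The "classes" module name depends on how the test classpath is
            // packaged; adjust the lookup string for your environment.
            TaxCalculator bean = (TaxCalculator) container.getContext()
                    .lookup("java:global/classes/TaxCalculator");
            assertEquals(12.0, bean.addTax(10.0), 0.0001);
        } finally {
            container.close();
        }
    }
}
```

Running such a test requires an embeddable EJB container implementation (for example GlassFish Embedded) on the test classpath.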
End to End Testing Using Integrated Tools
End to End Testing Using Integrated Tools. THESIS. Presented in partial fulfillment of the requirements for the degree Master of Science in the Graduate School of The Ohio State University. By Da Zhang, Graduate Program in Computer Science and Engineering, The Ohio State University, 2012. Dissertation committee: Rajiv Ramnath (advisor), Jay Ramananthan, Thomas Bitterman. Copyright by Da Zhang, 2012.
Abstract: Automated functional testing for web applications is a well-researched area. Most organizations confront serious challenges in testing their websites, and these challenges are becoming more and more serious because the number of clients who rely on the web to perform e-commerce activity is increasing. Therefore, thorough, automatic, regressive and lean website testing technology is required to maintain website quality. In this paper, we describe an environment for testing with Selenium and Nagios, as well as customization we develop to incorporate Selenium scripts into a Nagios executable library. Nagios is an open source framework for monitoring network hosts, services and other hardware conditions with the purpose of failure detection [29]. Based on plug-in mechanisms, each service within the Nagios executable library can be executed as a Nagios plug-in. Selenium is a set of different open source software tools, each with a different approach to supporting web application test automation and agile-process automated testing [1]. In this paper, we introduce how we combine the Nagios monitoring tool and the Selenium testing tool to realize end-to-end testing using integrated tools.
Dedication: This document is dedicated to my family and my friends.
Acknowledgments: I sincerely thank my professors, Dr. [...]
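The Selenium-inside-Nagios idea reads naturally as a small program that performs a browser check and reports through the standard Nagios plug-in exit codes (0 = OK, 2 = CRITICAL). The sketch below only illustrates that integration style, not the thesis's actual customization; the target URL and the expected page title are invented.

```java
// CheckLoginPage.java - a Selenium WebDriver check wrapped as a Nagios-style plug-in.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckLoginPage {
    public static void main(String[] args) {
        // Hypothetical target; a real deployment would pass the URL from the Nagios command.
        String url = args.length > 0 ? args[0] : "http://localhost:8080/shop/login";
        WebDriver driver = new FirefoxDriver();
        int status = 2;                       // Nagios CRITICAL unless the check passes
        String message;
        try {
            driver.get(url);
            String title = driver.getTitle();
            if (title != null && title.contains("Login")) {
                status = 0;                   // Nagios OK
                message = "OK - login page reachable, title: " + title;
            } else {
                message = "CRITICAL - unexpected title: " + title;
            }
        } catch (Exception e) {
            message = "CRITICAL - " + e.getMessage();
        } finally {
            driver.quit();                    // always close the browser
        }
        System.out.println(message);
        System.exit(status);
    }
}
```

Nagios would then invoke this check through a command definition that runs the class (or a wrapper script) and interpret the exit code as the service state.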
1. Can You Explain the PDCA Cycle and Where Testing Fits In?
1. Can you explain the PDCA cycle and where testing fits in?
Software testing is an important part of the software development process. Normal software development follows four important steps, also referred to, in short, as the PDCA (Plan, Do, Check, Act) cycle. Let's review the four steps in detail.
1. Plan: Define the goal and the plan for achieving that goal.
2. Do/Execute: Depending on the strategy decided during the plan stage, we execute accordingly in this phase.
3. Check: Check/test to ensure that we are moving according to plan and are getting the desired results.
4. Act: During the check cycle, if any issues are found, we take appropriate action and revise our plan again.
So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.
2. What is the difference between white box, black box, and gray box testing?
Black box testing is a testing strategy based solely on requirements and specifications. It requires no knowledge of internal paths, structures, or implementation of the software being tested. White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested; it generally requires detailed programming skills. There is one more type of testing called gray box testing. In this, we look into the "box" being tested just long enough to understand how it has been implemented.
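As a concrete, hypothetical illustration of that distinction (the discount rule and method below are invented for the example, not part of the original Q&A): a black box test is derived purely from the stated rule, while a white box test is derived from the branches in the code.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountTest {

    // Specification (invented): orders of 100.00 or more get a 10% discount.
    static double discountedTotal(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }

    // Black box: chosen from the specification only (boundary around 100.00),
    // without looking at how discountedTotal is written.
    @Test
    public void blackBoxBoundaryValues() {
        assertEquals(99.99, discountedTotal(99.99), 0.001);  // just below the boundary
        assertEquals(90.00, discountedTotal(100.00), 0.001); // exactly on the boundary
    }

    // White box: chosen from the code structure, one case per branch of the conditional.
    @Test
    public void whiteBoxBranchCoverage() {
        assertEquals(180.00, discountedTotal(200.00), 0.001); // true branch
        assertEquals(10.00, discountedTotal(10.00), 0.001);   // false branch
    }
}
```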
Orthogonal Array Application for Optimized Software Testing
WSEAS Transactions on Computers. Orthogonal Array Application for Optimal Combination of Software Defect Detection Technique Choices. Ljubomir Lazic (Technical Faculty, University of Novi Pazar, Vuka Karadžića bb, 36300 Novi Pazar, Serbia; [email protected]; http://www.np.ac.yu) and Nikos Mastorakis (Military Institutions of University Education, Hellenic Naval Academy, Terma Hatzikyriakou, 18539 Piraeus, Greece; [email protected]).
Abstract: In this paper, we consider a problem that arises in black box testing: generating small test suites (i.e., sets of test cases) where the combinations that have to be covered are specified by input-output parameter relationships of a software system. That is, we only consider combinations of input parameters that affect an output parameter, and we do not assume that the input parameters have the same number of values. To solve this problem, we propose interaction testing, particularly an Orthogonal Array Testing Strategy (OATS), as a systematic, statistical way of testing pair-wise interactions. In the software testing process (STP), it provides a natural mechanism for testing systems to be deployed on a variety of hardware and software configurations. The combinatorial approach to software testing uses models to generate a minimal number of test inputs so that selected combinations of input values are covered. The most common coverage criterion is two-way or pairwise coverage of value combinations, though for higher confidence three-way or higher coverage may be required. This paper presents some examples of software-system test requirements and corresponding models for applying the combinatorial approach to those test requirements. The method bridges contributions from mathematics, design of experiments, software test, and algorithms for application to usability testing.
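To make the pairwise idea concrete, the small sketch below checks whether a candidate test suite covers every pair of parameter values at least once. The three configuration parameters and the nine-row suite are invented for illustration and are not taken from the paper; the nine tests cover all pairs that would otherwise require 3 x 3 x 2 = 18 exhaustive combinations.

```java
import java.util.ArrayList;
import java.util.List;

public class PairwiseCoverageCheck {
    public static void main(String[] args) {
        // Hypothetical configuration parameters and their values.
        String[][] params = {
            {"Chrome", "Firefox", "IE"},     // browser
            {"Windows", "Linux", "macOS"},   // operating system
            {"HTTP", "HTTPS"}                // protocol
        };
        // Candidate suite: each row picks one value index per parameter
        // (derived from an L9 orthogonal array, so 9 tests instead of 18).
        int[][] suite = {
            {0,0,0},{0,1,1},{0,2,0},
            {1,0,1},{1,1,0},{1,2,0},
            {2,0,0},{2,1,0},{2,2,1}
        };
        List<String> missing = new ArrayList<>();
        for (int p = 0; p < params.length; p++) {
            for (int q = p + 1; q < params.length; q++) {
                for (int v = 0; v < params[p].length; v++) {
                    for (int w = 0; w < params[q].length; w++) {
                        boolean covered = false;
                        for (int[] test : suite) {
                            if (test[p] == v && test[q] == w) { covered = true; break; }
                        }
                        if (!covered) {
                            missing.add(params[p][v] + " + " + params[q][w]);
                        }
                    }
                }
            }
        }
        System.out.println(missing.isEmpty()
            ? "All value pairs covered by " + suite.length + " tests"
            : "Uncovered pairs: " + missing);
    }
}
```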
Testing Web Applications in Practice
Testing Web Applications in Practice. Javier Jesús Gutiérrez, Maria José Escalona, Manuel Mejías, Jesús Torres. Department of Computer Languages and Systems, University of Seville. {javierj, escalona, risoto, jtorres}@lsi.us.es
Abstract: The software testing process is gaining importance at the same time that the size and complexity of software are growing. The specific characteristics of web applications, like client-server architecture, heterogeneous languages and technologies, or massive concurrent access, make it hard to adapt classic software testing practices and strategies to web applications. This work gives an overview of testing processes and techniques applied to web development. It also describes a practical example of testing a simple web application using only open-source tools.
1. Introduction: The Internet gives developers a new and innovative way to build software. It also allows millions of users to access a web application [4]. Thus, problems in a web application can affect millions of users, impose many costs on a business [2] and destroy a commercial image. The Business Internet Group of San Francisco undertook a study to capture, assess and quantify the integrity of web applications on 41 specific web sites. According to the report, of those 41 sites, 28 contained web application failures [14]. Thus, software testing acquires a vital importance in web application development. The first web applications had a simple design based on static HTML pages. Nowadays, web applications are much more complex and interactive, with dynamic information and customized user interfaces. Designing, supporting and testing modern web applications poses many challenges to developers and software engineers. This work is organized as follows.
Cost Justifying Usability: A Case Study at Ericsson
Master Thesis, Computer Science. Thesis ID: MCS-2011-01, January 2011. Cost Justifying Usability: A Case Study at Ericsson. Parisa Yousefi and Pegah Yousefi, School of Computing, Blekinge Institute of Technology, SE-371 39 Karlskrona, Sweden.
This thesis is submitted to the School of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. The thesis is equivalent to 20 weeks of full time studies.
Contact information: Parisa Yousefi, Näktergalsvagen 5, Rödeby, 371 30, [email protected]; Pegah Yousefi, Drottninggatan 36, Karlskrona, 371 37, [email protected]. External advisor: Helen Sjelvgren, Ericsson, Ölandsgatan 6, Karlskrona, 371 23, phone +46 10 7158732. University advisors: Dr. Kari Rönkkö, Ph.D., and Jeff Winter, Licentiate, School of Computing, Blekinge Institute of Technology, SE-371 39 Karlskrona, Sweden; Internet: www.bth.se/com; phone: +46 455 38 50 00; fax: +46 455 38 50 57.
Abstract: Usability in human-computer interaction and user-centered design is indeed a key factor in the success of any software product. However, software industries, in particular the attitude of revenue management, express a need for an economic and academic case to justify the usability discipline. In this study we investigate the level of usability, the usability issues, and the gaps concerning usability activities and the potential users in a part of the charging system products at Ericsson. We also try to identify the cost-benefit factors usability brings to this project, in order to attempt to justify the cost of usability for this particular product.
Integration Tests
Software Development Tools (COMP220/COMP285). Sebastian Coope. More on Automated Testing and Continuous Integration. These slides are mainly based on "Java Tools for Extreme Programming" by R. Hightower and N. Lesiecki, Wiley, 2002.
Automated testing: Testing software continuously validates that the software works and meets the customer's requirements. Automating the tests ensures that testing will in fact be continuous. Without testing, a team is just guessing that its software meets those requirements.
Need for automation: Humans make mistakes, and humans sometimes don't care! Manual testing is slow, and it is hard to gather statistics from it. Automated testing can be done at all hours and is fast (on the order of 10,000 tests per second). Regression testing builds up with the project size.
Tests and refactoring: Refactoring is changing existing code for simplicity, clarity and/or feature addition; it cannot be accomplished without tests. Even the most stable or difficult-to-change projects require occasional modification, and that is where automated testing comes in. Comprehensive tests, running frequently, verify how the system should work and allow the underlying behaviour to change freely. Any problems introduced during a change are automatically caught by the tests. With testing, programmers refactor with confidence: the code works, and the tests prove it.
Types of automated testing: 1. Unit testing: testing a unit of code, everything that could possibly break; usually exercises all the methods in the public interface [...]
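A minimal JUnit 4 example of the unit/regression testing the slides describe might look like the following; the ShoppingCart class is a hypothetical unit under test, not code from the lecture.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    // Hypothetical unit under test; in a real project this lives in production code.
    static class ShoppingCart {
        private double total;
        void add(double price) { total += price; }
        double total() { return total; }
    }

    @Test
    public void totalIsSumOfAddedPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(19.99);
        cart.add(5.01);
        // If a later refactoring changes how the total is computed, this
        // regression test fails immediately instead of the bug being found manually.
        assertEquals(25.00, cart.total(), 0.001);
    }
}
```

Run frequently, for example on every build in a continuous integration server, such tests catch behaviour changes introduced by refactoring as soon as they happen.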
SSW 567 Software Testing, Quality, and Maintenance
School of Systems and Enterprises, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030
SSW 567 Software Testing, Quality, and Maintenance
PURPOSE: This memorandum provides the syllabus for SSW 567.
TEXTS:
• A Practitioner's Guide to Software Test Design by Copeland, Artech House [PG]
• Software Testing: A Craftsman's Approach by Jorgensen [ST]
• Software Measurement and Estimation by Laird, Wiley (optional) [SME]
SOFTWARE:
• Open Office/Microsoft Office
• Possibly the ATM Simulation software system by Russell C. Bjork [ATM]
• Freeware tools for configuration management (e.g., Subversion), test management (e.g., Testopia) and defect management (e.g., Bugzilla, Mantis)
COURSE DESCRIPTION: This course is an in-depth study of software testing and quality throughout the development process. The successful student will learn to effectively analyze and test software systems using a variety of methods as well as to select the best-fit test case design methodologies to achieve best results. There is extensive hands-on project work in this course. Emphasis is placed on testing as technical investigation of a product undertaken to determine the system quality and on the clear communication of results, both of which are crucial to success as a tester.
COURSE PREREQUISITES: Working knowledge of C, C++, or Java (or some other programming language). Extensive programming will not be required.
COURSE OUTLINE (module, description, readings in texts, project):
• Lecture 1: Getting Started. Definitions of testing, QA, faults, failures and their costs; fundamental issues of testing. Readings: [PG] ch. 1, 15, 16; [ST] ch. 1 and 2. Project: install configuration management tools, write initial program.
• Lecture 2: Configuration Management. SCM concepts, processes, and tools; continuous integration. Readings: [ST] ch. 3 (background). Project: test initial program with seeded bugs.
Flash Usability Report
Website Tools and Applications with Flash: Design Guidelines Based on User Testing of 46 Flash Tools. By Hoa Loranger, Amy Schade, and Jakob Nielsen. Additional research by Rolf Molich and Kara Pernice. Nielsen Norman Group, 48105 Warm Springs Blvd., Fremont, CA 94539-7498, USA, www.nngroup.com. Copyright © Nielsen Norman Group, all rights reserved. To get your own copy, download from: http://www.nngroup.com/reports/website-tools-and-applications-flash/
About this free report: This report is a gift for our loyal audience of UX enthusiasts. Thank you for your support over the years. We hope this information will aid your efforts to improve user experiences for everyone. The research for this report was done in 2013, but the majority of the advice may still be applicable today, because people and principles of good design change much more slowly than computer technology does. We sometimes make older report editions available to our audience at no cost, because they still provide interesting insights. Even though these reports discuss older designs, it's still worth remembering the lessons from mistakes made in the past. If you don't remember history, you'll be doomed to repeat it. We regularly publish new research reports that span a variety of web and UX related topics. These reports include thousands of actionable, illustrated user experience guidelines for creating and improving your web, mobile, and intranet sites. We sell our new reports to fund independent, unbiased usability research; we do not have investors, government funding or research grants that pay for this work. Visit our reports page at https://www.nngroup.com/reports/ to see a complete list of these reports.
Testing Web Applications
Testing Web Applications. Jeff Offutt, http://www.cs.gmu.edu/~offutt/. SWE 642: Software Engineering for the World Wide Web. Joint research with Ye Wu, Xiaochen Du, Wuzhi Xu, and Hong Huang.
Outline: 1. Testing Web Applications is Different. 2. HtmlUnit. 3. User Session Data. 4. Atomic Section Modeling. 5. Testing Data State Management. 6. Testing Web Services.
General problem: Web applications are heterogeneous, dynamic and must satisfy very high quality attributes. Use of the Web is hindered by low quality Web sites and applications. Web applications need to be built better and tested more.
New essential problems of web software:
1. Web site software is extremely loosely coupled: coupled through the Internet, separated by space; coupled to diverse hardware and software applications; web services will dynamically couple with other services after deployment, without human intervention!
2. Web software services offer a dynamically changing flow of control: web pages are created by software on user request; the interaction points (forms, buttons, etc.) vary depending on state: the user, previous choices, server-side data, even the time of day. Examples: amazon.com, netflix.com, washingtonpost.com.
Extremely loose coupling: With tight coupling, dependencies among the methods are encoded in their logic, so changes in A may require changing the logic in B. With loose coupling, dependencies among the methods are encoded in the structure and data flows, so changes in A may require changing data [...]
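HtmlUnit, item 2 in the outline above, drives a web application headlessly from a JUnit test. The following is a rough sketch of that style; the URL, form name, field names and expected text are invented for illustration and are not taken from the slides.

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class SearchPageTest {
    @Test
    public void searchFormReturnsResults() throws Exception {
        try (WebClient webClient = new WebClient()) {
            // Hypothetical deployment URL of the application under test.
            HtmlPage page = webClient.getPage("http://localhost:8080/app/search");
            HtmlForm form = page.getFormByName("search");               // hypothetical form name
            form.getInputByName("query").setValueAttribute("testing");  // hypothetical field name
            HtmlPage results = form.getInputByName("go").click();       // submit and follow the response
            assertTrue(results.asText().contains("results"));           // hypothetical expected text
        }
    }
}
```

Because the test inspects the generated HTML rather than the server-side code, a check like this exercises the dynamically generated interaction points the slides highlight.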
Standard Glossary of Terms Used in Software Testing Version 3.0 Advanced Test Analyst Terms
Standard Glossary of Terms Used in Software Testing, Version 3.0: Advanced Test Analyst Terms. International Software Testing Qualifications Board. Copyright notice: This document may be copied in its entirety, or extracts made, if the source is acknowledged. Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB®).
acceptance criteria (Ref: IEEE 610): The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
acceptance testing (Ref: after IEEE 610; see also: user acceptance testing): Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
accessibility testing (Ref: Gerrard): Testing to determine the ease by which users with disabilities can use a component or system.
accuracy (Ref: ISO 9126; see also: functionality): The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.
accuracy testing (see also: accuracy): Testing to determine the accuracy of a software product.
actor: User or any other person or system that interacts with the test object in a specific way.
actual result (synonym: actual outcome): The behavior produced/observed when a component or system is tested.
adaptability (Ref: ISO 9126; see also: portability): The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.