Software Testing


PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Wed, 24 Aug 2011 09:57:44 UTC

Contents

Introduction
  • Software testing

Black-box testing
  • Black-box testing
  • Exploratory testing
  • San Francisco depot
  • Session-based testing
  • Scenario testing
  • Equivalence partitioning
  • Boundary-value analysis
  • All-pairs testing
  • Fuzz testing
  • Cause-effect graph
  • Model-based testing
  • Web testing
  • Installation testing

White-box testing
  • White-box testing
  • Code coverage
  • Modified Condition/Decision Coverage
  • Fault injection
  • Bebugging
  • Mutation testing

Testing of non functional software aspects
  • Non-functional testing
  • Software performance testing
  • Stress testing
  • Load testing
  • Volume testing
  • Scalability testing
  • Compatibility testing
  • Portability testing
  • Security testing
  • Attack patterns
  • Localization testing
  • Pseudolocalization
  • Recovery testing
  • Soak testing
  • Characterization test

Unit testing
  • Unit testing
  • Self-testing code
  • Test fixture
  • Method stub
  • Mock object
  • Lazy systematic unit testing
  • Test Anything Protocol
  • xUnit
  • List of unit testing frameworks
  • SUnit
  • JUnit
  • CppUnit
  • Test::More
  • NUnit
  • NUnitAsp
  • csUnit
  • HtmlUnit

Test automation
  • Test automation
  • Test bench
  • Test execution engine
  • Test stubs
  • Testware
  • Test automation framework
  • Data-driven testing
  • Modularity-driven testing
  • Keyword-driven testing
  • Hybrid testing
  • Lightweight software test automation

Testing process
  • Software testing controversies
  • Test-driven development
  • Agile testing
  • Bug bash
  • Pair Testing
  • Manual testing
  • Regression testing
  • Ad hoc testing
  • Sanity testing
  • Integration testing
  • System testing
  • System integration testing
  • Acceptance testing
  • Risk-based testing
  • Software testing outsourcing
  • Tester driven development
  • Test effort

Testing artefacts
  • IEEE 829
  • Test strategy
  • Test plan
  • Traceability matrix
  • Test case
  • Test data
  • Test suite
  • Test script
  • Test harness

Static testing
  • Static testing
  • Software review
  • Software peer review
  • Software audit review
  • Software technical review
  • Management review
  • Software inspection
  • Fagan inspection
  • Software walkthrough
  • Code review
  • Automated code review
  • Code reviewing software
  • Static code analysis
  • List of tools for static code analysis

GUI testing and review
  • GUI software testing
  • Usability testing
  • Think aloud protocol
  • Usability inspection
  • Cognitive walkthrough
  • Heuristic evaluation
  • Pluralistic walkthrough
  • Comparison of usability evaluation methods

References
  • Article Sources and Contributors
  • Image Sources, Licenses and Contributors

Article Licenses
  • License

Introduction

Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a software program/application/product:
1. meets the requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

Overview

Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment. A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.[4]

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[5] Although his attention was on breakage testing ("a successful test is one that finds a bug"[5] [6]), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. In 1988, Dave Gelperin and William C. Hetzel classified the phases and goals of software testing into the following stages:[7]
• Until 1956 - Debugging oriented[8]
• 1957–1978 - Demonstration oriented[9]
• 1979–1982 - Destruction oriented[10]
• 1983–1987 - Evaluation oriented[11]
• 1988–2000 - Prevention oriented[12]

Software testing topics

Scope

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions.[13] The scope of software testing often includes both the examination of code and the execution of that code in various environments and conditions, as well as examining the aspects of code: does it do what it is supposed to do, and does it do what it needs to do?
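The following minimal sketch (in Python, using the standard-library unittest module) illustrates this idea of executing code and checking whether it does what it is supposed to do against a stated requirement; the apply_discount function and its requirements are invented purely for illustration.

```python
import unittest


def apply_discount(price, discount_percent):
    """Hypothetical function under test: apply a percentage discount to a price."""
    if discount_percent < 0 or discount_percent > 100:
        raise ValueError("discount_percent must be between 0 and 100")
    return round(price * (1 - discount_percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Each test executes the code and compares its behavior with an expectation,
    which here plays the role of a simple test oracle."""

    def test_meets_requirement_for_typical_input(self):
        # Verification: the result matches the value the (assumed) requirement specifies.
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_rejects_out_of_range_discount(self):
        # A boundary case: inputs outside the specified range must be rejected.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```

Running the module executes both tests; a failing assertion would reveal a discrepancy between the code's actual behavior and the behavior the requirement describes.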
In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[14]

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work".

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly from the suitability perspective of its users.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer.[15] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[16] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of such changes in environment include the software being run on a new hardware platform, alterations in source data, or interaction with different software.[16] A single defect may result in a wide range of failure symptoms.

Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[17] The following table shows the cost of fixing a defect depending on the stage at which it was found.[18] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review.

Cost to fix a defect

                      Time detected
Time introduced       Requirements    Architecture    Construction    System test    Post-release
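Returning to the error, defect, and failure chain described under "Defects and failures" above, the following minimal sketch (Python, with an invented days_in_february helper) shows how a programmer's mistake leaves a defect in the code that only surfaces as an observable failure when the code is executed with data that exposes it.

```python
def days_in_february(year):
    """Hypothetical function containing a defect: it treats every year divisible
    by 4 as a leap year and ignores the century rule (1900, 2100, ... are not
    leap years)."""
    # Defect: should also check year % 100 and year % 400.
    return 29 if year % 4 == 0 else 28


# For most inputs the defect stays dormant and no failure is observed.
assert days_in_february(2023) == 28  # correct
assert days_in_february(2024) == 29  # correct (2024 is a leap year)

# Only under specific conditions (here, a century year) does the defect
# produce wrong results, i.e. a failure.
print(days_in_february(2100))  # prints 29, but February 2100 has 28 days
```

A test suite that never exercises a century year would never observe this failure, which is one reason techniques such as equivalence partitioning and boundary-value analysis (covered later in this book) are used to choose test inputs deliberately.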