Extreme Programming (XP): Why?

 Previously:  Get all the requirements before starting design

 Freeze the requirements before starting development

 Resist changes: they will lengthen schedule

 Build a change control process to ensure that proposed changes are looked at carefully and no change is made without intense scrutiny

 Deliver a product that is obsolete on release

A system of practices that a community of software developers is evolving to address the problems of quickly delivering quality software, and then evolving it to meet changing business needs.

eXtreme…

Taking proven practices to the extreme:

 If testing is good, let everybody test all the time
 If code reviews are good, review all the time
 If design is good, refactor all the time
 If integration testing is good, integrate all the time
 If simplicity is good, do the simplest thing that could possibly work
 If short iterations are good, make them really, really short

XP Values

 Communication

 Simplicity

 Feedback

 Courage

XP Practices

 The Planning Game*
 Small Releases
 Metaphor
 Simple Design*
 Testing*
 Refactoring*
 Pair Programming*
 Collective Ownership
 40-Hour Week
 On-Site Customer
 Coding Standards
 Open workspace
 Daily Schema migration

XP: Embrace Change

 Recognize that:
 All requirements will not be known at the beginning
 Requirements will change

 Use tools to accommodate change as a natural process

 Do the simplest thing that could possibly work and refactor mercilessly

 Emphasize values and principles rather than process

XP Practices

(Source: http://www.xprogramming.com/xpmag/whatisxp.htm)

XP Practices: Whole Team

 All contributors to an XP project are one team
 The team must include a business representative: the ‘Customer’, who provides requirements, sets priorities, and steers the project
 Team members are programmers, testers, analysts, coach, and manager
 The best XP teams have no specialists

XP Practices: Planning Game

 Two key questions in planning:
 Predict what will be accomplished by the due date
 Decide what to do next

 The need is to steer the project

 Exact prediction (which is difficult) is not necessary

XP Practices: Planning Game

 XP Release Planning
 Customer presents required features
 Programmers estimate difficulty
 Estimates are imprecise but revised regularly

 XP Iteration Planning
 Two-week iterations
 Customer presents the features required
 Programmers break features down into tasks
 Team members sign up for tasks
 Running software at the end of each iteration

XP Practices: Customer Tests

 The Customer defines one or more automated acceptance tests for a feature

 Team builds these tests to verify that a feature is implemented correctly

 Once the test runs, the team ensures that it keeps running correctly thereafter
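As a sketch, a Customer-specified acceptance test might be automated like this. The discount feature, the `order_total` function, and the business rule are illustrative assumptions, not taken from the source:

```python
# Hypothetical feature: orders over a threshold get a discount.
# The Customer's acceptance rules are expressed as automated checks
# that the team keeps green for the rest of the project.

def order_total(prices, discount_over=100.0, discount_rate=0.10):
    """Total an order; orders above `discount_over` get `discount_rate` off."""
    subtotal = sum(prices)
    if subtotal > discount_over:
        return round(subtotal * (1 - discount_rate), 2)
    return round(subtotal, 2)

def test_acceptance_discount_applied():
    # Customer rule: an order of 120 gets 10% off
    assert order_total([60.0, 60.0]) == 108.00

def test_acceptance_no_discount_at_threshold():
    # Customer rule: an order of exactly 100 pays full price
    assert order_total([100.00]) == 100.00
```

Once such a test passes, it stays in the suite, so a later change that breaks the rule fails the build immediately.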

 System always improves, never backslides

XP Practices: Small Releases

 Team releases running, tested software every iteration

 Releases are small and functional

 The Customer can evaluate it or, in turn, release it to end users, and provide feedback

 The important thing is that the software is visible and given to the Customer at the end of every iteration

XP Practices: Simple Design

 Build software to a simple design

 Through programmer testing and design improvement, keep the software simple and the design suited to current functionality

 Not a one-time thing nor an up-front thing

 Design steps in release planning and iteration planning

 Teams design and revise the design through refactoring over the course of the project

XP Practices

 Pair Programming
 TDD
 Continuous Integration
 Design Improvement
 Collective Code Ownership
 Coding Standard
 Sustainable Pace

XP Practices: Design Improvement

 Continuous design improvement process called ‘refactoring’:
 Removal of duplication
 Increased cohesion
 Reduced coupling
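A minimal sketch of the "removal of duplication" refactoring. The formatting helpers and their names are hypothetical, chosen only to show the before/after shape:

```python
# Before: two report functions duplicate the same formatting logic.
def format_invoice_line_before(name, amount):
    return f"{name:<20}{amount:>10.2f}"

def format_receipt_line_before(name, amount):
    return f"{name:<20}{amount:>10.2f}"   # identical logic, copied

# After: the duplication is factored into one shared helper, so there
# is a single place to change -- higher cohesion, less duplication.
def format_line(name, amount):
    """Single shared formatter used by all reports."""
    return f"{name:<20}{amount:>10.2f}"

def invoice_line(name, amount):
    return format_line(name, amount)

def receipt_line(name, amount):
    return format_line(name, amount)
```

The behaviour is unchanged; only the structure improves, which is exactly what the comprehensive test suite mentioned below makes safe to do.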

 Refactoring is supported by comprehensive testing: customer tests and programmer tests

XP Practices: Collective Code Ownership

 Any pair of programmers can improve any code at any time

 No ‘secure workspaces’

 All code gets the benefit of many people’s attention

 Avoid duplication

 Programmer tests catch mistakes

 Pair with an expert when working on unfamiliar code

XP Practices: Coding Standard

 Use common coding standard

 All code in the system must look as though it was written by a single individual

 Code must look familiar, to support collective code ownership

XP Practices: Sustainable Pace

 The team will produce a high-quality product when not overly exerted

 Avoid overtime; maintain 40-hour weeks

 ‘Death march’ projects are unproductive and do not produce quality software

 Work at a pace that can be sustained indefinitely

TDD

 Test-driven development (TDD), also called test-driven design, is a method of software development in which unit testing is repeatedly done on source code.

 The concept is to "get something working now and perfect it later."

 After each test, refactoring is done and then the same or a similar test is performed again.

 The process is iterated as many times as necessary until each unit is functioning according to the desired specifications.
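The write-test, write-code, refactor cycle described above can be sketched with the standard `unittest` module. The function under test (`leading_digit`) and its specification are illustrative assumptions:

```python
import unittest

# Step 1 (red): the tests below are written FIRST and fail until the
# code exists.  Step 2 (green): write the simplest code that passes.
def leading_digit(n):
    """Return the first decimal digit of an integer."""
    n = abs(n)
    while n >= 10:
        n //= 10
    return n

class LeadingDigitTest(unittest.TestCase):
    def test_single_digit(self):
        self.assertEqual(leading_digit(7), 7)

    def test_multi_digit(self):
        self.assertEqual(leading_digit(4096), 4)

    def test_negative(self):
        self.assertEqual(leading_digit(-250), 2)

# Step 3: refactor with the tests as a safety net, rerun, and repeat.
# Run with:  python -m unittest <this module>
```

Each pass through the cycle adds one small test, makes it pass, and then cleans up, so the unit converges on the desired specification.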

 Test-driven development is part of a larger paradigm known as Extreme Programming (XP).

 Test-driven development can produce applications of high quality in less time than is possible with older methods.

 Proper implementation of TDD requires the developers and testers to accurately anticipate how the application and its features will be used in the real world.

 Problems are approached in an incremental fashion and tests intended for the same unit of code must often be done many times over.

 The methodical nature of TDD ensures that all the units in an application have been tested for optimum functionality, both individually and in combination with one another.

 Because tests are conducted from the very beginning of the design cycle, the time and money spent on debugging at later stages is minimized.

Test-Driven Development Process

• Add a Test

• Run all tests and see if the new one fails

• Write some code

• Run tests and Refactor code

• Repeat

Context of Testing

• Valid inputs

• Invalid inputs

• Errors, exceptions, and events

• Boundary conditions

• Everything that might break

Benefits of TDD

 It can lead to simple, elegant, modular code.

 It can help developers find defects and bugs earlier than they otherwise would, and a bug is generally cheaper to fix the earlier it is found.

 The tests can serve as a kind of live documentation and make the code much easier to understand.

 It’s easier to maintain and refactor code, both your own and other programmers’.

 It can speed up development in the long term.

 It can encourage developers to think from an end user’s point of view.

DISADVANTAGES OF TDD

 It necessitates a lot of time and effort up front which can make development feel slow to begin with.

 Focusing on the simplest design now and not thinking ahead can mean major refactoring requirements.

 It’s difficult to write good tests that cover the essentials and avoid the superfluous.

 It takes time and effort to maintain the test suite, and the suite must be kept current to retain its value.

 If the design is changing rapidly then you’ll need to keep changing your tests. You could end up wasting a lot of time writing tests for features that are quickly dropped.

Pair programming

 Pairing involves having two programmers work at a single workstation. One programmer “drives,” operating the keyboard, while the other “navigates,” watching, learning, asking, talking, and making suggestions.
 The theory is that pairing results in better designs, fewer bugs, and a much better spread of knowledge across the development team, and therefore more functionality per unit time, measured over the long term.
 The two programmers switch roles frequently.
 While reviewing, the navigator also considers the “strategic” direction of the work, coming up with ideas for improvements and likely future problems to address. This frees the driver to focus all of his or her attention on the “tactical” aspects of completing the current task, using the navigator as a safety net and guide.

Advantages:

 We were able to develop better communication skills

 More ideas

 We were able to learn different perspectives from other people

Disadvantages:

 Conflict and debate can break out within the pair

 Everyone has their own ideas, and sometimes it’s difficult to reconcile them

 Possibly one person does more than half of the work on the project

Continuous Integration

 The term ‘Continuous Integration’ originated with the Extreme Programming development process, as one of its original twelve practices. Although Continuous Integration is a practice that requires no particular tooling to deploy, it is useful to use a Continuous Integration server. The best-known such server is CruiseControl.

What is Continuous Integration?

 Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day.

 Each integration is verified by an automated build to detect integration errors as quickly as possible.

Why Continuous Integration?

 Integration is hard

 Effort increases exponentially with the number of components, the number of bugs, and the time since the last integration.

Building a Feature with Continuous Integration

• Begin by taking a copy of the current integrated source onto your local development machine, using a source code management system to check out a working copy from the mainline.

• Now take the working copy and do whatever you need to do to complete your task. This will consist of both altering the production code and adding or changing automated tests. Continuous Integration assumes a high degree of automated tests built into the software: a facility called self-testing code.

• When you have finished, carry out an automated build on your development machine. This takes the source code in your working copy, compiles and links it into an executable, and runs the automated tests. Only if it all builds and tests without errors is the overall build considered to be good.
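This local "build gate" can be sketched as a small script that runs each build step in order and declares the build good only if every step succeeds. The step list and the example commands are illustrative assumptions, not a prescribed toolchain:

```python
import subprocess

def local_build(steps):
    """Run each shell step in order; the build is good only if all pass.

    `steps` is a list of shell command strings, e.g. a compile step
    followed by the automated test run.
    """
    for step in steps:
        result = subprocess.run(step, shell=True)
        if result.returncode != 0:
            print(f"BUILD FAILED at: {step}")
            return False
    print("BUILD OK -- safe to commit")
    return True

# Hypothetical usage: compile everything, then run the test suite.
# local_build(["python -m compileall src", "python -m pytest"])
```

Because a failing test returns a non-zero exit code, test failure fails the build, which is the essence of self-testing code.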

• Then commit the changes into the repository. But it is possible that other people made changes to the mainline before you got a chance to commit. So first update your working copy with their changes and rebuild. If their changes clash with yours, it will manifest as a failure either in the compilation or in the tests. In this case it’s your responsibility to fix this and repeat until you can build a working copy that is properly synchronized with the mainline. Once you have made your own build of a properly synchronized working copy, you can then finally commit your changes into the mainline, which updates the repository.

• At this point build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that your changes are done. There is always a chance that you missed something on your machine and the repository wasn’t properly updated. Only when your committed changes build successfully on the integration machine is your job done. This integration build can be executed manually by you, or done automatically by CruiseControl. If a clash occurs between two developers, it is usually caught when the second developer to commit builds their updated working copy. If not, the integration build should fail. Either way, the error is detected rapidly.

Key Practices For Effective Continuous Integration

1. Maintain a Single Source Repository

• Software projects involve lots of files that need to be orchestrated together to build a product. Keeping track of all of these is a major effort, particularly when there are multiple people involved. Over the years software development teams have built tools to manage all this. These tools, variously called Source Code Management tools, configuration management, version control systems, or repositories, are an integral part of most development projects.

• The current open source repository of choice is Subversion. (The older open-source tool is CVS.)

• Once you get a source code management system, make sure it is the well-known place for everyone to go to get source code. Everything should be in the repository: test scripts, properties files, database schema, install scripts, IDE configurations, and third-party libraries. The basic rule of thumb is that you should be able to walk up to the project with a virgin machine, do a checkout, and be able to fully build the system.

• One of the features of version control systems is that they allow you to create multiple branches, to handle different streams of development. But keep your use of branches to a minimum. In particular, have a mainline: a single branch of the project currently under development. Pretty much everyone should work off this mainline most of the time.

2. Automate the Build

• Getting the sources turned into a running system can often be a complicated process involving compilation, moving files around, loading schemas into the databases, and so on. However, like most tasks in this part of software development, it can be automated, and as a result should be automated.

• Automated build environments are a common feature of systems: the Java community developed Ant, and the .NET community has had Nant and now has MSBuild. Make sure you can build and launch your system from these scripts with a single command.

• Include everything in the automated build. Anyone should be able to bring in a virgin machine, check the sources out of the repository, issue a single command, and have a running system on their machine.

3. Make Your Build Self-Testing

• A good way to catch bugs more quickly and efficiently is to include automated tests in the build process. Testing isn’t perfect, of course, but it can catch a lot of bugs. In particular, the rise of Extreme Programming (XP) and Test Driven Development (TDD) has done a great deal to popularize self-testing code, and as a result many people have seen the value of the technique.

• Both of these approaches make a point of writing tests before you write the code that makes them pass; in this mode the tests are as much about exploring the design of the system as they are about bug catching.

• For self-testing code you need a suite of automated tests that can check a large part of the code base for bugs. The tests need to be able to be kicked off from a simple command and to be self-checking. The result of running the test suite should indicate if any tests failed. For a build to be self-testing, the failure of a test should cause the build to fail.

4. Everyone Commits To the Mainline Every Day

• Integration is primarily about communication. Integration allows developers to tell other developers about the changes they have made. Frequent communication allows people to know quickly as changes develop.

• The one prerequisite for a developer committing to the mainline is that they can correctly build their code. This, of course, includes passing the build tests. As with any commit cycle, the developer first updates their working copy to match the mainline, resolves any conflicts with the mainline, then builds on their local machine. If the build passes, then they are free to commit to the mainline.

• By doing this frequently, developers quickly find out if there’s a conflict between two developers. The key to fixing problems quickly is finding them quickly. With developers committing every few hours, a conflict can be detected within a few hours of it occurring; at that point not much has happened and it’s easy to resolve. Conflicts that stay undetected for weeks can be very hard to resolve.

• The general rule of thumb is that every developer should commit to the repository every day. In practice it’s often useful if developers commit more frequently than that. The more frequently you commit, the fewer places you have to look for conflict errors, and the more rapidly you fix conflicts.

• Frequent commits encourage developers to break down their work into small chunks of a few hours each. This helps track progress and provides a sense of momentum.

5. Every Commit Should Build the Mainline on an Integration Machine

• Ensure that regular builds happen on an integration machine, and only if this integration build succeeds should the commit be considered to be done. Since the developer who commits is responsible for this, that developer needs to monitor the mainline build so they can fix it if it breaks.

• Many organizations do regular builds on a timed schedule, such as every night. Nightly builds mean that bugs lie undetected for a whole day before anyone discovers them. Once they are in the system that long, it takes a long time to find and remove them. So builds should be continuous.

6. Fix Broken Builds Immediately

• A key part of doing a continuous build is that if the mainline build fails, it needs to be fixed right away. The whole point of working with CI is that you’re always developing on a known stable base. When the mainline build does break, fixing it becomes the highest-priority task.

• Often the fastest way to fix the build is to revert the latest commit from the mainline, taking the system back to the last known good build.

7. Keep the Build Fast

• The whole point of Continuous Integration is to provide rapid feedback. Nothing sucks the blood of a CI activity more than a build that takes a long time.

• For most projects, however, the XP guideline of a ten-minute build is perfectly within reason. It’s worth putting in concentrated effort to make it happen, because every minute you shave off the build time is a minute saved for each developer every time they commit. Since CI demands frequent commits, this adds up to a lot of time.

8. Test in a Clone of the Production Environment

• The point of testing is to flush out, under controlled conditions, any problem that the system will have in production. A significant part of this is the environment within which the production system will run. If you test in a different environment, every difference results in a risk that what happens under test won’t happen in production.

• As a result you want to set up your test environment to be as exact a mimic of your production environment as possible. Use the same database software, with the same versions; use the same version of the operating system. Put all the appropriate libraries that are in the production environment into the test environment, even if the system doesn’t actually use them. Use the same IP addresses and ports, and run it on the same hardware.

• Despite limits, your goal should still be to duplicate the production environment as much as you can, and to understand the risks you are accepting for every difference between test and production.

9. Make it Easy for Anyone to Get the Latest Executable

• One of the most difficult parts of software development is making sure that you build the right software. It’s very hard to specify what you want in advance and be correct; people find it much easier to see something that’s not quite right and say how it needs to be changed. Agile development processes explicitly expect and take advantage of this part of human behavior.

• To help make this work, anyone involved with a software project should be able to get the latest executable and be able to run it: for demonstrations, exploratory testing, or just to see what changed this week.

• Doing this is pretty straightforward: make sure there’s a well-known place where people can find the latest executable.

10. Everyone Can See What’s Happening

• Continuous Integration is all about communication, so you want to ensure that everyone can easily see the state of the system and the changes that have been made to it.

11. Automate Deployment

• To do Continuous Integration you need multiple environments: one to run commit tests, and one or more to run secondary tests. Since you are moving executables between these environments multiple times a day, you’ll want to do this automatically. So it’s important to have scripts that will allow you to deploy the application into any environment easily.

• A natural consequence of this is that you should also have scripts that allow you to deploy into production with similar ease. You may not be deploying into production every day (although some projects do), but automatic deployment helps both speed up the process and reduce errors. It’s also a cheap option, since it just uses the same capabilities that you use to deploy into test environments.

Benefits of Continuous Integration

• Reduced cost and schedule risk

• Shared stable software that works properly.

• No long integration phase, which helps you see where you are: what works and what doesn’t.

• Easier to find and remove bugs.

• Helps in frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features. This helps break down the barriers between customers and development .

• Measurable and visible code quality.

XP Values

 Communication

 Simplicity

 Feedback

 Courage

XP Values: Communication

 Poor communication in software teams is one of the root causes of failure of a project

 Stress on good communication among all stakeholders: customers, team members, project managers

 Customer representative always on site

 Pair programming

XP Values: Simplicity

 ‘Do the Simplest Thing That Could Possibly Work’  Implement a new capability in the simplest possible way  Refactor the system to be the simplest possible code with the current feature set

 ‘You Aren’t Going to Need It’
 Never implement a feature you don’t need now

XP Values: Feedback

 Always a running system that delivers information about itself in a reliable way

 The system and the code provide feedback on the state of development

 Catalyst for change and an indicator of progress

XP Values: Courage

 Projects are people-centric

 It is the ingenuity of people, not any process, that causes a project to succeed

XP Criticism

 Unrealistic--programmer centric, not business focused

 Detailed specifications are not written

 Design after testing

 Constant refactoring

 Customer availability

 The 12 practices are too interdependent

XP and Scrum

 Iterations: Scrum teams typically work in iterations (called sprints) that are from two weeks to one month long. XP teams typically work in iterations that are one or two weeks long.

 Prioritization: The Scrum product owner prioritizes the product backlog, but the team determines the sequence in which items will be developed. XP teams work in a strict priority order.

 Change: Scrum teams do not allow changes into their sprints. XP teams are much more amenable to change within their iterations.

 Engineering practices: Scrum doesn’t prescribe any engineering practices. XP prescribes engineering practices, particularly things like test-driven development, a focus on automated testing, pair programming, simple design, and refactoring.

Levels of Testing

 The code contains requirement defects, design defects, and coding defects

 Nature of defects is different for different injection stages

 No single type of testing can detect all these different types of defects

 Different levels of testing are used to uncover these defects

Each development artifact is paired with the level of testing that checks it:

 User needs → Acceptance testing
 Requirement specification → System testing
 Design → Integration testing
 Code → Unit testing

Unit Testing

 Different modules are tested separately
 Goal: to identify defects injected during coding; test the internal logic of the module
 It is a code verification technique, covered in the previous chapter
 UT is closely associated with coding, thus the programmer does UT; the coding phase is sometimes called “coding and unit testing”
 A module is considered for integration only after it has been unit tested satisfactorily

Integration Testing

 Goal : To check interaction of modules in a subsystem

 To test interfaces between modules

 Unit tested modules are combined to form subsystems, which are then tested

 Test cases to “exercise” the interaction of modules in different ways
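As a sketch, an integration test exercises the interface between two modules that have each passed unit testing on their own. The modules (`parse_record` and `add_stock`) and their interface are illustrative assumptions:

```python
# "Parser" module -- unit tested separately.
def parse_record(line):
    """Turn 'name, qty' text into a record dict."""
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

# "Inventory" module -- unit tested separately.
def add_stock(inventory, record):
    """Apply a parsed record to the inventory dict."""
    name = record["name"]
    inventory[name] = inventory.get(name, 0) + record["qty"]
    return inventory

# Integration test: does the parser's output actually satisfy the
# inventory module's expectations when the two are combined?
def test_parser_feeds_inventory():
    inv = add_stock({}, parse_record("bolt, 40"))
    inv = add_stock(inv, parse_record("bolt, 2"))
    assert inv == {"bolt": 42}
```

Each unit can pass its own tests and still fail here, e.g. if the parser returned a tuple while the inventory expected a dict; that mismatch is exactly what this level of testing catches.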

 This testing activity tests the design

System Testing & Acceptance Testing

• The entire software system is tested
• Reference document for this process: the requirements document
• Goal: does the software implement the requirements?
• It is a validation exercise for the system with respect to the requirements
• Generally the final testing stage before the software is delivered
• May be done with real data of the client, in the client’s environment, to demonstrate that the software is working properly
• This testing focuses on the external behaviour of the system; the internal logic of the program is not emphasised
• Mostly functional testing is performed at this level

Other forms of testing

 Performance testing: tools needed to “measure” performance
 Stress testing: load the system to its peak; load generation tools needed

Regression testing

 Performed when some changes are made to an existing system
 Test that previous functionality still works, so that the modification has no undesired side effect on the earlier services
 Test cases are executed on the modified system and its output compared with the earlier output to make sure that the system is working as before
 Thus previous test records are needed for comparisons
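The record-and-compare idea can be sketched as follows. The function under test (`price_with_tax`) and the stored records are illustrative assumptions; in practice the records would come from the previous release's test runs:

```python
def price_with_tax(amount, rate=0.08):
    """The 'modified system' under regression test."""
    return round(amount * (1 + rate), 2)

# Previous test records: input -> output observed before the change.
PREVIOUS_RECORDS = {100.00: 108.00, 19.99: 21.59, 0.00: 0.00}

def regression_test():
    """Re-run the recorded inputs and collect any output that differs.

    An empty result means the modification introduced no regression
    in the behaviour covered by the records.
    """
    return [(x, price_with_tax(x), expected)
            for x, expected in PREVIOUS_RECORDS.items()
            if price_with_tax(x) != expected]
```

A non-empty result lists each input alongside the new and the previously recorded output, pointing directly at the undesired side effect.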

 Regression testing of large systems can take a lot of time, since all test cases need to be executed
 But if a small change is made to the system, it is not necessary to execute the entire test suite
 Thus prioritization of test cases is needed when the complete test suite cannot be executed for a change

FAIL FAST SYSTEMS

 A fail-fast system is designed to immediately report at its interface any failure or condition that is likely to lead to failure. Fail-fast systems are usually designed to stop normal operation rather than attempt to continue a possibly flawed process. Such designs often check the system’s state at several points in an operation, so any failures can be detected early. A fail-fast module passes the responsibility for handling errors, but not detecting them, to the next-higher system design level.

Software review

 A software review is “a process or meeting during which a software product is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval”.

 In this context, the term “software product” means “any technical document or partial document, produced as a deliverable of a software development activity”, and may include documents such as contracts, project plans and budgets, requirements documents, specifications, designs, source code, user documentation, support and maintenance documentation, test plans, test specifications, standards, and any other type of specialist work product.

Varieties of software review

 Software reviews may be divided into three categories:

 Software peer reviews are conducted by the author of the work product, or by one or more colleagues of the author, to evaluate the technical content and/or quality of the work.[2]

 Software management reviews are conducted by management representatives to evaluate the status of work done and to make decisions regarding downstream activities.

 Software audit reviews are conducted by personnel external to the software project, to evaluate compliance with specifications, standards, contractual agreements, or other criteria.

Different types of peer reviews

 Code review is a systematic examination of computer source code.
 Pair programming is a type of code review where two persons develop code together at the same workstation.
 Inspection is a very formal type of peer review where the reviewers follow a well-defined process to find defects.
 Walkthrough is a form of peer review where the author leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about defects.
 Technical review is a form of peer review in which a team of qualified personnel examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards.

Process for formal reviews

 0. [Entry evaluation]: The Review Leader uses a standard checklist of entry criteria to ensure that optimum conditions exist for a successful review.

 1. Management preparation: Responsible management ensures that the review will be appropriately resourced with staff, time, materials, and tools, and will be conducted according to policies, standards, or other relevant criteria.

 2. Planning the review: The Review Leader identifies or confirms the objectives of the review, organises a team of Reviewers, and ensures that the team is equipped with all necessary resources for conducting the review.

 3. Overview of review procedures: The Review Leader, or some other qualified person, ensures (at a meeting if necessary) that all Reviewers understand the review goals, the review procedures, the materials available to them, and the procedures for conducting the review.

 4. [Individual] Preparation: The Reviewers individually prepare for group examination of the work under review, by examining it carefully for anomalies (potential defects), the nature of which will vary with the type of review and its goals.

 5. [Group] Examination: The Reviewers meet at a planned time to pool the results of their preparation activity and arrive at a consensus regarding the status of the document (or activity) being reviewed.

 6. Rework/follow-up: The Author of the work product (or other assigned person) undertakes whatever actions are necessary to repair defects or otherwise satisfy the requirements agreed to at the Examination meeting. The Review Leader verifies that all action items are closed.

 7. [Exit evaluation]: The Review Leader verifies that all activities necessary for successful review have been accomplished, and that all outputs appropriate to the type of review have been finalised.