QUALITY ASSURANCE AND QUALITY CONTROL PROCESSES AND PROCEDURES FOR SMALL PROJECTS
Ashif Khoja B.E., Dharamsinh Desai Institute of Technology, India, 2003
PROJECT
Submitted in partial satisfaction of the requirements for the degree of
MASTER OF SCIENCE
in
COMPUTER SCIENCE
at
CALIFORNIA STATE UNIVERSITY, SACRAMENTO
FALL 2009
QUALITY ASSURANCE AND QUALITY CONTROL PROCESSES AND PROCEDURES FOR SMALL PROJECTS
A Project
by
Ashif Khoja
Approved by:
______, Committee Chair Ahmed Salem, Ph.D.
______, Second Reader Isaac Ghansah, Ph.D.
______Date
Student: Ashif Khoja
I certify that this student has met the requirements for format contained in the University format manual, and that this project is suitable for shelving in the Library and credit is to be awarded for the Project.
______, Graduate Coordinator Cui Zhang, Ph.D.
______ Date
Department of Computer Science
Abstract
of
QUALITY ASSURANCE AND QUALITY CONTROL PROCESSES AND PROCEDURES FOR SMALL PROJECTS
by
Ashif Khoja
The business of software development and maintenance has become increasingly competitive. Software projects need to be cost-effective and provide high-quality products in order to compete in today's market. Today's software applications are very complex, and software failures can result in financial damage and even threaten the health or lives of human beings.
There is a dire need for an applicable, standardized, and consistent Quality Assurance (QA) model that can be consistently implemented throughout the software life cycle. The proposed quality assurance model consists of three major components: Quality Assurance (QA), Quality Control (QC), and Testing.
______, Committee Chair Ahmed Salem, Ph.D.
______Date
ACKNOWLEDGMENTS
I would like to thank my project advisor, Dr. Ahmed Salem, for supporting the idea of the project and advising me on how to move it forward. I am grateful for his continuous guidance on the work and its documentation throughout the project.
I am also thankful to my second reader, Dr. Isaac Ghansah, for his help whenever it was needed during the project. He gave important advice and proofread the document.
I am also grateful to Dr. Cui Zhang for her help in many aspects of my Master's studies at California State University, Sacramento. She showed me the way during preparation for the project and made its completion much easier.
Finally, I am thankful to my family and friends for their encouragement during difficult times and their guidance throughout the completion of my Master's project. I am also thankful to everyone else who helped me with it.
TABLE OF CONTENTS Page
Acknowledgments...... v
List of Tables ...... x
List of Figures ...... xi
Chapter
1 INTRODUCTION ...... 1
1.1 Quality...... 1
1.2 Quality Control ...... 2
1.3 Quality Assurance ...... 2
1.4 Software Quality Assurance ...... 3
1.5 SQA Activities ...... 3
1.6 Importance of Software Quality Program in Software Industry ...... 4
1.7 Example of Software Quality Model CMM ...... 7
2 BACKGROUND ...... 9
2.1 Microsoft Solution Framework (MSF) ...... 10
2.2 Extreme Programming ...... 11
2.3 Rational Unified Process (RUP) ...... 12
2.4 Proposed Software QA Model ...... 14
3 QUALITY ASSURANCE METHODOLOGY ...... 16
3.1 Defining the Quality Assurance Methodology ...... 16
3.1.1 Quality Assurance ...... 16
3.1.2 Quality Control (QC) ...... 18
3.1.3 Software Testing ...... 18
3.2 Quality Model ...... 19
4 QUALITY ASSURANCE PROCESSES AND PROCEDURES ...... 21
4.1 Quality Management (QM) ...... 21
4.2 Quality Control (QC) ...... 22
4.3 Software Testing ...... 23
5 QUALITY ASSURANCE INTEGRATION WITH MODIFIED WATERFALL MODEL ...... 25
5.1 The Software Quality Assurance Process ...... 25
5.2 QA/QC Tasks and Activities ...... 27
5.3 SQA Integration into the "Modified" Waterfall Model ...... 27
5.4 Adopting the Modified Waterfall Model ...... 28
5.5 System Concept Phase ...... 29
5.5.1 Inputs/Activities/Outputs ...... 29
5.5.2 Roles and Responsibilities ...... 34
5.5.3 Potential Risks and Constraints ...... 35
5.6 Software Requirements Phase ...... 35
5.6.1 Requirements Sub-phases ...... 35
5.6.2 Inputs/Activities/Outputs ...... 42
5.6.3 Roles and Responsibilities ...... 50
5.6.4 Potential Risks and Constraints ...... 52
5.7 Software Design Phase ...... 52
5.7.1 Input ...... 53
5.7.2 QA/QC Related Activities ...... 53
5.7.3 Output and Templates ...... 58
5.7.4 Issues and Concerns ...... 59
5.7.5 Overall SQA Functions and Design Phase ...... 59
5.8 Software Development Phase ...... 60
5.8.1 Input ...... 61
5.8.2 QA/QC Related Activities ...... 61
5.8.3 Output and Templates ...... 65
5.8.4 Issues and Concerns ...... 65
5.9 Software Integration and System Test Phase ...... 66
5.9.1 Input ...... 67
5.9.2 QA/QC Related Activities ...... 67
5.9.3 Testing Strategies ...... 71
5.9.4 QA/QC Testing Activities...... 76
5.9.5 Output and Templates ...... 76
5.9.6 Issues and Concerns ...... 77
5.9.7 Overall SQA Functions and System Integration Phase ...... 77
5.10 Software Acceptance Test Phase ...... 79
5.10.1 Input ...... 79
5.10.2 QA/QC Related Activities ...... 80
5.10.3 Output and Templates ...... 83
5.10.4 Issues and Concerns ...... 83
5.11 Operation and Maintenance Phase ...... 84
5.11.1 Input ...... 84
5.11.2 QA/QC Related Activities ...... 84
5.11.3 Output and Templates ...... 87
5.11.4 Issues and Concerns ...... 87
6 CONCLUSION AND FUTURE WORK ...... 88
References ...... 89
LIST OF TABLES Page
1. Table 1 Sample Traceability Matrix (Forward To/From Requirements)……..46
2. Table 2 Detailed Traceability Matrix – User to SRS....…………………...... 47
3. Table 3 Detailed Traceability Matrix - SRS to Design ………………………48
4. Table 4 Detailed Traceability Matrix - SRS to System Testing……………....48
LIST OF FIGURES Page
1. Figure 1 Quality Model………………….……………………………………19
2. Figure 2 Software Development and QA …..……………………..………….24
3. Figure 3 SQA and the Modified Waterfall Model……………………………26
4. Figure 4 Modified Waterfall Model .………………………………………... 28
5. Figure 5 Software Concepts Phase ….………………………………………. 29
6. Figure 6 Requirements Phase ………………………………………………....35
7. Figure 7 Context Diagram ………………………………………………….....45
8. Figure 8 Four Directions of Requirement Traceability…………………….....46
9. Figure 9 Data Dictionary …………………………….…………………….....50
10. Figure 10 Design Phase…………….…………………………………………53
11. Figure 11 Software Development and Unit Testing ...………………………..61
12. Figure 12 Integration & System Testing……………………………………...67
13. Figure 13 Bug Life Cycle ………….…………………………………………69
14. Figure 14 Big Bang Testing ……….…………………………………………71
15. Figure 15 Incremental Testing……….……………………………………….72
16. Figure 16 Top-down Testing…………….……………………………………73
17. Figure 17 Bottom-up Testing……….………………………………………...74
18. Figure 18 Final Acceptance……….………………………………………….79
19. Figure 19 Operation & Maintenance..………………………………………. 84
Chapter 1
INTRODUCTION
1.1 Quality
The American Heritage Dictionary defines quality as "a characteristic or attribute of something." As an attribute of an item, quality refers to measurable characteristics: things we can compare to known standards such as length, color, electrical properties, and malleability. However, software, largely an intellectual entity, is more challenging to characterize than physical objects.
Nevertheless, measures of a program's characteristics do exist. These properties include cyclomatic complexity, cohesion, number of function points, lines of code, and many others. When we examine an item based on its measurable characteristics, two kinds of quality may be encountered: quality of design and quality of conformance.
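As an illustration of one such measure, cyclomatic complexity can be approximated by counting a program's decision points. The sketch below is a simplified, Python-specific approximation of the McCabe metric (the set of counted node types is an assumption for illustration, not a complete implementation):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 plus one per decision point."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        # Each branching construct adds one independent path.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):  # each and/or adds a path
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: three independent paths
```

Unlike length or color, such measures are indirect: they characterize the structure of the program text rather than the software itself.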
Quality of design refers to the characteristics that designers specify for an item.
Quality of conformance is the degree to which the design specifications are followed during manufacturing.
In software development, quality of design encompasses the requirements, specifications, and design of the system. Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high [1].
1.2 Quality Control
Variation control may be equated to quality control. But how do we achieve quality control? Quality control involves a series of inspections, reviews, and tests used throughout the software process to ensure that each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications.
A key concept of quality control is that all work products have defined, measurable specifications to which we may compare the output of each process. The feedback loop is essential to minimize the defects produced [1].
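This idea of comparing each work product against defined, measurable specifications can be sketched as a simple check; the threshold names and limits below are hypothetical examples, not values from this document:

```python
# Hypothetical thresholds; a real project would take these from its standards.
SPEC = {"max_complexity": 10, "min_branch_coverage": 0.80}

def quality_control_check(measured: dict) -> list:
    """Compare a work product's measured attributes against its spec and
    return the defects to feed back to the process that produced it."""
    defects = []
    if measured["complexity"] > SPEC["max_complexity"]:
        defects.append("complexity exceeds limit")
    if measured["branch_coverage"] < SPEC["min_branch_coverage"]:
        defects.append("branch coverage below minimum")
    return defects

# A work product that misses both targets yields two defects to feed back.
print(quality_control_check({"complexity": 14, "branch_coverage": 0.75}))
```

The returned defect list is the feedback loop: it tells the producing process exactly where the work product missed its specification.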
1.3 Quality Assurance
Quality assurance consists of a set of auditing and reporting functions that assess the effectiveness and completeness of quality control activities. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals.
Of course, if the data provided through quality assurance identify problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues [1].
1.4 Software Quality Assurance
Even the most jaded software developers will agree that high-quality software is an important goal. But how do we define quality? A wag once said, "Every program does something right, it just may not be the thing that we want it to do."
Many definitions of software quality have been proposed in the literature.
Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. This definition serves to emphasize three important points:
1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will almost surely result.
3. A set of implicit requirements often goes unmentioned. If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect [1].
1.5 SQA Activities
Software quality assurance is composed of a variety of tasks associated with two different constituencies: the software engineers who do technical work, and an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
Software engineers address quality by applying solid technical methods and measures, conducting formal technical reviews, and performing well-planned software testing. The charter of the SQA group is to assist the software team in achieving a high-quality end product. The Software Engineering Institute recommends a set of SQA activities that address quality assurance planning, oversight, record keeping, analysis, and reporting. These activities are performed by an independent SQA group that:
Prepares an SQA plan for the project.
Participates in the development of the project's software process description.
Reviews software engineering activities to verify compliance with the defined software process.
Audits designated software work products to verify compliance with those defined as part of the software process.
Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
Records any noncompliance and reports to senior management [1].
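The record-keeping and reporting activities above can be sketched as a simple noncompliance log; the class names and record fields below are illustrative assumptions, not a prescribed SQA format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Noncompliance:
    """One recorded deviation from a standard or documented procedure."""
    work_product: str
    standard: str
    description: str
    resolved: bool = False
    opened: date = field(default_factory=date.today)

class SQALog:
    """Records noncompliances and reports the ones still open to management."""
    def __init__(self):
        self.records = []

    def record(self, item: Noncompliance) -> None:
        self.records.append(item)

    def open_items(self) -> list:
        """The subset to escalate in the report to senior management."""
        return [r for r in self.records if not r.resolved]

log = SQALog()
log.record(Noncompliance("design document", "IEEE 1016", "interface section missing"))
print(len(log.open_items()))  # 1
```

The point of the sketch is the separation of duties: engineers fix the deviation, while the SQA group's job is only to record it and keep it visible until it is resolved.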
1.6 Importance of Software Quality Program in Software Industry
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable. Many organizations are able to determine who is skilled at fixing problems, and then reward such people. However, determining who has a talent for preventing problems in the first place, and figuring out how to reward such behavior, is a significant challenge.
Recent major computer system failures caused by lack of quality include the following:
In February of 2009, users of a major search engine site were prevented from clicking through to sites listed in search results for part of a day. It was reportedly due to software that did not effectively handle a mistakenly placed "/" in an internal ancillary reference file that was frequently updated for use by the search engine. Users, instead of being able to click through to listed sites, were redirected to an intermediary site which, as a result of the suddenly enormous load, was rendered unusable.
A large health insurance company was reportedly banned by regulators from selling certain types of insurance policies in January of 2009 due to ongoing computer system problems that resulted in denial of coverage for needed medications and mistaken overcharging or cancellation of benefits. The regulatory agency was quoted as stating that the problems were posing "a serious threat to the health and safety" of beneficiaries.
A news report in January 2009 indicated that a major IT and management consulting company was still battling years of problems in implementing its own internal accounting systems, including a 2005 implementation that reportedly "was attempted without adequate testing".
In August of 2008 it was reported that more than 600 U.S. airline flights were significantly delayed due to a software glitch in the U.S. FAA air traffic control system. The problem was claimed to be a 'packet switch' that 'failed due to a database mismatch', and occurred in the part of the system that handles required flight plans.
Software system problems at a large health insurance company in August 2008 were the cause of a privacy breach of personal health information for several hundred thousand customers, according to news reports. It was claimed that the problem was due to software that 'was not comprehensively tested'.
A major clothing retailer was reportedly hit with significant software and system problems when attempting to upgrade their online retailing systems in June 2008.
Problems remained ongoing for some time. When the company made their public quarterly financial report, the software and system problems were claimed as the cause of the poor financial results.
Software problems in the automated baggage sorting system of a major airport in February 2008 prevented thousands of passengers from checking baggage for their flights. It was reported that the breakdown occurred during a software upgrade, despite pre-testing of the software. The system continued to have problems in subsequent months.
News reports in December of 2007 indicated that significant software problems were continuing to occur in a new ERP payroll system for a large urban school system. It was believed that more than one third of employees had received incorrect paychecks at various times since the new system went live the preceding January, resulting in overpayments of $53 million, as well as underpayments. An employees' union brought a lawsuit against the school system, the cost of the ERP system was expected to rise by 40%, and the non-payroll part of the ERP system was delayed. Inadequate testing reportedly contributed to the problems.
Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, avoid a 'Process Police' mentality, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the process. A typical scenario would be that more days of planning, reviews, and inspections will be needed, but less time will be required for late-night bug-fixing and handling of irate customers.
1.7 Example of Software Quality Model CMM
CMM (Capability Maturity Model) is a model of process maturity for software development - an evolutionary model of the progress of a company's abilities to develop software. In November 1986, the Software Engineering Institute (SEI) in the United States, in cooperation with the Mitre Corporation, created the Capability Maturity Model for Software.
Development of this model was necessary so that the U.S. federal government could objectively evaluate software providers and their abilities to manage large projects.
Many companies had been completing their projects with significant overruns in schedule and budget. The development and application of the CMM helps to solve this problem. The key concept of the standard is organizational maturity. A mature organization has clearly defined procedures for software development and project management. These procedures are adjusted and perfected as required. In such a software development company there are standards for the processes of development, testing, and software application, and rules for the appearance of final program code, components, interfaces, etc. [2][3].
The CMM model consists of five levels of maturity. The first level is labeled "chaotic". This is an indication that the organization is aware of its lack of quality standards and would like to improve on it. The levels are:
1. Initial (chaotic, ad hoc, heroic) - the starting point
2. Repeatable (project management, process discipline)
3. Defined (institutionalized) - confirmed as a standard business process
4. Managed (quantified) - process management and measurement takes place
5. Optimizing (process improvement) - deliberate process optimization/improvement
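Because the levels form an ordered scale, they can be captured as an enumeration; the sketch below and its helper are illustrative, not part of the CMM standard itself:

```python
from enum import IntEnum

class CMMLevel(IntEnum):
    INITIAL = 1      # chaotic, ad hoc, heroic
    REPEATABLE = 2   # project management, process discipline
    DEFINED = 3      # institutionalized standard process
    MANAGED = 4      # quantified management and measurement
    OPTIMIZING = 5   # deliberate process improvement

def measurement_expected(level: CMMLevel) -> bool:
    """Quantitative process measurement is expected only at level 4 and above."""
    return level >= CMMLevel.MANAGED

print(measurement_expected(CMMLevel.DEFINED))    # False
print(measurement_expected(CMMLevel.OPTIMIZING)) # True
```

Using an ordered type makes the evolutionary nature of the model explicit: each level strictly builds on the capabilities of the one below it.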
Chapter 2
BACKGROUND
Software engineering is the establishment and use of sound engineering principles in order to develop an economical and reliable end product that works effectively and efficiently [4]. The establishment of sound software engineering principles implies the definition and use of repeatable processes that produce repeatable results.
The software development life cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product. Within the broader context of Application Lifecycle Management (ALM), the SDLC is essentially the part of the process in which coding/programming is applied to the problem being solved by the existing or planned application.
The phases of SDLC can vary somewhat, but generally include the following:
Conceptualization
Requirements and cost/benefits analysis
Detailed specification of the software requirements
Software design
Development
Testing
User and technical training
Integration and delivery (Implementation)
Maintenance
There are many QA programs and models available that can be used to address software quality assurance and software quality control during the software development life cycle, such as:
Microsoft Solution Framework (MSF)
Extreme Programming model (XP)
Rational Unified Process (RUP)
2.1 Microsoft Solution Framework (MSF)
The Microsoft Solution Framework [5] provides deep insight into software development as practiced at Microsoft. MSF represents Microsoft's attempt to spread its software development knowledge. The MSF is one of two complementary frameworks (alongside the Microsoft Operations Framework (MOF)) forming an overall solution for large software companies occupied with medium-sized and large projects. However, the MSF may be applied alone. It does not depend on MOF, which allows an easy implementation in medium-sized companies. Focusing on delivering high-quality software, the major principles of the framework include iteratively adding functionality and the "team of peers" approach without a dominating project leader. Both principles improve not only the quality of software but also the way it is built. Microsoft supplies a number of templates for artifacts recommended in the MSF. Since the MSF does not include a huge set of recommended tools (which will change with the next release of Visual Studio), it can be easily integrated into existing development environments.
MSF provides most practices for software quality development and software quality assurance. Quality is formally defined as an objective, a unique feature that differentiates the MSF from the other processes. This quality objective is continuously verified to keep the project on track and contributes to very effective quality assurance within the MSF. CCM and pair programming are not provided by the MSF, but can be easily integrated into the framework to achieve a maximum of quality assurance. The MSF provides methods regarding requirements and architecture, as both are key elements used during the overall process. Similar to RUP, the MSF starts at the beginning of a project, in the Envisioning phase, with requirements definition and management as well as drafting the first candidate architecture and prototypes. The focus on teams, defined as a team model ("team of peers"), is another key strength of the framework. The approach is unique in that it is the only teamwork approach that defines goals for each role. An internal client relationship to the customer enforces quality assurance.
2.2 Extreme Programming
Extreme Programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development [6], it advocates frequent "releases" in short development cycles, which is intended to improve productivity and introduce checkpoints where new customer requirements can be adopted.
Other elements of Extreme Programming include: programming in pairs or doing extensive code review, unit testing of all code, avoiding programming of features until they are actually needed, a flat management structure, simplicity and clarity in code, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers.
The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels, on the theory that if some is good, more is better. It is unrelated to "cowboy coding", which is more free-form and unplanned. It does not advocate "death march" work schedules, but instead working at a sustainable pace [7]. Critics have noted several potential drawbacks, including problems with unstable requirements, no documented compromises of user conflicts, and lack of an overall design spec or document.
Extreme Programming [8] represents a completely new approach of developing software. For smaller projects, XP offers a good approach to achieve high software quality. The tight involvement of the customer, the main focus on testing and the approach to reduce design efforts support this goal. However, XP may lead to organizational problems when applied in large projects. For most projects, the tight integration of the customer during the development is hard to achieve.
2.3 Rational Unified Process (RUP)
The RUP [9] ranks among the leading software development processes available today. RUP is described as a use-case-driven, architecture-centric, iterative and incremental methodology. Functional requirements are captured in use cases that drive the development process. Specified use cases drive the analysis, design, and test workflows. Use cases also drive the architecture, which, in turn, influences the selection of use cases. This interplay between use cases and architecture evolves in the context of an iterative and incremental process.
The approach is to divide the project into a series of iterations where the most architecturally significant and/or technically complex use cases are tackled early in development. Errors found in architecture or complex use cases late in development can lead to exponential costs in rework and fixes. The RUP allows the user to build out these complex use cases as early iterations of development so that they are designed, coded and tested before the entire, integrated application is built. Bugs or design flaws are detected early without having to deconstruct the entire application or re-engineer it to address the flaws.
The main advantages are the support through Rational Software, which is constantly improving the process, the tightly coupled tool support and tool documentation, as well as Rational's support and mentoring in implementing the process.
RUP delivers a well-structured framework, divided into phases and workflows, allowing easy navigation within the framework. The additional concepts of artifacts and workers are easy to understand and facilitate resource planning and work structuring. The process includes comprehensive quality assurance measures. Minimal standards such as requirements engineering and iterative software development are included, as well as testing, configuration management, and collaboration with the customer during the overall process. Additional reviews form an important part of the process, continuously monitoring quality and progress.
RUP strongly verifies quality throughout the process. A well-defined review procedure is triggered at the end of each iteration. This review verifies the achievement of all targets of the iteration and discovers the reasons for any failure, as these are an important input for the next iteration's planning activities. Customer requirements are very well handled within RUP, as the requirements discipline is heavily involved during the Inception and Elaboration phases. Establishing requirements traceability is one of the most important goals in RUP. The process is also very architecture-driven. The first candidate architecture is already created during the Inception phase.
2.4 Proposed Software QA Model
The comparison of the software process models RUP, MSF and XP shows that all three models define practices which support software quality development as well as software quality assurance. These practices allow quality software to be developed with no need for further, separate software quality assurance activities. MSF and RUP are the most elaborate software process models regarding software quality support. XP defines fewer software quality support practices than MSF and RUP. However, much software quality support is implicit in XP's principles, which affect the mindset of developers. XP is specialized for small and medium software projects, where its software quality support is as good as in MSF or RUP. Each of the processes may be enhanced by including best practices defined in one of the other processes. Pair Programming, for example, can be easily included in the programming practice of MSF or RUP. The explicit definition of quality as a goal is easy to include in RUP and XP. There are already efforts to integrate the process models at a general level.
The model I am proposing here for small projects recommends a Software Quality Assurance Methodology that is initiated by a Software Quality Framework defining the vision, policy, and objectives of the SQA methodology. Once the framework is established, processes are built that embed SQA into development. These processes are then executed as part of standard development activities. The model has a clear vision and an agreed-upon quality policy in order to deliver high-quality products, meet customers' requirements, and exceed their expectations. This entails providing sufficient resources, training staff, managing work products, and monitoring and controlling projects and processes for continuous process improvement.
Chapter 3
QUALITY ASSURANCE METHODOLOGY
3.1 Defining the Quality Assurance Methodology
Software Quality Assurance Methodology (SQAM) is defined as a planned and systematic approach to the evaluation of the quality of, and adherence to, program, project, and product standards, as well as the processes and procedures used to conduct business operations. SQAM includes the process of assuring that standards and procedures are established and are followed throughout the SDLC.
Compliance with agreed-upon standards and procedures is evaluated through process monitoring, product evaluation, and audits. Processes, products, services, and projects across the life cycle (from concept to retirement) should be subject to the quality goals, objectives, and policies, and should strive toward a culture of continuous process improvement that would place the company in a position of performance excellence for its purpose and mission.
The SQAM makes clear distinctions between Quality Assurance (QA), Quality Control (QC), and Testing activities.
3.1.1 Quality Assurance
Quality Assurance is the function of software quality that assures standards, processes and procedures are appropriate for the project and are correctly implemented.
The purpose of Quality Assurance is to provide adequate visibility into project progress so that management can take effective actions when the software project's performance deviates significantly from the project plans. QA oversight involves tracking and reviewing software accomplishments and results against documented estimates, commitments, and plans, and adjusting these plans based on the actual accomplishments and results. By having a centralized SQA program with standardized documentation, templates, procedures, and guidelines, software development companies will be able to enforce quality standards and improve the quality of the process and the product throughout the software development life cycle.
Quality Assurance assures that software testing is performed in accordance with plans and procedures. Oversight validates this through the use of Quality Controls.
Quality Assurance reviews testing documentation for completeness and adherence to standards. Quality Assurance also monitors testing, provides follow-up on non-conformance issues, and validates product readiness prior to implementation. The objectives of Quality Assurance in monitoring software testing are to assure that:
Test procedures are testing the software requirements in accordance with test plans.
Test procedures are verifiable.
The correct or "advertised" version of the software is being tested.
Test procedures are followed.
Non-conformances occurring during testing are noted and recorded.
Test reports are accurate and complete.
Regression testing is conducted to assure non-conformances have been corrected.
Resolution of all non-conformances based on risk rating occurs prior to delivery.
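The last objective, resolving non-conformances by risk rating before delivery, can be expressed as a simple delivery gate; the risk scale and record shape below are hypothetical examples, not values from this document:

```python
# Hypothetical risk scale; a real project defines its own ratings and gates.
BLOCKING_RISKS = {"high", "medium"}

def ready_for_delivery(nonconformances: list) -> bool:
    """Gate check: no unresolved non-conformance with a blocking risk rating."""
    return not any(nc["risk"] in BLOCKING_RISKS and not nc["resolved"]
                   for nc in nonconformances)

findings = [
    {"id": "NC-1", "risk": "high", "resolved": True},
    {"id": "NC-2", "risk": "low",  "resolved": False},  # low risk may ship
]
print(ready_for_delivery(findings))  # True
```

The gate encodes the QA policy directly: an unresolved high-risk finding blocks delivery, while a low-risk one may be deferred.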
3.1.2 Quality Control (QC)
Quality Control (QC) is the function of software quality used to verify that a project follows standards, processes, and procedures, and that the project produces the required internal and external products.
QC has a focus on the product at the program and project level. The activities in Quality Control plan, manage, control, and report on how products, artifacts, deliverables, services, or other results of processes conform to requirements, standards, and specifications. Variations are normally detected through verification and testing activities and are reported as defects.
3.1.3 Software Testing
Testing is a process of executing a program with the intent of finding errors in software products. It is a set of procedures performed to validate the quality of software products and artifacts through a variety of types of software validation, which may include unit testing, integration testing, system testing, regression testing, user acceptance testing, performance testing, and security testing. The type of testing performed is frequently dictated by the phase in which a product is located within the SDLC. For example, a product under development may undergo unit testing, while a product in implementation may undergo user acceptance testing.
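For instance, unit testing validates a single component against its requirement in isolation. A minimal sketch using Python's standard unittest module (the function under test and its requirement are hypothetical examples):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical component under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test checks one requirement of the component in isolation.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)

suite = unittest.TestLoader().loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Later phases then widen the scope of the same idea: integration tests exercise components together, and acceptance tests execute the assembled system against user requirements.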
3.2 Quality Model
The Software Quality Assurance Methodology is initiated by the Software Quality Framework, which defines the vision, policy, and objectives of the SQA methodology. Once the framework is established, processes are built that embed SQA into development. These processes are then executed as part of standard development activities. The diagram below visually demonstrates the flow of the SQA methodology. I recommend that an organization adopt this general framework as a model for its own SQA methodology. Figure 1 shows the Quality Model which I have developed as part of this project.
Figure 1 Quality Model

[Figure 1 presents the Quality Model in three layers: General Framework, Process, and Execution. The General Framework comprises the Quality Vision, Quality Policy, Quality Goals, Scope, Standards & Procedures, Process & Improvements, and Methods, Tools, Techniques. The Process layer comprises Quality Organization, Quality Reporting, QA Planning, Quality Analysis, QC Planning, and Quality Execution & Monitoring. The Execution layer comprises QA Procedures (Audit Plan, Audit Checklist) and Validation (Test Plan, Verification Checklist, Testing).]
Chapter 4
QUALITY ASSURANCE PROCESSES AND PROCEDURES
Software Quality Assurance (SQA) is a planned and systematic approach to evaluating the quality of and adherence to software product standards, processes, and procedures. QA processes and activities overlap and are integrated into the Software
Development Life Cycle so that SQA is built into every phase of the SDLC ensuring quality is addressed from conception through implementation of a software project. The
Software Quality Assurance Process is defined by the application of Quality
Management, Quality Control and Software Testing.
4.1 Quality Management (QM)
Quality Management, also referred to as Quality Assurance, is the function of software quality that assures standards, processes and procedures are appropriate for the project and are correctly implemented.
The purpose of QM is to provide adequate visibility into project progress so that management can take effective action when the software project's performance deviates significantly from the project plans. QA oversight involves tracking and reviewing software accomplishments and results against documented estimates, commitments, and plans, and adjusting these plans based on the actual accomplishments and results. By having a centralized QA program with standardized documentation, templates, procedures, and guidelines, a company will be able to enforce quality standards and improve the quality of the process and the product throughout the software development life cycle.
QM assures that software testing is performed in accordance with plans and procedures. Management validates this through the use of Quality Controls. QM reviews testing documentation for completeness and adherence to standards. QM also monitors testing and provides follow-up on non-conformance issues and validates product readiness prior to implementation. The objectives of QM in monitoring software testing are to assure that:
Test procedures are testing the software requirements in accordance with test plans.
Test procedures are verifiable.
The correct or "advertised" version of the software is being tested.
Test procedures are followed.
Non-conformances occurring during testing are noted and recorded.
Test reports are accurate and complete.
Regression testing is conducted to assure non-conformances have been corrected.
Resolution of all non-conformances based on risk rating occurs prior to delivery.
4.2 Quality Control (QC)
Quality Control (QC) is the function of software quality used to verify that a project follows standards, processes, and procedures, and that the project produces the required internal and external products.
QC focuses on the product at the program and project level. QC activities are used to plan, manage, control, and report on how products, artifacts, deliverables, services, or other results of processes conform to requirements, standards, and specifications.
Variations are normally detected through verification and testing activities and are reported as defects.
4.3 Software Testing
Testing is a process of executing a program with the intent of finding errors in software products. It is a set of procedures performed to validate the quality of software products and artifacts through a variety of different types of software validation which may include unit testing, integration testing, system testing, regression testing, User
Acceptance Testing, performance testing, and security testing. The type of testing performed is frequently dictated by the phase in which a product is located within the
SDLC. For example, a product under development may undergo Unit Testing while a product in implementation may undergo User Acceptance Testing.
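As a concrete illustration of unit testing, the sketch below exercises a hypothetical `apply_discount` function with Python's standard `unittest` framework. The function, its rules, and all names are invented for this example and are not part of any project discussed here:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: returns price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved as `test_discount.py`, the suite can be run with `python -m unittest test_discount`; regression testing would simply re-run this same suite after each non-conformance is corrected.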
QM, QC and Software Testing activities occur throughout the SDLC. Figure 2 illustrates the timing of these activities within the SDLC.
Figure 2 Software Development and QA
Chapter 5
QUALITY ASSURANCE INTEGRATION WITH MODIFIED WATERFALL MODEL
5.1 The Software Quality Assurance Process
When the vision plan of a software development project includes the implementation of a predefined level of quality into the product, the project needs to develop a Quality Assurance Plan to achieve management's objectives and goals. This plan specifies which development artifacts across all phases of the SDLC must be created by the development team and subjected to quality review or testing. Therefore, the Quality Assurance Plan must be able to tailor the "standard" SDLC in order to ensure the SDLC creates the necessary artifacts for QA review.
Figure 3 illustrates how QA is integrated into the modified waterfall model. QA should be performed within each phase. As seen in the figure, each phase has corresponding QA tasks that need to be performed. For example, during the implementation and unit testing phase, the quality assurance tasks could be code reviews, inspections and/or walkthroughs. Also note that throughout the SDLC, the change control management process is ongoing. There must be a process for documenting and tracking changes of any artifacts during the life of the project.
Figure 3 SQA and the Modified Waterfall Model

[Figure 3 pairs each waterfall phase with its SQA activity: Concept Phase with Concept Review; Requirement Definition with Requirement QA; System & Software Design with Design QA; Development & Unit Testing with Code QA; Integration & Testing with System Testing; Final Acceptance with the Deployment Process; Operation & Maintenance with Change Request Management.]
5.2 QA/QC Tasks and Activities
When implementing a Quality Program, Project Management should ensure standardized processes are in place that allow QA personnel to perform quality control activities and to schedule them into the SDLC. These activities act as guidelines for the developers so that the software development process provides the desired end results.
The software quality assurance activities should be in the domain of developers and software testers while QA management monitors and adjusts the QA process to improve the results. Each phase in the development life cycle has its associated QA stage.
5.3 SQA Integration into the “Modified” Waterfall Model
SQA integration into the SDLC is based on the analysis of existing company processes, practices, and SDLC methodology options. The approach uses a modified waterfall SDLC methodology that breaks the project down into two or more parts, sometimes called iterations, phases, or stages. Figure 4 shows the phases of the modified waterfall model.
Figure 4 Modified Waterfall Model
5.4 Adopting the Modified Waterfall Model
Adoption of the modified waterfall SDLC methodology model allows for:
Better requirements.
Better change management and tracking.
Better SQA integration.
Earlier testing and providing an earlier reading on project status.
Better code management in the form of stubs and drivers, required for testing.
Some prototyping for areas deemed risky or difficult.
5.5 System Concept Phase
A new project starts when a business entity requires a problem to be solved using automation. The Concept phase is the starting point of the new software project.
Figure 5 illustrates the QA activities to be performed during the Concept Phase.
Figure 5 Software Concepts Phase
5.5.1 Inputs/Activities/Outputs:
The problem during this phase can often be summarized with a single statement. From there, the problem statement is expanded into a more detailed description of how the problem could be solved. This phase describes what the automated solution needs to have and what the user would like it to have. Inputs, Activities and Outputs in the System Concept Phase are:
Inputs
Problem Statement
A problem statement is a description of the problem requiring an
automated solution. This phase describes what the solution needs to have
and what the user would like it to have.
Activities
Create a vision document. The vision document describes the purpose of the project and what it will eventually become. This vision is normally defined based on the business requirements.
Create a project scope document. The scope of the project is determined
based on the vision document. The scope need not include the complete
product but only an initial stage with more stages following in the next
iteration.
Identify QA resources which can be hardware, software and employees.
The QA Manager should secure the resources needed to perform quality
assessment functions throughout the project.
Evaluate standards and procedures. QA/QC personnel assess the development team's existing quality management system (quality assurance plan, tasks, lessons learned from past projects, etc.) to assure that standards and procedures are in place as required by the organizational quality assurance requirements for the new project.
Inspect all documents to make sure the proposal is viable.
Review the proposal to make sure all functionality for ensuring a quality
product has been considered and included.
Inspect the vision document for achievability and feasibility.
Outputs
The Vision Plan
The Quality Assurance Plan
The Quality Activity Schedule
Development Charter (optional)
Configuration Management Plan
A feasibility study to determine if the project makes economic sense.
A Project Change Management plan, to define early on how changes will
be managed to ensure a quality product.
Outputs templates/documents details:
Vision Plan
The Vision Plan applies to the whole project and may change slowly along with changes in the product's strategic positioning or as objectives change over time. The Vision Plan is mostly driven by business rules and objectives.
The Software Quality Assurance Plan
This is a document specifying exactly what is expected of the Quality Assurance process. The quality policies of the developing organization are specified here, listing exactly what will be subjected to inspections and what level of inspection will be required.
The Quality Assurance Schedule
This normally contains tasks, activities, and the timeline schedule indicating when during the development process these QA activities must be completed. Typically these activities would include:
Formal and informal reviews
Software testing
Measurement
Record keeping and reporting.
Review of project plans to ensure they follow the defined process for the project.
Change control processes and procedures
Review of project to ensure the work performed is following the project plans.
Process improvement assessments
The Project manager and quality assurance personnel together determine the schedule for Quality Assurance activities and the schedule is captured in the project and iteration plan, which may then be referenced from the Quality Assurance Plan.
The Development Charter (Optional)
The Development Charter is normally created once the development team is formed. The Charter describes who is on the development team, how the team works to solve problems, and other general information regarding the development team's structure.
For an environment such as a small company, each development team can generate a charter which will stay in effect as long as the team stays together.
The Configuration Management Plan
The Configuration Management Plan provides an overview of the organization's configuration, task configuration, and process configuration. Software is usually made up of several programs/applications. Each application and its related documentation and data can be labeled as a "configurable item" (CI). The number of configurable items in any software project and the grouping of artifacts that make up a CI is a management choice. The end product is a group of CIs. The status of a CI at a given point in time is referred to as the baseline of that CI. The baseline serves as a reference point in the software.
The Configuration Management Plan describes:
How CIs will be named so that they can be uniquely identified.
How CIs will be controlled to ensure unique tracking of each version of the CI. A new version is created each time a baseline is updated and finalized. Each version of the baseline should be stored in version control.
How each CI will be tracked and accounted for in reporting.
How audits will be performed on the CIs and the overall Configuration Management process.
5.5.2 Roles and Responsibilities
Large projects may rely on System Analysts but small projects may utilize developers to perform some technical roles.
Input
Vision plan
Quality plan
Business rules
QA/QC Activities
System concept review
Review lessons learned
Resource planning
Review Change Control procedures
Output
Feasibility study
Quality requirements
Software Quality Assurance Plan
Quality activity schedule
Change Control Process
5.5.3 Potential Risks and Constraints
Availability of resources to implement the project.
Availability of personnel to execute the development phase.
Availability of personnel to run the system after development.
5.6 Software Requirements Phase
Figure 6 illustrates the QA activities to be performed during the Software
Requirements Phase. QA in this phase is focused on creating actionable, measurable, testable requirements that relate to business needs. Requirements traceability and functional test cases, both key QA controls, result from this phase.
Figure 6 Requirements Phase
5.6.1 Requirements Sub-phases
The Requirements Phase consists of sub-phases designed to arrive at comprehensive business, system, and technical requirements. Each sub-phase and the activities associated with it are defined below.
Requirements Sub-phase Definition
Requirements Elicitation
. Requirements Elicitation is a series of workshops designed to arrive at the end user's description of the system to be developed. These are not functional requirements but rather statements of what the end user would like the system or application to accomplish.
. Requirements Elicitation results in general requirements as described by the end user.
Requirements Gathering Phase
. Requirements gathering focuses on working with focus groups (individuals from each area of focus pertaining to the functionality of the end product) to create detailed requirements. The focus groups should use the general requirements obtained from the Requirements Elicitation phase as a starting point for discussions.
. Requirements Gathering results in detailed requirements and documented use cases.
Requirements Analysis
. Detailed review of the functional and non-functional requirements developed during the Requirements Gathering phase to ensure each requirement is actionable, measurable, and testable and relates to business needs.
. Functional requirements are requirements that specify the functions of the system: what the system will do.
. Non-functional requirements are requirements that do not relate to the functions of the system. They include run time, security, and performance of the application.
. Requirements Analysis results in Software Requirements Specifications and functional test cases.
Requirements Management Phase
. Requirements Management is an integral part of the software requirements phase and generally refers to the collection of activities undertaken by project managers, business analysts, and engineering leads in order to gather, store, track, prioritize, and implement requirements. In essence it is the process of establishing and maintaining an agreement with the customer on the requirements for the software project throughout the software lifecycle.
Activities to Perform
Requirement Elicitation
. Create an agenda for each workshop that focuses on which functions the application should have and which functions are nice to have.
. Establish objectives for each workshop.
. Set ground rules for the workshop members to stay focused on:
o Required functions
o Nice-to-have functions
. Use time-boxed discussions: set fixed periods of time for each topic on the agenda and keep discussions within the set timeframe.
. Keep the workshops small. Teams of more than six people can become
difficult to manage.
. Document the results of each workshop and create a list of the high-level requirements arrived at from all the workshops.
Requirement Gathering Phase
. Identify user classes and their characteristics. A user class is a class of users who perform similar tasks within a system or process.
. Select a product champion for each type of user. The champion is
someone who can serve as the voice of the user class. The champion
should be a user with a reasonable amount of knowledge of the existing
system or process as well as the day to day tasks of that group.
. Establish a focus group of user champions. The focus group should
represent a broad spectrum of different types of users. Each user should
know about his/her own area of expertise as well as a general knowledge
of the other tasks that need to be performed.
. Using the general requirements gathered from the Requirements
Elicitation Workshops, solicit input from the focus group in regards to the
functional and quality characteristics of the proposed end product.
. Document the detailed requirements emerging from the focus groups.
. Work with the focus groups to create "Use Cases" demonstrating the need
for each requirement and potentially identifying additional requirements.
A use case describes event sequences for an actor to use the system. It is a
narrative description of the process. Conduct the following steps when
developing the use cases.
o Identify typical system events and expected responses for each use
case. These system events may be events that occur regularly or
infrequently such as daily, weekly or monthly processing tasks.
o Observe users performing similar tasks in the existing system or
process. Observation may identify sub-tasks not included in
narrative descriptions.
o Examine problem reports from the current system. By examining
formal problem reports, the system analyst can get an insight about
current shortcomings in the existing system and identify solutions
for the new system.
o Create use case diagrams. For each use case diagram, establish:
1. Boundary - System boundary can be a computer system,
organization boundary, or department boundary.
2. Actors - An external entity (person or machine) that interacts
with or uses the system.
3. Sequence of events description - A high level process of what
an actor will do with a system.
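The three elements above (boundary, actors, sequence of events) can be captured in a small record so that use cases are stored consistently. The Python sketch below is illustrative; the field names are assumptions, and the identifier UC-28 is borrowed from the sample traceability matrix later in this chapter:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Minimal record of a use case diagram's elements (illustrative field names)."""
    identifier: str    # e.g. "UC-28"
    boundary: str      # computer system, organization, or department boundary
    actors: tuple      # external entities (people or machines) using the system
    events: tuple      # high-level sequence of what an actor does with the system

sort_catalog = UseCase(
    identifier="UC-28",
    boundary="Catalog system",
    actors=("Librarian",),
    events=("Actor requests sorted catalog", "System sorts and displays catalog"),
)
```

Keeping use cases in a structured form like this also makes the version-control and traceability steps below easier to automate.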
. Categorize the use cases into primary and secondary functions. Primary functions are those required for system operation and comprise the main system functionality. Secondary functions are those functions that aren't used very frequently and are "nice to have".
o Description Level.
o Essential - A general description of the business process. Do not
include technology information.
o Real - Design oriented; shows reports and examples and uses technological descriptions. Real use cases are handy for requirements gathering but are undesirable during analysis and should be used there only for specific reasons.
. Submit each documented use case to version control to establish a baseline for use in requirements traceability.
Requirements Analysis
. Review each use case and validate that the requirements documented
within each describe an action to be taken in the system that can be
measured and tested. Validate that the requirement maps to a specific need
or request.
. Elicit additional information as required from end users.
. Develop the Software Requirements Specification (SRS) document. The
SRS is a complete description of the system frequently expressed in
diagrams as well as narrative and includes the use cases. The SRS is
frequently used to create a Procurement document.
. Create functional test cases describing how each requirement will be
tested.
. Create requirements traceability tracing each requirement from its use case
to its functional test case.
. Create acceptance criteria describing how end users will validate that the
final product meets their needs. Acceptance criteria will later be used to
create acceptance test cases.
Requirements Management Phase
. Typical requirements management activities include: stakeholder management, capturing and documenting requirements, requirements management planning, and tracing and monitoring requirements.
. Define Change Control Process
. Manage Version Control
. Perform Change impact analysis
. Requirement Status Tracking
. Requirement Tracing
. Measure requirement volatility
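Requirement volatility, the last activity above, is commonly measured as the number of requirement changes relative to the total number of requirements over a period. The sketch below uses one assumed convention (changes divided by total requirements); the function name and change-log shape are illustrative, and organizations define this ratio differently:

```python
def requirement_volatility(change_log, total_requirements):
    """Volatility = (added + modified + deleted requirements) / total requirements.

    change_log is a list of (requirement_id, change_type) tuples. The exact
    formula varies by organization; this ratio is only one common choice.
    """
    if total_requirements <= 0:
        raise ValueError("total_requirements must be positive")
    changes = sum(1 for _, kind in change_log
                  if kind in {"added", "modified", "deleted"})
    return changes / total_requirements

log = [("REQ-1", "modified"), ("REQ-4", "deleted"), ("REQ-7", "modified")]
print(requirement_volatility(log, 20))  # 3 changes across 20 requirements -> 0.15
```

A rising value over successive iterations signals unstable requirements and a higher risk to the schedule.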
5.6.2 Inputs/Activities/Outputs:
The problem during this phase can often be summarized with a single statement. From there, the problem statement is expanded into a more detailed description of how the problem could be solved. This phase describes what the automated solution needs to have and what the user would like it to have. Inputs, Activities and Outputs in the Software Requirements Phase are:
Inputs
The Vision Plan
The Quality Assurance Plan
Configuration Management Plan
Change Control Plan
Activities
Verify all requirements are identified in the preliminary test plan.
Analyze requirements to validate completeness and testability of the
requirements.
Analyze proposed use cases to determine if the use cases are in scope, testable,
and feasible.
Map software requirements to system requirements.
Validate that security/privacy critical software requirements are uniquely
identified and included in a preliminary software requirements traceability
matrix.
Outputs
Software management plan
Use cases
Software Requirements Specifications
Context Diagrams
Traceability Matrix
Database models
Data dictionary
Outputs templates and documents details:
Software Management Plan
The software management plan is a document specifying how the development team will manage the software development project. An outline of the Management Plan template is shown below.
Use Cases
A use case describes event sequences for an actor to use the system. It is a
narrative description of the process.
Software Requirements Specification (SRS)
The SRS is a complete description of the system frequently expressed in diagrams
as well as narrative and includes the use cases.
Context Diagrams
A context diagram is a data flow diagram that shows how the system will receive
and send data flows to the external entities involved. It is a graphical tool to assist the
clients and designers in conceptualizing the flow of data. Data flow is usually depicted
in one very large diagram to provide a picture of the data flowing across the entire system.
Figure 7 illustrates an example of Context Diagram.
Figure 7 Context Diagram
Requirements Traceability
Requirements traceability lists each requirement and cross-references it to each step in the development process so that each requirement can be traced to its design element, development code, and test case. The traceability matrix forms a link between the requirements, the final product, and every phase in-between. Traceability demonstrates that all requirements have been built and tested. Software traceability:
Validates that system functionality meets the customer requirements and that no
superfluous functionality has been implemented.
Assists in the identification of which specifications might be affected when
customer requirements change.
Improves communication and cooperation among teams as the result of a change
request.
Improves understanding of the system by the customer and thus the system
acceptance.
Figure 8 demonstrates the requirement traceability process.
Figure 8 Four directions of requirement traceability
Table 1 demonstrates a simplified version of what should be recorded in a basic traceability matrix.
User Requirement       Use Case   Functional Requirement   Design Element   Code Module          Test Case
Sort the database      UC-28      Catalog.query.sort       Class catalog    Catalog.sort()       Search.7, Search.8
Import data and test   UC-29      Catalog.query.import     Class catalog    Catalog.import(),    Search.12, Search.13
                                                                            Catalog.validate()
Table 1 Sample Traceability Matrix (Forward To/From Requirements)
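A matrix like Table 1 can also be kept in machine-readable form so that traceability gaps are detected automatically. The Python sketch below mirrors the sample rows above; the dictionary layout and the `untested_requirements` helper are illustrative assumptions, not part of any tool described here:

```python
# Forward traceability: each functional requirement maps to its use case,
# design element, code module(s), and test case(s), mirroring Table 1.
matrix = {
    "Catalog.query.sort": {
        "use_case": "UC-28",
        "design": "Class catalog",
        "code": ["Catalog.sort()"],
        "tests": ["Search.7", "Search.8"],
    },
    "Catalog.query.import": {
        "use_case": "UC-29",
        "design": "Class catalog",
        "code": ["Catalog.import()", "Catalog.validate()"],
        "tests": ["Search.12", "Search.13"],
    },
}

def untested_requirements(matrix):
    """Return requirements with no linked test case (traceability gaps)."""
    return [req for req, links in matrix.items() if not links.get("tests")]

print(untested_requirements(matrix))  # [] -> every requirement is traced to a test
```

The same structure can be checked in the other direction (every test case traced back to a requirement) to detect superfluous functionality.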
Table 2 describes a more detailed traceability matrix. The numbering is useful for referring back to requirements at other stages of development. The use of numbers reduces the risk of confusion as compared to requirements that are referenced by using descriptive names.
Customer Requirement Number: 1, 2, ... with sub-numbers such as 1.1, 1.2 where needed
Customer Requirement: briefly describe the requirement; if appropriate, refer to the Requirements documentation section that identifies this requirement
SRS Requirement Number: list the SRS requirement(s) that implement this requirement (e.g., 1.1, 1.2, 1.3)
Change Tracking: mark the requirements that have changed
CRM Number: list of Change Request Management (CRM) numbers
Risk Factor: describe the risk associated with this requirement
Volatility: mark the requirement which has high volatility
Table 2 Detailed Traceability Matrix – User to SRS
Table 3 shows a matrix that links requirements to the design phase to ensure that all requirements are traced to design components. The table below is the recommended traceability table for adoption.
Number: the system requirement number (1, 1.1, 1.2, ...)
Name/Description: briefly describe the requirement; if appropriate, refer to the Requirements documentation section that identifies this requirement
Design Unit: list the design component(s) implementing each requirement (e.g., 1.1.1, 1.1.2, 1.1.3)
Risk Factor: describe the risk factors involved with each requirement
Comments: add relevant information related to this unit
Table 3 Detailed Traceability Matrix - SRS to Design
Once the requirements are traced to design and bi-directional traceability is accomplished, QA should validate that all requirements are linked and traced to testing through test cases. Table 4 describes a more detailed traceability matrix from the SRS to System Testing.
Number: the system requirement number (1, 2, 3, ...)
Name/Description: briefly describe the requirement; if appropriate, refer to the Requirements documentation section that identifies this requirement
System Test Case: list the system test(s) that test and verify this requirement is met (e.g., 1.1.1.1, 1.1.1.2)
Test Case Description/Comments: comments and observations
Table 4 Detailed Traceability Matrix - SRS to System Testing
Database Models
A database model, also referred to as a data structure, defines how data is accessed and represented. A data model instance can be represented in one of three possible formats:
Conceptual schema: describes the semantics of the model
Logical schema: describes the semantics, as represented by a particular data
manipulation technology. This consists of descriptions of tables and columns, object
oriented classes, and XML tags, among other things
Physical schema: describes the physical means by which data are stored. This is
concerned with partitions, CPUs, table spaces, etc.
Data Dictionary
The data dictionary contains a detailed description of each data item in use by the software application. The data dictionary will list (in table format) for each data item:
Table name
Column name
Item name
Item label
Item description
Field size
Field type
Item properties
Item references
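One lightweight way to keep such entries consistent is a typed record covering exactly the fields listed above. The Python sketch below is illustrative, and the example entry is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataDictionaryEntry:
    """One row of the data dictionary, covering the fields listed above."""
    table_name: str
    column_name: str
    item_name: str
    item_label: str
    item_description: str
    field_size: int
    field_type: str
    item_properties: str = ""   # e.g. nullable, default value
    item_references: str = ""   # e.g. foreign-key references

# Hypothetical entry for illustration only:
entry = DataDictionaryEntry(
    table_name="customer",
    column_name="cust_email",
    item_name="CustomerEmail",
    item_label="Email address",
    item_description="Primary contact email for the customer",
    field_size=120,
    field_type="VARCHAR",
)
```

A list of such records can then be rendered into the tabular data dictionary document, keeping the document and the schema definition in step.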
Figure 9 illustrates example of fields of Data Dictionary.
Figure 9 Data Dictionary
5.6.3 Roles and Responsibilities
Input, QA/QC Activities, Output and roles in Software Requirements Phase are:
Input
Vision document
Scope document
Requirements
Use cases
Configuration Management Plan
QA/QC Activities
Requirements analysis
Requirement review
Requirements verification
Software specification review
Prototyping
Data structure analysis
Monitor Change Control process
Output
Software management plan
Software requirements specification
Context Diagram
Traceability Matrix
Database modeling
Data dictionary
5.6.4 Potential Risks and Constraints
Written requirements are not feasible, clearly stated, or consistent with user requests, preventing effective software analysis.
The end users, system analysts, and QA analysts necessary to participate in requirements definition and allocation are not available.
5.7 Software Design Phase
The software system is typically designed in two phases:
The Architectural Design, or Preliminary Design, Phase results in a high level
design of the system where requirements are fully allocated to software components.
The Detailed Design phase expands the architectural design to the lowest level of
requirements. The detailed design is baselined at the conclusion of the critical
design review.
Interface Control documents are completed and test plans revised and all design
documents are placed under configuration control as well as under version control.
Figure 10 illustrates design phase and related QA activities in perspective:
Figure 10 Design Phase
5.7.1 Input
Detailed Requirements
Use cases
Data dictionary
High level models
Context diagrams
Traceability Matrix
5.7.2 QA/QC Related Activities:
During the Design phases, the QA/QC team:
Focuses on the developer's design artifacts.
Monitors design processes, such as attending design review/walkthrough meetings.
If software development folders or other similar documentation are used, they
should be initiated early in the design phase and frequently assessed for accuracy
and compliance to required content by the QA/QC team.
The QA/QC team also assures that all requirements have been allocated to software
components and that configuration management mechanisms are in place.
The QA/QC team updates the Traceability Matrix demonstrating that the developers
have considered all aspects of the requirements in the design.
Developers also create tests for the modules being designed.
During the unit testing phase, the QA/QC team should monitor the testing process to
confirm the effectiveness of the tests and the completeness of the test results.
General Design Activities to improve Quality:
Best "Design" practices:
Iterate:
The best design never occurs during the first attempt. As soon as the design process seems to get stuck, it is time to rethink the whole process, which normally improves the overall design.
Divide & Conquer:
There is too much detail to see the overall picture, so break the process down into manageable steps.
Top-down design or Bottom-up Design:
They both produce the same result but approach the problem from different
perspectives. An example of a top-down design process:
View the entire system as a unit listing the main features and requirements
Divide system into subsystems or packages by placing each feature in a
package
Divide the feature of a package into classes
Design each class specifying data types and methods
Design the internals of each method
Make extensive use of:
information-hiding
inheritance
object orientation
polymorphism
design patterns
Design Specification document walkthrough (QC activities)
Preparing the Design Specification
Participants: Development team, testers and system analysts
Input: Proposed design specifications
Activity: Read the document and discuss language and design decisions while the
author takes notes.
Output: After editing and approval, the document is baselined.
Below is a sample Design Walkthrough checklist:
Is the interface consistent with the architectural design?
Does the algorithm accomplish the desired functionality?
Is the algorithm logically correct?
Is the logical complexity reasonable?
Have error handling and "anti-bugging" been specified?
Are local data structures properly defined?
Are structured programming constructs used throughout?
Is design detail amenable to implementation language?
Are operating-system- or language-dependent features used?
Has maintainability been considered?
Design Specification document Critical Design Review (CDR) - (QC activities)
Purpose:
Verify that the modified detailed system design is complete & correct
Verify that the design satisfies both functional and technical system
requirements
Verify that the design adheres to SQA standards and guidelines.
Critical Design Review (CDR) Planning/Preparation:
Determine CDR participants, format, execution
Schedule CDR facilities
Develop CDR agenda
Notify participants of CDR
Tailor and/or expand CDR checklist
Critical Design Review (CDR) Execution:
Facilitate CDR
Validate detailed system design
Assure quality compliance of critical design
Document CDR
Evaluate CDR
Critical Design Review (CDR) Reporting:
Prepare CDR summary report
Distribute CDR summary report
Critical Design Review (CDR) Follow-Up:
Collect and annotate CDR metrics
Generate quality metric reports
Collect and annotate CDR action items
Track and report CDR action items
Fundamental principles strongly recommended for designing quality software:
Modularity
Coupling & Cohesion
Abstraction
Information hiding
Minimal complexity
Design considerations to improve quality:
Maintainability
Reliability
Reusability
Testability
Security
5.7.3 Output and Templates:
Detailed design documentation
Test plans
5.7.4 Issues and Concerns:
During the design phase the designers should guard against over-complex designs. This is also the phase in which scope creep is most likely to be introduced by gold-plating the design: what looks like a nice idea may be too complex to implement.
5.7.5 Overall SQA Functions and Design Phase
The design phase as outlined above is a critical part of the SDLC, and its adequacy and quality are essential for the success and the quality of the final product. Therefore,
SQA shall perform the following tasks:
Ensure that the software design process and associated design reviews are
conducted in accordance with standards and procedures established by the project
and as described in the SQA plan.
Ensure that action items resulting from reviews of the design are resolved in
accordance with these standards and procedures.
Ensure that the method, such as the Software Development File (SDF) or Unit
Development Folder (UDF) used for tracking and documenting the development
of a software unit is implemented and is kept current.
Ensure that lifecycle documents and the traceability matrix are prepared and kept
current and consistent.
Ensure that design walkthroughs / formal inspections evaluate compliance of the
design with the requirements, identify defects in the design, and evaluate and
report alternatives.
Verify that walkthroughs of all modules are conducted.
Identify defects, verify resolution for previously identified defects, and ensure
change control integrity.
Review and audit the content of system design documents.
Determine whether requirements, accompanying design and tools conform to
standards.
Review demonstration prototypes for compliance with requirements and
standards.
5.8 Software Development Phase
During the Software Development Phase, the software is coded and unit tested.
This is a critical phase in the SDLC where compliance to standards and procedures is imperative. Developers should not work in a vacuum, or treat their code as secret works of superior art. The development process should be subject to code inspections on a team level. Figure 11 illustrates the QA activities to be performed during the Software
Development Phase.
Figure 11 Software Development and Unit Testing
5.8.1 Input:
Design documentation
Use cases
Quality requirements
5.8.2 QA/QC Related Activities:
The QA/QC team can confirm that the development process adheres to the quality requirements by:
Participating in code walkthroughs and inspections.
Assessing configuration control processes and software development records.
Confirming updates to the software requirements traceability matrix.
Analyzing trends in logged software problem reports.
Monitoring and tracking action items from system reviews and peer reviews, and monitoring risks.
Monitoring problem reports and verifying they are acted upon and that the necessary retesting occurs with the required results.
Verifying that the artifacts being built comply with the quality requirements.
Ensuring that report results are communicated to management to provide insight into the development processes and quality of the product.
Verifying that the coding standards have been observed.
Verifying that quality of code is evaluated with the aid of standard metrics such as module length, complexity and comment rate.
Utilizing static analysis tools to support metric data collection and evaluation.
Ensuring that the code follows established standards of style, structure, and documentation.
Ensuring that the code is being properly tested and integrated, and that revisions made in coded modules are properly identified.
Ensuring that code reviews are being held as scheduled.
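The standard metrics mentioned above (module length, comment rate) can be collected with a very small tool. The sketch below is a simplified illustration, not a replacement for a real static analysis tool; it counts non-blank lines and `#`-style comments in a source string.

```python
# Minimal sketch of collecting simple code-quality metrics
# (module length and comment rate) from a source string.

def module_metrics(source: str):
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    comments = [ln for ln in lines if ln.startswith("#")]
    length = len(lines)
    comment_rate = len(comments) / length if length else 0.0
    return {"length": length, "comment_rate": comment_rate}

sample = """
# Compute factorial iteratively
def fact(n):
    # accumulate the product
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
"""
print(module_metrics(sample))
```

Thresholds on such metrics (for example, flagging modules over a given length or below a given comment rate) give the QA/QC team repeatable evaluation criteria.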
SQA related activities during this phase include:
Code inspections:
Code inspection brief description: Team members comment on algorithms,
data structures, code style, and the methods used.
Aim/purpose: A development team meeting in which one member reads, line
by line, the code of a module developed by that member.
Participants: Coders, QA/QC testers (as observers only)
Input: Code modules
Output: Improved code, QC report stating results of the meeting
When most effective: After the code of each module has been written, but before the
module integration takes place.
Code Review:
Code review practices fall into two main categories: formal code review and lightweight code review.
Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants and multiple phases. Formal code reviews are the older, traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material.
Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review. However, some criticize formal reviews as taking too long to be practical.
Lightweight code review typically requires less overhead than formal code inspections, though it can be equally effective when done properly. Lightweight reviews are often conducted as part of the normal development process:
Over-the-shoulder – One developer looks over the author's shoulder as the latter
walks through the code.
Email pass-around – The source code management system emails code to reviewers
automatically after check-in.
Pair Programming – Two authors develop code together at the same workstation,
as is common in Extreme Programming.
Tool-assisted code review – Authors and reviewers use specialized tools designed
for peer code review.
Some of these may also be labeled a "Walkthrough" (informal) or "Critique" (fast
and informal).
Coding Practices to improve the quality of the software:
Place the design inside the code with the use of comments
Place a summary comment block before each method describing its purpose
Include Pre-conditions and post-conditions for each method
The whole team should standardize on formatting, indentation, and bracket
placements inside the code.
Internal comments should specify reasons for coding a section in a certain way, not
just what it does.
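The commenting practices above can be combined in one short example. The method below is hypothetical, invented only to illustrate the summary block and explicit pre- and post-conditions (here enforced with assertions, one possible style among several).

```python
# Sketch of the coding practices above: a summary comment block plus
# explicit pre- and post-conditions on a hypothetical method.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    Pre-condition:  price >= 0 and 0 <= percent <= 100
    Post-condition: 0 <= result <= price
    """
    assert price >= 0 and 0 <= percent <= 100, "pre-condition violated"
    # Multiply once rather than subtracting repeatedly: the reason for
    # this coding choice is recorded here, not just what the line does.
    result = price * (1 - percent / 100)
    assert 0 <= result <= price, "post-condition violated"
    return result

print(apply_discount(80.0, 25.0))  # 60.0
```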
5.8.3 Output and Templates:
At the end of the phase:
Required products should be ready for delivery as specified by a Test Completion
Report, subject to modification during Integration and Test.
Final test plans and procedures are completed along with preliminary User's Guides.
All documents, such as user guides and design documentation, are to be reviewed
by the QA/QC team.
5.8.4 Issues and Concerns:
The main concern during development is budgeting enough time for debugging the system. The standard seems to be to estimate a total development time frame for a project and then to double that time for actual development.
Summary of the inputs, activities, and outputs of the software development phase:
Input
Peer review reports & inspection results
Code walkthroughs reports
Code Inspections results
QA/QC Activities
Follow up on peer review reports & code inspection reports
Analyze trends in problem reports
Observe system testing
Analyze system test reports
Monitor Change Control process
Output
Software Product
Formal test plans
Deployment documentation
Preliminary user manuals
Test Readiness Review
5.9 Software Integration and System Test Phase
The objectives of the Integration and System Test phases are to integrate the software units into a completed subsystem or system; discover and correct any nonconformances or software problem reports; and demonstrate that the software system meets its requirements.
Figure 12 illustrates integration phase and related QA activities in perspective:
Figure 12 Integration & System Testing
5.9.1 Input:
Software modules
Integration plans
Test plans
Preliminary user manuals
5.9.2 QA/QC Related Activities:
Integration Testing – SQA perspective:
Individual program units may work in isolation but may not work correctly when
integrated.
Localization of defects: a defect may be manifested not at its source, but in a
different program unit.
Interface Errors:
Interface misuse
A calling component calls another component and makes an error in its use of its
interface e.g. parameters in the wrong order
Interface misunderstanding
A calling component embeds assumptions about the behavior of the called
component which are incorrect
Timing errors
The called and the calling component operate at different speeds and out-of-date
information is accessed
A bug life cycle needs to be followed by QA and QC members.
Bug life cycle:
A bug can be defined as abnormal behavior of the software. No software exists without bugs; their elimination depends on the efficiency of the testing done on the software. A bug is a specific concern about the quality of the Application under Test (AUT).
In the software development process, a bug has a life cycle, and it should go through that life cycle before being closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.
The life cycle of the bug can be shown diagrammatically in Figure 13.
Figure 13 Bug Life Cycle
The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
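The life cycle above can be enforced mechanically as a small state machine. The transition table below is one reasonable reading of Figure 13, not a fixed standard; tools may define the allowed transitions differently.

```python
# Sketch of the bug life cycle as a simple state machine.
# The allowed transitions are one plausible reading of the states above.

TRANSITIONS = {
    "New":       {"Open", "Rejected", "Duplicate", "Deferred"},
    "Open":      {"Assign"},
    "Assign":    {"Test"},
    "Test":      {"Verified", "Reopened"},
    "Verified":  {"Closed"},
    "Deferred":  {"Open"},
    "Reopened":  {"Assign"},
    "Rejected":  set(),
    "Duplicate": set(),
    "Closed":    {"Reopened"},
}

class Bug:
    def __init__(self, title: str):
        self.title = title
        self.state = "New"

    def move_to(self, state: str):
        # Refuse any transition the life cycle does not allow.
        if state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {state}")
        self.state = state

bug = Bug("Save button unresponsive")
for s in ["Open", "Assign", "Test", "Verified", "Closed"]:
    bug.move_to(s)
print(bug.state)  # Closed
```

Encoding the transitions this way is what makes the process "standardized": a bug cannot silently skip testing or verification on its way to Closed.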
Guidelines on deciding the Severity of Bug:
Indicate the impact each defect has on testing efforts or users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects.
A sample guideline for assignment of Priority Levels during the product test phase includes:
1. Critical / Show Stopper — An item that prevents further testing of the product or
function under test is classified as a Critical bug. No workaround is possible for
such bugs. Examples include a missing menu option or a missing security
permission required to access a function under test.
2. Major / High — A defect that causes a feature not to function as expected/designed,
or causes other functionality to fail to meet requirements, is classified as a Major
bug. A workaround exists for such bugs. Examples include inaccurate
calculations or the wrong field being updated.
3. Average / Medium — Defects that do not conform to standards and
conventions are classified as Medium bugs. Easy workarounds exist to achieve
the functionality objectives. Examples include matching visual and text links that
lead to different end points.
4. Minor / Low — Cosmetic defects that do not affect the functionality of the
system are classified as Minor bugs.
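These levels are naturally ordered, which is what makes them useful for triage. The sketch below encodes them as an ordered enum; the defect titles are hypothetical.

```python
# Sketch of the severity guideline as an ordered enum, so defects
# can be sorted for triage. Values follow the four levels above.
from enum import IntEnum

class Severity(IntEnum):
    CRITICAL = 1   # show stopper, no workaround
    MAJOR = 2      # wrong behavior, workaround exists
    MEDIUM = 3     # standards violation, easy workaround
    MINOR = 4      # cosmetic only

defects = [
    ("typo on About page", Severity.MINOR),
    ("login menu missing", Severity.CRITICAL),
    ("total miscalculated", Severity.MAJOR),
]
# Triage: most severe first.
defects.sort(key=lambda d: d[1])
print([title for title, _ in defects])
```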
5.9.3 Testing Strategies:
Testing strategies are ways of approaching the testing process:
Big bang testing
Incremental testing
Top-down testing
Bottom-up testing
Figure 14 illustrates the Big Bang Testing details.
In "big bang" testing, all units are combined and tested in a single step; errors are difficult to locate.
Figure 14 Big Bang Testing
Figure 15 illustrates the Incremental Testing details.
In incremental testing, units are added one at a time over successive test sequences, so each new error can be attributed to the most recently added unit.
Figure 15 Incremental Testing
Top-Down:
Start with the high levels of a system and work your way downwards
Advantages
Works well with top-down development
Finds architectural errors early
Disadvantage
System relies on what's underneath
May need too much infrastructure before testing is possible (tends
toward big-bang integration)
May be difficult to develop program stubs that you're confident
simulate the infrastructure
Figure 16 illustrates Top-down Testing details.
Top-down testing exercises the Level 1 module first, using stubs in place of the Level 2 and Level 3 modules, then replaces the stubs level by level as the testing sequence proceeds.
Figure 16 Top-down Testing
Bottom-Up Testing:
Start with the lower levels of the system and work upward
Advantages
Appropriate for object-oriented systems
Or where infrastructure components are critical
Disadvantages
Needs test drivers to be implemented
May not simulate eventual calling environment.
Does not find major design problems until late in the process
Figure 17 illustrates Bottom-up testing details.
Bottom-up testing exercises the Level N units first, driven by test drivers, then moves up to Level N–1 as the drivers are replaced by the real calling modules.
Figure 17 Bottom-up Testing
The problem with any integration strategy is that test engineers need to simulate the presence of certain modules before actually incorporating them. With top-down integration, test engineers must simulate the presence of low-level modules by using
"stub" modules. These stubs are replaced one at a time by their actual counterparts. The stubs must have a certain amount of functionality built in so that they can convince the higher-level modules that they are the real thing. Stubs can print simple trace messages to indicate when control is properly passed into the lower-level module, and they can display any passed parameters to demonstrate whether the data interface is functioning correctly. Stubs may also have to return some data, either real or simulated, to the calling module so that execution of the cluster under test can continue.
Similarly, the clusters tested in a bottom-up approach are not designed to function without the higher-level modules attached, so temporary
"driver" modules are needed to get the program to execute. A driver can simply invoke the top-level module in the cluster under test, or it can pass in some parameters both to test the data interface and to let the cluster do its work and return some results to the driver.
Neither top-down nor bottom-up integration is perfect; test engineers may want to select some combination strategy for their individual projects. The choice may be based on where most problems are expected. If control is complex and processing simple, top-down offers the advantage of testing the control modules first. If the data interface between the system and the outside environment is a big issue, bottom-up may be the best approach. In general, drivers are more complex than stubs, so a straight bottom-up method may require a little more work.
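The stub and driver mechanics just described can be sketched in a few lines. Everything here is hypothetical for illustration: the module names, the simulated 10% tax rate, and the trace output format.

```python
# Sketch of a stub (for top-down integration) and a driver (for bottom-up),
# using hypothetical module names and a simulated return value.

# Stub standing in for a low-level module that is not yet integrated.
def fetch_tax_rate_stub(region: str) -> float:
    # Trace message: shows control reached the stub, and the passed parameter.
    print(f"[stub] fetch_tax_rate called with region={region}")
    return 0.10  # simulated data so the caller can keep executing

# Higher-level module under test (top-down): calls through the stub.
def price_with_tax(price: float, region: str, fetch_rate=fetch_tax_rate_stub):
    return price * (1 + fetch_rate(region))

# Driver (bottom-up): temporary code that invokes the cluster under test,
# passes parameters, and collects the result.
def driver():
    total = price_with_tax(100.0, "CA")
    print("driver observed:", total)
    return total

driver()
```

Swapping `fetch_tax_rate_stub` for the real module later is the "replace stubs one at a time" step; the driver is discarded once real callers exist.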
5.9.4 QA/QC Testing Activities:
The QA/QC team assesses the development records, test reports, and test artifacts to substantiate the readiness of the software for final delivery.
In addition the QA/QC team:
Continues to monitor and assess the developer's configuration management system to
analyze and record trends in software problem reports.
Reviews the accuracy of the Requirements Traceability Matrix.
Final User's Guides should be completed prior to acceptance testing and reviewed by
the QA/QC team.
A test readiness review concludes this phase, at which time the developer provides
evidence that the software system is ready for Acceptance Testing.
5.9.5 Output and Templates:
Deployment records
Test records
Test artifacts
Final user manuals
5.9.6 Issues and Concerns:
It is not possible to guarantee that all errors have been found. Testing only ensures
that some of the most common errors have been found.
A main concern is that fixing one error can cause a problem somewhere else; the
need for regression testing cannot be overemphasized.
5.9.7 Overall SQA Functions and System Integration Phase:
Software integration and test activities combine individually developed components in the development environment to ensure the components work together to complete the software and system functionality. In the integration and test phase of the development lifecycle, the testing focus shifts from the correctness of individual components to the proper operation of interfaces between components, the flow of information through the system, and the satisfaction of system requirements. The following is an extensive list of the overall SQA functions:
Ensure that software test activities are identified, test environments have been
defined, and guidelines for testing have been designed.
Verify the software integration process, software integration testing activities and
the software performance testing activities are being performed in accordance
with the SQA plan, the software design, the plan for software testing, and
established software standards and procedures.
Ensure software integration testing is being accomplished in accordance with
established software standards and procedures.
Document and ensure that the approved test procedures are being followed, that accurate records of test results are being kept, that all discrepancies discovered during the tests are being properly reported, that test results are being analyzed, and the associated test reports are completed.
Document and verify that an appropriate corrective action process ensures that discrepancies discovered during software integration tests are identified, analyzed, and corrected.
Verify that software unit tests and software integration tests are re-executed as necessary to validate corrections made to the code, and that the software unit's design, code, and tests are updated based on the results of software integration testing and the corrective action process.
Ensure that the responsibility for testing and for reporting on results has been assigned to a specific team member(s).
Ensure that procedures are established for monitoring testing.
Review the Software Test Plan and Software Test Procedures for compliance with requirements and standards.
Monitor test activities, witness tests, and certify test results.
5.10 Software Acceptance Test Phase
During the Acceptance Test Phase, formal acceptance procedures are executed to demonstrate that the system meets customer requirements and that the right product was developed.
Figure 18 illustrates acceptance phase and related QA activities in perspective:
Figure 18 Final Acceptance
5.10.1 Input:
Test reports
Test artifacts
Acceptance test Plan
5.10.2 QA/QC Related Activities:
The acceptance testing phase consists of a dedicated testing team in combination with user representatives testing the final product to confirm it complies with the requirements.
The acceptance test can be conducted as follows:
Formal testing:
This involves executing a formal acceptance test plan drawn up
during the requirements gathering phase. When all test activities detailed
in the test plan have been executed to the satisfaction of the users, the product is
formally accepted.
Production simulation:
With production simulation the new product is put into a normal operating
environment but in parallel to the original environment. The original environment
continues to operate as the main production environment. If the simulation
environment proves to produce the same output as the actual environment after
some predetermined time, the simulation becomes the actual environment.
Whereas the developers typically use simulated or dummy data when conducting testing procedures, the Acceptance Testing Phase performs similar tests using realistic data. The QA/QC team continues to focus on:
Test activities
Documentation
Configuration management
Software and hardware baseline management
Software problem reports
Overall readiness of the system
This phase concludes with an Acceptance Review (AR).
Acceptance Testing Guidelines and Procedures:
Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the application test it before
'accepting' the system/software.
Before acceptance testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration, and System) have already been completed
before Acceptance Testing is done.
As the various levels of testing have been completed, most of the technical bugs have
already been fixed before acceptance testing.
Test Cases:
To ensure effective acceptance testing, test cases are created. The test cases can be created from the various use cases identified during the requirements phase, and they ensure proper coverage of all the scenarios during testing. During acceptance testing the specific focus is the exact real-world usage of the software/system. The testing is done in an environment that simulates the production environment.
Acceptance Testing – How to Test?
The user acceptance testing is usually a black box type of testing.
The focus is on the functionality and the usability of the application rather than
the technical aspects.
It is generally assumed that the system/software would have already undergone
Unit, Integration and System Level Testing.
It is critical that acceptance testing be carried out in an environment that closely
resembles the real world or production environment.
The steps taken for acceptance testing typically involve the following:
Acceptance Test Planning:
As always, planning is the most important of all the steps, and it affects the effectiveness of the testing process. The planning process outlines the acceptance testing strategy and describes the key focus areas and the entry and exit criteria.
Designing acceptance Test Cases:
The acceptance test cases help the test engineer test the system/software thoroughly and help ensure that acceptance testing provides sufficient coverage of all the scenarios. The use cases created during the requirements phase may be used as inputs for creating test cases, as may input from business analysts and subject matter experts (QA/QC). Each acceptance test case describes, in simple language, the precise steps to be taken to test a feature or function. Test cases are executed according to the test plan and QA guidelines.
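Such use-case-driven test cases can be written as plain-language steps paired with an executable check. The sketch below uses a hypothetical "withdraw cash" use case; the `withdraw` function stands in for the system under test.

```python
# Sketch of acceptance test cases derived from a hypothetical
# "withdraw cash" use case.

def withdraw(balance: float, amount: float) -> float:
    """Stand-in for the system under test."""
    if amount <= 0 or amount > balance:
        raise ValueError("withdrawal refused")
    return balance - amount

acceptance_cases = [
    # (description, starting balance, amount, expected balance; None = refused)
    ("Withdraw less than balance succeeds", 500.0, 200.0, 300.0),
    ("Withdraw more than balance is refused", 100.0, 200.0, None),
]

for desc, balance, amount, expected in acceptance_cases:
    try:
        result = withdraw(balance, amount)
    except ValueError:
        result = None
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: {desc}")
```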
Resolving Issues:
The issues/defects found during testing are discussed with the project team, subject matter experts and business analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.
Sign Off:
Successful completion of the acceptance testing and resolution of the issues found indicates acceptance of the application. This step is important in commercial software sales: once the users "accept" the delivered software, they indicate that it meets their requirements.
5.10.3 Output and Templates:
Acceptance review
Software maintenance plan
Final project documentation
5.10.4 Issues and Concerns:
The end user might have concerns about the end product based on negative past experiences or because of issues they think have been overlooked. This might cause the end user to postpone the acceptance signoff.
5.11 Operation and Maintenance Phase
During this phase, the software is baselined and used in its intended environment.
Software corrections and modifications are made to sustain/enhance its operational capabilities and to upgrade its capacity to support its users.
Figure 19 illustrates operational phase and related QA activities in perspective:
Figure 19 Operation & Maintenance
5.11.1 Input:
Final user documentation
Final product
5.11.2 QA/QC Related Activities:
Typical QA/QC team tasks:
Assess updated software documentation.
Monitor changes to the operational baseline.
Monitor configuration management controls and the software problem reporting
system.
Verify and document that the level of QA/QC team involvement matches the
extent and criticality of the changes being made to the software.
Ensure that, when long-term engineering is required, the sponsor arranges a
periodic assessment to confirm a stable and mature software system.
Ensure that the Software QA Plan is updated to reflect the required operation and
maintenance activities.
Monitor software quality throughout the OM phase to check that it is not degraded.
Verify that the software configuration is properly managed.
Ensure that documentation and code are kept up-to-date.
Ensure that Mean Time Between Failures (MTBF) increases;
Ensure that Mean Time to Repair (MTTR) decreases.
MTBF and MTTR should be regularly estimated from the data in the configuration
status accounts.
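The MTBF and MTTR estimates described above amount to simple arithmetic over the recorded incidents. The sketch below assumes hypothetical timestamps in operating hours, purely for illustration.

```python
# Sketch of estimating MTBF and MTTR from failure/repair records,
# using hypothetical timestamps in operating hours.

# (failure_time, repair_completed_time) pairs, in hours
incidents = [(100.0, 102.0), (250.0, 251.0), (400.0, 406.0)]
total_operating_hours = 500.0

repair_hours = sum(end - start for start, end in incidents)
uptime_hours = total_operating_hours - repair_hours

mtbf = uptime_hours / len(incidents)   # mean time between failures
mttr = repair_hours / len(incidents)   # mean time to repair

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
```

Tracking these two numbers release over release shows whether maintenance is actually improving the product: MTBF should trend upward and MTTR downward.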
User support
There are two types of user:
End user
Operator
An 'end user' utilizes the products or services of a system. An 'operator' controls and monitors the hardware and software of a system. A user may be an end user, an operator, or both.
User support activities include:
Training users to operate the software and understand the products and
services
Providing direct assistance during operations
Set-up
Data management.
Problem reporting
Users should document problems in Software Problem Reports (SPRs). These should be genuine problems that the user believes lie in the software, not problems arising from unfamiliarity with it. Each SPR should report one and only one problem and contain:
Software configuration item title or name
Software configuration item version or release number
Priority of the problem with respect to other problems
A description of the problem
Operating environment
Recommended solution (if possible).
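An SPR carrying exactly these fields can be modeled as a simple record. The sketch below is illustrative; all field values are hypothetical.

```python
# Sketch of a Software Problem Report record with the fields listed above.
from dataclasses import dataclass

@dataclass
class SoftwareProblemReport:
    item_title: str                  # software configuration item title or name
    item_version: str                # version or release number
    priority: str                    # relative to other open problems
    description: str                 # one and only one problem per SPR
    environment: str                 # operating environment
    recommended_solution: str = ""   # optional

spr = SoftwareProblemReport(
    item_title="Report Generator",
    item_version="2.1.3",
    priority="critical/urgent",
    description="Monthly report omits the final day of the month.",
    environment="Windows Server 2008, .NET 3.5",
    recommended_solution="Use an inclusive end date in the report query.",
)
print(spr.item_title, spr.priority)
```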
The priority of a problem has two dimensions:
Criticality (critical/non-critical)
Urgency (urgent/routine).
Software Maintenance
Software maintenance should be a controlled process that ensures that the software continues to meet the needs of the end user. This process consists of the following activities:
Change software
Release software
Install release
Validate release.
5.11.3 Output and Templates
Operational reviews
Maintenance Requests
Requirements review
5.11.4 Issues and Concerns
The use of formal change request procedures is just as critical during operational use as during development. This ensures the completeness of all development documentation.
Chapter 6
CONCLUSION AND FUTURE WORK
Software development is complex and error-prone. Many problems faced during software development can be tackled by adopting a good software quality assurance model. The model and framework developed in this project provide a much better way of addressing quality assurance, quality control, and software testing for small projects. The model takes advantage of existing quality assurance and quality control processes and procedures.
The model we propose here for small projects recommends a Software Quality Assurance methodology initiated by the Software Quality Framework, which defines the vision, policy, and objectives of the SQA methodology. Once the framework is established, processes are built that embed SQA into development. The majority of small and medium-sized software development organizations around the world cannot bear the cost of implementing available software quality models such as CMM, SPICE, or ISO. With minor changes to adapt it to an organization's development process, the model developed in this project can be of great help to such organizations in improving product quality.
Future work might include more elaboration on the checklists and templates, which are an important element in standardizing all software artifacts. Adding case studies to demonstrate the validity and applicability of the proposed model would also be very helpful.