APPLY PRINCIPLES OF CREATING COMPUTER SOFTWARE BY DEVELOPING A COMPLETE PROGRAMME TO MEET GIVEN BUSINESS SPECIFICATIONS 115392

PURPOSE OF THE UNIT STANDARD
This unit standard is intended:
• To provide expert knowledge of the areas covered
• For those working in, or entering the workplace in, the area of Systems Development
• To demonstrate an understanding of how to create (in the computer language of choice) a complete computer program that will solve a given business problem, showing all the steps involved in creating computer software

People credited with this unit standard are able to:
• Interpret a given specification to plan a computer program solution
• Design a computer program to meet a business requirement
• Create a computer program that implements the design
• Test a computer program against the business requirements
• Implement the program to meet business requirements
• Document the program according to industry standards

The performance of all elements is to a standard that allows for further learning in this area.

LEARNING ASSUMED TO BE IN PLACE AND RECOGNITION OF PRIOR LEARNING
The credit value of this unit is based on a person having the prior knowledge and skills to:
• Apply the principles of Computer Programming
• Design, develop and test computer program segments to given specifications

UNIT STANDARD RANGE
N/A

INDEX


Competence Requirements | Page
Unit Standard 115392 alignment index | 261
(Here you will find the different outcomes explained which you need to be proved competent in, in order to complete Unit Standard 115392.)
Unit Standard 115392 | 263
Interpret a given specification to plan a computer program solution | 269
Design a computer program to meet a business requirement | 279
Create a computer program that implements the design | 303
Test a computer program against the business requirements | 320
Implement the program to meet business requirements | 334
Document the program according to industry standards | 344
Self-assessment | 363
(Once you have completed all the questions after being facilitated, you need to check the progress you have made. If you feel that you are competent in the areas mentioned, you may tick the blocks; if, however, you feel that you require additional knowledge, you need to indicate so in the block below. Show this to your facilitator and make the necessary arrangements to assist you to become competent.)


Unit Standard 115392 – Alignment Index

SPECIFIC OUTCOMES AND RELATED ASSESSMENT CRITERIA

SO 1: Interpret a given specification to plan a computer program solution.
AC 1: The plan proposes a description of the problem to be solved by the development of the computer program that is understandable by an end-user and meets the given specification.
AC 2: The plan integrates the research of problems in terms of data and functions.
AC 3: The plan includes an evaluation of the viability of developing a computer program to solve the problem identified and compares the costs of developing the program with the benefits to be obtained from the program.
AC 4: The plan concludes by choosing the best solution and documenting the program features that will contain the capabilities and constraints to meet the defined problem.

SO 2: Design a computer program to meet a business requirement.
AC 1: The design incorporates development of appropriate design documentation and is desk-checked.
AC 2: The design of the program includes program structure components.
AC 3: The design of the program includes program logical flow components.
AC 4: The design of the program includes data structures and access method components.

SO 3: Create a computer program that implements the design.
AC 1: The creation includes coding from design documents, to meet the design specifications.
AC 2: Names created in the program describe the purpose of the items named.
AC 3: The creation includes conformance with the design documentation, and differences are documented with reasons for deviations.

SO 4: Test a computer program against the business requirements.
AC 1: The testing includes assessment of the need to develop a testing program to assist with stress testing.
AC 2: The testing includes the planning, developing and implementing of a test strategy that is appropriate for the type of program being tested.
AC 3: The testing includes the recording of test results that allow for the identification and validation of test outcomes.

SO 5: Implement the program to meet business requirements.
AC 1: The implementation involves checking the program for compliance with user expectations and any other applicable factors.
AC 2: The implementation involves training of users to enable them to use the software to their requirements.
AC 3: The implementation involves planning of installation of the program that minimises disruption to the user.

SO 6: Document the program according to industry standards.
AC 1: The documentation includes annotation of the program with a description of program purpose and design specifics.
AC 2: The documentation includes the layout of the program code, including indentation and other acceptable industry standards.
AC 3: The documentation includes full internal and external documentation, with a level of detail that enables other programmers to analyse the program.
AC 4: The documentation reflects the tested and implemented program, including changes made during testing of the program.


CRITICAL CROSS FIELD OUTCOMES

UNIT STANDARD CCFO IDENTIFYING
Identify and solve problems and make decisions in relation to current systems development environments.

UNIT STANDARD CCFO ORGANISING
Organise and manage himself/herself and his/her activities responsibly and effectively.

UNIT STANDARD CCFO COLLECTING
Collect, analyse, organise and critically evaluate information.

UNIT STANDARD CCFO COMMUNICATING
Communicate effectively using visual, mathematical and/or language skills in the modes of oral and/or written persuasion when engaging with systems development.

UNIT STANDARD CCFO SCIENCE
Use science and technology effectively and critically, showing responsibility towards the environment and the health of others.

UNIT STANDARD CCFO DEMONSTRATING
Demonstrate an understanding of the world as a set of related systems by recognising that problem-solving contexts do not exist in isolation.

UNIT STANDARD CCFO CONTRIBUTING
Contribute to his/her full personal development and the social and economic development of society at large by being aware of the importance of:
• Reflecting on and exploring a variety of strategies to learn more effectively, exploring education and career opportunities and developing entrepreneurial opportunities.

ESSENTIAL EMBEDDED KNOWLEDGE
• Performance of all elements is to be carried out in accordance with organisation standards and procedures, unless otherwise stated. Organisation standards and procedures may cover: quality assurance, documentation, security, communication, health and safety, and personal behaviour. An example of the standards expected is the standards found in ISO 9000 certified organisations.
• Performance of all elements complies with the laws of South Africa, especially with regard to copyright, privacy, health and safety, and consumer rights.
• All activities must comply with any policies, procedures and requirements of the organisations involved, the ethical codes of relevant professional bodies and any relevant legislative and/or regulatory requirements.
• Performance of all elements should be carried out with a solid understanding of the use of development tools needed in the areas applicable to the unit standard. Examples of such tools include, but are not limited to, CASE tools, programming language editors with syntax checking, and program source version control systems.
• Performance of all elements should make use of international capability models used for software development. Examples of such models include (but are not limited to) the ISO SPICE model as well as the CMM model for Software Development.


All qualifications and part qualifications registered on the National Qualifications Framework are public property. Thus the only payment that can be made for them is for service and reproduction. It is illegal to sell this material for profit. If the material is reproduced or quoted, the South African Qualifications Authority (SAQA) should be acknowledged as the source.

SOUTH AFRICAN QUALIFICATIONS AUTHORITY REGISTERED UNIT STANDARD: Apply principles of creating computer software by developing a complete programme to meet given business specifications

SAQA US ID: 115392
UNIT STANDARD TITLE: Apply principles of creating computer software by developing a complete programme to meet given business specifications
ORIGINATOR: SGB Computer Sciences and Information Systems
PRIMARY OR DELEGATED QUALITY ASSURANCE FUNCTIONARY: -
FIELD: Field 10 - Physical, Mathematical, Computer and Life Sciences
SUBFIELD: Information Technology and Computer Sciences
ABET BAND: Undefined
UNIT STANDARD TYPE: Regular
PRE-2009 NQF LEVEL: Level 5
NQF LEVEL: Level TBA: Pre-2009 was L5
CREDITS: 12
REGISTRATION STATUS: Reregistered
REGISTRATION START DATE: 2018-07-01
REGISTRATION END DATE: 2023-06-30
SAQA DECISION NUMBER: SAQA 06120/18
LAST DATE FOR ENROLMENT: 2024-06-30
LAST DATE FOR ACHIEVEMENT: 2027-06-30

In all of the tables in this document, both the pre-2009 NQF Level and the NQF Level are shown. In the text (purpose statements, qualification rules, etc.), any references to NQF Levels are to the pre-2009 levels unless specifically stated otherwise.

This unit standard does not replace any other unit standard and is not replaced by any other unit standard.

PURPOSE OF THE UNIT STANDARD
This unit standard is intended:

• To provide expert knowledge of the areas covered
• For those working in, or entering the workplace in, the area of Systems Development
• To demonstrate an understanding of how to create (in the computer language of choice) a complete computer program that will solve a given business problem, showing all the steps involved in creating computer software

People credited with this unit standard are able to:

• Interpret a given specification to plan a computer program solution
• Design a computer program to meet a business requirement
• Create a computer program that implements the design
• Test a computer program against the business requirements
• Implement the program to meet business requirements
• Document the program according to industry standards

The performance of all elements is to a standard that allows for further learning in this area.

LEARNING ASSUMED TO BE IN PLACE AND RECOGNITION OF PRIOR LEARNING
The credit value of this unit is based on a person having the prior knowledge and skills to:

• Apply the principles of Computer Programming
• Design, develop and test computer program segments to given specifications


UNIT STANDARD RANGE
N/A

Specific Outcomes and Assessment Criteria:

SPECIFIC OUTCOME 1 Interpret a given specification to plan a computer program solution.

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1
The plan proposes a description of the problem to be solved by the development of the computer program that is understandable by an end-user and meets the given specification.
ASSESSMENT CRITERION RANGE
Functions, input and output requirements

ASSESSMENT CRITERION 2
The plan integrates the research of problems in terms of data and functions.
ASSESSMENT CRITERION RANGE
At least two different possible solutions

ASSESSMENT CRITERION 3
The plan includes an evaluation of the viability of developing a computer program to solve the problem identified and compares the costs of developing the program with the benefits to be obtained from the program.
ASSESSMENT CRITERION RANGE
At least two different possible solutions

ASSESSMENT CRITERION 4
The plan concludes by choosing the best solution and documenting the program features that will contain the capabilities and constraints to meet the defined problem.

SPECIFIC OUTCOME 2 Design a computer program to meet a business requirement.

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1
The design incorporates development of appropriate design documentation and is desk-checked.
ASSESSMENT CRITERION RANGE
Documented with the appropriate design tool

ASSESSMENT CRITERION 2
The design of the program includes program structure components.
ASSESSMENT CRITERION RANGE
Either of: structure charts or UML structure notations

ASSESSMENT CRITERION 3
The design of the program includes program logical flow components.
ASSESSMENT CRITERION RANGE
Decision trees, flowcharts, pseudo code, decision tables

ASSESSMENT CRITERION 4


The design of the program includes data structures and access method components.
ASSESSMENT CRITERION RANGE
At least one of: direct access files, indexed files, database tables

SPECIFIC OUTCOME 3 Create a computer program that implements the design.
OUTCOME RANGE
Programming language of choice

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1
The creation includes coding from design documents, to meet the design specifications.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

ASSESSMENT CRITERION 2
Names created in the program describe the purpose of the items named.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

ASSESSMENT CRITERION 3
The creation includes conformance with the design documentation, and differences are documented with reasons for deviations.
ASSESSMENT CRITERION RANGE
Performance, Maintainability

SPECIFIC OUTCOME 4 Test a computer program against the business requirements.

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1 The testing includes assessment of the need to develop a testing program to assist with stress testing

ASSESSMENT CRITERION 2
The testing includes the planning, developing and implementing of a test strategy that is appropriate for the type of program being tested.
ASSESSMENT CRITERION RANGE
Cycle 1 (new data) and cycle 2 (existing data) testing

ASSESSMENT CRITERION 3
The testing includes the recording of test results that allow for the identification and validation of test outcomes.
ASSESSMENT CRITERION RANGE
Cycle 1 (new data) and cycle 2 (existing data) testing

SPECIFIC OUTCOME 5 Implement the program to meet business requirements.

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1


The implementation involves checking the program for compliance with user expectations and any other applicable factors

ASSESSMENT CRITERION 2 The implementation involves training of users to enable them to use the software to their requirements

ASSESSMENT CRITERION 3
The implementation involves planning of installation of the program that minimises disruption to the user.
ASSESSMENT CRITERION RANGE
Depending on program type (on-line or batch or Internet)

SPECIFIC OUTCOME 6 Document the program according to industry standards.

ASSESSMENT CRITERIA

ASSESSMENT CRITERION 1
The documentation includes annotation of the program with a description of program purpose and design specifics.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

ASSESSMENT CRITERION 2
The documentation includes the layout of the program code, including indentation and other acceptable industry standards.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

ASSESSMENT CRITERION 3
The documentation includes full internal and external documentation, with a level of detail that enables other programmers to analyse the program.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

ASSESSMENT CRITERION 4
The documentation reflects the tested and implemented program, including changes made during testing of the program.
ASSESSMENT CRITERION RANGE
According to industry or company standards for the language chosen

UNIT STANDARD ACCREDITATION AND MODERATION OPTIONS
The relevant Education and Training Quality Authority (ETQA) must accredit providers before they can offer programs of education and training assessed against unit standards.

Moderation of assessment will be overseen by the relevant ETQA according to the moderation guidelines in the relevant qualification and the agreed ETQA procedures

UNIT STANDARD ESSENTIAL EMBEDDED KNOWLEDGE
• Performance of all elements is to be carried out in accordance with organisation standards and procedures, unless otherwise stated. Organisation standards and procedures may cover: quality assurance, documentation, security, communication, health and safety, and personal behaviour. An example of the standards expected is the standards found in ISO 9000 certified organisations.
• Performance of all elements complies with the laws of South Africa, especially with regard to copyright, privacy, health and safety, and consumer rights.
• All activities must comply with any policies, procedures and requirements of the organisations involved, the ethical codes of relevant professional bodies and any relevant legislative and/or regulatory requirements.


• Performance of all elements should be carried out with a solid understanding of the use of development tools needed in the areas applicable to the unit standard. Examples of such tools include, but are not limited to, CASE tools, programming language editors with syntax checking, and program source version control systems.
• Performance of all elements should make use of international capability models used for software development. Examples of such models include (but are not limited to) the ISO SPICE model as well as the CMM model for Software Development.

Critical Cross-field Outcomes (CCFO):

UNIT STANDARD CCFO IDENTIFYING
Identify and solve problems and make decisions in relation to current systems development environments.

UNIT STANDARD CCFO ORGANISING
Organise and manage himself/herself and his/her activities responsibly and effectively.

UNIT STANDARD CCFO COLLECTING
Collect, analyse, organise and critically evaluate information.

UNIT STANDARD CCFO COMMUNICATING
Communicate effectively using visual, mathematical and/or language skills in the modes of oral and/or written persuasion when engaging with systems development.

UNIT STANDARD CCFO SCIENCE Use science and technology effectively and critically, showing responsibility towards the environment and health of others

UNIT STANDARD CCFO DEMONSTRATING
Demonstrate an understanding of the world as a set of related systems by recognising that problem-solving contexts do not exist in isolation.

UNIT STANDARD CCFO CONTRIBUTING
Contribute to his/her full personal development and the social and economic development of society at large by being aware of the importance of:
• Reflecting on and exploring a variety of strategies to learn more effectively, exploring education and career opportunities and developing entrepreneurial opportunities.

UNIT STANDARD ASSESSOR CRITERIA N/A

REREGISTRATION HISTORY As per the SAQA Board decision/s at that time, this unit standard was Reregistered in 2012; 2015.

UNIT STANDARD NOTES
Supplementary information:

1. Where not specified otherwise, all options in the range statement must be covered to confirm that a learner is competent in the specific outcome

Definitions:

1. Structured use of language and pseudo code have equal meaning.
2. CASE Tools - This is not limited to traditional Computer Aided Software Engineering tools, but can include normal office productivity tools, like word processing and presentation tools.
3. Industry Standards include company standards developed to industry specifications.

Sub-Sub-Field (Domain):

Systems Development


Interpret a given specification to plan a computer program solution
Time: 180 minutes
Activity: Self and Group

AC 1: The plan proposes a description of the problem to be solved by the development of the computer program that is understandable by an end-user and meets the given specification.
Range: Functions, input and output requirements

A comprehensive introduction to Input Process Output tables: learn how to effectively model the important processing going on in your system.

One of the first things we need to do in software development is understand the problem. We can't begin to plan the most effective solution until we properly understand what it is we are trying to solve. Input Process Output tables, or IPO tables for short, are an effective way to model the important processing going on in your system. Let's consider the three parts of the table:
• Output - A piece of information which we want.
• Input - Data which is required in order to create the required outputs.
• Process - The steps involved in creating the outputs from the inputs.
An Input Process Output table, then, is a table listing what inputs are required to create a set of desired outputs and the processing required to make that transformation occur. Here is a simple example for calculating an average of a set of numbers:

Calculate average
Input: List of numbers
Process: Add the numbers together. Divide the sum by the total number of numbers.
Output: Average

Notice there are many things missing here. We say nothing about how the output is displayed or what is done with it. We don't mention where the inputs came from. Actions are largely ignored. With an IPO table we are interested in data and little else. This helps us to be very specific and not get distracted by other details. In the rest of this section we will explain how Input Process Output tables should be used in software design and development. There is no official way for them to be used or implemented, however, so you will come across other ways of implementing them, and you are more than welcome to deviate from what is explained here if you feel it better helps you achieve your goals.
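To make the IPO example concrete, here is a short sketch in Python (the unit standard leaves the programming language to your choice, so treat the language and the sample numbers as illustrative only). The input is the list of numbers, the process adds them and divides by how many there are, and the output is the average.

    def calculate_average(numbers):
        """Process: add the numbers together, then divide by how many there are."""
        if not numbers:                       # guard against an empty input list
            raise ValueError("At least one number is required")
        total = sum(numbers)                  # add the numbers together
        return total / len(numbers)           # divide the sum by the count

    # Input: a list of numbers; Output: their average
    print(calculate_average([4, 8, 15, 16, 23, 42]))   # prints 18.0

Notice that, just like the IPO table, the function says nothing about where the numbers came from or how the average will be displayed; those decisions belong elsewhere in the design.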


The essential difference is that in generic software product development, the specification is owned by the product developer. For custom product development, the specification is owned and controlled by the customer. The implications of this are significant – the developer can quickly decide to change the specification in response to some external change (e.g. a competing product) but, when the customer owns the specification, changes have to be negotiated between the customer and the developer and may have contractual implications.

For users of generic products, this means they have no control over the software specification so cannot control the evolution of the product. The developer may decide to include/exclude features and change the user interface. This could have implications for the user's business processes and add extra training costs when new versions of the system are installed. It also may limit the customer's flexibility to change their own business processes.

Possible non-functional requirements for the ticket issuing system include:
a) Between 0600 and 2300 in any one day, the total system down time should not exceed 5 minutes.
b) Between 0600 and 2300 in any one day, the recovery time after a system failure should not exceed 2 minutes.
c) Between 2300 and 0600 in any one day, the total system down time should not exceed 20 minutes.
All these are availability requirements – note that these vary according to the time of day. Failures when most people are travelling are less acceptable than failures when there are few customers.
d) After the customer presses a button on the machine, the display should be updated within 0.5 seconds.
e) The ticket issuing time after credit card validation has been received should not exceed 10 seconds.
f) When validating credit cards, the display should provide a status message for customers indicating that activity is taking place. This tells the customer that the potentially time-consuming activity of validation is still in progress and that the system has not simply failed.
g) The maximum acceptable failure rate for ticket issue requests is 1:10000.
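One practical way to work with timing requirements such as (d) and (e) is to express them as automated checks. The sketch below is only an illustration in Python; refresh_display is a hypothetical stand-in for whatever operation the real ticket issuing system would expose, and the simulated delay is invented.

    import time

    MAX_DISPLAY_UPDATE_SECONDS = 0.5    # requirement (d)
    MAX_ISSUE_SECONDS = 10.0            # requirement (e)

    def check_response_time(operation, limit_seconds):
        """Run an operation and report whether it met its response-time limit."""
        start = time.perf_counter()
        operation()
        elapsed = time.perf_counter() - start
        return elapsed <= limit_seconds, elapsed

    def refresh_display():
        """Hypothetical stand-in for the real display update."""
        time.sleep(0.1)                 # simulate the work taking 0.1 seconds

    ok, elapsed = check_response_time(refresh_display, MAX_DISPLAY_UPDATE_SECONDS)
    print(f"Display update took {elapsed:.3f}s - requirement met: {ok}")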


AC 2: The plan integrates the research of problems in terms of data and functions.

What Is Programming?
Programming is the process of taking an algorithm and encoding it into a notation, a programming language, so that it can be executed by a computer. Although many programming languages and many different types of computers exist, the important first step is the need to have the solution. Without an algorithm there can be no program. Computer science is not the study of programming. Programming, however, is an important part of what a computer scientist does. Programming is often the way that we create a representation for our solutions. Therefore, this language representation and the process of creating it becomes a fundamental part of the discipline.

Algorithms describe the solution to a problem in terms of the data needed to represent the problem instance and the set of steps necessary to produce the intended result. Programming languages must provide a notational way to represent both the process and the data. To this end, languages provide control constructs and data types.

Control constructs allow algorithmic steps to be represented in a convenient yet unambiguous way. At a minimum, algorithms require constructs that perform sequential processing, selection for decision-making, and iteration for repetitive control. As long as the language provides these basic statements, it can be used for algorithm representation.

All data items in the computer are represented as strings of binary digits. In order to give these strings meaning, we need to have data types. Data types provide an interpretation for this binary data so that we can think about the data in terms that make sense with respect to the problem being solved. These low-level, built-in data types (sometimes called the primitive data types) provide the building blocks for algorithm development.

For example, most programming languages provide a data type for integers. Strings of binary digits in the computer’s memory can be interpreted as integers and given the typical meanings that we commonly associate with integers (e.g. 23, 654, and -19). In addition, a data type also provides a description of the operations that the data items can participate in. With integers, operations such as addition, subtraction, and multiplication are common. We have come to expect that numeric types of data can participate in these arithmetic operations.
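The following short Python sketch (an illustrative choice of language, not one prescribed by this unit standard) shows the three control constructs and a primitive data type working together: the statements run in sequence, the loop provides iteration, and the if statement provides selection over a list of integers.

    # Sequence: statements run one after another.
    numbers = [23, 654, -19, 7]          # a list of integers (primitive data type: int)
    total = 0

    # Iteration: repeat a step for every item in the list.
    for value in numbers:
        # Selection: decide which branch to take for this item.
        if value >= 0:
            total = total + value        # integers support arithmetic operations
        else:
            print(f"Skipping negative value {value}")

    print(f"Sum of non-negative values: {total}")   # prints 684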


The difficulty that often arises for us is the fact that problems and their solutions are very complex. These simple, language-provided constructs and data types, although certainly sufficient to represent complex solutions, are typically at a disadvantage as we work through the problem-solving process. We need ways to control this complexity and assist with the creation of solutions.

AC 3: The plan includes an evaluation of the viability of developing a computer program to solve the problem identified and compares the costs of developing the program with the benefits to be obtained from the program.

Cost Benefit Analysis for Projects – A Step-by-Step Guide
When managing a project, one is required to make a lot of key decisions. There is always something that needs executing, and often that something is critical to the success of the venture. Because of the high stakes, good managers don't just make decisions based on gut instinct. They prefer to minimize risk to the best of their ability and act only when there is more certainty than uncertainty.

But how can you accomplish that in a world with myriad variables and constantly shifting economics? The answer: consult hard data collected with reporting tools, charts and spreadsheets. You can then use that data to evaluate your decisions with a process called cost benefit analysis (CBA). An intelligent use of cost benefit analysis will help you minimize risks and maximize gains both for your project and your organization.

What Is Cost Benefit Analysis?
Cost benefit analysis in project management is one more tool in your toolbox. This one has been devised to evaluate the cost versus the benefits in your project proposal. It begins with a list, as so many processes do.

There’s a list of every project expense and what the benefits will be after successfully executing the project. From that you can calculate the return on investment (ROI), internal rate of return (IRR), net present value (NPV) and the payback period.


The difference between the cost and the benefits will determine whether action is warranted or not. In most cases, if the cost is 50 percent of the benefits and the payback period is not more than a year, then the action is worth taking.
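That rule of thumb can be checked with a couple of very small calculations. The sketch below, written in Python with invented figures, computes a benefit-cost ratio and a simple payback period; treat it as an illustration rather than a complete cost benefit analysis.

    def benefit_cost_ratio(total_cost, total_benefit):
        """Benefits divided by costs; above 1.0 the benefits outweigh the costs."""
        return total_benefit / total_cost

    def payback_period_years(initial_cost, annual_net_benefit):
        """Years needed for the accumulated net benefit to repay the initial cost."""
        return initial_cost / annual_net_benefit

    # Invented example figures for a proposed program.
    cost = 80_000            # development cost
    benefit = 160_000        # expected benefit over the evaluation period
    annual_benefit = 120_000 # expected net benefit per year

    print(f"Benefit-cost ratio: {benefit_cost_ratio(cost, benefit):.2f}")              # 2.00
    print(f"Payback period: {payback_period_years(cost, annual_benefit):.2f} years")   # 0.67

Here the cost is 50 percent of the benefits and the payback period is well under a year, so by the rule of thumb above the action would be worth taking.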

The Purpose of Cost Benefit Analysis
The purpose of cost benefit analysis in project management is to have a systemic approach to figure out the pluses and minuses of various paths through a project, including transactions, tasks, business requirements and investments. Cost benefit analysis gives you options, and it offers the best approach to achieve your goal while saving on investment.

There are two main purposes in using CBA:
• To determine if the project is sound, justifiable and feasible by figuring out if its benefits outweigh costs.
• To offer a baseline for comparing projects by determining which project's benefits are greater than its costs.

The Process of Cost Benefit Analysis
According to the Economist, CBA has been around for a long time. In 1772, Benjamin Franklin wrote of its use. But the concept of CBA as we know it dates to Jules Dupuit, a French engineer, who outlined the process in an article in 1848. While it's not clear if this Founding Father followed this exact process, it has evolved to include these 10 steps:
1. What Are the Goals and Objectives of the Project? The first step is perhaps the most important because before you can decide if a project is worth the effort, you need a clear and definite idea of what it is set to accomplish.
2. What Are the Alternatives? Before you can know if the project is right, you need to compare it to other projects and see which is the best path forward.
3. Who Are the Stakeholders? List all stakeholders in the project.
4. What Measurements Are You Using? You need to decide on the metrics you'll use to measure all costs and benefits. Also, how will you be reporting on those metrics?
5. What Is the Outcome of Costs and Benefits? Look over what the costs and benefits of the project are, and map them over a relevant time period.
6. What Is the Common Currency? Take all the costs and benefits you've collected, and convert them to the same currency to make an apples-to-apples comparison.
7. What Is the Discount Rate? This will express the amount of interest as a percentage of the balance at the end of a certain period.


8. What Is the Net Present Value of the Project Options? This is a measurement of profit that is calculated by subtracting the present values of cash outflows from the present values of cash inflows over a period of time.
9. What Is the Sensitivity Analysis? This is a study of how the uncertainty in the output can be apportioned to different sources of uncertainty in its inputs.
10. What Do You Do? The final step after collecting all this data is to make the choice that is recommended by the analysis.
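The net present value step becomes clearer with a worked example. The sketch below, again in Python with an invented 8% discount rate and invented cash flows, discounts each year's cash flow back to today and sums the results.

    def net_present_value(discount_rate, cash_flows):
        """NPV: each cash flow divided by (1 + rate) raised to its year number.
        cash_flows[0] is the year-0 amount (usually the negative initial cost)."""
        return sum(flow / (1 + discount_rate) ** year
                   for year, flow in enumerate(cash_flows))

    # Year 0: pay 80 000 to develop the program; years 1-3: expected net benefits.
    flows = [-80_000, 40_000, 45_000, 50_000]
    npv = net_present_value(0.08, flows)     # 8% discount rate, purely illustrative
    print(f"NPV: {npv:,.2f}")                # about 35,308.90 in this invented case

A positive NPV under the chosen discount rate suggests the benefits outweigh the costs; comparing the NPVs of the project options is one way to make the choice in step 10.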

How to Evaluate the Cost Benefit Analysis
The data you collected is used to help you determine whether the project will have a positive or negative consequence. Keep the following things in mind as you're evaluating that information:
• What are the effects on users?
• What are the effects on nonusers?
• Are there any externality effects?
• Is there a social benefit?
It's also important to apply all relevant costs and benefits commonly, that is, the time value of the money spent. You can do this by converting future expected costs and benefits into current rates. Naturally, there is risk inherent in any venture, and risk and uncertainty must be considered when evaluating the CBA of a project. You can calculate this with probability theory. Uncertainty is different from risk, but it can be evaluated using a sensitivity analysis to illustrate how results respond to parameter changes.

How Accurate Is Cost Benefit Analysis?
How accurate is CBA? The short answer is that it's as accurate as the data you put into the process. The more accurate your estimates, the more accurate your results. Some inaccuracies are caused by the following:
• Relying too heavily on data collected from past projects, especially when those projects differ in function, size, etc., from the one you're working on
• Using subjective impressions when you're making your assessment
• The improper use of heuristics (problem solving employing a practical method that is not guaranteed) to get the cost of intangibles
• Confirmation bias, or only using data that backs up what you want to find

Are There Limitations to Cost Benefit Analysis?


Cost benefit analysis is best suited to smaller to mid-sized projects that don’t take too long to complete. In these cases, the analysis can lead those involved to make proper decisions. However, large projects that go on for a long time can be problematic in terms of CBA. There are outside factors, such as inflation, interest rates, etc., that impact the accuracy of the analysis.

There are other methods that complement CBA in assessing larger projects, such as NPV and IRR. Overall, though, the use of CBA is a crucial step in determining if any project is worth pursuing.

Planning
Our online Gantt charts have features to plan your projects and organize your tasks, so they lead to a successful final deliverable. If things change, and they will, the Gantt is easy to edit, so you can pivot quickly.

Resource Management
Another snag that can waylay a project is your resources. ProjectManager.com has resource management tools that track your materials, supplies and your most valuable resource: the project team. If they're overworked, morale erodes and production suffers.

The workload page on ProjectManager.com is color-coded to show who is working on what and gives you the tools to reassign to keep the workload balanced and the team productive.

Real-Time Cost Tracking
The surest way to kill any project is for it to bleed money. ProjectManager.com lets you set a budget for your project from the start. This figure is then reflected in reports and in the charts and graphs of the real-time dashboard, so you're always aware of how costs are impacting your project. ProjectManager.com has the features you need to lead your project to profitability.

Cost benefit analysis is a data-driven process and requires project management software robust enough to digest and distribute the information.


AC 4: The plan concludes by choosing the best solution and documenting the program features that will contain the capabilities and constraints to meet the defined problem.

Software development

Core activities: Processes, Requirements, Design, Engineering, Construction, Testing, Debugging, Deployment, Maintenance
Paradigms and models: Agile, Cleanroom, Incremental, Prototyping, Spiral, V model, Waterfall
Methodologies and frameworks: ASD, DevOps, DAD, DSDM, FDD, IID, Kanban, Lean SD, LeSS, MDD, MSF, PSP, RAD, RUP, SAFe, Scrum, SEMAT, TSP, OpenUP, UP, XP


Supporting disciplines: Configuration management, Documentation, Software quality assurance (SQA), Project management, User experience
Practices: ATDD, BDD, CCO, CI, CD, DDD, PP, SBE, Stand-up, TDD
Tools: Debugger, Profiler, GUI designer, Modeling, IDE, Build automation, Release automation, Infrastructure as code, Testing
Standards and Bodies of Knowledge: BABOK, CMMI, IEEE standards, ISO 9001, ISO/IEC standards, PMBOK, SWEBOK, ITIL, IREB
Glossaries: Artificial intelligence, Computer science, Electrical and electronics engineering


Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved from the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process.

Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.

The software can be developed for a variety of purposes, the three most common being to meet specific needs of a specific client/business (the case with custom software), to meet a perceived need of some set of potential users (the case with commercial and open source software), or for personal use (e.g. a scientist may write software to automate a mundane task).

Embedded software development, that is, the development of embedded software, such as used for controlling consumer products, requires the development process to be integrated with the development of the controlled physical product. System software underlies applications and the programming process itself, and is often developed separately.

The need for better quality control of the software development process has given rise to the discipline of software engineering, which aims to apply the systematic approach exemplified in the engineering paradigm to the process of software development. There are many approaches to software project management, known as software development life cycle models, methodologies, processes, or models. The waterfall model is a traditional version, contrasted with the more recent innovation of agile software development


Design a computer program to meet a business requirement
Time: 180 minutes
Activity: Self and Group

AC 1: The design incorporates development of appropriate design documentation and is desk-checked.

Steps to Design
There are three fundamental steps you should perform when you have a program to write:
1. Define the output and data flows.
2. Develop the logic to get to that output.
3. Write the program.
Notice that writing the program is the last step in writing the program. This is not as silly as it sounds. Remember that physically building the house is the last stage of building the house; proper planning is critical before any actual building can start. You will find that actually writing and typing in the lines of the program is one of the easiest parts of the programming process. If your design is well thought out, the program practically writes itself; typing it in becomes almost an afterthought to the whole process.

Step 1: Define the Output and Data Flows
Before beginning a program, you must have a firm idea of what the program should produce and what data is needed to produce that output. Just as a builder must know what the house should look like before beginning to build it, a programmer must know what the output is going to be before writing the program.

Anything that the program produces and the user sees is considered output that you must define. You must know what every screen in the program should look like and what will be on every page of every printed report. Some programs are rather small, but without knowing where you're heading, you may take longer to finish the program than you would if you first determined the output in detail. Liberty BASIC comes with a sample program called Contact3.bas that you can run. Select File, Open, and select Contact3.bas to load the file from your disk. Press Shift+F5 to run the program and then you should see the screen. No contacts exist when you first run the program, so nothing appears in the fields initially.


A field, also known as a text box, is a place where users can type data. Even Liberty BASIC's small Contact Management program window has several fields. If you were planning to write such a contact program for yourself or someone else, you should make a list of all fields that the program is to produce onscreen. Not only would you list each field but you also would describe the fields. In addition, three Windows command buttons appear in the program window. The Table details the fields on the program's window.

Fields That the Contact Management Program Displays

Field | Type | Description
Contacts | Scrolling list | Displays the list of contacts
Name | Text field | Holds contact's name
Address | Text field | Holds contact's address
City | Text field | Holds contact's city
State | Text field | Holds contact's state
Zip | Text field | Holds contact's zip code
Phone # | Text field | Holds contact's phone number
Stage | Fixed, scrolling list | Displays a list of possible stages this contact might reside in, such as being offered a special follow-up call, or perhaps this is the initial contact
Notes | Text field | Miscellaneous notes about the contact, such as whether the contact has bought from the company before
Filter Contacts | Fixed, scrolling list | Enables the user to search for groups of contacts based on the stage the contacts are in, enabling the user to see a list of all contacts who have been sent a mailing
Edit | Command button | Enables the user to modify an existing contact
Add | Command button | Enables the user to add a new contact
OK | Command button | Enables the user to close the contact window


Many of the fields you list in an output definition may be obvious. The field called Name obviously will hold and display a contact's name. Being obvious is okay. Keep in mind that if you write programs for other people, as you often will do, you must get approval of your program's parameters. One of the best ways to begin is to make a list of all the intended program's fields and make sure that the user agrees that everything is there. As you'll see in a section later this hour named "Rapid Application Development," you'll be able to use programs such as Visual Basic to put together a model of the actual output screen that your users can see. With the model and with your list of fields, you have double verification that the program contains exactly what the user wants.

Input windows such as the Contacts program data-entry screen are part of your output definition. This may seem contradictory, but input screens require that your program place fields on the screen, and you should plan where these input fields must go. The output definition is more than a preliminary output design. It gives you insight into what data elements the program should track, compute, and produce. Defining the output also helps you gather all the input you need to produce the output.

CAUTION
Some programs produce a huge amount of output. Don't skip this first all-important step in the design process just because there is a lot of output. Because there is more output, it becomes more important for you to define it. Defining the output is relatively easy, sometimes even downright boring and time-consuming. The time you need to define the output can take as long as typing in the program. You will lose that time and more, however, if you shrug off the output definition at the beginning. The output definition consists of many pages of details. You must be able to specify all the details of a problem before you know what output you need. Even command buttons and scrolling list boxes are output because the program will display these items.

In Hour 1, you learned that data goes into a program and the program outputs meaningful information. You should inventory all the data that goes into the program. If you're adding Java code to a Web site to make the site more interactive, you will need to know if the Web site owners want to collect data from the users. Define what each piece of data is. Perhaps the site allows the user to submit a name and e-mail address for weekly sales mailings. Does the company want any additional data from the user such as physical address, age, and income?


Object-Oriented Design
Throughout this tutorial you will learn what object-oriented programming (OOP) is all about. Basically, OOP turns data values, such as names and prices, into objects that can take on a life of their own inside programs. Hour 14, "Java Has Class," will be your first detailed exposure to objects.

A few years ago some OOP experts developed a process for designing OOP programs called object-oriented design (OOD). OOD made an advanced science out of specifying data to be gathered in a program and defining that data in a way that was appropriate for the special needs of OOP programmers. Grady Booch was one of the founders of object-oriented design. It is his specifications from over a decade ago that help today's OOP programmers collect data for the applications they are about to write and to turn that data into objects for programs.
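To give a feel for what turning data into objects means, here is a small, hypothetical Python class for the kind of contact record shown earlier in this hour; it is a sketch of the idea only and is not taken from Liberty BASIC, Visual Basic, or any other tool mentioned in this manual.

    class Contact:
        """A contact record bundling its data with behaviour that works on that data."""

        def __init__(self, name, phone, stage="Initial contact"):
            self.name = name
            self.phone = phone
            self.stage = stage

        def advance_stage(self, new_stage):
            """Move the contact to a new follow-up stage."""
            self.stage = new_stage

        def summary(self):
            return f"{self.name} ({self.phone}) - {self.stage}"

    # The object carries both its data and the operations that belong to that data.
    contact = Contact("Thandi Nkosi", "011-555-0199")   # invented example values
    contact.advance_stage("Follow-up call")
    print(contact.summary())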

In the next hour, "Getting Input and Displaying Output," you'll learn how to put these ideas into a program. You will learn how a program asks for data and produces information on the screen. This I/O (input and output) process is the most critical part of an application. You want to capture all data required and in an accurate way.

Something is still missing in all this design discussion. You understand the importance of gathering data. You understand the importance of knowing where you're headed by designing the output. But how do you go from data to output? That's the next step in the design process—you need to determine what processing will be required to produce the output from the input (data). You must be able to generate proper data flows and calculations so that your program manipulates that data and produces correct output. The final sections of this hour will discuss ways to develop the centrepiece—the logic for your programs.

In conclusion, all output screens, printed reports, and data-entry screens must be defined in advance so you know exactly what is required of your programs. You also must decide what data to keep in files and the format of your data files. As you progress in your programming education you will learn ways to lay out disk files in formats they require. When capturing data, you want to gather data from users in a way that is reasonable, requires little time, and has prompts that request the data in a friendly and unobtrusive manner. That's where rapid application development (discussed next) and prototyping can help.

Prototyping


In the days of expensive hardware and costly computer usage time, the process of system design was, in some ways, more critical than it is today. The more time you spent designing your code, the smoother the costly hands-on programming became. This is far less true today because computers are inexpensive and you have much more freedom to change your mind and add program options than before. Yet the first part of this hour was spent in great detail explaining why up-front design is critical.

The primary problem many new programmers have today is they do absolutely no design work. That's why many problems take place, such as the one mentioned earlier this hour about the company that wanted far more in their Web site than the programmer ever dreamed of.

Although the actual design of output, data, and even the logic in the body of the program itself is much simpler to work with given today's computing tools and their low cost, you still must maintain an eagle-eye toward developing an initial design with agreed-upon output from your users. You must also know all the data that your program is to collect before you begin your coding. If you don't, you will have a frustrating time as a contract programmer or as a corporate programmer because you'll constantly be playing catch-up with what the users actually want and failed to tell you about.

One of the benefits of Windows is its visual nature. Before Windows, programming tools were limited to text-based design and implementation. Designing a user's screen today means starting with a programming language such as Visual Basic, drawing the screen, and dragging objects to the screen that the user will interact with, such as an OK button. Therefore, you can quickly design prototype screens that you can send to the user. A prototype is a model, and a prototype screen models what the final program's screen will look like. After the user sees the screens that he or she will interact with, the user will have a much better feel for whether you understand the needs of the program.

Although Liberty BASIC does not provide any prototyping tools, programming languages such as Visual C++ and Visual Basic do. The figure shows the Visual Basic development screen. The screen looks rather busy, but the important things to look for are the Toolbox and the output design window.

To place controls such as command buttons and text boxes on the form that serves as the output window, the programmer only has to drag that control from the Toolbox window to


the form. So to build a program's output, the programmer only has to drag as many controls as needed to the form and does not have to write a single line of code in the meantime. Program development systems such as Visual Basic provide tools that you can use to create output definitions visually.

Once you place controls on a Form window with a programming tool such as Visual Basic, you can do more than show the form to your users. You actually can compile the form just as you would a program and let your user interact with the controls. When the user is able to work with the controls, even though nothing happens as a result, the user is better able to tell if you understand the goals of the program. The user often notices if there is a missing piece of the program and can also offer suggestions to make the program flow more easily from a user's point of view.

CAUTION
The prototype is often only an empty shell that cannot do anything but simulate user interaction until you tie its pieces together with code. Your job as a programmer has only just begun once you get approval on the screens, but the screens are the first place to begin because you must know what your users want before you know how to proceed.

Rapid Application Development
A more advanced program design tool used for defining output, data flows, and logic itself is called Rapid Application Development, or RAD for short. Although RAD tools are still in their infancy, you will find yourself using RAD over the span of your career, especially as RAD becomes more common and the tools become less expensive.

RAD is the process of quickly placing controls on a form—not unlike you just saw done with Visual Basic—connecting those controls to data, and accessing pieces of prewritten code to put together a fully functional application without writing a single line of code. In a way, programming systems such as Visual Basic are fulfilling many goals of RAD. When you place controls on a form, as you'll see done in far more detail in Hour 16, "Programming with Visual Basic," the Visual Basic system handles all the programming needed for that control.


You don't ever have to write anything to make a command button act like a command button should. Your only goal is to determine how many command buttons your program needs and where they are to go.

But these tools cannot read your mind. RAD tools do not know that, when the user clicks a certain button, a report is supposed to print. Programmers are still needed to connect all these things to each other and to data, and programmers are needed to write the detailed logic so that the program processes data correctly. Before these kinds of program development tools appeared, programmers had to write thousands of lines of code, often in the C programming language, just to produce a simple Windows program.

At least now the controls and the interface are more rapidly developed. Someday, perhaps a RAD tool will be sophisticated enough to develop the logic also. But in the meantime, don't quit your day job if your day job is programming, because you're still in demand.

Teach your users how to prototype their own screens! Programming knowledge is not required to design the screens. Your users, therefore, will be able to show you exactly what they want. The prototyped screens are interactive as well. That is, your users will be able to click the buttons and enter values in the fields even though nothing happens as a result of that use. The idea is to let your users try the screens for a while to make sure they are comfortable with the placement and appearance of the controls.

Top-Down Program Design
For large projects, many programming staff members find that top-down design helps them focus on what a program needs and helps them detail the logic required to produce the program's results. Top-down design is the process of breaking down the overall problem into more and more detail until you finalize all the details. With top-down design, you produce the details needed to accomplish a programming task.

The problem with top-down design is that programmers tend not to use it. They tend to design from the opposite direction (called bottom-up design). When you ignore top-down design, you impose a heavy burden on yourself to remember every detail that will be needed; with top-down design, the details fall out on their own. You don't have to worry about the petty details if you follow a strict top-down design because the process of top-down design takes care of producing the details.


One of the keys to top-down design is that it forces you to put off the details until later. Top-down design forces you to think in terms of the overall problem for as long as possible. Top-down design keeps you focused. If you use bottom-up design, it is too easy to lose sight of the forest for the trees. You get to the details too fast and lose sight of your program's primary objectives.

Here is the three-step process necessary for top-down design:
1. Determine the overall goal.
2. Break that goal into two, three, or more detailed parts; breaking it into too many parts at once will cause you to leave things out.
3. Put off the details as long as possible, and keep repeating steps 1 and 2 until you cannot reasonably break down the problem any further.
You can learn about top-down design more easily by relating it to a common real-world problem before looking at a computer problem. Top-down design is not just for programming problems. Once you master top-down design, you can apply it to any part of your life that you must plan in detail. Perhaps the most detailed event that a person can plan is a wedding. Therefore, a wedding is the perfect place to see top-down design in action.

What is the first thing you must do to have a wedding? First, find a prospective spouse (you'll need a different book for help with that). When it comes time to plan the wedding, top-down design is the best way to approach the event. The way not to plan a wedding is to worry about the details first, yet this is the way most people plan a wedding.

They start thinking about the dresses, the organist, the flowers, and the cake to serve at the reception. The biggest problem with trying to cover all these details from the beginning is that you lose sight of so much; it is too easy to forget a detail until it's too late. The details of bottom-up design get in your way.

What is the overall goal of a wedding? Thinking in the most general terms possible, "Have a wedding" is about as general as it can get. If you were in charge of planning a wedding, the general goal of "Have a wedding" would put you right on target. Assume that "Have a wedding" is the highest-level goal.


The overall goal keeps you focused. Despite its redundant nature, "Have a wedding" keeps out details such as planning the honeymoon. If you don't put a fence around the exact problem you are working on, you'll get mixed up with details and, more importantly, you'll forget some details. If you're planning both a wedding and a honeymoon, you should do two top-down designs, or include the honeymoon trip in the top-level general goal.

This wedding plan includes the event of the wedding—the ceremony and reception—but doesn't include any honeymoon details. (Leave the honeymoon details to your spouse so you can be surprised. After all, you have enough to do with the wedding plans, right?)

Now that you know where you're heading, begin by breaking down the overall goal into two or three details. For instance, what about the colours of the wedding, what about the guest list, what about paying the minister...oops, too many details! The idea of top-down design is to put off the details for as long as possible. Don't get in any hurry. When you find yourself breaking the current problem into more than three or four parts, you are rushing the top-down design. Put off the details. Basically, you can break down "Have a wedding" into the following two major components: the ceremony and the reception.

The next step of top-down design is to take those new components and do the same for each of them. The ceremony is made up of the people and the location. The reception includes the food, the people, and the location. The ceremony's people include the guests, the wedding party, and the workers (minister, organist, and so on—but those details come a little later).

Don't worry about the time order of the details yet. The top-down design's goal is to produce every detail you need (eventually), not to put those details into any order. You must know where you are heading and exactly what is required before considering how those details relate to each other and which come first.

Eventually, you will have several pages of details that cannot be broken down any further. For instance, you'll probably end up with the details of the reception food, such as peanuts for snacking. (If you start out listing those details, however, you could forget many of them.) Now move to a more computerized problem; assume you are assigned the task of writing a payroll program for a company. What would that payroll program require?


You could begin by listing the payroll program's details, such as this:
Print payroll checks.
Calculate taxes.

What is wrong with this approach? If you said that the details were coming too early, you are correct. The perfect place to start is at the top. The most general goal of a payroll program might be "Perform the payroll." This overall goal keeps other details out of this program (no general ledger processing will be included, unless part of the payroll system updates a general ledger file) and keeps you focused on the problem at hand.

This might be the first page of the payroll's top-down design. Any payroll program has to include some mechanism for entering, deleting, and changing employee information such as address, city, state, zip code, number of exemptions, and so on. What other details about the employees do you need? At this point, don't ask. The design is not ready for all those details. The first page of the payroll program's top-down design would include the highest level of details.

There is a long way to go before you finish with the payroll top-down design, but Figure 3.3 is the first step. You must keep breaking down each component until the details finally appear. Only after you have all the details ready can you begin to decide what the program is going to produce as output. Only when you and the user gather all the necessary details through top-down design can you decide what is going to comprise those details.

Step 2: Develop the Logic
After you and the user agree to the goals and output of the program, the rest is up to you. Your job is to take that output definition and decide how to make a computer produce the output. You have taken the overall problem and broken it down into detailed instructions that the computer can carry out. This doesn't mean that you are ready to write the program—quite the contrary. You are now ready to develop the logic that produces that output.

The output definition goes a long way toward describing what the program is supposed to do. Now you must decide how to accomplish the job. You must order the details that you have so they operate in a time-ordered fashion. You must also decide which decisions your program must make and the actions produced by each of those decisions.


Throughout the rest of this 24-hour tutorial, you'll learn the final two steps of developing programs. You will gain insight into how programmers write and test a program after developing the output definition and getting the user's approval on the program's specifications.

CAUTION: Only after learning to program can you learn to develop the logic that goes into a program, yet you must develop some logic before writing programs to be able to move from the output and data definition stage to the program code. This "chicken before the egg" syndrome is common for newcomers to programming. When you begin to write your own programs, you'll have a much better understanding of logic development.

In the past, programmers used tools such as flowcharts and pseudocode to develop program logic. It is said that a picture is worth a thousand words, and the flowchart provides a pictorial representation of program logic. The flowchart doesn't include all the program details but represents the general logic flow of the program. The flowchart provides the logic for the final program. If your flowchart is correctly drawn, writing the actual program becomes a matter of rote. After the final program is completed, the flowchart can act as documentation to the program itself.

The flowchart depicts the payroll program's logic graphically. Flowcharts are made up of industry-standard symbols. Plastic flowchart symbol outlines, called flowchart templates, are still available at office supply stores to help you draw better-looking flowcharts instead of relying on freehand drawing. There are also some programs that guide you through a flowchart's creation and print flowcharts on your printer.

Although some still use flowcharts today, RAD and other development tools have virtually eliminated flowcharts except for depicting isolated parts of a program's logic for documentation purposes. Even in its heyday of the 1960s and 1970s, flowcharting did not completely catch on. Some companies preferred another method for logic description called pseudocode, sometimes called structured English, which is a method of writing logic using sentences of text instead of the diagrams necessary for flowcharting.

Pseudocode doesn't have any programming language statements in it, but it also is not free-flowing English. It is a set of rigid English words that allow for the depiction of logic you see so often in flowcharts and programming languages. As with flowcharts, you can write pseudocode for anything, not just computer programs. A lot of instruction manuals use a form of pseudocode to illustrate the steps needed to assemble parts. Pseudocode offers a rigid description of logic that tries to leave little room for ambiguity.

Here is the logic for the payroll problem in pseudocode form. Notice that you can read the text, yet it is not a programming language. The indention helps keep track of which sentences go together. The pseudocode is readable by anyone, even by people unfamiliar with flowcharting symbols.

For each employee:
    If the employee worked 0 to 40 hours then
        Net pay equals hours worked times rate.
    Otherwise,
        If the employee worked between 40 and 50 hours then
            Net pay equals 40 times the rate;
            Add to that (hours worked - 40) times the rate times 1.5.
        Otherwise,
            Net pay equals 40 times the rate;
            Add to that 10 times the rate times 1.5;
            Add to that (hours worked - 50) times twice the rate.
    Deduct taxes from the net pay.
    Print the pay check.
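To show how close such pseudocode already is to real code, here is a rough Java sketch of the same logic. The class name, the sample figures, and the flat TAX_RATE are illustrative assumptions; the pseudocode above only says to deduct taxes without giving a rate.

public class PayrollSketch {
    // Illustrative flat tax rate; the pseudocode only says "deduct taxes".
    static final double TAX_RATE = 0.20;

    // Mirrors the pseudocode: straight time up to 40 hours, time-and-a-half
    // from 40 to 50 hours, double time beyond 50 hours.
    static double payBeforeTax(double hoursWorked, double rate) {
        if (hoursWorked <= 40) {
            return hoursWorked * rate;
        } else if (hoursWorked <= 50) {
            return 40 * rate + (hoursWorked - 40) * rate * 1.5;
        } else {
            return 40 * rate + 10 * rate * 1.5 + (hoursWorked - 50) * rate * 2;
        }
    }

    public static void main(String[] args) {
        double pay = payBeforeTax(48, 100.0);   // one sample employee
        double netPay = pay - pay * TAX_RATE;   // "deduct taxes from the net pay"
        System.out.printf("Pay: %.2f  Net pay: %.2f%n", pay, netPay);
    }
}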

Step 3: Writing the Code
Writing the program takes the longest to learn. After you learn to program, however, the actual programming process takes less time than the design if your design is accurate and complete. The nature of programming requires that you learn some new skills. The next few hourly lessons will teach you a lot about programming languages and will help train you to become a better coder so that your programs will not only achieve the goals they are supposed to achieve, but also will be simple to maintain.


AC 2 The design of the program includes program structure components.

Fundamentals of Program Design
Every computer program is built from components, data, and control. For a single-user application (used by one person at a time), which normally reads data, saves it in a data structure, computes on the data, and writes the results, there is a standard way of organizing the component structure, data structure, and control structure:
1. First, design the program's component structure with three components, organized in a model-view-controller pattern.
2. Next, decide what form of data structure (array, table, set, list, tree, etc.) will hold the program's data. The data structure will be inserted in the program's model component.
3. Then, write the algorithm that defines the execution steps, that is, the control structure. The algorithm will be placed inside the program's controller.
4. Finally, determine the form of input and output (disk file, typed text in a command window, dialogs, a graphical user interface, etc.) that the program uses. This will be embedded in the program's view.
Once the four-step design is finished, it is time to convert the design into (Java) coding. We now consider each stage of the design process.

Component structure
Again, the program's job is to read information into the computer, save it in a format that lets the computer compute answers from the data, and write those answers out. The program could be written as one large piece of code, but this forces a programmer to think about all parts of the program at once.

Years of experience have shown that it is better to design a program in three parts, in a model-view-controller pattern: data flows into the program through the view component and is directed by the controller into the model; the controller then manipulates the data and tells the view to output the answer. A bit more precisely, the controller component holds the algorithm, that is, the instructions that tell the computer when to read data, compute on it, and write the answers.


The model component holds the structures that save the data so that it can be easily computed upon. For example, if the program is a spreadsheet program, then the model holds a table (grid) that represents ("models") the spreadsheet. Or, if the program is the file manager for Linux, then the model is a tree structure that represents the folder-and-file structure of the disk-file system.

The view component holds the operations that connect the program to the input and output devices (the disk or display or printer....). All three components are important, but the key to building a good quality program is selecting the appropriate data structure for the model component.

Data structure
When you solve a problem with a computer program, always ask first: how should the program store the information upon which it computes? Sometimes people talk about "modelling" the problem within the computer; the way the data is held is called the model. Recall the previous examples: if the program is a spreadsheet program, then the information should be held in a data structure that is a grid. If the program is a bank-account database, then the information should be grouped into customer accounts, each with a unique ID, saved in an array or set. If the program is a file-system manager, then the information consists of files and folders organized in a tree-like structure. Each of these problems requires a distinct data structure in its solution, and it helps to draw a picture of the structure. For example, if you are writing the file-system manager for Linux, your program must hold folders and files, and a picture of the model would be a tree of folders nested inside folders. The picture should suggest to you the kind of computer variables and data structures you will require to build the solution. The purpose of a data-structures course is to train you in using a variety of such structures. The model component is "passive": another program component, the controller, inserts data into the structure, asks for computations, and extracts the answers.
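For instance, the file-manager model above could start from a small tree structure like the following Java sketch; the class and field names are illustrative choices, not part of any particular file manager.

import java.util.ArrayList;
import java.util.List;

// Model component for the file-manager example: a tree of folders and files.
class FileNode {
    final String name;
    final boolean isFolder;
    final List<FileNode> children = new ArrayList<>(); // stays empty for plain files

    FileNode(String name, boolean isFolder) {
        this.name = name;
        this.isFolder = isFolder;
    }

    void add(FileNode child) {
        children.add(child);
    }
}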


Control structure
Every program follows a series of steps to solve a problem. The series of steps is called an algorithm; it controls the computer's work. It is best to begin with an "outline" of the algorithm. The outline can be written in a graphical form, called a flowchart; for example, the algorithm for reading and totalling the votes of a presidential election can be outlined as a flowchart. Many people like to develop their algorithms with a flowchart, because the various paths can be developed one at a time, and the arrows make it less likely that a case will be forgotten during development. The algorithm is inserted into the controller component, and when writing the program in Java, you can code the algorithm into the main method.

Methods for the components
Often we see phrases in the algorithm that are not at the level of Java instructions (for example, "add one to Kerry's count" or "print the totals in the model"). These phrases are clues that you should write procedures (methods) that do the work described by the phrases. For example, since the phrase "add one to Kerry's count" implicitly mentions the data structure in the model, we might revise the model component to have a method that does what the phrase suggests. This makes the algorithm in the controller easier to write, because it merely invokes the model's method, meaning that the controller does not have to deal with the details of the data structure. (This arrangement is also important if we must change the data structure within the model: the coding of the methods in the model is changed, and we do not rewrite the controller.)
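As a rough Java sketch of that idea, the model below offers methods matching the phrases, and the controller's algorithm simply invokes them; the names (VoteModel, addVoteFor, printTotals) are invented for illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

// Model component: owns the data structure and offers methods the controller can call.
class VoteModel {
    private final Map<String, Integer> counts = new HashMap<>();

    // Does the work behind the phrase "add one to Kerry's count".
    void addVoteFor(String candidate) {
        counts.merge(candidate, 1, Integer::sum);
    }

    // Does the work behind the phrase "print the totals in the model".
    void printTotals() {
        counts.forEach((name, total) -> System.out.println(name + ": " + total));
    }
}

// Controller component: reads votes and delegates to the model,
// never touching the map inside the model directly.
class VoteController {
    public static void main(String[] args) {
        VoteModel model = new VoteModel();
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {              // one candidate name per line of input
            String vote = in.nextLine().trim();
            if (!vote.isEmpty()) {
                model.addVoteFor(vote);
            }
        }
        model.printTotals();
    }
}

Because the controller only calls addVoteFor and printTotals, the map inside VoteModel could later be replaced by another structure without rewriting the controller.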

Later, we write the code for the methods. If the coding is complicated, we might wish to write flow charts, possibly defining even more methods.

Input and output
A program that reads and writes data will normally use a prebuilt component. For example, when a program prints output to the command window, it uses the System.out component and its methods, print and println. For input, we might use the JOptionPane class from javax.swing to generate an input dialog; this dialog handling belongs in the program's view component.
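A minimal example of both prebuilt components, assuming a desktop Java environment where Swing dialogs are available:

import javax.swing.JOptionPane;

public class GreetingView {
    public static void main(String[] args) {
        // Input through a dialog box (javax.swing.JOptionPane)...
        String name = JOptionPane.showInputDialog("What is your name?");

        // ...and output to the command window through System.out.
        System.out.println("Hello, " + name + "!");
    }
}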


Finally, in some cases, the programmer will build a customized input-output component that uses frames, text fields, buttons, etc. This makes the View component more complex still. In the next lectures, we design and build applications using this design process.

Software Development Methodology
The design stage is only one of the stages undertaken to build and deliver a software application. A standard development methodology proceeds in three stages:
Requirements: The program's intended user tells us how she wishes to use the program. The user must tell us stories and draw pictures that explain the different ways the program might be used. Each possible usage is called a use-case. (Use-cases are presented in the next lecture.)

Design: The programmer studies the use-cases and applies the knowledge to designing the component structure, data structure, and control structure of the solution, as described above. Once the program is designed, the programmer does a "safety check," explaining how each use-case executes with the design.

Implementation and Testing: The program is written to match the design, and it is tested to verify that it behaves correctly. The testing usually proceeds in two stages: unit testing, where each component is tested by itself as much as possible, and integration testing, where the entire assembled program is tested against the use-cases.
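To illustrate the unit-testing stage, the sketch below exercises one small component by itself with known inputs and expected outputs. The pay-calculation method is an invented stand-in for a real component, and the checks are written in plain Java; an actual project would more likely use a framework such as JUnit.

public class PayCalculatorTest {
    // The unit under test: a small, self-contained piece of business logic.
    static double grossPay(double hours, double rate) {
        return hours <= 40 ? hours * rate
                           : 40 * rate + (hours - 40) * rate * 1.5;
    }

    public static void main(String[] args) {
        // Unit tests: feed the component known inputs and compare against expected outputs.
        check(grossPay(40, 10.0), 400.0);
        check(grossPay(45, 10.0), 475.0);
        System.out.println("All unit tests passed.");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("Expected " + expected + " but got " + actual);
        }
    }
}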

At this point, the program is given to the user, who will almost certainly respond with more requirements that cause the above three-step process to repeat.


AC 3 The design of the program includes program logical flow components.

OSI Protocol
There are three logical components in OSI: the application process, the open system, and the transmission medium. The application process is the process conducted in a terminal or in a computer. The open system is a platform that provides the information processing and communication functions between peer application processes. The transmission medium is a line that transmits information and signals between open systems. An open system provides the functions for interconnecting two or more systems and includes such equipment as a terminal or a workstation and a network in which terminals and computers are interconnected.

In the OSI reference model, seven layer protocols are defined, from a physical-level protocol to an application-level protocol. The lower-level protocols, such as a physical-level protocol and a data link-level protocol, define the functions of the communications hardware. The upper-level protocols, such as an application-level protocol and a presentation-level protocol, define the functions of communication processing. The protocols are a set of communication functions between peer nodes, that is, the interface between them.

The protocols are well defined to ensure the transparency of the interconnection between peer entities and between neighbouring layers. An upper-level protocol issues a request for the communication functions provided by the adjacent lower-level layer. The adjacent lower-level layer provides its functions to the adjacent upper-level layer, although it does not control the adjacent upper-level layer.

Layered structure of the OSI protocol
The OSI reference model is composed of seven layers. The functions of the Nth layer are composed of the entity, service, and protocol of the Nth layer. The Nth entity creates the Nth service by using the (N − 1)th entity. The Nth service is provided to the (N + 1)th entity. The Nth service is divided into connection-type service and connectionless-type service. In the case of the connection-type service, a connection is established between a source node and its destination node before the data transmission begins. After finishing the data transmission, the communication link is disconnected.


A virtual circuit of the packet switching system is an example of connection-type service. On the other hand, in the case of the connectionless-type service, the Nth entity is a functional module for the communication between a source and its destination. The entity has the functions for communication between peer nodes and the functions for communication between the entity and the adjacent upper-level entity or between the entity and the adjacent lower-level entity.

The Nth service provides the communication functions to the (N + 1)th entity. Generally speaking, the Nth entity provides the Nth service to the (N + 1)th entity by using the (N − 1)th service provided by the (N − 1)th entity in cooperation with the peer Nth entity. The access point in which the (N + 1)th entity receives the Nth service is defined as the Nth service access point (SAP). The information exchanged through the Nth SAP is defined as the Nth service primitive.

The Nth connection is a communication channel between the Nth entity and the peer Nth entity. The channel is used for data transmission between the (N + 1)th entity and the peer (N + 1)th entity. The Nth connection is given a specific identifier. The identifier is attached to the transmitting data. Therefore the Nth entity can send the data to the (N + 1)th entity by recognizing the identifier.

The Nth protocol is defined as the protocol by which the Nth entity communicates with the peer Nth entity. In the protocols, there are the protocols for establishment of the connection, information control, and other necessary actions.

The unit of the data block in the Nth layer is defined as the Nth protocol data unit (PDU). As shown in Figure 3.3, the (N + 1)th PDU is manipulated as the Nth service data unit (SDU). Basically, the (N + 1)th PDU becomes the Nth SDU. Depending on the data length, more than one (N + 1)th PDU may be integrated into a single Nth SDU. The Nth PDU is created by attaching the Nth protocol control information (PCI) to the Nth SDU.

Structure of the data unit
Generally speaking, the current layer's control information is attached to the adjacent upper-level layer's PDU (which is the current layer's SDU) to generate the current layer's PDU. Figure 3.3 shows the relationship between the PDU and the SDU.
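To make the relationship concrete, here is a small illustrative Java sketch (not drawn from any real protocol stack) in which each layer attaches its own control information, the PCI, to the SDU handed down from the layer above, producing its PDU:

public class EncapsulationSketch {
    // Build this layer's PDU by attaching its protocol control information (PCI)
    // to the service data unit (SDU) received from the layer above.
    static String buildPdu(String pci, String sdu) {
        return pci + "|" + sdu;
    }

    public static void main(String[] args) {
        String upperPdu = "HELLO";                      // the (N + 1)th PDU
        String nthSdu = upperPdu;                       // handed down, it becomes the Nth SDU
        String nthPdu = buildPdu("N-HDR", nthSdu);      // Nth PDU = Nth PCI + Nth SDU
        String lowerPdu = buildPdu("N-1-HDR", nthPdu);  // the same happens at layer N - 1
        System.out.println(lowerPdu);                   // prints N-1-HDR|N-HDR|HELLO
    }
}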


Define node physical architecture
The functionality for the logical components in the ESS logical architecture is partitioned among the logical nodes and captured in the ESS node logical architecture as described in the previous section. This is accomplished by distributing the logical components to each logical node based on partitioning considerations that are somewhat independent of how the components are implemented. For example, it makes sense for the Entry Sensor logical component to be part of the Site Installation node and not part of the Central Monitoring Station node regardless of what technology is used to realize the Entry Sensor.

The logical components at each node are then allocated to physical components at each node to constitute the ESS node physical architecture.
Figure: Allocation of logical components to hardware components in Site Installation and Central Monitoring Station nodes.
Figure: Allocation of logical components to software components in Site Installation and Central Monitoring Station nodes.

The design constraints that were identified during the system requirements analysis in Section 17.3.3 are imposed on the physical architecture as part of the logical-to-physical allocation. For example, a logical component may be allocated to a particular COTS component that has been imposed as a design constraint. A reference physical architecture may also constrain the solution space with predefined or legacy components such as a set of common services.

As an example, the reference software architecture for the Central Monitoring Station software is a multi-layered software architecture that includes specific types of components associated with each architecture layer—that is, presentation, mission application, infrastructure, and operating system layers.

The logical-to-physical component allocations may also be based on leveraging architectural patterns. The patterns may represent common solutions and their associated technologies. For example, the Event Detection Mgr and System Controller constitute a logical design pattern that can be implemented using a common software design solution.

Alternative physical architectures are often defined by allocating logical components to alternative physical components that are subject to trade-off analysis. As an example, the Entry Sensor includes alternative allocations to an Optical Sensor and a Contact Sensor, and the Contact Sensor was selected as the preferred alternative. This is a key decision, so the rationale for this decision is attached to the allocate relationship and refers to the applicable trade study that resulted in this decision.

Trade studies are performed to select the preferred physical architecture based on selection criteria that optimize the measures of effectiveness and measures of performance. In this example, the ESS probability of intruder detection and probability of false alarm may drive the Site Installation performance requirements, while the number and type of Site Installations that are monitored and emergency response times may drive the Central Monitoring Station performance requirements. Performance requirements must be subject to trade-off with availability, cost, and other critical requirements to arrive at a balanced system solution.

When a logical component is allocated to software, the software component must also be allocated to a corresponding hardware component to execute it. In addition to software allocation, persistent data are allocated to hardware components that store the data, and operational procedures are allocated to operators that execute the procedures. A similar approach that was used to model the ESS node logical architecture can be applied to the ESS node physical architecture.

The ESS Node Physical block is defined as a subclass of the ESS block and decomposed into physical nodes as shown in Figure 17.35. In addition to the Site Installation and Central Monitoring Station nodes, the Communication Network is also a node in the node physical architecture, while it was abstracted away in the node logical architecture. The Site Installation physical node is further specialized into the Site Installation-SFR, Site Installation-MFR, and Site Installation-Business nodes, corresponding to the single-family residences, multi-family residences, and small businesses, as was done for the logical site installation nodes.

Figure: Block definition diagram showing the ESS Node Physical block as a subclass of the ESS block and its decomposition into the Site Installation and Central Monitoring Station physical nodes. The physical components have stereotypes applied to represent the kind of component, such as «hardware» or «software».
Figure: Site Installation physical node block definition diagram showing the hierarchy of physical components.
Figure: Central Monitoring Station physical node block definition diagram showing the hierarchy of physical components.


The activity partitions correspond to the components of the ESS node physical architecture. The activity diagram captures the interaction between the hardware and the Site Software, as well as the operators of the system. The Site Software aggregates all of the software components that were allocated to the Site Processor and is stereotyped as a configuration item. This software executes on the Site Processor, although this is not shown as an activity partition in the activity diagram. The detailed interaction among the software components as described later in this section must preserve the interaction that was specified in the logical architecture and node logical architecture. The other activity partitions correspond to the hardware components and security operator.

Since the ESS Node Physical block is a subclass of the ESS block, it inherits its features— including its ports—from the ESS block. However, the physical ports on the ESS Node Physical block may not share a common type with the ports on the original ESS black box, which may have been defined as logical ports. When dealing with the flow of data, the physical interface is often specified by a communications protocol, and the logical interface represents the information content.

Therefore, these physical ports on the ESS Node Physical block need to replace the logical ports from the original ESS block. This can be accomplished by defining the multiplicity on the original ports as 0..1, such that the ESS Node Physical block does not have to use the original port definitions. It does this by redefining the multiplicity as 0 and then adding its own ports as required. Once this is done, the logical ports from the ESS Node Logical block can be allocated to the physical ports of the ESS Node Physical block. An alternative to replacing the ports is to defer typing the ports on the original ESS black box and type them on the ESS Node Logical and ESS Node Physical blocks.

The item flows are defined as logical item flows in the logical architecture that are allocated to physical item flows in the physical architecture. The item flow definitions have been deferred in this example, pending the detailed interface specifications on the parts. The ESS node physical architecture defines the physical components of the system, including hardware, software, persistent data, and other stored items (e.g., fluid, energy), and operational procedures that are performed by operators. The software components and persistent data stores are nested within the hardware component to which they are allocated. Several software parts have been allocated to the Site Processor.

The allocation of software to hardware is an abstraction of a UML deployment of a software component to a hardware processor. The ESS node physical architecture serves to integrate the hardware and software components and operators of the system. The ESS Node Physical Design package contains nested packages for Structure and Behaviour of the node physical architecture. In addition, the Node Physical Design package also contains packages for the Site Installation and the Central Monitoring Station, which each contain additional nested packages for the hardware, software, persistent data, and operational procedures.

The physical components of the system that are part of the ESS node physical architecture are contained in these nested packages. The following subsections describe the activities to architect and specify the software, data, and hardware architecture. In addition, the subsections describe how to define specialty views of the architecture, such as security, and specify the operational procedures needed to operate the system.

AC 4 The design of the program includes data structures and access method components.

Data structure
In computer science, a data structure is a data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.

Usage
Data structures serve as the basis for abstract data types (ADT). The ADT defines the logical form of the data type. The data structure implements the physical form of the data type. Different types of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services.


Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Data structures can be used to organize the storage and retrieval of information stored in both main memory and secondary memory.

Implementation
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by a pointer—a bit string, representing a memory address, that can be itself stored in memory and manipulated by the program. Thus, the array and record data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself.

The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analysed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).

Examples
There are numerous types of data structures, generally built upon simpler primitive data types:
- An array is a number of elements in a specific order, typically all of the same type (depending on the language, individual elements may either all be forced to be the same type, or may be of almost any type). Elements are accessed using an integer index to specify which element is required. Typical implementations allocate contiguous memory words for the elements of arrays (but this is not always a necessity). Arrays may be fixed-length or resizable.
- A linked list (also just called a list) is a linear collection of data elements of any type, called nodes, where each node has itself a value and points to the next node in the linked list. The principal advantage of a linked list over an array is that values can always be efficiently inserted and removed without relocating the rest of the list. Certain other operations, such as random access to a certain element, are however slower on lists than on arrays.
- A record (also called a tuple or struct) is an aggregate data structure. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members.


- A union is a data structure that specifies which of a number of permitted primitive types may be stored in its instances, e.g. float or long integer. Contrast this with a record, which could be defined to contain both a float and an integer; in a union, there is only one value at a time. Enough space is allocated to contain the widest member data type. A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type, for enhanced type safety.
- An object is a data structure that contains data fields, like a record does, as well as various methods which operate on the data contents. An object is an in-memory instance of a class from a taxonomy. In the context of object-oriented programming, records are known as plain old data structures to distinguish them from objects.
In addition, graphs and binary trees are other commonly used data structures.
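As a brief Java illustration of the first few structures described above (the class and field names are arbitrary):

import java.util.Arrays;

public class BasicStructures {
    // A record/struct: a fixed group of named fields.
    static class Employee {
        String name;
        double hourlyRate;

        Employee(String name, double hourlyRate) {
            this.name = name;
            this.hourlyRate = hourlyRate;
        }
    }

    // A node of a singly linked list: a value plus a pointer to the next node.
    static class Node {
        Employee value;
        Node next;

        Node(Employee value, Node next) {
            this.value = value;
            this.next = next;
        }
    }

    public static void main(String[] args) {
        // An array: elements in a specific order, accessed by an integer index.
        int[] hours = {40, 38, 45};
        System.out.println(Arrays.toString(hours) + ", third element: " + hours[2]);

        // A two-node linked list built by hand.
        Node list = new Node(new Employee("Ann", 20.0),
                             new Node(new Employee("Ben", 22.5), null));
        System.out.println("First on the list: " + list.value.name);
    }
}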

Language support
Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language), lack built-in support for data structures. On the other hand, many high-level programming languages and some higher-level assembly languages, such as MASM, have special syntax or other built-in support for certain data structures, such as records and arrays. For example, the C (a direct descendant of BCPL) and Pascal languages support structs and records, respectively, in addition to vectors (one-dimensional arrays) and multi-dimensional arrays.
Most programming languages feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and the Microsoft .NET Framework. Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++, Java, and Smalltalk, typically use classes for this purpose. Many known data structures have concurrent versions which allow multiple computing threads to access a single concrete instance of a data structure simultaneously.
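For example, two of the most common structures come ready-made in the Java Collections Framework mentioned above:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionsDemo {
    public static void main(String[] args) {
        // A resizable array from the standard library.
        List<String> names = new ArrayList<>();
        names.add("Ann");
        names.add("Ben");

        // A hash table from the standard library, mapping names to hourly rates.
        Map<String, Double> rates = new HashMap<>();
        rates.put("Ann", 20.0);
        rates.put("Ben", 22.5);

        for (String name : names) {
            System.out.println(name + " earns " + rates.get(name) + " per hour");
        }
    }
}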


Create a computer program that implements the design
Time: 180 minutes
Activity: Self and Group

AC 1 The creation includes coding from design documents, to meet the design specifications

How to design a computer program
This tutorial assumes you're designing a standalone computer program that runs with a conventional GUI or command-line interface, but many of the techniques also apply to programs that will become part of a bigger system. A standalone program is one that is justified all by itself, like a word processor or a game, but even if it were a cog in a bigger system it would still have the same qualities: it would focus on one job, and it would take some kind of input that the system produces and transform it into some kind of output that the user or system consumes. Some examples of good standalone programs are:
- Address book
- Spreadsheet
- Calculator
- Trip planner
- Picture editor

Whereas a program designed to be part of a larger system can be something like:
- A program that imports purchase orders and saves them to a database
- A program that prints packing slips for orders stored in a database
- A web browser
- An email client
- An MP3 player

The first set of programs are all useful on their own, but the second set all need something else to complete them such as a web server or email server. Even the MP3 player needs a program that can create new MP3 files for it to play. The impact to you is that the second set of programs have to cooperate with some kind of protocol or file format that you didn't design yourself, and so they become part of your spec. In this section we are going to ignore that aspect so we can concentrate on the basics.


What is the difference between designing and coding?
There is no difference between designing a program on paper and coding it, but code tends to be harder to understand and change. So the goal of the paper planning phase is to invent a pattern that helps you understand the code and isolate it from changes made in other parts of the program. In fact, most of your time will be spent creating ways to isolate code from changes made to other code, and most of this tutorial will be about how you can do this one thing. A popular metaphor for design versus coding is the architect who gives blueprints to the builder, and the builder represents the programmer. But it's an imperfect metaphor because it forgets the compiler, which does the same thing as the builder in the architect metaphor. If we revised the metaphor then the architect would have to go through three phases to design a bridge: come up with a way to organize the blueprints, then come up with a way to prevent changes in the design of the bolts or girders from forcing a redesign of the entire bridge, and then design the bridge.

In real-life construction projects the builders also co-design the bridge along with the architect, because they discover problems that the architect missed, even offering solutions for them based on their expertise as builders. Not just problems with the measurements and materials either, but problems with the landscape, weather and budget. With programming the compiler will be a co-designer because it'll also tell you about problems you missed. Not just problems with the syntax, but problems with the way you use types. Many modern compilers and development environments will go as far as to suggest ways to fix those problems based on the expertise of the programmers who made them.

What makes a language "expressive"?
The more work you can imply with a single symbol or keyword, and the more open a symbol is to modification by other symbols, the more expressive the language is. Take the loop as an example. First we had to define loops by initializing a counter, establishing a label for the beginning of the loop, and then incrementing the counter, testing a condition, and using GOTO to jump back to the beginning of the loop:

10 LET i = 0
20 PRINT i
30 i = i + 1
40 IF i < 10 GOTO 20


Then languages got to be a little bit more expressive, and it took less work to say the same thing:

for (int i = 0; i < 10; i++) print(i);

The new language added symbols and syntax that encapsulated more meaning, not just making it easier to see that it was a loop; even the syntax for incrementing the counter got shorter. These were symbols that implied a lot of meaning, but there's also the ability to modify the meaning of symbols:

Tomorrow = Now + "24 hours"

Not only was the plus operator overloaded to perform the appropriate arithmetic on dates, but implicit conversion was invoked to convert the string literal into a timespan value. This is why the choice of language has a big impact on how much design work you'll do on paper before you begin coding.

The more type checking and static analysis the compiler does and the more expressive the language is, the easier it is to organize and isolate code. Imagine placing languages on a graph according to how much planning they demand before coding: assembly languages require the most planning in advance of coding, while DSLs (Domain Specific Languages) can be so specialized for their task that you can jump into coding almost right away.

The gist is that every language aims to sit somewhere between intense planning and carefree coding. And even though it seems like it would be nice to operate at the carefree end all the time, it isn't necessarily an advantage; one of the trade-offs of expressiveness in any language--human or computer--is applicability. Every abstraction comes at a price that makes the language less appropriate for some other task.

Even if your problem lives at the bull’s eye of a language's target there will still never be a language so expressive that you'll never need any kind of paper-planning. There will only ever be languages that make it safer to move into coding earlier.


How to think about the design of a program
All programs transform input into output, even if that input was hard-coded into the source at design time. It's obvious enough to be forgotten, and programmers can get so lost in the fuss of implementing features that they forget what the real purpose of the program was. Proper design can help you and your team mates avoid losing sight of the real goal.

The code that does the conversion is called the business logic. The business logic will act on data structures called the model. Your program will need to convert (or "deserialize") its input into the model, transform it, and then serialize the model into output. The model will be whatever makes it easiest to perform the transformation, and it does not need to resemble the way the user thinks of the data. For example, a calculator converts decimal numbers into binary, performs the arithmetic in binary, and then converts the binary back into decimal. The binary representation of the decimal numbers is the calculator's model.

Everything else is called overhead: the code that implements the user interface, sanitizes the input, creates connections to services, allocates and disposes of memory, implements higher-level data structures like trees and lists, and so on. One of your goals will be to separate the overhead from the business logic so that you can make changes to one without having to modify the other.

Some mistakes to avoid at this stage:
- Don't create a class or module called "BusinessLogic" or similar. It shouldn't be a black box.
- Don't put business logic in a library.[1] You'll want the ability to change it more often than the overhead, and libraries tend to get linked to other programs, multiplying the consequences of changing it.
- Don't put business logic or overhead into the model unless it's only enforcing constraints. The code that defines the model should expose some interfaces that let you attach what you need. More on that later.

One of the things that you can include in the model is code that enforces meaningful relationships. So if your genealogy program has a "Person" class with an "Offspring" collection, then you should include code in the model that prevents a person from being their own grandparent. But the function to discover cousin relationships in a family tree is business logic and doesn't belong in the model.
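A rough Java sketch of that kind of model-level rule; the class and method names are invented for illustration:

import java.util.ArrayList;
import java.util.List;

// Model class: enforces a meaningful relationship (no one can be their own ancestor),
// but deliberately contains no business logic such as cousin discovery.
class Person {
    final String name;
    private final List<Person> offspring = new ArrayList<>();

    Person(String name) {
        this.name = name;
    }

    void addOffspring(Person child) {
        if (child == this || child.isAncestorOf(this)) {
            throw new IllegalArgumentException(name + " cannot be their own ancestor");
        }
        offspring.add(child);
    }

    // True if this person appears above 'other' in the family tree.
    private boolean isAncestorOf(Person other) {
        for (Person child : offspring) {
            if (child == other || child.isAncestorOf(other)) {
                return true;
            }
        }
        return false;
    }
}

Note that the class only enforces the relationship; anything like discovering cousins would live in the business logic, outside the model.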

Although it might make sense that you can't be your own cousin, and that this rule could be enforced in the model, you also need to draw a line between the enforcement that is necessary to prevent common bugs and the enforcement that will create complex bugs through mazes of logic that are difficult to follow.

[1] "Anything is appropriate as long as you know what you're doing." One man's business logic is another man's overhead, and your project might be packaged as a library to be consumed by a bigger system. This rule is meant to apply to the point you're at in the food chain.

Top-Down versus Bottom-Up
Top-down programming is about writing the business logic first in pseudocode or non-compiling code and then working down to the specifics, and it's the first program design technique taught in schools. Let's say that you're designing a coin-counting program, and you write what you want the perfect main() to look like:

void main() {
    CoinHopper hopper = new CoinHopper(Config.CoinHopperHardwarePort);
    decimal deposit = 0;

    while (hopper.count > 0) {
        CoinType coin = hopper.next();
        deposit += coin.value;
    }

    decimal processingFee = deposit * 0.08;
    Output.Write("Total deposit: {0}, processing fee: {1}, net deposit: {2}",
                 deposit, processingFee, deposit - processingFee);
}

The program doesn't compile yet, but you've now defined the essence of the program in one place. With top-down programming you start like this and then you focus on defining what's going on in the CoinHopper class and what abstraction is going on in the Output class.


Bottom-up programming is the natural opposite, where the programmer intuits that he'll need a way to control the coin counting hardware and begins writing the code for that before writing the code for main(). One of the reasons for doing it this way is to discover non-obvious requirements that have a major impact on the high-level design of the program.

Let's say that when we dig into the nitty-gritty of the API for the hardware, we discover that it doesn't tell us what type of coin is there; it just gives us sensory readings like the diameter and weight. Now we need to change the design of the program to include a module that identifies coins by their measurements.
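A rough Java sketch of such a coin-identification module; the reference measurements and the tolerance are illustrative values, not real hardware specifications:

public class CoinIdentifier {
    enum CoinType { NICKEL, DIME, QUARTER, UNKNOWN }

    // Identify a coin from raw sensor readings (diameter in mm, weight in grams).
    static CoinType identify(double diameterMm, double weightGrams) {
        if (close(diameterMm, 21.2) && close(weightGrams, 5.0)) return CoinType.NICKEL;
        if (close(diameterMm, 17.9) && close(weightGrams, 2.3)) return CoinType.DIME;
        if (close(diameterMm, 24.3) && close(weightGrams, 5.7)) return CoinType.QUARTER;
        return CoinType.UNKNOWN;
    }

    private static boolean close(double actual, double expected) {
        return Math.abs(actual - expected) <= 0.2; // crude tolerance for sensor noise
    }

    public static void main(String[] args) {
        System.out.println(identify(24.2, 5.65)); // prints QUARTER
    }
}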

What usually happens in the real world is that all of the top-down design is done on paper and the actual coding goes bottom-up. Then, when the coding effort falsifies an assumption made on paper, the high-level design is easier to change.

"A further disadvantage of the top-down method is that, if an understanding of a fault is obtained, a simple fix, such as a new shape for the turbine housing, may be impossible to implement without a redesign of the entire engine." - from Richard Feynman's report of the Space Shuttle Challenger Disaster

High-level design patterns
"Design patterns" might have come into the programmer's lexicon around the time a book by the same name was published, but the concept has existed since the dawn of man. Given a problem, there are certain traditional ways of solving it. Having windows on two adjacent walls in a room is a design pattern for houses that ensures adequate light at any time of day. Or putting the courthouse on one side of a square park and shops around the other sides is another kind of design pattern that focuses a town's community.

The culture of computer programming has developed hundreds of its own design patterns: high-level patterns that affect the architecture of the whole program--forcing you to choose them in advance--and low-level patterns that can be chosen later in development. High-level patterns are baked into certain types of frameworks. The Model-View-Controller (MVC) pattern is ubiquitous for GUIs and web applications and nearly impossible to escape in frameworks like Cocoa, Ruby-on-Rails, and .Net, but that doesn't mean you have to use it for every program; some programs don't have to support random user interaction and wouldn't benefit from MVC.


MVC is well described elsewhere, so here are some of the other high-level patterns and what they're good for.

The 1-2-3
This is the first pattern you learned in programming class: the program starts, asks the user for input, does something with it, prints it out and halts. One-two-three. "Hello world" is a 1-2-3. It's the simplest pattern and still useful and appropriate in many situations. It's good for utilities, housekeeping tasks and one-off programs.

One of the biggest mistakes made by programmers is to get too ambitious. They start designing an MVC program and take weeks or months to code and debug it, but then it dwarfs the magnitude of the problem. If someone wants a program that doesn't need to take its inputs randomly and interactively, and that produces a straightforward output, then it's a candidate for 1-2-3.

The Read-Eval-Print Loop (REPL)
The REPL is the 1-2-3 kicked up a notch. Here the program doesn't halt after printing its output; it just goes back and asks for more input. Command-line interfaces are REPLs, and most interpreted programming languages come with a REPL wrapped around the interpreter. But if you give the REPL the ability to maintain state between each command then you have a powerful base to build simple software that can do complex things. You don't even need to implement a full programming language; just a simple "KEYWORD {PARAMETER}" syntax can still be effective for a lot of applications.

The design of a REPL program should keep the session's state separate from the code that interprets commands, and the interpreter should be agnostic to the fact that it's embedded in a REPL so that you can reuse it in other patterns. If you're using OOP then you should also create an interface to abstract the output classes with, so that you aren't writing to STDOUT directly but to something like myDevice.Write(results). This is so you can easily adapt the program to work on terminals, GUIs, web interfaces and so on.
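Here is a bare-bones Java sketch of a REPL that keeps a little session state between commands. The SET, GET, and QUIT keywords are invented for illustration, and for brevity the sketch does not show the interpreter/loop separation recommended above.

import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class MiniRepl {
    public static void main(String[] args) {
        Map<String, String> session = new HashMap<>(); // state kept between commands
        Scanner in = new Scanner(System.in);

        while (true) {                                  // the read-eval-print loop
            System.out.print("> ");
            if (!in.hasNextLine()) break;
            String[] parts = in.nextLine().trim().split("\\s+", 3);
            String keyword = parts[0].toUpperCase();

            if (keyword.equals("QUIT")) {
                break;
            } else if (keyword.equals("SET") && parts.length == 3) {
                session.put(parts[1], parts[2]);        // SET name value
                System.out.println("ok");
            } else if (keyword.equals("GET") && parts.length == 2) {
                System.out.println(session.getOrDefault(parts[1], "(unset)"));
            } else {
                System.out.println("unknown command");
            }
        }
    }
}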

The Pipeline
Input is transformed one stage at a time in a pipeline that runs continuously. It's good for problems that transform large amounts of data with little or no user interaction, such as consolidating records from multiple sources into a uniform store or producing filtered lists according to frequently changing rules. A news aggregator that reads multiple feed standards (like RSS and Atom), filters out dupes and previously read articles, categorizes and then distributes the articles into folders is a good candidate for the Pipeline pattern, especially if the user is expected to reconfigure or change the order of the stages.

You write your program as a series of classes or modules that share the same model and have a loop at their core. The business logic goes inside the loop and treats each chunk of data atomically--that is, independent of any other chunk. If you're writing your program on Unix then you can take advantage of the pipelining system already present and write your program as a single module that reads from stdin and writes to stdout.

Some language features that are useful for the Pipeline pattern are coroutines (created with the "yield return" statement) and immutable data types that are safe to stream.
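One rough way to sketch the Pipeline pattern in Java is a list of stages, each transforming a chunk independently and passing it along; the stages here are trivial placeholders rather than real feed-processing logic.

import java.util.List;
import java.util.function.UnaryOperator;

public class PipelineSketch {
    public static void main(String[] args) {
        // Each stage transforms one chunk of data independently of the others.
        List<UnaryOperator<String>> stages = List.of(
                String::trim,               // stage 1: clean up
                String::toLowerCase,        // stage 2: normalize
                s -> "[article] " + s       // stage 3: categorize (placeholder)
        );

        List<String> chunks = List.of("  Breaking News  ", "  Weather Update ");
        for (String chunk : chunks) {       // the loop at the pipeline's core
            String result = chunk;
            for (UnaryOperator<String> stage : stages) {
                result = stage.apply(result);
            }
            System.out.println(result);
        }
    }
}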

The Workflow
This is similar to the Pipeline pattern but performs each stage through to completion before moving on to the next. It's good for when the underlying data type doesn't support streaming, when the chunks of data can't be processed independently of the others, when a high degree of user interaction is expected on each stage, or if the flow of data ever needs to go in reverse. The Workflow pattern is ideal for things like processing purchase orders, responding to trouble tickets, scheduling events, "Wizards", filing taxes, etc.

You'd wrap your model in a "Session" that gets passed from module to module. A useful language feature to have is an easy way to serialize and deserialize the contents of the session to and from a file on disk, such as an XML file.

The Batch Job
The user prepares the steps and parameters for a task and submits it. The task runs asynchronously and the user receives a notification when the task is done. It's good for tasks that take more than a few seconds to run, and especially good for jobs that run forever until cancelled. Google Alerts is a good example; it's like a search query that never stops searching and sends you an email whenever their news spider discovers something that matches your query.

Yet even something as humble as printing a letter is also a batch job because the user will want to work on another document while your program is working on queuing it up. Your model is a representation of the user's command and any parameters, how it should respond when the task has produced results, and what the results are. You need to encapsulate all of it into a self-contained data structure (like "PrintJobCriteria" and "PrintJobResults" classes that share nothing with any other object) and make sure your business logic has thread-safe access to any resources so it can run safely in the background.

When returning results to the user you must make sure you avoid any thread entanglement issues. Framework classes like BackgroundWorker are ideal for marshalling the results back onto the GUI thread.
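
As a hedged sketch, here is one way that could look with BackgroundWorker, assuming the PrintJobCriteria and PrintJobResults classes mentioned above are plain, self-contained data holders (their members here are my own assumptions).

using System.ComponentModel;

// The job's input and output are self-contained data holders, as described above.
public class PrintJobCriteria { public string DocumentPath { get; set; } }
public class PrintJobResults  { public bool Succeeded { get; set; } }

public class PrintJobRunner
{
    private readonly BackgroundWorker _worker = new BackgroundWorker();

    public PrintJobRunner()
    {
        _worker.DoWork += (sender, e) =>
        {
            var criteria = (PrintJobCriteria)e.Argument;        // runs on a background thread
            // ... queue the document referenced by criteria.DocumentPath ...
            e.Result = new PrintJobResults { Succeeded = true };
        };
        _worker.RunWorkerCompleted += (sender, e) =>
        {
            var results = (PrintJobResults)e.Result;            // raised back on the GUI thread in a WinForms/WPF app
            // update the UI here, e.g. show a "job finished" notification
        };
    }

    public void Submit(PrintJobCriteria criteria) => _worker.RunWorkerAsync(criteria);
}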

Combining architectural patterns You can combine high-level patterns into the same program, but you will need to have a very good idea of what the user's problem is because one model will always be more dominant than the others. If the wrong model is dominant then it'll spoil the user's interaction with the program.

For example, it wouldn't be a good idea to make the Pipeline dominant and implement 1-2-3 in one of its modules because you'll force the user to interact excessively for each chunk of data that goes through the pipeline. This is exactly the problem with the way file management is done in Windows: you start a pipeline by copying a folder full of files, but an intermediate module wants user confirmation for each file that has a conflict. It's better to make 1-2-3 dominant and let the user define in advance what should happen to exceptions in the pipeline.

User Interface patterns A complete discussion of UI patterns would be beyond the scope of this tutorial, but there are two meta-patterns that are common between all of them: modal and modeless. Modal means the user is expected to do one thing at a time, while modeless means the user can do anything at any time. A word processor is modeless, but a "Wizard" is modal. Regardless of what the underlying architectural pattern is, the UI needs to pick one of these patterns and stick to it.


Some programs can mix the two effectively if the task lends itself to it; tax-filing software, for example, lets you jump to random points in the data entry stage but doesn't let you move on to proofing and filing until the previous stage is complete. Almost all major desktop software is modeless and users tend to expect it. To support this kind of UI there are two mechanisms supported by most toolchains: event-driven code and data binding.

Event driven Sometimes called "call-backs", but when they're called "Events" it's usually because they've been wrapped in a better abstraction. You attach a piece of code to a GUI control, and when the user does something to the control (click it, focus it, type in it, mouse-over it, etc) your code is invoked by the control. The call-back code is called a handler and is passed a reference to the control and some details about the event.
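
A minimal sketch of that wiring, assuming a WinForms-style toolkit; the form, button and handler names are illustrative.

using System;
using System.Windows.Forms;

// The handler is attached to the control; the control calls it back when the user clicks.
public class MainForm : Form
{
    private readonly Button _saveButton = new Button { Text = "Save" };

    public MainForm()
    {
        Controls.Add(_saveButton);
        _saveButton.Click += SaveButton_Click;      // attach the handler
    }

    // The handler receives a reference to the control and details about the event.
    private void SaveButton_Click(object sender, EventArgs e)
    {
        MessageBox.Show("Saving...");               // the GUI is spoon-fed whatever it needs
    }
}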

Event-driven designs require a lot of code to examine the circumstances of the event and update the UI. The GUI is dumb, knows nothing about the data, needs to be spoon-fed the data, and passes the buck to your code whenever the user does something. This can be desirable when the data for the model is expensive to fetch, like when it's across the network or there's more than can fit in RAM.

Events are an important concept in business logic, overhead and models, too. The "PropertyChanged" event is very powerful when added to model classes, for example, because it fits so well with the next major UI mechanism below:

Data Binding Instead of attaching code to a control you attach the model to the control and let a smart GUI read and manipulate the data directly. You can also attach PropertyChanged and CollectionChanged events to the model so that changes to the underlying data are reflected in the GUI automatically, and manipulation on the model performed by the GUI can trigger business logic to run. This technique is the most desirable if your model's data can be retrieved quickly or fit entirely in RAM.
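
A minimal sketch of a bindable model in C#; the Customer and AddressBookModel names are illustrative, and the PropertyChanged/CollectionChanged plumbing is what lets a binding-aware GUI stay in sync without extra handler code.

using System.Collections.ObjectModel;
using System.ComponentModel;

// The model raises PropertyChanged; a binding-aware GUI listens and updates itself.
public class Customer : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get => _name;
        set
        {
            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

public class AddressBookModel
{
    // ObservableCollection raises CollectionChanged, so bound list controls refresh on their own.
    public ObservableCollection<Customer> Customers { get; } = new ObservableCollection<Customer>();
}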

Data binding is complemented by the conventional event-driven style: say your address-book controls are bound directly to the underlying Address object, but the "Save" button invokes a classic event handler that actually persists the data.

Data binding is extremely powerful and has been taken to new levels by technologies like Windows Presentation Foundation (WPF). A substantial portion of your program's behaviour can be declared in data bindings alone. For example: if you have a list-box that is bound to a collection of Customer objects then you can bind the detail view directly to the list-box's SelectedItem property and make the detail view change whenever the selection in the list-box does, no other code required. A perfectly usable viewer program can be built by doing nothing more than filling the model from the database and then passing the whole show over to the GUI. Or like MVC without the C.

What it'll mean for the design of your program is greater emphasis on the model. You'll either design a model that fits the organization of the GUI, or build adaptors that transform your model into what's easiest for the GUI to consume. The style is part of a broader concept known as Reactive Programming.

Designing models Remember that your data model should fit the solution, not the problem. Don't look at the definition of the problem and start creating classes for every noun that you see, nor should you necessarily create classes that model the input or output--those may be easily convertible without mapping them to anything more sophisticated than strings.

Some of the best candidates for modelling first are actions and imaginary things, like transactions, requests, tasks and commands. A program that counts coins probably doesn't need an abstract Coin class with Penny, Nickel, Dime and Quarter dutifully subclassing it, but it should have a Session class with properties to record the start time, end time, total coin count and total value. It might also contain a Dictionary property to keep separate counts for each type of coin, and the data type representing the coin type may just end up being an Enum or string if that's all it really needs.
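
As a sketch, such a model might look like this in C#; the CoinType and CountingSession names are illustrative only.

using System;
using System.Collections.Generic;

// The *session* is the thing worth modelling; the coin type is just an enum.
public enum CoinType { Penny, Nickel, Dime, Quarter }

public class CountingSession
{
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public int TotalCoinCount { get; set; }
    public decimal TotalValue { get; set; }

    // Separate counts per coin type, with no Coin class hierarchy in sight.
    public Dictionary<CoinType, int> CountsByType { get; } = new Dictionary<CoinType, int>();
}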

Once you've modelled the actions you can step forward with your business logic a little until you begin to see which nouns should be modelled with classes. For example, if you don't need to know anything about a customer except their name, then don't create a Customer class, just store their name as a string property of an Order or Transaction class [2].


2 - There can be a benefit to creating "wrappers" for scalar values for the sake of strong-typing them. In this case Customer would just store a name, but the fact that it's an instance of Customer means the value can't be accidentally assigned to a PhoneNumber property. But I wouldn't use this technique unless your language supports Generics (more below).
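
A short sketch of that footnote's idea; the Order and PhoneNumber types are my own additions for illustration.

// Customer wraps nothing but a name, yet the compiler now stops it being used as a phone number.
public sealed class Customer
{
    public string Name { get; }
    public Customer(string name) => Name = name;
}

public sealed class PhoneNumber
{
    public string Digits { get; }
    public PhoneNumber(string digits) => Digits = digits;
}

public class Order
{
    public Customer Customer { get; set; }
    public PhoneNumber ContactNumber { get; set; }
    // order.ContactNumber = order.Customer;   // would not compile, which is the whole point
}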

Organizing business logic Earlier I said that it's bad form to create a class called BusinessLogic and cram it all in there because it makes it look like a black box. Let's say our coin-counter's business logic can be expressed as "Count all US-issue coins, discard foreign coinage and subtract an 8% processing fee." Our design has evolved since our first attempt and now we have two classes: Tabulator and CoinIdentifier.

CoinIdentifier has a public static method that takes input from the coin sensors in the machinery and spits out a value that identifies what kind of coin it is. It's probably going to need changes once a year or less as new coins are issued, so we have a good reason to isolate it from the rest of the code.

Tabulator handles a CoinInserted event raised by the hardware and passes the sensor data--contained within the event's arguments--to CoinIdentifier. Tabulator keeps track of the counts for all coin types regardless of whether they're US-issue or not. The essence of this code is the least likely to change, so we make sure it doesn't have to know how to identify coins or judge what ought to be done with them.

Tabulator raises a CoinCounted event instead, and we put its handler in the main module of the program to decide whether to keep the coin or eject it into the returns tray. The code which is the most likely to change frequently--like which coins to reject and how much to keep as a processing fee--stays in the main module, with constants like the processing fee read from a configuration file.
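
A hedged sketch of that arrangement in C#; the event-argument and method signatures are assumptions, since the text doesn't specify them.

using System;
using System.Collections.Generic;

public class CoinInsertedEventArgs : EventArgs { public byte[] SensorData { get; set; } }
public class CoinCountedEventArgs  : EventArgs { public string CoinType { get; set; } }

// Isolated because it changes when new coins are issued.
public static class CoinIdentifier
{
    public static string Identify(byte[] sensorData) => "Quarter";   // placeholder logic
}

// Counts every coin, US-issue or not, and raises CoinCounted; it never judges what to keep.
public class Tabulator
{
    private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();

    public event EventHandler<CoinCountedEventArgs> CoinCounted;

    // Handler for the hardware's CoinInserted event.
    public void OnCoinInserted(object sender, CoinInsertedEventArgs e)
    {
        string coin = CoinIdentifier.Identify(e.SensorData);
        _counts[coin] = _counts.TryGetValue(coin, out var n) ? n + 1 : 1;
        CoinCounted?.Invoke(this, new CoinCountedEventArgs { CoinType = coin });
    }
}

// The main module handles CoinCounted and decides whether to keep or eject the coin,
// reading the processing fee and the reject rules from configuration.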

Organizing business logic means grouping it by purpose, avoiding bundles of unrelated functions, and creating Interfaces that let us connect the modules while insulating them from changes in each other. Another benefit is that well isolated code is easier to test in isolation, and that makes it safer to change other code without forcing a re-test of everything that uses it.

Keeping models, overhead and business logic separate


The last topic I'll discuss in this tutorial is also the most important, because it gives you more flexibility to change the higher-level design. The future maintainability of your code depends on it, but in large projects even the initial coding will succeed or fail on this principle.

This is what most of the book Design Patterns is about. If you buy it then be aware that some of its patterns may be obsolete in your language because they've been absorbed into language features. The "Iterator" pattern, for example, is now hidden in a language feature called list comprehensions. The pattern is still there, but now it's automatic and the language just became more expressive.

New patterns are being invented all the time, but these are the basic principles they all strive for:
Overhead code needs to be as program-agnostic as possible
Assume that every line of business logic and every property of the model will change, and that the overhead code should expect it. You can do this by pushing that overhead code into its own classes and using Interfaces on models and business logic to define the very least of what the overhead needs to know about an object. You can also now find languages like Java, C# and Visual Basic that support Generics, which extend the type-checking power of the compiler (and by extension, the IDE). Here's how to make them work for you:

Use Interfaces to abstract an object's capabilities
If your overhead code needs to do something to an object then put an Interface on that object's class that exposes whatever is needed. For example, your overhead code has to route a message based on its destination, so the message class would implement an interface you'd call IDeliverable that defines a Destination property.
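
For example (a sketch; the EmailMessage class and the Router helper are illustrative):

// The overhead code only ever sees IDeliverable, never the concrete message class.
public interface IDeliverable
{
    string Destination { get; }
}

public class EmailMessage : IDeliverable
{
    public string Destination { get; set; }
    public string Body { get; set; }
}

public static class Router
{
    // Routes anything deliverable without knowing or caring that it is an email.
    public static string QueueNameFor(IDeliverable item) => "queue-" + item.Destination;
}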

Use Generics to abstract an algorithm's capabilities
When an algorithm doesn't care what kind of object it's manipulating, it should use generics to make the algorithm transparent to the type-checking power of the compiler.
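
A small sketch of a generic algorithm; the InBatches helper is invented for the example.

using System.Collections.Generic;

// The algorithm never inspects T, but the compiler still type-checks every caller.
public static class Batching
{
    public static IEnumerable<List<T>> InBatches<T>(IList<T> items, int size)
    {
        for (int i = 0; i < items.Count; i += size)
        {
            var batch = new List<T>();
            for (int j = i; j < items.Count && j < i + size; j++)
                batch.Add(items[j]);
            yield return batch;
        }
    }
}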

Defer specifics until runtime
Near the beginning I said that every program transforms input into output, and that input will include more than just what the user types or clicks after the program starts. It's also going to include configuration files and, for the sake of clarity, you can even consider your program's main() module to be like a compiled and linked configuration file; its job is to assemble the abstracted pieces at the last minute and give them the kiss of life.


You can defer specifics with a design principle called Inversion of Control (IoC). The simplest expression is when you pass objects that know their own specifics into methods that don't, such as when a Sort() algorithm takes collections of objects that all implement IComparable. Sort() doesn't need to know how to compare two objects; it just churns through them calling object1.CompareTo(object2). The code for Sort() concerns only what Sort() does, not what's peculiar about the things it's sorting.
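
A minimal sketch of that IComparable arrangement in C#; the Invoice class is illustrative.

using System;
using System.Collections.Generic;

// Sort() carries no invoice-specific code; the invoice knows how to compare itself.
public class Invoice : IComparable<Invoice>
{
    public decimal Amount { get; set; }

    public int CompareTo(Invoice other) => Amount.CompareTo(other.Amount);
}

public static class Demo
{
    public static void Main()
    {
        var invoices = new List<Invoice>
        {
            new Invoice { Amount = 250m },
            new Invoice { Amount = 99m }
        };

        invoices.Sort();                          // calls Invoice.CompareTo internally
        Console.WriteLine(invoices[0].Amount);    // 99
    }
}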

Blocking events Events can be used to invert control and pass the buck up the chain, too, and when the child code waits for the event to be handled--rather than continuing on asynchronously--we say it's a blocking event. Event names that begin with Resolve often want the parent code to look for something it needs. An XML parser, for example, might use a ResolveNamespace event rather than force the programmer to pass all possible namespaces or a namespace resolving object at the beginning. The handler has the job of finding the resource and it passes it back by setting a property in the event's arguments.
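
A hedged sketch of a blocking Resolve-style event; the MiniParser and its members are invented for illustration and are not a real XML parser API.

using System;

public class ResolveNamespaceEventArgs : EventArgs
{
    public string Prefix { get; set; }
    public string ResolvedNamespace { get; set; }   // the handler fills this in
}

public class MiniParser
{
    public event EventHandler<ResolveNamespaceEventArgs> ResolveNamespace;

    // Raising the event is a normal synchronous call, so the parser "blocks" until
    // every handler has returned, then reads whatever the handler supplied.
    public string LookUp(string prefix)
    {
        var args = new ResolveNamespaceEventArgs { Prefix = prefix };
        ResolveNamespace?.Invoke(this, args);
        return args.ResolvedNamespace;
    }
}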

Dependency Injection A more complex expression of IoC is called Dependency Injection (DI). This is when you identify a need--such as writing to different kinds of databases or output devices--and pass implementations of them into an object that uses them. Our Tabulator class from earlier, for example, could be passed an instance of the exact breed of CoinIdentifier in its constructor. We have one that works on US coins and another that works on Canadian coins, and a configuration file tells main() which one to create.
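
A minimal sketch of constructor injection along those lines; the ICoinIdentifier interface and the two implementations are illustrative assumptions.

// The identifier is critical to Tabulator's purpose, so it arrives through the constructor.
public interface ICoinIdentifier
{
    string Identify(byte[] sensorData);
}

public class UsCoinIdentifier : ICoinIdentifier
{
    public string Identify(byte[] sensorData) => "Quarter";   // placeholder
}

public class CanadianCoinIdentifier : ICoinIdentifier
{
    public string Identify(byte[] sensorData) => "Loonie";    // placeholder
}

public class Tabulator
{
    private readonly ICoinIdentifier _identifier;

    public Tabulator(ICoinIdentifier identifier) => _identifier = identifier;

    public string Identify(byte[] sensorData) => _identifier.Identify(sensorData);
}

// main() reads the configuration file and builds either
// new Tabulator(new UsCoinIdentifier()) or new Tabulator(new CanadianCoinIdentifier()).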

The second method of DI is to call a static method from within the class that needs the resource. This is popular for logging frameworks: say your class has a member called _logger and in your constructor you set it by calling the static method LogManager.GetLogger(). GetLogger() in turn implements the Factory design pattern to pick and instantiate the appropriate kind of logger at runtime. With a tweak to a config file you can go from writing to text files to sending log events over the network or storing them in a database.
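
To show the shape of that style without leaning on any specific logging library, here is a deliberately stripped-down, hypothetical LogManager; real frameworks differ in detail but follow the same factory idea.

using System;

public interface ILogger { void Info(string message); }

public class ConsoleLogger : ILogger { public void Info(string message) => Console.WriteLine(message); }
public class NullLogger    : ILogger { public void Info(string message) { } }

// A toy factory: in a real framework this decision would come from a configuration file.
public static class LogManager
{
    public static ILogger GetLogger(Type owner) =>
        Environment.GetEnvironmentVariable("LOGGING") == "on"
            ? (ILogger)new ConsoleLogger()
            : new NullLogger();
}

public class InvoiceService
{
    // The side-concern is hidden behind a static call, keeping the business logic readable.
    private readonly ILogger _logger = LogManager.GetLogger(typeof(InvoiceService));

    public void Process() => _logger.Info("Processing invoice...");
}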


To choose between the two I like to consider how critical the dependency is to the purpose of the module. When the dependency is critical I pass it as a parameter to the constructor so that it's clear and explicit what's going on in the code. When it's for a side-concern like logging I hide it behind calls to static methods because I want to avoid ruining the readability of the business logic.

A third method is to use a dependency injection framework like Ninject or Unity. These support more sophisticated methods for both explicit and implicit injection of the dependency, but they're only worth it if your design uses DI extensively. Manual DI has a sweet spot of around 1-to-3 injections per object, while frameworks start to pay off at about 4 or more. One of the problems they bring is more configuration and prep time before they're usable, and overdosing on them can hide too much.

Parting shots
Don't O.D. on third-party structural libraries There's you, the language's vendor, and Bob who has this wicked awesome Dependency Injection framework and Monad library. One... maybe two of Bob's frameworks are okay, but don't overdose on them or even you won't know how your program works

Brevity is the soul of an Interface When you design interfaces for your classes to abstract their capabilities, remember to keep them short. It's normal for an Interface to specify only one or two methods and for classes to implement 5 or 6 Interfaces

Explain your program through its structure See if you can group your functions into classes and name those classes so that when you look at the project tree you can sense what the program does before you even look at the code

Embrace messiness... and refactoring During development (and deadlines) it's normal to smudge overhead code and business logic into places they don't belong before you can figure out what patterns will separate them again nicely. It's okay to be messy, but it's important to clean up afterwards with some refactoring


AC 2 Names created in the program describe the purpose of the items named
Code names can be about secrecy, but when it comes to software development, it's usually not so much about secrecy as it is about the convenience of having a name for a specific version of the software. It can be very practical to have a unique identifier for a project to get everyone on the same page and avoid confusion.

And we want to name our darlings, don’t we? So what kind of code names are developers out there coming up with? Here is a collection of code names for software products from companies like Google, Microsoft, Apple, Canonical, Red Hat, Adobe, Mozilla, Automattic and more. We tried to give some background information wherever possible. You’ll notice that some code name schemes are definitely more out there than others.

AC 3 The creation includes conformance with the design documentation, and differences are documented with reasons for deviations
Specification (technical standard)
A specification often refers to a set of documented requirements to be satisfied by a material, design, product, or service. A specification is often a type of technical standard. There are different types of technical or engineering specifications (specs), and the term is used differently in different technical contexts. They often refer to particular documents, and/or particular information within them. The word specification is broadly defined as "to state explicitly or in detail" or "to be specific".

A requirement specification is a documented requirement, or set of documented requirements, to be satisfied by a given material, design, product, service, etc. It is a common early part of engineering design and product development processes, in many fields. A functional specification is a kind of requirement specification, and may show functional block diagrams.

A design or product specification describes the features of the solutions for the Requirement Specification, referring to either a designed solution or final produced solution. It is often used to guide fabrication/production. Sometimes the term specification is here used in connection with a data sheet (or spec sheet), which may be confusing.


A data sheet describes the technical characteristics of an item or product, often published by a manufacturer to help people choose or use the products. A data sheet is not a technical specification in the sense of informing how to produce. An "in-service" or "maintained as" specification, specifies the conditions of a system or object after years of operation, including the effects of wear and maintenance (configuration changes).

Specifications are a type of technical standard that may be developed by any of various kinds of organizations, both public and private. Example organization types include a corporation, a consortium (a small group of corporations), a trade association (an industry-wide group of corporations), a national government (including its military, regulatory agencies, and national laboratories and institutes), a professional association (society), a purpose- made standards organization such as ISO, or vendor-neutral developed generic requirements. It is common for one organization to refer to (reference, call out, cite) the standards of another. Voluntary standards may become mandatory if adopted by a government or business contract.


Test a computer program against the business requirements Time: 180 minutes Activity: Self and Group

AC 1 The testing includes assessment of the need to develop a testing program to assist with stress testing
5 key software testing steps every engineer should perform
In recent years, the term "shift-left testing" has entered the software engineering vernacular. But what does that mean? In plain English, it means conducting more software testing during the software development phase in order to reduce defects and save the business from costly bugs.

Shift-left testing is often used to describe increased involvement by quality assurance (QA) engineers during the development phase in an effort to detect defects as early as possible, before software engineers have handed the program over to QA for more extensive testing. Most of the time, it means developing and executing more automated testing of the UI and APIs.

However, there are some basic and essential software testing steps every software developer should perform before showing someone else their work, whether it's for shift-left testing, formal testing, ad hoc testing, code merging and integration, or just calling a colleague over to take a quick look. The goal of this basic testing is to detect the obvious bugs that jump out immediately. Otherwise, you get into an expensive and unnecessary cycle of having to describe the problem to the developer, who then has to reproduce it, debug it, and solve it, before trying again.

Here are the essential software testing steps every software engineer should perform before showing their work to someone else. 1. Basic functionality testing Begin by making sure that every button on every screen works. You also need to ensure that you can enter simple text into each field without crashing the software. You don't have to try out all the different combinations of clicks and characters, or edge conditions, because that's what your testers do—and they're really good at that.

The goal here is this: don't let other people touch your work if it's going to crash as soon as they enter their own name into the username field. If the feature is designed to be accessed by way of an API, you need to run tests to make sure that the basic API functionality works before submitting it for more intensive testing. If your basic functionality testing detects something that doesn't work, that's fine. Just tell them that it doesn't work, that you're aware of it, and that they shouldn't bother trying it. You can fix it later, just don't leave any surprises in there.

2. Code review Another pair of eyes looking at the source code can uncover a lot of problems. If your coding methodology requires peer review, perform this step before you hand the code over for testing. Remember to do your basic functionality testing before the code review, though.

3. Static code analysis There are tools that can perform analysis on source code or bytecode without executing it. These static code analysis tools can look for many weaknesses in the source code, such as security vulnerabilities and potential concurrency issues. Use static code analysis tools to enforce coding standards, and configure those tools to run automatically as part of the build.

4. Unit testing Developers will write unit tests to make sure that the unit (be it a method, class, or component) is working as expected and test across a range of valid and invalid inputs. In a continuous integration environment, unit tests should run every time you commit a change to the source code repository, and you should run them on your development machine as well. Some teams have coverage goals for their unit tests and will fail a build if the unit tests aren't extensive enough.

Developers also work with mock objects and virtualized services to make sure their units can be tested independently. If your unit tests fail, fix them before letting someone else use your code. If for any reason you can't fix them right now, let the other person know what has failed, so it won't come as a surprise when they come across the problem.
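
A minimal sketch of a unit test, assuming an NUnit-style framework; the OrderCalculator class and its 15% rule are hypothetical and exist only to make the example self-contained.

using NUnit.Framework;

// The class under test is hypothetical; it exists only so the test has something to exercise.
public class OrderCalculator
{
    public decimal TotalWithVat(decimal amount) => amount * 1.15m;   // placeholder 15% rule
}

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void TotalWithVat_AddsFifteenPercent()
    {
        var calculator = new OrderCalculator();
        Assert.AreEqual(115m, calculator.TotalWithVat(100m));
    }

    [Test]
    public void TotalWithVat_ZeroAmount_ReturnsZero()
    {
        Assert.AreEqual(0m, new OrderCalculator().TotalWithVat(0m));
    }
}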

5. Single-user performance testing Some teams have load and performance testing baked into their continuous integration process and run load tests as soon as code is checked in. This is particularly true for back-end code. But developers should also be looking at single-user performance on the front end and making sure the software is responsive when only they are using the system. If it's taking more than a few seconds to display a web page taken from a local or emulated (and therefore responsive) web server, find out what client-side code is slowing things down and fix it before you let someone else see it.

Finding the right balance Make time to run as many of these tests as possible before you hand your code over to anyone else, because leaving obvious bugs in the code is a waste of your time and your colleagues' time. Of course, you'll need to find the balance between writing code vs. testing that suits you.

"Here's the mix that worked for me," said Igor Markov, LoadRunner R&D Manager at HP Software. "40 percent of my time is spent designing and writing code; 5 percent is spent on code review and static code analysis; 25 percent on unit testing and integration testing; and 30 percent on basic functionality testing and single user performance testing,"

Leaving obvious bugs in the code isn't going to do your reputation any good either. "A developer who doesn't find the obvious defects is never going to shine," continued Markov. "Developers need to produce working software that's easy to use.

No one wants developers that only do pure coding. What helped me to develop my career was the fact that I always invested more time in designing, reviewing, and testing my code than in actually writing it." Producing working software that's easy to use requires exactly this type of balance. How do you balance your time between designing, reviewing, and testing code?

AC 2 The testing includes the planning, developing and implementing a test strategy that is appropriate for the type of program being tested
Cycle 1 (new data) and cycle 2 (existing data) testing
Using Test Cycles for Data-Driven Testing
There are a variety of testing techniques available that can be easily adapted and applied to testing data-driven projects. One of those techniques that greatly helps in planning, coordinating and tracking testing is the test cycle technique. In this technique, testing is organized and performed in cycles that can be defined to simulate specific dates.

What is the Test Cycle Concept?


First, let's define a test cycle as any defined period of testing. A test cycle could simulate a day, a week, a month, or no time period at all. The ability to simulate a given period of time, however, is what makes test cycles an ideal technique for date-sensitive testing. Exactly what happens during a test cycle depends on the technology involved. For example, in a traditional legacy mainframe environment a test cycle usually consists of three parts: online data entry, batch processing, and the verification of batch results. In an environment which does not contain batch processing, the test cycle consists of interactive processing only.

For each test cycle, a simulated processing time period can be defined. That is why test cycles are an ideal way to plan and organize data-driven testing. One test cycle can be set for 12/31/2011, another cycle defined as 1/1/2012, another at 1/2/2012 and so on. The test environment date for each test cycle will need to be set using a date simulation tool.

The number of test cycles required for a test will depend on the amount of simulated time to be spanned during the test. For example, if you are simply testing the action that will occur on a certain date, you will only need a few cycles--probably the days surrounding the boundary or threshold date. However, if you are going to perform a more extensive test, such as a business process that lasts over a week, you will need to define test cycles that allow a longer span of testing.

For example, if you are testing a 30-day cancellation period across the year-end, you might have one cycle defined as 12/15/2011 and another at 1/15/2012. You would also want other test cycles defined at 2/28/2012 and 2/29/2012 to test leap year processing. With the test cycle approach, a date simulation tool and a data aging tool, you can define cycles as far in the future as you like. So, for testing leap year processing, you could also have cycles for 2/28/2016 and 2/29/2016.

Within each test cycle, one or more tests are defined to be performed. In some test cycles, it may be desirable to define no tests depending on the cases being tested. The tests may be defined using test scripts and/or test cases.


The Process of Defining and Using Test Cycles So far, we've discussed the concepts of using test cycles. Let's look at the nuts and bolts of planning a data-driven test using test cycles.

Step 1 - Make sure you have the right tools You will need a date simulation tool to easily change your test environment dates. You will also need a data aging tool to advance the dates in the test data and keep the relationships in sync.

Step 2 - Define the dates you will need to simulate
These simulated system dates will depend upon the extent of your testing -- namely, the levels of date accuracy you need to validate. There are four basic categories of date correctness to consider:
- No value for the current date will cause any interruption in operation. No matter what the date is in the future, the system will work correctly.
- Date-based functionality must behave consistently for dates prior to, during and after year 2000. All functions using dates as a basis should be correct. This includes calculations in the 19th, 20th, and 21st centuries, and calculations that span those centuries.
- In all interfaces and data storage, the century in any date must be specified either explicitly or by unambiguous algorithms or interfacing rules. Either the century must be explicitly shown in the date (e.g., as a four-position field, or by using a century indicator) or a logic routine must interpret the date based on a window of time.
- Leap years must be identified and processed correctly. If your system processes data from early in the 20th century, you need to be able to distinguish the year 1900 from 2000 for leap year purposes.
Your specific system dates will depend on your applications, business and technology.

Step 3 - Build a Test Cycle Matrix Spreadsheets are great tools for this. You need to leave at least the first two columns blank, but then define the test cycles along the top of the spreadsheet.


Step 4 - Define the test cases or business cases to be placed on the matrix Test cases and business cases are those entities you intend to test. These cases will go through one or more cycles of testing and will execute several test scripts or test scenarios. This approach to testing is what gives the test cycle concept so much power.

You get to simulate not only the effect of the century rollover, but you also simulate how people and things are actually processed through your systems from beginning to end. This is in contrast to simply testing one program at a time in a standalone fashion.

Some examples of test/business cases would be a policyholder, a customer, a patient, a taxpayer, etc. Each of these entities would then have attributes that would make it unique. For example, if you are testing policyholders, you might have one policyholder with a deductible of R500 and another with a R1,000 deductible. The number of test/business cases you will test will depend upon how detailed you need the test to be and how much test coverage you need relative to the risk involved.

Step 5 - Define the order of testing for each test/business case and place the tests in the appropriate cell on the spreadsheet. Each cell can contain a reference to a test or tests that are to be performed for a particular test/business case in a particular test cycle. You might opt to skip a cycle or two for some cases and double up or have several tests in other cycles. Once again, this is an example of how test cycles help you simulate the real world.

Just like your live production databases were not instantly created in your business, the test data entered into the system cycle by cycle will continuously build. Keep in mind, however, that every test/business case added to the test will be one more item to maintain throughout the test.

Step 6 - Define the tests in detail. For every test indicated on the test cycle matrix, a detailed description of the test will be needed for documentation both before and after the test. The details should include controls (when will the test start and stop, etc.), input, expected output, and the procedure to be followed in performing the test. An ideal way to document these aspects of a test for interactive software is to use a test script. You must determine how much detail is reasonable, given the amount of time you have left for testing.

Step 7 - Put it all together.


After using this method for many years, I have developed what seems to be a fairly smooth procedure for organizing a major test based on the test cycle concept. I used to organize the tests in manila folders. I now use electronic folders for each test/business case defined. There will be a folder for each row on the matrix (spreadsheet). Name each folder with a business case ID number. This should also correspond to the ID on the matrix. Next, place everything you will need for the business case in the folder. This will include test data and test scripts or test procedures.

A way to simplify things and to find the right test information quickly is to include a document in the business case folder that shows the test cycles, the test scripts/procedures performed in each test cycle, and a signoff column to be initialled by the person who tested the business case.

The final piece is to create as many major folders as you have test cycles. Place the business case folders in the test cycle folders by test cycle and in business case ID order. Each test cycle folder should contain a certain number of folders.

Step 8 - Execute the test. You will start the test by setting the system date with the date simulator to the first test cycle date. If a bed of test data will be used from the start, you will need to make sure the dates in the test data are correct.

Starting with the folders in the cycle one folder, perform the tests in each folder for cycle one only. During the test, you might create documentation you would like to save, such as screen prints or reports. These can simply be placed in the business case folder. In this way, the test is self-documenting.

When the test is complete, move the business case folder to the next test cycle folder in which it will be used. If batch processing is part of the test cycle or test procedure, then the folder will go back into the same test cycle folder from which it was retrieved. After batch processing is complete, then the folder can be pulled, evaluated, and moved on to the next cycle folder in which it will be used.


This process continues until the folder is finished and placed in a "done" folder. Eventually, all of the business case folders will be filed in the "done" folder in business case ID order. A year or two from now, if anyone needs to know exactly what was tested, it is a simple matter to locate and retrieve the test documentation.

Step 9 - Evaluate and Track the Test As the test is performed, you will evaluate the results and determine if the test passed or failed in that particular cycle. There are two effective and easy ways to keep track of test progress manually. One way is to use the overview document in the business case folder to indicate pass/fail. The other is to highlight each cell in the test cycle matrix as the test is completed and passed. It is good to use both methods.

The Key Benefits of Using Test Cycles While it is true that going to the trouble of designing test cycles and business cases is extra work, there are some very good benefits that you achieve with no other test methods. These benefits are especially important for data-driven testing.

Benefit #1 - The ability to simulate a business case from point A to point Z in your processing. Most other test methods focus on one process or software module at a time, but never have a way to effectively string them together for "end to end testing" of a system or systems.

Benefit #2 - The ability to plan and coordinate the march of time for a test. For data-driven testing, we know that time must be advanced, but the problem is how to keep not only the test data, but the test environment and test cases in sync. The test cycle concept allows you to do this with ease.

Benefit #3 - A safety net in case the test environment gets corrupted. A common situation that occurs in testing is when the test itself destroys data or updates data files with incorrect information. It is also not uncommon for other people to delete or to restore over test files. The common response to this situation is to simply restore from the last backup, but how do you know what was tested since the last backup? In most test processes you don't know exactly what was done, but with test cycles, you do know. In fact, the backup process is fairly straightforward.


You take image backups of the test environment before and after online input. If batch processing is part of your test, the backup taken after online processing will also suffice for the batch backup.

Conclusion In testing, the reliability of the test depends on the rigor of the test. Also, the rigor of the test depends on the relative risk, both business and technical. While some might look at the work involved in planning a test using test cycles as being excessive, others will testify that this kind of effort is required on some projects and systems to validate their operation through multiple simulated dates.

The extent of test planning and execution always depends on the scope of coverage and risk. The question is, are you willing to bet your business or systems operation on anything less than the right test method for the job?

AC 3 The testing includes the recording of test results that allow for the identification and validation of test outcomes
Cycle 1 (new data) and cycle 2 (existing data) testing
Data Cycle Test (DCyT)
Description
The data cycle test (DCyT) is a technique for testing whether the data are being used and processed consistently by various functions from within different subsystems or even different systems. The technique is ideally suited to the testing of overall functionality, suitability and connectivity.

The primary aim of the data cycle test is not to trace functional defects in individual functions, but to find integration defects. The test focuses on the link between various functions and the way in which they deal with communal data. The DCyT is most effective if the functionality of the individual functions has already been sufficiently tested. That is also an important reason why this test is usually applied in the later phases of acceptance testing.

The most important test basis is the CRUD matrix (see "CRUD") and a description of the applicable integrity rules. The latter describe the preconditions under which certain processes are or are not permitted, such as, for example, "Entity X may only be changed if the linked entity Y is removed from it". Besides this, functional specifications or detailed domain expertise is necessary in order to be able to predict the result of each test case. The basic techniques used are:
- CRUD, for coverage of the life cycle of the data
- Decision coverage, for coverage of the integrity rules.
Reinforcement of the test can be achieved by the application of, e.g.:
- A more extended variant of the CRUD
- Modified condition/decision coverage or multiple condition coverage of the integrity rules.

Points of focus in the steps In this section, the data cycle test is explained step by step. In this, the generic steps (see "Introduction") are taken as a starting point. An example is also set out that demonstrates, up to and including the designing of the logical test cases, how this technique works.

1 - Identifying test situations
The test situations are created from the coverage of the CRUD and from the integrity rules. Both will be further explained here.
Test situations in connection with CRUD
The following activities should be carried out:
- Determine the entities of which the life cycle is to be tested. Usually, this concerns all the entities that are used by the system or subsystem (created, changed, read or removed). If there are too many entities, a cohesive subset of entities may be selected
- Determine the functions that make use of these entities. Here, too, the scope of the test should be determined: all the functions of the system under test, a cohesive subset of this, or functions from other systems that are linked to the system under test
- Fill in the CRUD matrix (see "CRUD"). If the CRUD matrix is delivered as a test basis, the relevant part should be selected from this, based on the previous two activities. If it was not possible to get the CRUD matrix delivered as a test basis, the test team may decide to create this themselves, based on the functional specifications. This is obviously undesirable, but is a last resort
- Each process (C, R, U or D) that occurs in the CRUD matrix is a separate test situation that has to be tested.
Test situations in connection with integrity rules

The following activities should be carried out:


- Gather the integrity rules on the selected entities. These are the rules that define under which conditions the processing of the entities is valid or not. Integrity rules are usually specified within the functional specifications, database models or in separate business rules
- Apply decision coverage. That means that for each integrity rule, two test situations are derived:
  Invalid: the integrity rule is disobeyed. The process is invalid and should result in correct error handling.
  Valid: the integrity rule is obeyed. The process is valid and should be executed.

In more detail
Integrity rules (see "integrity rules") should not be confused with semantic rules (see "semantic rules"), which define the conditions under which the value of the data themselves is valid or not. For example:
- The rule "When creating an order, the value of quantity should not be below the boundary that is set in product" is a semantic rule
- The rule "The creation of an invoice is only permitted if the order concerned has already been approved" is an integrity rule.
Therefore, the integrity rule determines whether the function is permitted in the first place. Thereafter, the semantic rules determine whether the input data offered to that function are valid.


Example
Data Cycle Test is applied to a subsystem that invoices orders and processes payments. The relevant part of the CRUD matrix is in the table below.

Function                | Item       | Payment agreement | Invoice    | Ledger
Item management         | C, R, U, D | -                 | -          | ...
Payment agreement man.  | -          | C, R, U, D        | R          | ...
Ledger management       | -          | -                 | R          | C, R, U, D
Invoice creation        | R          | R                 | C          | U
Cash payment            | -          | -                 | C, U, D    | U
Bank transfer           | -          | -                 | U, D       | U

For this part of the CRUD matrix, there is one relevant integrity rule: A payment agreement may not be removed as long as there is an outstanding invoice with the relevant payment agreement. This leads to two test situations:
IR1-1: Delete (D) payment agreement, while an invoice is outstanding with the relevant payment agreement
IR1-2: Delete (D) payment agreement, without there being an outstanding invoice with the relevant payment agreement
A brief overview notation for this type of test situation is, for example:

Test situation | Process | Entity            | Condition              | Valid Y/N
IR1-1          | D       | Payment agreement | Outstanding invoice    | N
IR1-2          | D       | Payment agreement | No outstanding invoice | Y

The initials "IR" here stand for "Integrity rule".

2 - Creating logical test cases
Create one or more logical test cases in such a way that:
- Each entity goes through a full life cycle (beginning with 'C' and ending with 'D')
- All the test situations from the CRUD matrix (every C, R, U and D) are covered
- All the test situations of the relevant integrity rules are covered.
See also "CRUD". A test case thus describes a complete scenario consisting of several actions, each of which performs a process on a particular entity.


Example
In the table below the logical test cases for the entities "Item" and "Payment agreement" are set out, to illustrate the principle. The table describes at each row which function should be used, which process (CRUD) on the relevant entity is covered by this, and a brief explanation with additional information on the action to be performed.

LTC01: "Item"
Function                | CRUD | Action / Notes
Item management         | C    | Create new item ITM-01
Item management         | R    | Check ITM-01
Create invoice          | R    | Create invoice INV-01 in which ITM-01 occurs
Ledger management       | -    | Check INV-01
Item management         | U    | Change ITM-01 (e.g. price) in ITM-01B
Item management         | R    | Check ITM-01B
Ledger management       | -    | Check INV-01 is unchanged
Item management         | D    | Remove ITM-01B
Item management         | R    | Check ITM-01B (is removed)

LTC02: "Payment agreement"
Function                | CRUD | Action / Notes
Payment agreement mgt.  | C    | Create new payment agreement PAG-01
Payment agreement mgt.  | R    | Check PAG-01
Payment agreement mgt.  | U    | Change PAG-01 (e.g. period) in PAG-01B
Payment agreement mgt.  | R    | Check PAG-01B
Create invoice          | R    | Create invoice INV-02 containing agreement PAG-01B
Ledger management       | -    | Check INV-02
Payment agreement mgt.  | D    | IR1-1. Error handling!
Payment agreement mgt.  | R    | Check that PAG-01B is removed

A "-" in the column "CRUD" means that the relevant function is required in order to carry out a certain action, but that this does not perform any processing on the tested entity. For example:
- With LTC01, "Ledger management" is used to be able to check that the correct item appears on the invoice, but does not perform any processing itself on "Item".
- With LTC02, "Cash payment" is used to close invoice INV-02 so that integrity rule IR1-2 is complied with, but does not perform any processing itself on "Payment agreement".


3 - Creating physical test cases
In the translation of logical test cases to physical test cases, the following details are added:
- (Optional) Exactly how the relevant function is activated. This is usually clear enough, but sometimes it requires a less obvious sequence of actions.
- The data to be entered with that function. If the logical test case indicates that a certain entity has to be changed, then the physical test case should indicate unequivocally which attribute is changed into which value.
- A concrete description with each predicted result of what has to be checked concerning a particular entity.
- Extra actions that are necessary to facilitate subsequent actions in the test case, e.g. the changing of the system date or the execution of a particular batch process in order to give the system a certain required status.

4 - Establishing the starting point
The DCyT typically operates at overall system level, possibly across several systems. That means an extensive starting point has to be prepared that is complete and consistent across all the systems. The following, in particular, should be organised:
- All the necessary databases for all the systems involved, in which all the data is consistent
- A configuration (possibly a network) in which all the necessary systems are connected and in which all the necessary users are defined with the necessary access rights.
Such a starting point approximates the production situation and is complicated to put together. Ideally, an existing real-life test environment is used. See also "Defining central starting point(s)".

In particular, attention should be paid to the data in the starting point that are only valid for a limited time. At the start of each test execution, it should be checked whether these time-dependent data are still valid and whether, on the basis of this, changes should be made in the starting point.


Implement the program to meet business requirements Time: 180 minutes Activity: Self and Group

AC 1 The implementation involves checking the program for compliance with user expectations and any other applicable factors
Implementation is the realization of an application, or execution of a plan, idea, model, design, specification, standard, algorithm, or policy.

Industry-specific definitions Computer science In computer science, an implementation is a realization of a technical specification or algorithm as a program, software component, or other computer system through computer programming and deployment. Many implementations may exist for a given specification or standard. For example, web browsers contain implementations of World Wide Web Consortium-recommended specifications, and software development tools contain implementations of programming languages.

A special case occurs in object-oriented programming, when a concrete class implements an interface; in this case the concrete class is an implementation of the interface and it includes methods which are implementations of those methods specified by the interface.

Information technology In the information technology industry, implementation refers to the post-sales process of guiding a client from purchase to use of the software or hardware that was purchased. This includes requirements analysis, scope analysis, customizations, systems integrations, user policies, user training and delivery.

These steps are often overseen by a project manager using project management methodologies. Software implementations involve several professionals that are relatively new to the knowledge-based economy, such as business analysts, technical analysts, solutions architects, and project managers.


To implement a system successfully, many inter-related tasks need to be carried out in an appropriate sequence. Utilising a well-proven implementation methodology and enlisting professional advice can help but often it is the number of tasks, poor planning and inadequate resourcing that causes problems with an implementation project, rather than any of the tasks being particularly difficult. Similarly with the cultural issues it is often the lack of adequate consultation and two-way communication that inhibits achievement of the desired results.

Political science In political science, implementation refers to the carrying out of public policy. Legislatures pass laws that are then carried out by public servants working in bureaucratic agencies. This process consists of rule-making, rule-administration and rule- adjudication. Factors impacting implementation include the legislative intent, the administrative capacity of the implementing bureaucracy, interest group activity and opposition, and presidential or executive support.

In international relations, implementation refers to a stage of international treaty-making. It represents the stage when international provisions are enacted domestically through legislation and regulation. The implementation stage is different from the ratification of an international treaty.

Social and health sciences Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions. According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the "specific set of activities" related to implementation. In addition, the activity or program being implemented is described in sufficient detail so that independent observers can detect its presence and strength.

Water and natural resources In water and natural resources, implementation refers to the actualization of best management practices with the ultimate goals of conserving natural resources and improving the quality of water bodies.


Types
- Direct changeover
- Parallel running, also known as parallel
- Phased implementation
- Pilot introduction, also known as pilot
- Well-trade

Role of end users System implementation generally benefits from high levels of user involvement and management support. User participation in the design and operation of information systems has several positive results. First, if users are heavily involved in systems design, they have more opportunities to mould the system according to their priorities and business requirements, and more opportunities to control the outcome. Second, they are more likely to react positively to the change process. Incorporating user knowledge and expertise leads to better solutions.

The relationship between users and information systems specialists has traditionally been a problem area for information systems implementation efforts. Users and information systems specialists tend to have different backgrounds, interests, and priorities. This is referred to as the user-designer communications gap. These differences lead to divergent organizational loyalties, approaches to problem solving, and vocabularies. Examples of these differences or concerns are below:
User concerns
- Will the system deliver the information I need for my work?
- How quickly can I access the data?
- How easily can I retrieve the data?
- How much clerical support will I need to enter data into the system?
- How will the operation of the system fit into my daily business schedule?

Designer concerns
- How much disk storage space will the master file consume?
- How many lines of program code will it take to perform this function?
- How can we cut down on CPU time when we run the system?
- What are the most efficient ways of storing this data?
- What database management system should we use?

Critique of the Premise of Implementation


Social scientific research on implementation also takes a step away from the project oriented at implementing a plan, and turns the project into an object of study. Lucy Suchman's work has been key, in that respect, showing how the engineering model of plans and their implementation cannot account for the situated action and cognition involved in real-world practices of users relating to plans: that work shows that a plan cannot be specific enough for detailing everything that successful implementation requires. Instead, implementation draws upon implicit and tacit resources and characteristics of users and of the plan's components.

AC 2 The implementation involves training of users to enable them to use the software to their requirements

Training of the end users is one of the most important steps for a successful system implementation. The end users should be utilized during parallel testing, so training will need to be rolled out prior to that. Getting the end users involved at this point is also a good way to get them excited about the system, as many of them may not have been involved with the project prior to training.

Their assistance in parallel testing will help them prepare for when the system goes live. End users are good at using the system in more of a "real world" situation and can judge when process flows are not working. When everyone involved with using the system is included in the training, they will feel more confident about using it as they go into production and the user community will view the implementation as successful.

The system may have been tested for functionality, and all customizations may be working accurately, but if the end users do not know how to use it or feel comfortable with it, then the launch of the new system will be viewed as unsuccessful. Therefore, the timing of the end user training is critical and must be planned for and implemented prior to the start of the parallel test phase to ensure a successful implementation.

There are two possible solutions for training. The first is to use project team members to develop and deliver the end user training and the second is to identify a training partner to support the development and delivery of end user training, including a train the trainer component. Both options will be fully explored during the next phase of the project.

Trainers


Using members of the project team to conduct training for the various departments will allow the end users to be better informed about how and why the system was developed. The functional experts on the team will be more knowledgeable about the Princeton PeopleSoft system. This will be more inclusive training for staff than vanilla PeopleSoft training from an outside vendor, unless the vendor creates and uses Princeton-specific materials.

Hands on Training
Hands-on training is proposed for all central department end-users in HR, Benefits and Payroll. This training begins with a basic HR module and continues with more advanced training for individuals with functional responsibilities in HR, Benefits or Payroll. Since Benefits and Payroll build on HR, it is important to understand how HR works in order to better understand how Payroll and Benefits work.

The Payroll staff will continue with hands-on payroll training. The Payroll staff will need to have the most intensive training. An overview of payroll processing and the payroll panels (PayCheck_Data & PayCheck_Summary) that HR and Benefits offices will be allowed to view should be included for these user groups.

Since the system is integrated, an overview of BenAdmin should be given to the HR and Payroll offices. This is important so that everyone understands their impact on these processes. Query and report writing should also be included in a separate training class so that the end users can write ad hoc reports and also know how to run and print delivered reports.

Based on the level of distribution of functionality included in scope, departmental training will address navigation of the system, definition of data fields, and basic reporting. This training may be supported through a classroom environment or through Computer Based Training (CBT). A separate training strategy is being developed and will be included in the Time Collection Project Plan.

Listed below are the areas that need to be developed for successful, Princeton-specific PeopleSoft training.


Training Design
Identify Training Needs:
 Have a key message on what training will be provided for the end users (like a mission statement on what will be delivered to the users).
 Decide when training should be started. It should be rolled out before parallel testing so the end users can be a part of that system testing right away.
 Identify which areas should be trained (HR, DOF, Benefits, Payroll, PPPL, Technical).
 Develop a content outline covering which functionality of the system should be covered in the training, which modules of the system will be included and which end users should be trained.

Establish a Training Team
Design Training Materials/Documents:
 Design a layout of the training manual and decide which functional and technical areas of the system should be included.
 Include an overview section for the areas that do not need detailed training.
 Query and report-writing training should also be included.

Design Support Materials
Design any support materials that may be needed for use during training. A separate training guide may be developed that shows the navigation paths or the queries that are public.

Finalize Design Document
Publish the training document outline and have the team review and finalize it.
Update Training and Support Materials Plan
Update the Project Plan to accommodate any changes to the Training Schedule.

Training Development:
Develop Training Materials/Documents
Develop the training manual and other documentation on the functional and technical areas of the system that will be included. Include an overview section for the areas that do not need detailed training, such as a Payroll Processing and BenAdmin review. Query and report-writing training needs to be developed for all users. A Power User session may be required for advanced users.

Develop Support Materials


Develop the support materials that may be needed for use during training. A separate training guide may be developed that shows the navigation paths or the queries that are public. A PeopleSoft index may be useful.

Evaluate Training & Support Materials:
Develop a training schedule for each functional and technical area and make sure it is published in the community. Build in additional training for classes that are missed and for people who want additional training.

Test training materials for accuracy.
Test the training database to ensure it is working and that completed customizations have been moved in. Complete a payroll cycle and a BenAdmin run to make sure processes are working with the population of employees.
Train instructors so they are comfortable with the material and the training database. Make sure they have a script to run all processes.

Training Delivery:
Produce accurate numbers of training documents and materials.
Deliver training to the end users.
Additional training may need to be added based on the needs and feedback received while training is in progress.

Training Support:
Trainers should be used as a liaison between functional areas and the help desk to troubleshoot problems and answer questions.
Track training and support material performance by getting feedback from the end users on the areas in which they do not feel trained. Use a performance report that the employees can complete to rate the training and materials.
Prioritize the changes that should be made to the training materials.
Modify the training and support materials accordingly.


AC 3 The implementation involves planning of installation of the program that minimises disruption to the user (depending on program type: on-line, batch or Internet)

Planning a network installation
This article provides practical advice to help address the needs of organisations that have a network and are in the process of upgrading it. The article should also be useful to organisations installing a network for the first time.

How do I get started? The first step in developing a plan is assessing your current network requirements and considering how your business is likely to change over time. Here are some ideas to help you start the process:

Consider Usage Requirements Determine the number of people that will be using the network to get a rough idea of the computers and peripherals it must support. Consider how users will interact with the system to define the features you will need. For example, what sort of access is required to the network (e.g. will each user have their own computer? or will several users be sharing the same computer?) Will any users need to access the network remotely (e.g. from home or other office sites)?

Gather Input
Factor the needs of the various teams and departments within your organisation into your network plan. Start by defining the requirements of each group and determine the relative costs of incorporating the different requirements into the network plan, whether in terms of money or time saved.

Plan for the future Detail or factor in, to the best of your knowledge, the direction your organisation is likely to take in the near future (3-5 years). As you think about expansion, identify any plans that might affect your network needs (e.g. new staff or volunteers, office expansion, remote working, or the installation of new software packages). Doing this now will be less expensive and time-consuming than replacing an inadequate network later.


Decide who will manage the network As your network solution becomes more defined, you will need to decide whether you have the resources in-house to install and maintain it yourself or whether you require a consultant or external company to handle it. Networking products have become easier to use and administer over the years, so small organisations are finding that internal day-to-day management of the network is becoming increasingly cost effective. External support will also likely be required, and it is worth considering using remote network administration tools to reduce the number of on-site visits necessary to keep the network running smoothly.

Security Issues Ensure you build security features into your network plan to protect your organisation's most important asset - its information. Common network security precautions include passwords, virus protection, an external firewall and data encryption.

Other Considerations You may enhance the foundation of your network plan by addressing other issues that may affect the integration, use and maintenance of your network. These include:

Information Management Consider how to manage information on your server so that users can easily find what they need. Create standardised naming conventions for files on the server and establish rules for the creation of new files and folders.

Remote Access If some staff members travel frequently or work from locations outside your office, you may want to build remote access capabilities into your network. This can be done through remote dial-in, or securely over the Internet using a VPN.

Staff Training While working with a network is relatively simple, it may demand that employees adopt new habits. A training program will enable workers to take full advantage of your network's timesaving and productivity enhancing features. Ensure training time is built into your network rollout timetable and offer follow up sessions to address ongoing staff challenges and concerns.


Network pre-installation checklist
This checklist of questions will help you cover the main areas when it comes to planning and installing a new or upgraded network.

Planning
 How many people will use the network?
 How many users are local or on-site?
 How many users are remote or off-site and will require access to the network?
 How many on-site computers will be connected to the network?
 How many on-site devices (computers, servers, scanners, printers, etc.) will require a network card?

 How do you intend remote users to access the network?
 Which server-based applications (e.g. databases, email) do you plan to run on the network?
 What are the minimum hardware requirements of these server-based applications?
 What are the specifications of the servers you intend to install on the network (e.g. amount of memory, processor speed etc.)?
 Have you purchased sufficient licenses to run all the software on servers and client machines?

Network hardware requirements
 What other devices will your network support (e.g. back-up devices, Uninterruptible Power Supplies, network printers, etc.)?
 Do you have enough network points for these network devices?
 Do the hubs or switches have enough ports for the number of connections you will require, and is there room for growth?

Network design
 What network topology will you use?
 Do all workstations have the correct network interface cards (NICs) to support this technology?
 Which network operating system will you use (e.g. Windows 2000 Server, Linux, Novell etc.)?
 Which type of cabling will you use (e.g. CAT 5, fibre optic), or will a wireless network be suitable?
 Where will network cables be located? Are there any building or leasing regulations that may affect cable placement?
 Where will you locate the following devices: servers, hubs or switches, printers, firewalls and routers, modems etc.?


Security, back-up and power
 What security measures will you be putting in place (virus protection, user passwords, firewalls, data encryption etc.)?
 Do you need to physically secure your server (e.g. lock it away in a cupboard)?
 How will you back up data on your network?
 What is the capacity of your back-up solution? Is it large enough to support all the data on your servers and network devices? Does it have the capacity to grow as your data grows?
 How frequently will files be backed up and how long will you keep backed-up files?
 Where will you store backed-up tapes (e.g. fireproof safe, off site)?
 What devices will require an uninterruptible power supply (e.g. server(s))?
 Is there sufficient ventilation around your servers?

Support services
Do you have resources allocated for the following areas (e.g. consultants, in-house IT staff etc.)?
 Network installation
 Cable installation
 Network technical support
 Network management
 Network security
 Network maintenance
 Training

Undertaking a significant upgrade to your network or migrating to a newer or different operating system can be a daunting and challenging task. Effective planning can limit the system downtime, reduce network crashes and ensure a seamless transition and minimal disruption to users.


Document the program according to industry standards
Time: 180 minutes
Activity: Self and Group

AC 1 The documentation includes annotation of the program with a description of program purpose and design specifics.

Software documentation
Software documentation is written text or illustration that accompanies computer software or is embedded in the source code. The documentation either explains how the software operates or how to use it, or may mean different things to people in different roles. Documentation is an important part of software engineering.

Types of documentation include:
 Requirements – Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what will be or has been implemented.
 Architecture/Design – Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
 Technical – Documentation of code, algorithms, interfaces, and APIs.
 End user – Manuals for the end-user, system administrators and support staff.
 Marketing – How to market the product and analysis of the market demand.

Requirements documentation Requirements documentation is the description of what a particular software does or shall do. It is used throughout development to communicate how the software functions or how it is intended to operate. It is also used as an agreement or as the foundation for agreement on what the software will do. Requirements are produced and consumed by everyone involved in the production of software, including: end users, customers, project managers, sales, marketing, software architects, usability engineers, interaction designers, developers, and testers.


Requirements come in a variety of styles, notations and degrees of formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, and as a combination of them all.

The variation and complexity of requirements documentation makes it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed and how much can be left to the architecture and design documentation, and it is difficult to know how to document requirements considering the variety of people who shall read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult — and therefore more error prone (decreased software quality) and time-consuming (expensive).

The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help to better communicate what to achieve. If the software is safety-critical and can have negative impact on human life (e.g., nuclear power systems, medical equipment, mechanical equipment), more formal requirements documentation is often required.

If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign) very little requirements documentation may be needed. If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified.

Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special-purpose requirements management tools are advocated.


Architecture design documentation Architecture documentation (also known as software architecture description) is a special type of design document. In a way, architecture documents are third derivative from the code (design document being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself.

These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents.

Another type of design document is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on one specific aspect of the system and suggests alternate approaches. It could be at the user interface, code, design, or even architectural level. It will outline what the situation is, describe one or more alternatives, and enumerate the pros and cons of each.

A good trade study document is heavy on research, expresses its idea clearly (without relying heavily on obtuse jargon to dazzle the reader), and most importantly is impartial. It should honestly and clearly explain the costs of whatever solution it offers as best. The objective of a trade study is to devise the best solution, rather than to push a particular point of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the alternatives are sufficiently better than the baseline to warrant a change. It should be approached as a scientific endeavour, not as a marketing technique.

A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need. The purpose of preparing it is to create a common source to be used by all players within the scene.


The potential users are:
 Database designer
 Database developer
 Database administrator
 Application designer
 Application developer

When talking about Relational Database Systems, the document should include the following parts:
 Entity-Relationship Schema (enhanced or not), including the following information and their clear definitions:
   - Entity sets and their attributes
   - Relationships and their attributes
   - Candidate keys for each entity set
   - Attribute and tuple based constraints
 Relational Schema, including the following information:
   - Tables, attributes, and their properties
   - Views
   - Constraints such as primary keys and foreign keys
   - Cardinality of referential constraints
   - Cascading policy for referential constraints

It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database.

Technical documentation It is important for the code documents associated with the source code (which may include README files and API documentation) to be thorough, but not so verbose that it becomes overly time-consuming or difficult to maintain them. Various how-to and overview documentation guides are commonly found specific to the software application or software product being documented by API writers.


This documentation may be used by developers, testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power, energy, transportation, networks, aerospace, safety, security, industry automation, and a variety of other domains. Technical documentation has become important within such organizations as the basic and advanced level of information may change over a period of time with architecture changes. Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class.

Technical documentation embedded in source code Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, JSDoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code documents—that is, they extract the comments and software contracts, where available, from the source code and create reference manuals in such forms as text or HTML files.
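As a brief illustration (a sketch only; the file name, function and comment text below are invented for this example and are not taken from any particular project), a Doxygen-style comment block written directly above a C function is the kind of material such tools extract into a reference manual:

/**
 * @file   stats.c
 * @brief  Illustrative example of Doxygen-style comments in C.
 */

/**
 * @brief  Computes the arithmetic mean of an array of doubles.
 *
 * @param  values  Pointer to the first element of the array.
 * @param  count   Number of elements in the array.
 * @return The mean of the values, or 0.0 if count is zero or negative.
 */
double mean(const double *values, int count)
{
    double sum = 0.0;

    if (count <= 0)
        return 0.0;

    for (int i = 0; i < count; i++)
        sum += values[i];

    return sum / count;
}

Running a documentation generator over source files written in this way produces reference pages without a separate manual having to be written and maintained by hand.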

The idea of auto-generating documentation is attractive to programmers for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date.

Of course, a downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for example, by running a cron job to update the documents nightly). Some would characterize this as a pro rather than a con.

Literate programming Respected computer scientist Donald Knuth has noted that documentation can be a very difficult afterthought process and has advocated literate programming, written at the same time and location as the source code and extracted by automatic means. The programming languages Haskell and CoffeeScript have built-in support for a simple form of literate programming, but this support is not widely used.


Elucidative programming Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative paradigm proposes that source code and documentation be stored separately.

Often, software developers need to be able to create and access information that is not going to be part of the source file itself. Such annotations are usually part of several software development activities, such as code walks and porting, where third party source code is analysed in a functional way. Annotations can therefore help the developer during any stage of software development where a formal documentation system would hinder progress.

User documentation Unlike code documents, user documents simply describe how a program is used. In the case of a software library, the code documents and user documents could in some cases be effectively equivalent and worth conjoining, but for a general application this is not often true. Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents to not be confusing, and for them to be up to date. User documents don't need to be organized in any particular way, but it is very important for them to have a thorough index.

Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are well placed to write good user documents, as they are aware of the software architecture and programming techniques used. See also technical writing.

User documentation can be produced in a variety of online and print formats. However, there are three broad ways in which user documentation can be organized.
 Tutorial: A tutorial approach is considered the most useful for a new user, in which they are guided through each step of accomplishing particular tasks.
 Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through a knowledge-based article to facilitate the user needs. This approach is usually practiced by a dynamic industry, such as information technology, where the user population is largely correlated with the troubleshooting demands.
 List or Reference: The final type of organizing principle is one in which commands or tasks are simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter approach is of greater use to advanced users who know exactly what sort of information they are looking for.



A common complaint among users regarding software documentation is that only one of these three approaches was taken to the near-exclusion of the other two. It is common to limit provided software documentation for personal computers to online help that gives only reference information on commands or menu items. The job of tutoring new users or helping more experienced users get the most out of a program is left to private publishers, who are often given significant assistance by the software developer.

AC 2 The documentation includes the layout of the program code including indentation and other acceptable industry standards

In computer programming, an indentation style is a convention governing the indentation of blocks of code to convey program structure. This article largely addresses the free-form languages, such as C and its descendants, but it can be (and often is) applied to most other programming languages (especially those in the curly bracket family), where whitespace is otherwise insignificant. Indentation style is only one aspect of programming style.

Indentation is not a requirement of most programming languages, where it is used as secondary notation. Rather, indenting helps better convey the structure of a program to human readers. In particular, it is used to clarify the link between control flow constructs such as conditions or loops and the code contained within and outside of them. However, some languages (such as Python and occam) use indentation to determine the structure instead of using braces or keywords; this is termed the off-side rule. In such languages, indentation is meaningful to the compiler or interpreter; it is more than only a clarity or style issue.

This article uses the term brackets to refer to parentheses, and the term braces to refer to curly brackets.

Brace placement in compound statements The main difference between indentation styles lies in the placing of the braces of the compound statement ({...}) that often follows a control statement (if, while, for...). The table below shows this placement for the style of statements discussed in this article; function declaration style is another case. The style for brace placement in statements may differ from the style for brace placement of a function definition. For consistency, the indentation depth has been kept constant at 4 spaces, regardless of the preferred indentation depth of each style.


Brace placement by style:

K&R (and its 1TBS variant):
while (x == y) {
    something();
    somethingelse();
}

Allman:
while (x == y)
{
    something();
    somethingelse();
}

GNU:
while (x == y)
  {
    something ();
    somethingelse ();
  }

Horstmann:
while (x == y)
{   something();
    somethingelse();
}

Haskell style:
while (x == y)
    { something()
    ; somethingelse()
    ;
    }

Pico:
while (x == y)
{   something();
    somethingelse(); }

Ratliff:
while (x == y) {
    something();
    somethingelse();
    }

Lisp:
while (x == y) {
    something();
    somethingelse(); }


Tabs, spaces, and size of indentations Many early programs used tab characters to indent, for simplicity and to save on source file size. Unix editors generally view tabs as equalling eight characters, while Macintosh and Windows environments would set them to four, creating confusion when code was transferred between environments. Modern programming editors can now often set arbitrary indentation sizes, and will insert the proper mix of tabs and spaces.

The issue of using hard tabs or spaces is an ongoing debate in the programming community. Some programmers, such as Jamie Zawinski, state that using spaces instead of tabs increases cross-platform portability. Others, such as the writers of the WordPress coding standards, state the opposite: that hard tabs increase portability. A survey of the top 400,000 repositories on GitHub found that spaces are more common.

The size of the indentation is usually independent of the style. In an experiment from 1983 performed on PASCAL code, a significant influence of indentation size on comprehensibility was found. The results indicate that indentation levels in the range from 2 to 4 characters ensure best comprehensibility. For Ruby, many shell scripting languages, and some forms of HTML formatting, two spaces per indentation level is generally used.

Tools There are many computer programs that automatically correct indentation styles (according to the preferences of the program author) and the length of indents associated with tabs. A famous one is indent, a program included with many Unix-like operating systems.

In Emacs, various commands are available to automatically fix indentation problems, including hitting Tab on a given line (in the default configuration). M-x indent-region can be used to properly indent large sections of code. Depending on the mode, Emacs can also replace leading indentation spaces with the proper number of tabs followed by spaces, which results in the minimal number of characters for indenting each source line.

Elastic tabstops is a tabulation style which requires support from the text editor, where entire blocks of text are kept automatically aligned when the length of one line in the block changes.


Styles

K&R style
The K&R style (Kernighan & Ritchie style), which is also called "the one true brace style" in hacker jargon (abbreviated as 1TBS), is commonly used in C, C++, and other curly brace programming languages. It was the style used in the original Unix kernel, Kernighan and Ritchie's book The C Programming Language, as well as Kernighan and Plauger's book The Elements of Programming Style.

When following K&R, each function has its opening brace on the next line, at the same indentation level as its header; the statements within the braces are indented; and the closing brace at the end is on the same indentation level as the header of the function, on a line of its own.

The blocks inside a function, however, have their opening braces on the same line as their respective control statements; closing braces remain on a line of their own, unless followed by the keyword else or while. Such non-aligned braces are nicknamed "Egyptian braces" (or "Egyptian brackets") for their resemblance to arms in some fanciful poses of ancient Egyptians.

int main(int argc, char *argv[])
{
    ...
    while (x == y) {
        something();
        somethingelse();

        if (some_error)
            do_correct();
        else
            continue_as_usual();
    }

    finalthing();
    ...
}


The C Programming Language does not explicitly specify this style, though it is followed consistently throughout the book. From the book: "The position of braces is less important, although people hold passionate beliefs. We have chosen one of several popular styles. Pick a style that suits you, then use it consistently."

In old versions of the C language, argument types needed to be declared on the subsequent line (i.e., just after the header of the function):

/* Original pre-ISO C style without function prototypes */
int main(argc, argv)
    int argc;
    char *argv[];
{
    ...
}

Variant: 1TBS (OTBS) Advocates of this style sometimes refer to it as "the one true brace style" (abbreviated as 1TBS or OTBS). The main two differences from the K&R style are that functions have their opening braces on the same line separated by a space, and that the braces are not omitted for a control statement with only a single statement in its scope. In this style, the constructs that allow insertions of new code lines are on separate lines, and constructs that prohibit insertions are on one line. This principle is amplified by bracing every if, else, while, etc., including single-line conditionals, so that insertion of a new line of code anywhere is always safe (i.e., such an insertion will not make the flow of execution disagree with the source code indenting).

Suggested advantages of this style are that the starting brace needs no extra line of its own, and that the ending brace lines up with the statement it conceptually belongs to. One cost of this style is that the ending brace of a block needs a full line of its own, which can be partly resolved in if/else blocks and do/while blocks:

void checknegative(int x) {
    if (x < 0) {
        puts("Negative");
    } else {
        nonnegative(x);
    }
}


There are many mentions of The One True Brace Style out there, but there is some confusion as to its true form. Some sources say it is the variation specified above, while others note it as just another "hacker jargon" term for K&R.

Variant: Linux kernel
A minor variant of the K&R style is the Linux kernel style, which is known for its extensive use in the source tree of the Linux kernel. Linus Torvalds strongly advises all contributors to follow it. The style borrows many elements from K&R:
 The kernel style uses tab stops (with the tab stops set every 8 characters) for indentation.
 Opening curly braces of a function go to the start of the line following the function header.
 Any other opening curly braces go on the same line as the corresponding statement, separated by a space.
 Labels in a switch statement are aligned with the enclosing block (there is only one level of indents).
 A single-statement body of a compound statement (such as if, while, and do-while) need not be surrounded by curly braces. If, however, one or more of the substatements in an if-else statement require braces, then both substatements should be wrapped inside curly braces.
 Line length is limited to 80 characters.

The Linux kernel style specifies that "if only one branch of a conditional statement is a single statement ... use braces in both branches":

int power(int x, int y)
{
        int result;

        if (y < 0) {
                result = 0;
        } else {
                result = 1;
                while (y-- > 0)
                        result *= x;
        }
        return result;
}


Variant: mandatory braces
Some advocate mandatory braces for control statements even when only a single statement is in their scope, i.e., bracing every if, else, while, etc., including single-line conditionals, so that insertion of a new line of code anywhere is always safe (i.e., such an insertion will not make the flow of execution disagree with the source-code indentation).

The cost of this style is that one extra full line is needed for the last block (except for intermediate blocks in if/else if/else constructs and do/while blocks).

Variant: Java While Java is sometimes written in other styles, a significant body of Java code uses a minor variant of the K&R style in which the opening brace is on the same line not only for the blocks inside a function, but also for class or method declarations.

This style is widespread largely because Sun Microsystems's original style guides used this K&R variant, and as a result most of the standard source code for the Java API is written in this style. It is also a popular indentation style for ActionScript and JavaScript, along with the Allman style.

Variant: Stroustrup
Stroustrup style is Bjarne Stroustrup's adaptation of K&R style for C++, as used in his books, such as Programming: Principles and Practice using C++ and The C++ Programming Language. Unlike the variants above, Stroustrup does not use a "cuddled else". Thus, Stroustrup would write:

if (x < 0) {
    puts("Negative");
    negative(x);
}
else {
    puts("Non-negative");
    nonnegative(x);
}


Stroustrup extends K&R style for classes, writing them as follows:

class Vector {
public:
    Vector(int s) :elem(new double[s]), sz(s) { }    // construct a Vector
    double& operator[](int i) { return elem[i]; }    // element access: subscripting
    int size() { return sz; }
private:
    double* elem;    // pointer to the elements
    int sz;          // number of elements
};

Stroustrup does not indent the labels public: and private:. Also, in this style, while the opening brace of a function starts on a new line, the opening brace of a class is on the same line as the class name.

Stroustrup allows writing short functions all on one line. Stroustrup style is a named indentation style available in the editor Emacs. Stroustrup encourages a K&R-derived style layout with C++ as stated in his modern C++ Core Guidelines.

AC 3 The documentation includes full internal and external documentation, with a level of detail that enables other programmers to analyse the program

Internal documentation
Computer software is said to have internal documentation if the notes on how and why various parts of the code operate are included within the source code as comments. It is often combined with meaningful variable names with the intention of providing potential future programmers a means of understanding the workings of the code. This contrasts with external documentation, where programmers keep their notes and explanations in a separate document. Internal documentation has become increasingly popular as it cannot be lost, and any programmer working on the code is immediately made aware of its existence and has it readily available.
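As a small illustrative sketch (the business rule, function and variable names below are invented for this example), internal documentation combines comments and meaningful names so that the source file itself explains how and why the code works:

#include <stdio.h>

/*
 * Calculates the gross weekly pay for an hourly employee.
 * Overtime (hours beyond STANDARD_HOURS) is paid at 1.5 times the
 * normal rate, per the assumed business rule used for this example.
 */
#define STANDARD_HOURS 40.0
#define OVERTIME_MULTIPLIER 1.5

double gross_weekly_pay(double hours_worked, double hourly_rate)
{
    /* No overtime: pay all hours at the normal rate. */
    if (hours_worked <= STANDARD_HOURS)
        return hours_worked * hourly_rate;

    /* Overtime: standard hours at the normal rate, the remainder at 1.5 times. */
    double overtime_hours = hours_worked - STANDARD_HOURS;
    return STANDARD_HOURS * hourly_rate
         + overtime_hours * hourly_rate * OVERTIME_MULTIPLIER;
}

int main(void)
{
    printf("Pay for 45 hours at 100.00: %.2f\n", gross_weekly_pay(45.0, 100.00));
    return 0;
}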


What are the examples of internal and external documentation? It depends on what you mean by internal and external. If you are referring to your audience, internal documentation is intended for the employees of your company and external documentation is intended for the customers of your company.

Here are a few examples: In a training environment, internal documentation may describe how to prepare for a training session, including making sure the environment you are going to use for training is set up correctly and that you are familiar with the course content and the commonly asked questions. The external documentation would be the supporting training documentation, study guide, resources, etc.

For writers, internal documentation includes a style guide, schedules, information on how the review process works, release process checklist, etc. External documentation is the content the documentation team develops for the customers.

For developers, internal documentation may include details on which tools to use for development, which coding practices to follow, how to use the build process, the process to release a build to QA for testing, and so on. It may also include details letting other internal teams know what to do to deploy or release a product.

If you are referring to code, internal documentation explains how the code works and external documentation explains how to use it. You might even have two types of internal documentation: one for your team and one for the customers who use the code.


AC 4 The documentation reflects the tested and implemented program, including changes made during testing of the program

Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.

Software testing involves the execution of a software component or system component to evaluate one or more properties of interest.


In general, these properties indicate the extent to which the component or system under test:
 meets the requirements that guided its design and development,
 responds correctly to all kinds of inputs,
 performs its functions within an acceptable time,
 is sufficiently usable,
 can be installed and run in its intended environments, and
 achieves the general result its stakeholders desire.

As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). The job of testing is an iterative process as when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones.
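As a minimal sketch of this idea (the function under test and the chosen test values are invented for illustration), a simple test executes a component with selected inputs and compares the actual results against the expected ones; a failing assertion exposes a defect:

#include <assert.h>
#include <stdio.h>

/* Component under test: returns the larger of two integers. */
static int max_of(int a, int b)
{
    return (a > b) ? a : b;
}

int main(void)
{
    /* Each assertion compares actual behaviour with the expected result. */
    assert(max_of(2, 3) == 3);      /* typical case           */
    assert(max_of(-5, -9) == -5);   /* negative inputs        */
    assert(max_of(7, 7) == 7);      /* boundary: equal inputs */

    puts("All tests passed.");
    return 0;
}

In practice, a test strategy selects cases such as these (typical values, boundaries and error conditions) precisely because exhaustive testing is not feasible.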

Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors. Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an agile approach, requirements, programming, and testing are often done concurrently.


You are now ready to go through a checklist. Be honest with yourself. Tick the box with either a √ or an X to indicate your response.

 I am able to Interpret a given specification to plan a computer program solution  I am able to Design a computer program to meet a business requirement  I am able to Create a computer program that implements the design  I am able to Test a computer program against the business requirements  I am able to Implement the program to meet business requirements  I am able to Document the program according to industry standards

You must think about any point you could not tick. Write this down as a goal. Decide on a plan of action to achieve these goals. Regularly review these goals.

My Goals and Planning: ______
