CYBER SECURITY GUIDELINE FOR SECURE SOFTWARE DEVELOPMENT LIFE CYCLE (SSDLC)

CyberSecurity Malaysia

November 2019

REGISTERED OFFICE:
CyberSecurity Malaysia,
Level 7 Tower 1,
Menara Cyber Axis,
Jalan Impact,
63000 Cyberjaya,
Selangor Darul Ehsan, Malaysia

COPYRIGHT © 2019 CYBERSECURITY MALAYSIA

The copyright of this document belongs to CyberSecurity Malaysia. No part of this document (whether in hardcopy or electronic form) may be reproduced, stored in a retrieval system of any nature, transmitted in any form or by any means either electronic, mechanical, photocopying, recording, or otherwise without the prior written consent of CyberSecurity Malaysia. The information in this document has been updated as accurately as possible until the date of publication.

NO ENDORSEMENT

Products and manufacturers discussed or referred to in this document, if any, are presented for informational purposes only and do not in any way constitute product approval or endorsement by CyberSecurity Malaysia.

TRADEMARKS

All terms mentioned in this document that are known to be trademarks or service marks have been appropriately capitalised. CyberSecurity Malaysia cannot attest to the accuracy of this information. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark.

DISCLAIMER

This document is for informational purposes only. It represents the current thinking of CyberSecurity Malaysia on the security aspects of the SSDLC environment. It does not establish any rights for any person and is not binding on CyberSecurity Malaysia or the public. The information appearing in this guideline is not intended to provide technical advice to any individual or entity. We urge you to consult with your own SSDLC advisor before taking any action based on information appearing in this guideline or any other documents to which it may be linked.

PUBLIC COMMENT

You may submit electronic comments and suggestions at any time for CyberSecurity Malaysia's consideration to [email protected]. Comments may not be acted upon by CyberSecurity Malaysia until the document is next revised or updated.

Acknowledgement

CyberSecurity Malaysia wishes to thank the following individuals who have contributed to and/or reviewed this document.

External Contributors/Reviewers:

Internal Contributors/Reviewers:

Contents

INTRODUCTION
1. Scope
2. Terms, Definitions, Abbreviated Terms and Acronyms
2.1 Terms and Definitions
2.2 Abbreviated Terms and Acronyms
3. Intended Audience
4. Secure Software Development Life Cycle (SSDLC)
5. Phase 1: Security Requirements
5.1 Sources for security requirement
5.2 Data Classification
5.3 Use case and misuse case modeling
5.4 Risk management
6. Phase 2: Security Design
6.1 Core Security Design Considerations
6.2 Additional Design Considerations
6.3 Threat modeling
7. Phase 3: Security Development
7.1 Common software vulnerabilities and controls
7.2 Secure software processes
7.3 Securing build environments
8. Phase 4: Security Testing
8.1 Attack surface validation
8.2 Test data management
9. Phase 5: Security Deployment
9.1 Software acceptance considerations
9.2 Verification and validation (V&V)
9.3 Certification and accreditation (C&A)
9.4 Installation
10. Phase 6: Security Maintenance
10.1 Operations, monitor and maintenance
10.2 Incident Management
10.3 Problem management
10.4 Change management
10.5 Disposal
11. SSDLC Checklists
11.1 Phase 1: Security Requirements (5)
11.2 Phase 2: Secure Design (6)
11.3 Phase 3: Security Development (7)
11.4 Phase 4: Security Testing (8)
11.5 Phase 5: Security Deployment (9)
11.6 Phase 6: Security Maintenance (10)
12. References
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix F

INTRODUCTION

This document provides a guideline for the Secure Software Development Life Cycle (SSDLC), highlighting the security tasks for each phase involved in the development process. The SSDLC consists of six (6) phases: security requirements, security design, security development, security testing, security deployment and security maintenance. This guideline describes the security tasks to be incorporated into every phase in order to produce secure software and to ensure the confidentiality, integrity and availability of the organization's information systems.

Applying security tasks throughout the development life cycle has become vital because it addresses several problems. The high cost of remediation when vulnerabilities are identified only after the software has been deployed is a major problem for organizations; such vulnerabilities can lead to a breach and consequent harm to the organization. It is therefore important for the organization to ensure that appropriate security controls and security tasks are in place throughout the development life cycle. The organization must plan for security so that it is incorporated from the beginning of any software development, assure that the appropriate security tasks are included in the design phase to meet the security requirements, develop the software securely, and verify that the security requirements have been met during implementation. The organization must also conduct ongoing reviews to maintain the appropriate level of security in the deployed software.

This guideline suggests several security tasks and controls to ensure the development of secure software from the earliest processes. Organizations can use this SSDLC guideline as a blueprint for applying security controls in all phases of the secure software development process.


1. Scope

This document provides a guideline on the specific security tasks of each phase of the Secure Software Development Life Cycle (SSDLC), to help the target audience incorporate security features into the development of software. The guideline focuses only on the development of secure software for web applications, and assumes that the components, code and frameworks used for development are under a controlled environment. In addition, cloud-based and external or third-party environments are not covered.

This document gives guidance to individuals and organizations on common security tasks throughout the six (6) phases of the SSDLC. The phases are the security requirements, security design, security development, security testing, security deployment and security maintenance phases. The selection of appropriate security tasks will assist the target audience in minimizing the potential risks in software development. A common example of a secure software architecture (Figure A-1) and an example of a secure software architecture with protection controls (Figure A-2) can be found in Appendix A.

The security tasks provided describe the importance and relevance of the steps to be conducted in all phases. Moreover, the guideline describes the security controls and steps that serve as guidance for implementing the security tasks.

The checklist provided contains all security tasks, separated into the six (6) phases. The target audience can use the checklist to confirm that all the security tasks have been followed or applied in the development of secure software.

2. Terms, Definitions, Abbreviated Terms and Acronyms

2.1 Terms and Definitions
For the purposes of this document, the following terms and definitions apply.

2.1.1
access control
means to ensure that access to assets is authorized and restricted based on business and security requirements

[ISO/IEC 27000: 2014(E)]

2.1.2
authentication
provision of assurance that a claimed characteristic of an entity is correct

[ISO/IEC 27000: 2014(E)]


2.1.3
availability
property of being accessible and usable upon demand by an authorized entity

[ISO/IEC 27000: 2014(E)]

2.1.4
confidentiality
property that information is not made available or disclosed to unauthorized individuals, entities, or processes

[ISO/IEC 27000: 2014(E)]

2.1.5
integrity
property of accuracy and completeness

[ISO/IEC 27000: 2014(E)]

2.1.6
objective
result to be achieved

[ISO/IEC 27000: 2014(E)]

2.1.7
risk
effect of uncertainty on objectives (2.1.6)

[ISO/IEC 27000: 2014(E)]

2.1.8
process
set of interrelated or interacting activities which transforms inputs into outputs

[ISO/IEC 27000: 2014(E)]

2.1.9
risk treatment
process (2.1.8) to modify risk (2.1.7)

[ISO/IEC 27000: 2014(E)]


2.1.10
residual risk
risk (2.1.7) remaining after risk treatment (2.1.9)

[ISO/IEC 27000: 2014(E)]

2.1.11
risk assessment
process of identifying the risks to system security and determining the probability of occurrence, the resulting impact, and additional safeguards that would mitigate this impact. Part of Risk Management and synonymous with Risk Analysis.

[NIST Special Publication 800-30]

2.1.12
spoofing
"Identity spoofing" is a key risk for applications that have many users but provide a single execution context at the application and database level. Users should not be able to become any other user or assume the attributes of another user.

[OWASP Code Review Guide Version 2.0]

2.1.13
tampering
Users can potentially change data delivered to them, return it, and thereby potentially manipulate client-side validation, GET and POST results, cookies, HTTP headers, and so forth. The application should also carefully check data received from the user and validate that it is sane and applicable before storing or using it.

[OWASP Code Review Guide Version 2.0]

2.1.14
repudiation
Users may dispute transactions if there is insufficient auditing or recordkeeping of their activity. For example, if a user says they did not make a financial transfer, and the functionality cannot track his/her activities through the application, then it is extremely likely that the transaction will have to be written off as a loss.

[OWASP Code Review Guide Version 2.0]


2.1.15
information disclosure
Users are rightfully wary of submitting private details to a system. Is it possible for an attacker to publicly reveal user data at large, whether anonymously or as an authorized user?

[OWASP Code Review Guide Version 2.0]

2.1.16
denial of service
Application designers should be aware that their applications may be subject to a denial of service attack. The use of expensive resources such as large files, complex calculations, heavy-duty searches, or long queries should be reserved for authenticated and authorized users, and not available to anonymous users.

[OWASP Code Review Guide Version 2.0]

2.1.17
elevation of privilege
If an application provides distinct user and administrative roles, then it is vital to ensure that the user cannot elevate his/her role to a higher privilege one.

[OWASP Code Review Guide Version 2.0]


2.2 Abbreviated Terms and Acronyms

CSM CyberSecurity Malaysia
DoS Denial of Service
DREAD Damage, Reproducibility, Exploitability, Affected Users, and Discoverability
EOL End-of-Life
FIPS Federal Information Processing Standards
IPL Initial Program Load
MyCC Malaysian Common Criteria
NIST National Institute of Standards and Technology
NT New Technology
NTLM New Technology LAN Manager
OWASP Open Web Application Security Project
POST Power-on self-test
SANS SysAdmin, Audit, Network and Security
SDLC Software Development Life Cycle
SSDLC Secure Software Development Life Cycle
STRIDE Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
WAF Web Application Firewall

3. Intended Audience

This guideline is created for a wide range of information systems audiences and professionals, including: i) individuals with information security management responsibilities (e.g. project managers, information security officers), who are usually involved from the early to the final development phases; ii) individuals with development responsibilities (e.g. software developers, programmers, software analysts, software architects), who are usually involved from the design to the deployment phases; and iii) other related stakeholders (software owners, consultants, auditors), who are usually involved in the early or final development phases.

4. Secure Software Development Life Cycle (SSDLC)

The Secure Software Development Life Cycle (SSDLC) follows an iterative process through the phases of security requirements, security design, security development, security testing, security deployment and security maintenance. The SSDLC methodology adapts security controls to the project life cycle to produce secure software.

The main purpose of the SSDLC methodology is to confirm that the essential components of the software environment are secure, starting from the earliest phase of the secure software development process. The SSDLC also yields cost savings through the early integration of


security within the SDLC, which helps avoid expensive design flaws and increases the long-term viability of software projects.

This guideline provides specific security tasks required for each phase of the SSDLC methodology. These are the common security controls involved in developing secure software. Figure 1 below presents the security tasks of the SSDLC phases.

[Figure 1: Security tasks of the SSDLC phases]


Developing secure software with the SSDLC is a process of handling and managing risks from the earliest phase. In the security requirements phase, security is captured as non-functional requirements in the software requirements specification documents. In the security design phase, the security specifications are translated into architectural blueprints and controls are identified based on risks. In the security development phase, the architectural blueprints are used to code the software and to apply controls that protect the software and its data.

The security testing phase validates and verifies the functionality and security of the software against the requirements specification. In the security deployment phase, verification is conducted to ensure that the software meets the specified functional and assurance requirements. Finally, the security maintenance phase monitors ongoing operations. Maintenance includes addressing incidents impacting the software and patching the software to reduce its chances of being exploited by hackers and other threats.

5. Phase 1: Security Requirements

Objective
1) To identify and define relevant security requirements for the software being developed.
2) To identify security preparations for software development.
3) To elicit protection needs using data classification, use case and misuse case modeling and risk management.

There are four (4) types of security tasks that should be considered by the audience to accomplish the security requirements phase: i) sources for security requirements; ii) data classification; iii) use case and misuse case modeling; and iv) risk management. The details are provided in the following sub-sections.

5.1 Sources for security requirement

Objective
1) To explicitly define and address the security objectives or goals of the organization.
2) To identify which requirements are applicable to the business context and the software functionality serving that context.

Implementation guide

5.1.1 Identify core security requirements
i) Confidentiality requirements
To address protection against the unauthorized disclosure of data or information that is either private or sensitive in nature. Confidentiality protection mechanisms are as follows:
a. Cryptography is a protection mechanism whose goal is to prevent the disclosure of information deemed secret. The types of mechanisms include:


(1) overt¹ mechanisms such as encryption and hashing. The goal of overt secret writing is to make the information humanly indecipherable or unintelligible even if disclosed.
(2) covert² mechanisms such as steganography and digital watermarking. The goal of covert secret writing is to hide information within itself or in some other media or form.

b. Masking is a weaker form of confidentiality protection mechanism in which the original information is either asterisked or X'ed out. This is primarily used to protect against shoulder surfing attacks, which are characterized by someone looking over another's shoulder and observing sensitive information. The masking of credit card numbers, except for the last four digits, when they are printed on receipts or displayed on a screen is an example of masking providing confidentiality protection.

Confidentiality requirements need to be defined throughout the information life cycle, from the origin of the data in question to its retirement. It is necessary to explicitly state confidentiality requirements for non-public data:
a. In Transit: when the data is transmitted over unprotected networks, i.e., data-in-motion.
b. In Processing: when the data is held in computer memory or media for processing.
c. In Storage: when the data is at rest, within transactional systems as well as non-transactional systems including archives, i.e., data-at-rest.
Confidentiality requirements may also be time bound.

ii) Integrity requirements
To address two primary areas of software security, namely reliability assurance and protection or prevention against unauthorized modifications. Integrity refers not only to protection of the software against modification (system integrity) but also of the data that the software handles (data integrity).
Reliability essentially refers to ensuring that the software functions as it is designed and expected to. It is also meant to provide security controls that will ensure that the accuracy of the software and data is maintained.
Integrity protection also takes into consideration the completeness and consistency of the software or the data that the software handles.

iii) Availability requirements
To ensure that there is no disruption to business operations. Availability requirements are those software requirements that ensure protection against destruction of the

¹ Overt [over]: over and above all; out in the open, obvious, for all to see, not hidden.
² Covert [cover]: hidden, camouflaged, cryptic; under cover.

software and/or data, thereby assisting in the prevention of Denial of Service (DoS) to authorized users.

iv) Authentication requirements
To verify and assure the legitimacy and validity of the identity claims that an entity presents for verification.
Authentication credentials could be provided by different factors, or a combination of factors, that include knowledge, ownership or characteristics.
Two-factor authentication is when two factors are used to validate an entity's claim and/or credentials. Multi-factor authentication is when more than two factors are used for authentication.
The most common forms of authentication are as below:
a. Anonymous authentication is the means of granting access to public areas of your software without prompting for credentials such as username and password.
b. Basic authentication is part of the HyperText Transfer Protocol (HTTP) 1.0 specification, and is characterized by the client browser prompting the user to supply their credentials.
c. Digest authentication is a challenge/response mechanism which, unlike Basic authentication, does not send the credentials over the network in clear text or encoded form, but instead sends a message digest (hash value) of the original credential.
d. Integrated authentication, commonly known as New Technology LAN Manager (NTLM) authentication or New Technology (NT) challenge/response authentication, sends the credentials as a digest, like Digest authentication.
e. Client certificate-based authentication works by validating the identity of the certificate holder. These certificates are issued to organizations or users by a certification authority (CA) that vouches for the validity of the holder.
f. Forms authentication requires the user to supply a username and password for authentication purposes, and these credentials are validated against a directory store which can be the active directory, a database or a configuration file.
g. Token-based authentication is usually used in conjunction with forms authentication, where a username and password are supplied for verification. Upon verification, a token is issued to the user who supplied the credentials. The token is then used to grant access to requested resources, so the username and password need not be passed on each call.
h. Smart card-based authentication provides the ownership factor (something you have). Smart cards contain a programmable embedded microchip that is used to store the authentication credentials of the owner.
i. Biometric authentication uses biological characteristics (something you are) to provide the identity's credentials. Biological features such as retinal blood vessel patterns, facial features and fingerprints are used for identity verification purposes.

v) Authorization requirements
To confirm that an authenticated entity has the needed rights and privileges to access and perform actions on a requested resource.
Identify the subjects and objects. Subjects are the entities that request access and objects are the items that subjects act upon. A subject can be a human user or a software process. Actions on objects need to be explicitly captured; the most common are the Create, Read, Update or Delete (CRUD) data operations. The access control models to apply are primarily of the following types:
a. Discretionary Access Control (DAC) restricts access to objects based on the identity of subjects and/or the groups to which they belong. DAC is implemented either by using identities or roles. DAC is often implemented using access control lists (ACLs): the relationship between the individuals (subjects) and the resources (objects) is direct, and the mapping of individuals to resources is done by the owner.
b. Non-Discretionary Access Control (NDAC) is characterized by the software enforcing the security policies. It does not rely on the subject's compliance with security policies. The non-discretionary aspect is that it is unavoidably imposed on all subjects.
c. Mandatory Access Control (MAC) restricts access to objects for subjects based on the sensitivity of the information contained in the objects. The sensitivity is represented by a label. Only subjects that have the appropriate privilege and formal authorization (i.e., clearance) are granted access to the objects.
d. Role-Based Access Control (RBAC) grants individuals (subjects) access to a resource (object) based on their assigned role. Permissions to operate on objects, such as Create, Read, Update or Delete, are also defined and determined based on the responsibilities and authority (permissions) within the job function.
e. Resource-Based Access Control grants access based on the resources. Resource-based access control models are useful in architectures that are distributed and multi-tiered, including service-oriented architectures.

vi) Accountability requirements
To assist in building a historical record of user actions. Audit trails can help detect when an unauthorized user makes a change or an authorized user makes an unauthorized change, both of which are cases of integrity violations.

Examples of Security Requirements can be found in Appendix B.

5.1.2 Identify general requirements
i) Session Management requirements
Upon successful authentication, a session identifier (ID) is issued to the user, and that session ID is used to track user behavior and maintain the authenticated state for that user until that session is abandoned or the state changes from authenticated to not authenticated (a minimal sketch of this flow is given at the end of 5.1).


ii) Errors & Exceptions Management requirements
Errors and exceptions are potential sources of information disclosure.
iii) Configuration Parameters Management requirements
The software configuration parameters and the code which make up the software need protection against hackers. These parameters and code usually need to be initialized before the software can run.

5.1.3 Identify operational requirements
To identify requirements such as the number of database connections for concurrent access, interdependencies with other applications in the computing ecosystem, and the shared and computing resources required. Identify the needed capabilities and dependencies of the software as it serves the business with its intended functionality.
i) Deployment environment requirements
To identify and capture the pertinent requirements about the environment in which the software will be deployed.
ii) Archiving requirements
Whether archives are maintained as a means of business continuity or to comply with a regulatory requirement or organizational policy, the archiving requirements must be explicitly identified and captured.
iii) Anti-piracy requirements
Code obfuscation, code signing, anti-tampering, licensing and IP protection mechanisms should be included as part of the requirements documentation, especially if the organization is in the business of building and selling commercial software.

5.1.4 Identify other requirements
i) Sequencing and timing requirements
Sequencing and timing design flaws in software can lead to what are commonly known as race conditions or Time of Check/Time of Use (TOC/TOU) attacks. Race conditions are in fact one of the most common flaws observed in software design.
ii) International requirements
International requirements can be of two types: legal and technological.
Legal requirements ensure that the software is not in violation of any regulations.
Technological requirements determine character encoding and display direction.
iii) Procurement requirements
The identification and determination of software security requirements is no less important when a decision is made to procure the software instead of building it in-house. Brief explanations of the procurement requirements can be found in Appendix C.
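To illustrate the authentication requirements in 5.1.1 iv) and the session management requirement in 5.1.2 i), the following minimal sketch shows forms authentication validated against a stored salted hash, followed by the issuance of a random session identifier. It is an illustrative assumption only; the names used here (USER_STORE, SESSIONS, login) are hypothetical and would be replaced by the organization's own directory store and session framework.

```python
import hashlib
import hmac
import secrets

# Hypothetical stores: username -> (salt, salted password hash), session ID -> username.
USER_STORE = {}
SESSIONS = {}

def register(username, password):
    """Store only a salted hash of the password, never the clear text."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    USER_STORE[username] = (salt, digest)

def login(username, password):
    """Forms authentication: verify the credentials, then issue a session ID."""
    record = USER_STORE.get(username)
    if record is None:
        return None
    salt, expected = record
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    if not hmac.compare_digest(digest, expected):  # constant-time comparison
        return None
    session_id = secrets.token_urlsafe(32)  # unpredictable session identifier
    SESSIONS[session_id] = username
    return session_id

register("alice", "correct horse battery staple")
print(login("alice", "wrong password"))                              # None
print(login("alice", "correct horse battery staple") is not None)    # True
```

The salted hash avoids storing reversible credentials, and the random session identifier keeps the authenticated state unpredictable for the duration of the session.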


5.2 Data Classification

Objective
To ensure that data or information, as the most valuable digital asset, will be protected.

Implementation guide

5.2.1 Types of data
To classify data into the following types:
i) Structured data
Refers to data which is organized into an identifiable structure. An example of structured data is a database (stored in columns and rows).
ii) Unstructured data
Refers to data which has no identifiable structure. Examples of unstructured data include images, videos, emails, documents and text.

5.2.2 Labeling the data
Data classification is the effort to assign labels (a level of sensitivity) to information (data) assets based on the potential impact to confidentiality, integrity and availability (CIA) upon disclosure, alteration or destruction.

5.2.3 Data ownership and roles
Decisions on how to classify data, who has access and at what level of access, etc. are decisions that are to be made by the business/data owner.

5.2.4 Data lifecycle management (DLM)
A policy-based approach, involving procedures and practices, to protect data throughout the information life cycle: from the time it is created to the time it is disposed of or deleted.

5.2.5 Privacy requirements
Data classification can help in identifying data that will need to have privacy protection requirements applied. Categorizing the data into privacy tiers, based on impact upon disclosure, alteration and/or destruction, can provide insight into ensuring that appropriate levels of privacy controls are in place.

Best practice guidelines for data privacy that need to be included in software requirements analysis, design and architecture can be addressed by complying with the following rules:
i) If you don't need it, don't collect it.
ii) If you need to collect it for processing only, collect it only after you have informed the user that you are collecting their information and they have consented, but don't store it.


iii) If you have the need to collect it for processing and storage, then collect it, with user consent, and store it only for an explicit retention period that is compliant with organizational policy and/or regulatory requirements.
iv) If you have the need to collect it and store it, then don't archive it if the data has outlived its usefulness and there is no retention requirement.

Privacy requirements and controls are as below:
i) Data anonymization: by permanently and completely removing personal identifiers from data, anonymity can be assured. Anonymization is the process of removing private information from the data. Anonymized data cannot be linked to any one individual account.
The anonymization techniques that are useful to assure data privacy are replacement (also known as substitution), suppression (also known as omission), generalization (specific identifiable information is replaced using a more generalized form), and perturbation (also known as randomization, which involves making random changes to the data).
ii) Disposition: all software is vulnerable until it, and the data it processes, transmits and stores, is disposed of in a secure manner. This is particularly important if the data is sensitive and/or personally identifiable.
Most privacy regulations require the implementation of policies and procedures to address the final disposition of private information and/or the sanitization of electronic hardware and media on which it is stored, before the hardware is re-provisioned for re-use.
iii) Security models: a formal abstraction of the security policy, comprising the set of security requirements that need to be part of the software so that it is resistant to attack, can tolerate the attacks that cannot be resisted, and can recover quickly from an undesirable state if compromised.
iv) Pseudonymization: a data management and de-identification procedure by which personally identifiable information fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. A single pseudonym for each replaced field or collection of replaced fields makes the data record less identifiable while remaining suitable for data analysis and data processing (a minimal sketch is given at the end of 5.2).

Organizations should also consider privacy legislation such as:
i) In Malaysia, the Laws of Malaysia, Act 709, Personal Data Protection Act 2010 (PDPA).
ii) In the United States, the U.S. privacy laws: the Privacy Act (5 USC 552a); consumer credit is addressed in the Fair Credit Reporting Act, healthcare information in the Health Insurance Portability and Accountability Act (HIPAA), financial service organizations in the Gramm-Leach-Bliley Act (GLBA), children's web access in the Children's Online Privacy Protection Act (COPPA), and student records in the Family Educational Rights and Privacy Act (FERPA).
iii) Non-U.S. privacy principles, such as, for European countries, the European Union (EU) General Data Protection Regulation (GDPR).
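As an illustration of the pseudonymization control described above, the following minimal sketch replaces direct identifiers with keyed pseudonyms so that records remain usable for analysis but cannot be linked back to an individual without the separately held key. The field names and the use of HMAC-SHA-256 are illustrative assumptions, not a prescribed technique.

```python
import hashlib
import hmac
import secrets

# The pseudonymization key must be stored separately from the data set
# (e.g., in a key vault); whoever holds it can re-identify records.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(value):
    """Replace an identifier with a deterministic keyed pseudonym."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Ali bin Abu", "ic_number": "880101-14-5678", "diagnosis": "J45"}

# Direct identifiers are replaced; analytic fields are left intact.
pseudonymized = {
    "name": pseudonymize(record["name"]),
    "ic_number": pseudonymize(record["ic_number"]),
    "diagnosis": record["diagnosis"],
}
print(pseudonymized)
```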


5.3 Use case and misuse case modeling

Objective
To identify possible misbehavior and recommend relevant security requirements based on the functionality of the software being developed.

Implementation guide

5.3.1 Analyze the use case scenarios
To identify the intended behavior of the software. A use case describes behavior that the software owner intends. Use case modeling and diagramming is very useful for specifying requirements. It can be effective in reducing ambiguous and incompletely articulated business requirements by explicitly specifying exactly when and under what conditions certain behavior occurs.
Use case modeling includes identifying actors, intended software behavior (use cases), and the sequences and relationships between the actors and the use cases. Actors may be an individual, a role or non-human in nature.
When a use case is represented diagrammatically, actors are represented by stick figures and use case scenarios by ellipses. Arrows representing the interactions or relationships connect the use cases and the actors.

5.3.2 Analyze the misuse case scenarios
To identify weaknesses that could become threats. Misuse cases model the negative scenarios. A negative scenario is an unintended behavior of the software, one that the software owner does not want to occur within the context of the use case. Misuse cases provide insight into the threats that can occur against the software.
In misuse case modeling, mis-actors and unintended scenarios or behavior are modeled. Misuse cases may be intentional or accidental. Misuse cases can be created by brainstorming negative scenarios like an attacker.

5.3.3 Creating an attack model
To create an attack model with explicit consideration of known attacks or attack types. Given a set of requirements and a list of threats, the idea here is to cycle through a list of known attacks one at a time and to think about whether the "same" attack applies to the software.
To create an attack model, do the following:
i) Select those attack patterns relevant to the software. Build misuse cases around those attack patterns.
ii) Include anyone who can gain access to the software, because threats must encompass all potential sources of danger to the software.

5.3.4 Select mitigation controls
To propose mitigation controls based on the attacks identified in the previous step. Proposing the security controls at this point helps in addressing the security requirements (a small worked example follows below).

Figure D-1 in Appendix D depicts examples of a use case and misuse cases.
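The following sketch records, as a simple data structure, how one use case, its misuse case, the related attack pattern and the selected mitigation could be captured while working through 5.3.1 to 5.3.4. The scenario chosen (a login use case threatened by SQL injection) and the field names are purely illustrative.

```python
# Illustrative traceability record linking a use case to its misuse case,
# the attack pattern considered, and the mitigations selected (5.3.1-5.3.4).
attack_model = [
    {
        "actor": "Customer",
        "use_case": "Log in with username and password",
        "mis_actor": "External attacker",
        "misuse_case": "Bypass authentication by injecting SQL through the login form",
        "attack_pattern": "SQL injection",
        "mitigations": [
            "Parameterized queries / prepared statements",
            "Server-side input validation",
            "Generic error messages on failed login",
        ],
        "security_requirement": "The application shall validate all user input and "
                                "shall never build SQL statements from raw user input.",
    },
]

for entry in attack_model:
    print(f"{entry['misuse_case']} -> mitigations: {', '.join(entry['mitigations'])}")
```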

5.4 Risk management

Objective
1) To enable securing the software that stores, processes, or transmits organizational information;
2) To enable the management to make well-informed risk management decisions to justify the expenditures that are part of a software budget; and
3) To assist the management in authorizing (or accrediting) the software based on the supporting documentation resulting from the performance of risk management.

Implementation guide

5.4.1 Risk assessment
To determine the extent of the potential threats and the risks associated with the software throughout its SSDLC, and to identify appropriate controls for reducing or eliminating risk during the risk mitigation process.
The risk assessment methodology encompasses nine (9) primary steps, whereby steps 2, 3, 4, and 6 can be conducted in parallel after Step 1 has been completed. The steps are as below.

Step 1 – System Characterization
To define the scope of the effort. In this step, the boundaries of the software are identified, along with the resources and the information that constitute the system. Characterizing the software establishes the scope of the risk assessment effort, delineates the operational authorization (or accreditation) boundaries, and provides information (e.g., hardware, software, system connectivity, and responsible division or support personnel) essential to defining the risk.

Step 2 – Threat Identification
To identify the potential threat-sources and compile a threat statement listing the potential threat-sources that are applicable to the software being evaluated. A threat-source is defined as any circumstance or event with the potential to cause harm to the software. Common threat-sources can be natural, human, or environmental.

Step 3 – Vulnerability Identification
The analysis of the threats to the software must include an analysis of the vulnerabilities associated with the software environment. The goal of this step is to develop a list of software vulnerabilities (flaws or weaknesses) that could be exploited by the potential threat-sources.

Step 4 – Control Analysis
To analyze the controls that have been implemented, or are planned for implementation, by the organization to minimize or eliminate the likelihood (or probability) of a threat exercising a software vulnerability.
To derive an overall likelihood rating that indicates the probability that a potential vulnerability may be exercised within the construct of the associated threat environment (Step 5 below), the implementation of current or planned controls must be considered.

Step 5 – Likelihood Determination
To derive an overall likelihood rating that indicates the probability that a potential vulnerability may be exercised within the construct of the associated threat environment, the following governing factors must be considered:
i) Threat-source motivation and capability
ii) Nature of the vulnerability
iii) Existence and effectiveness of current controls.

Step 6 – Impact Analysis
To determine the adverse impact resulting from a successful threat exercise of a vulnerability. Before beginning the impact analysis, it is necessary to obtain the following information:
i) System mission (e.g., the processes performed by the software)
ii) System and data criticality (e.g., the system's value or importance to an organization)
iii) System and data sensitivity.

Step 7 – Risk Determination
To assess the level of risk for the software. The determination of risk for a threat/vulnerability pair can be expressed as a function of:
i) The likelihood of a given threat-source attempting to exercise a given vulnerability
ii) The magnitude of the impact should a threat-source successfully exercise the vulnerability
iii) The adequacy of planned or existing security controls for reducing or eliminating risk.
(A worked sketch combining likelihood and impact into a risk level is given at the end of this section.)

Step 8 – Control Recommendations
Controls that could mitigate or eliminate the identified risks, as appropriate to the organization's operations, are provided. The goal of the recommended controls is to reduce the level of risk to the software and its data to an acceptable level. The following factors should be considered in recommending controls and alternative solutions to minimize or eliminate identified risks:
i) Effectiveness of recommended options (e.g., system compatibility)
ii) Legislation and regulation
iii) Organizational policy
iv) Operational impact
v) Safety and reliability.

Step 9 – Results Documentation
Once the risk assessment has been completed (threat-sources and vulnerabilities identified, risks assessed, and recommended controls provided), the results should be documented in an official report or briefing.
A risk assessment report is a management report that helps senior management, the mission owners, make decisions on policy, procedural, budget, and system operational and management changes. Unlike an audit or investigation report, which looks for wrongdoing, a risk assessment report should not be presented in an accusatory manner but as a systematic

and analytical approach to assessing risk, so that senior management will understand the risks and allocate resources to reduce and correct potential losses.

5.4.2 Risk mitigation
To prioritize, evaluate, and implement the appropriate risk-reducing controls recommended by the risk assessment process.

i) Risk mitigation options
The organization should consider the following risk mitigation options. It may not be practical to address all identified risks, so priority should be given to the threat and vulnerability pairs that have the potential to cause significant mission impact or harm. The risk mitigation options are:
a. Risk Assumption. To accept the potential risk and continue operating the software, or to implement controls to lower the risk to an acceptable level.
b. Risk Avoidance. To avoid the risk by eliminating the risk cause and/or consequence (e.g., skip certain functions of the software or shut down the software when risks are identified).
c. Risk Limitation. To limit the risk by implementing controls that minimize the adverse impact of a threat exercising a vulnerability (e.g., use of supporting, preventive, detective controls).
d. Risk Planning. To manage risk by developing a risk mitigation plan that prioritizes, implements, and maintains controls.
e. Research and Acknowledgment. To lower the risk of loss by acknowledging the vulnerability or flaw and researching controls to correct the vulnerability.
f. Risk Transference. To transfer the risk by using other options to compensate for the loss, such as purchasing insurance.

ii) Risk mitigation strategy
To provide guidance on actions to mitigate risks from intentional human threats:
a. When a vulnerability (or flaw, weakness) exists, implement assurance techniques to reduce the likelihood of the vulnerability being exercised.
b. When a vulnerability can be exercised, apply layered protections, architectural designs, and administrative controls to minimize the risk of, or prevent, this occurrence.
c. When the attacker's cost is less than the potential gain, apply protections to decrease the attacker's motivation by increasing the attacker's cost (e.g., the use of software controls, such as limiting what a software user can access and do, can significantly reduce an attacker's gain).
d. When the loss is too great, apply design principles, architectural designs, and technical and nontechnical protections to limit the extent of the attack, thereby reducing the potential for loss.


iii) Approach for control implementation
To take control actions with the following steps:

Step 1 – Prioritize Actions
Prioritize the implementation actions based on the risk levels.

Step 2 – Evaluate Recommended Control Options
To select the most appropriate control option for minimizing risk. Analyze the feasibility (e.g., compatibility, user acceptance) and effectiveness (e.g., degree of protection and level of risk mitigation) of the recommended control options.

Step 3 – Conduct Cost-Benefit Analysis
To aid management in decision making and to identify cost-effective controls, a cost-benefit analysis is conducted.

Step 4 – Select Control
To determine the most cost-effective control(s) for reducing risk to the organization's mission.

Step 5 – Assign Responsibility
To identify and assign appropriate persons (in-house personnel or external contracting staff) who have the appropriate expertise and skillsets to implement the selected control.

Step 6 – Develop a Safeguard Implementation Plan
To develop a safeguard implementation plan (or action plan).

Step 7 – Implement Selected Control(s)
To implement the controls, which may lower the risk level but not eliminate the risk (residual risk³).

iv) Control categories
To consider technical, management, and operational security controls, or a combination of such controls, to maximize the effectiveness of controls for the software and the organization.
Technical security controls for risk mitigation can be configured to protect against given types of threats. These controls may range from simple to complex measures and usually involve software architectures; engineering disciplines; and security packages with a mix of hardware, software, and firmware.
Management security controls, in conjunction with technical and operational controls, are implemented to manage and reduce the risk of loss and to protect an organization's mission. Management controls focus on the stipulation of information protection policies, guidelines, and standards, which are carried out through operational procedures to fulfill the organization's goals and missions.

³ Residual risk is the threat that remains after all efforts to identify and eliminate risk have been made.

v) Cost-benefit analysis
To allocate resources and implement cost-effective controls, organizations, after identifying all possible controls and evaluating their feasibility and effectiveness, should conduct a cost-benefit analysis for each proposed control to determine which controls are required and appropriate for their circumstances.

vi) Residual risk
To analyze the extent of the risk reduction generated by the new or enhanced controls in terms of the reduced threat likelihood or impact.

5.4.3 Evaluation and assessment
To emphasize good practice and the need for ongoing risk evaluation and assessment. There should be a specific schedule for assessing and mitigating mission risks, but the periodically performed process should also be flexible enough to allow changes where warranted, such as major changes to the software and processing environment resulting from new policies and technologies.
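As a worked illustration of Step 5 to Step 7 above, the sketch below combines a qualitative likelihood rating with an impact rating to derive a risk level, in the spirit of the NIST SP 800-30 risk matrix. The three-level scale and the threshold values are illustrative assumptions; organizations should calibrate their own scales.

```python
# Qualitative ratings mapped to numeric weights (illustrative scale).
LIKELIHOOD = {"Low": 0.1, "Medium": 0.5, "High": 1.0}
IMPACT = {"Low": 10, "Medium": 50, "High": 100}

def risk_level(likelihood, impact):
    """Combine likelihood and impact into a qualitative risk level."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "High"
    if score > 10:
        return "Medium"
    return "Low"

# Example threat/vulnerability pair: SQL injection against the login form.
print(risk_level("High", "High"))    # High   (score 100)
print(risk_level("Medium", "High"))  # Medium (score 50)
print(risk_level("Low", "Low"))      # Low    (score 1)
```

The derived level then drives the prioritization of actions in the control implementation approach described in 5.4.2 iii).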

6. Phase 2: Security Design

Objective
1) To design a secure architecture by considering security elements.
2) To secure the software architecture by understanding and analyzing threats against the software before it is built.

There are three (3) types of security tasks to accomplish the security design phase: i) core security design considerations; ii) additional design considerations; and iii) threat modeling. The details are provided in the following sub-sections.

6.1 Core Security Design Considerations

Objective
To design the software to address the core security elements of confidentiality, integrity, availability, authentication, authorization, and auditing.

Implementation guide

6.1.1 Confidentiality design
To assure disclosure protection, which can be achieved in several ways using cryptographic and masking techniques.
Masking is useful for disclosure protection when data is displayed on the screen or on printed forms, as the sketch below illustrates.
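The following minimal sketch shows display masking of a credit card number, keeping only the last four digits visible, as described in the confidentiality requirements of 5.1.1. The function name and the fixed format are illustrative assumptions.

```python
def mask_card_number(card_number, visible_digits=4):
    """Replace all but the last few digits with asterisks for display only.

    Masking protects against shoulder surfing on screens and receipts;
    it does not protect the stored value, which still requires encryption
    and access control.
    """
    digits = card_number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - visible_digits) + digits[-visible_digits:]

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```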


Cryptography, used when the data is transmitted or stored in transactional data stores or offline archives, provides assurance of confidentiality. The work factor of cryptographic protection is exponentially dependent on:
i) Key size (length): the length of the key that is used in the algorithm.
ii) Key management: protecting the secrecy of the key, which includes the generation, exchange, storage, rotation, archiving, and destruction of the key.
iii) Rotation (swapping) of keys: involves the expiration of the current key and the generation, exchange, and storage of a new key.

Encryption algorithms are primarily of two types:
i) Symmetric algorithms: use a single key for encryption and decryption operations that is shared between the sender and the receiver.
ii) Asymmetric algorithms: use two keys; one is to be held secret and is referred to as the private key, while the other key is disclosed to anyone with whom secure communications and transactions need to occur. The key that is publicly disclosed to everyone is known as the public key.
a. Digital Certificates specify formats for the public key and can be used by anyone to verify the authenticity of the certificate itself, because it contains the digital signature of the certificate authority.
b. Digital Signatures provide identity verification and ensure that the data or message has not been tampered with, since the digital signature that is used to sign the message cannot be easily imitated by someone unless it is compromised. They also provide non-repudiation.

6.1.2 Integrity design
To assure that there is no unauthorized modification of the software or data, using any one of the following techniques or a combination of them:
i) Hashing (hash functions): to condense variable-length inputs into an irreversible, fixed-size output known as a message digest or hash value.
ii) Referential Integrity: integrity assurance of the data, especially in a relational database management system (RDBMS), which ensures that data is not left in an orphaned state. It uses primary keys and related foreign keys in the database to assure data integrity.
iii) Resource Locking: when two concurrent operations are not allowed on the same object (say, a record in the database), because one of the operations locks that record against any changes until it completes its operation.

6.1.3 Availability design
Destruction and DoS protection are achieved by proper coding of the software. Since this is the design phase, configuration requirements such as connection pooling and the use of cursors and looping constructs can be looked at.
Techniques used to design the software for availability are as follows:
i) Replication: protects against a single point of failure, which is characterized by having no redundancy capabilities and can undesirably affect end-users when a failure occurs.

ii) Failover: provides automatic switching from an active transactional software, server, system, hardware component or network to a standby (or redundant) system. (Switchover is the manual equivalent.)
iii) Scalability: the ability to handle an increasing (or growing) amount of work without degradation in functionality or performance.

6.1.4 Authentication design
To determine the type of authentication that is required as specified in the requirements documentation, and to consider the use of multi-factor authentication and single sign-on (SSO).
Multi-factor authentication, the use of more than one factor to authenticate a principal (user or resource), is recommended because it provides heightened security.
If there is a need to implement SSO, wherein the principal's asserted identity is verified once and the verified credentials are passed on to other systems or applications, usually using tokens, then it is crucial to factor both the performance impact and the security impact into the design of the software.

6.1.5 Authorization design
To give attention to the impact on performance and to the principles of separation of duties and least privilege. The type of authorization to be implemented according to the requirements must be determined, such as role-based or resource-based authorization.
If roles are used for authorization, the design should ensure that there are no conflicting roles that circumvent the separation of duties principle. For example, a user cannot be in a teller role and also in an auditor role for a financial transaction.
Designing for authorization can be accomplished using entitlement management, which is about granular access control.

Access control mechanisms appropriate for general objects of unspecified types are listed below:
i) Directory
One simple way to protect an object is to use a mechanism that works like a file directory. Imagine we are trying to protect files (the set of objects) from the users of a computing system (the set of subjects). Every file has a unique owner who possesses "control" access rights (including the right to declare who has what access) and can revoke access for any person at any time. Each user has a file directory, which lists all the files to which that user has access.
ii) Access Control List (ACL)
There is one such list for each object, and the list shows all subjects who should have access to the object and what their access is. This approach differs from the directory approach because there is one access control list per object, whereas a directory is created for each subject.
iii) Access control matrix
A table in which each row represents a subject, each column represents an object, and each entry is the set of access rights for that subject to that object.
iv) Capability

An unforgeable (not fake) token that gives the holder certain rights to an object. In theory, a subject can create new objects and can specify the operations allowed on those objects. For example, users can create objects, such as files, data segments, or sub-processes, and can also specify the acceptable kinds of operations, such as read, write, and execute. A user can also create completely new objects, such as new data structures, and can define types of accesses previously unknown to the software.

v) Procedure-oriented access control
Implies the existence of a procedure that controls access to objects (for example, by performing its own user authentication to strengthen the basic protection provided by the basic operating system). The procedure forms a capsule around the object, permitting only certain specified accesses.

6.1.6 Accountability design
To audit the software, especially in the event of a breach, primarily for forensic purposes. Log data should include the 'who', 'what', 'where', and 'when' aspects of software operations. As part of the 'who', it is important not to forget non-human actors such as batch processes and services or daemons. A minimal logging sketch is given below.
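The sketch below illustrates the accountability design in 6.1.6: every security-relevant operation is written to an audit trail carrying the who, what, where and when of the event, including non-human actors. The logger configuration and field names are illustrative assumptions, not a mandated format.

```python
import json
import logging
from datetime import datetime, timezone

# A dedicated audit logger; in production the records would be shipped to a
# protected, append-only log store rather than printed to the console.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_logger = logging.getLogger("audit")

def audit(actor, action, resource, source, outcome):
    """Record who did what, to which resource, from where, when, and the result."""
    event = {
        "when": datetime.now(timezone.utc).isoformat(),  # when
        "who": actor,                                    # human user or batch process
        "what": action,                                  # operation performed
        "resource": resource,                            # object acted upon
        "where": source,                                 # origin, e.g. client IP or host
        "outcome": outcome,                              # success / failure
    }
    audit_logger.info(json.dumps(event))

audit("alice", "UPDATE", "customer/1029", "10.0.0.12", "success")
audit("nightly-batch", "EXPORT", "report/daily", "app-server-01", "success")
```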

6.2 Additional Design Considerations

Objective
To design the software to address other design considerations that are needed when building secure software.

Implementation guide

6.2.1 Programming languages
To determine the programming language that will be used to implement the design, which brings with it its inherent risks or security benefits. There are two (2) main types of programming languages:
i) Unmanaged code, e.g. C/C++, where the execution of the code is not managed by any runtime execution environment but is directly executed by the operating system.
ii) Managed code, e.g. Java and .NET (including C#, VB.NET), where the code is not executed by the operating system directly but is instead managed by a runtime environment. Security and non-security services such as memory management, exception handling, bounds checking, garbage collection, and type safety checking can be leveraged from the runtime environment, and security checks can be asserted before the code executes.

6.2.2 Data type, format, range and length
To assure integrity, consider the data type, format, range and length in the design.
i) Primitive or built-in data types are such as Character, Integer, Floating-point number and Boolean.

ii) User-defined data types, created by programmers, are not recommended from a security standpoint because they potentially increase the attack surface.
iii) The set of values and the permissible operations on that value set are defined by the data type; this is the basis of strongly typed programming languages.
iv) Conversion mismatches and casting or conversion errors can be detrimental to the secure state of the software. They include:
   a. Widening conversion (expansion) – the data type is converted from a smaller size range to a larger size range. The potential loss of data here is a loss of precision.
   b. Narrowing conversion (truncation) – the data type is converted from a larger size range to a smaller size range. This can cause data loss through truncation if the value being stored is greater than the range allowed by the new data type.

6.2.3 Database security
To ensure the reliability, resiliency and recoverability of software that depends on the data stored in the database.
Attacks that impact database security design are:
i) Inference attack – the attacker obtains sensitive information about the database from presumably hidden and trivial pieces of information (legitimately obtained by the attacker) using data mining techniques, without directly accessing the database.
ii) Aggregation attack – information at different security classification levels, which is primarily not sensitive in isolation, ends up becoming sensitive information when pieced together as a whole.

Protection design considerations concerning database assets are:
i) Polyinstantiation – to provide several instances (or versions) of the database information, so that what is viewed by a user depends on the security clearance or classification level attributes of the requesting user. It is a database security approach that deals with the problem of inference by hiding information using classification labels, and with aggregation by labeling different aggregations of data separately.
ii) Database encryption – data-at-rest encryption is a preventive control mechanism that can provide strong protection against disclosure and alteration of data. Along with database encryption, ensure that proper authentication and access control protection mechanisms exist to secure the key that is used for the encryption.
iii) Normalization – a formal technique used to organize data so that redundancy and inconsistency are eliminated.
iv) Triggers and views – triggers are fired to run implicitly by the database when a triggering event occurs, and are very useful for automating and improving security protection mechanisms. A view is the output of a query and is akin to a virtual table or stored query. Views are constructed dynamically, so the data presented can be custom-made for users based on their rights and privileges (a small sketch is given below).
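As an illustration of the last point, the following minimal sketch (using Python's built-in sqlite3 module; table, column and view names are purely illustrative) shows how a view can restrict what a lower-privilege role is able to see of a base table.

```python
import sqlite3

# Minimal sketch: a view exposes only the columns a support role may see.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT, card_number TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Ali', '4111111111111111')")

# The view omits the sensitive card_number column entirely.
conn.execute("CREATE VIEW customer_support_view AS SELECT id, name FROM customer")

# The support role queries the view, never the base table.
print(conn.execute("SELECT * FROM customer_support_view").fetchall())
# -> [(1, 'Ali')]
```

In a real deployment the database's own grants would also prevent the role from querying the base table directly; the view only controls what is presented.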


6.2.4 Interface design
To apply interface design considerations when building the software, such as the following:
i) User interface – to support the security model and act as the mediating program. User interface design should assure disclosure protection. Masking of sensitive information, such as a password or credit card number, by displaying asterisks on the screen is an example of a secure user interface that assures confidentiality (a small masking sketch is given at the end of this sub-section). A database view can also be said to be an example of a restricted user interface.
ii) Application Programming Interfaces (API) – to allow one software component to communicate with another, or the software to interact with the underlying operating system. The interfaces are usually made available as a library and may include specifications for routines, data structures, object classes and variables.
iii) Security Management Interfaces (SMI) – to configure and manage the security of the software itself. These are administrative interfaces with high levels of privilege. SMI are used for user-provisioning tasks such as adding users, deleting users, enabling or disabling user accounts, as well as granting rights and privileges to roles, changing security settings, configuring audit log settings and audit trails, exception logging, etc.
iv) Out-of-band interface – to allow an administrator to connect to a computer that is in an inactive or shutdown state.
v) Log interfaces – to turn logging on or off in different environments (e.g., development, test, production), which is a crucial component of auditing and of designing software for auditability.

6.2.5 Interconnectivity
To explicitly design the upstream and downstream compatibility of the software. This is important when it comes to delegation of trust, single sign-on (SSO), token-based authentication, and cryptographic key sharing between applications.
In most mobile applications, the protection of the data on the client is left up to the application itself, so applications must be designed to avoid storing any sensitive information in the application's sandboxed environment.
The publishers and third-party APIs that provide cryptographic services (encryption and decryption) may also have to be considered to assure confidentiality.
When data is stored in a location on the network, the network attached storage (NAS) device should be protected as well. Only authenticated and authorized connections to the NAS should be designed, and if access is architected to be restricted by an IP range, special consideration needs to be given to IP spoofing threats.
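Relating to the user interface consideration above, the following is a minimal Python sketch (function name and formatting are illustrative, not prescribed by this guideline) of masking a card number before it is displayed.

```python
def mask_card_number(card_number: str, visible: int = 4) -> str:
    """Display only the last few digits; the rest is replaced with
    asterisks before the value ever reaches the screen or a log."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - visible) + digits[-visible:]

print(mask_card_number("4111 1111 1111 1111"))  # -> ************1111
```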

6.3 Threat modeling

Objective
To identify, quantify, and address the security risks associated with the software.


Implementation guide

6.3.1 Step 1: Decompose the software
To obtain an understanding of the software and how it interacts with external entities. This involves creating use cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the software will grant to external entities.

Items to consider when decomposing the software include:
i) external dependencies
ii) entry points (or attack vectors)
iii) assets
iv) determining the attack surface by analysing the inputs (browser input, cookies, property files, external processes, data feeds, service responses, flat files, command line parameters, environment variables), data flows and transactions
v) trust levels
vi) data flow analysis
vii) transaction analysis (areas covered are data/input validation of data from all untrusted sources, authentication, session management, authorization, cryptography (data at rest and in transit), error handling/information leakage, logging/auditing)
viii) data flow diagrams

6.3.2 Step 2: Determine and rank threats
To identify threats using a threat categorization methodology. A threat categorization such as STRIDE4 can be used to define threat categories with regard to the attacker's goals, covering areas such as auditing and logging, authentication, authorization, configuration management, data protection in storage and transit, data validation and exception management.
Threats can be ranked using the DREAD5 threat-risk ranking model, in which the risk of a threat is calculated as the average of the numeric values assigned to the risk ranking categories (a small scoring sketch is given below).

6.3.3 Step 3: Determine countermeasures and mitigation
To mitigate vulnerabilities through the implementation of countermeasures that protect against a threat. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk and prioritize the mitigation effort, for example by responding to the highest-ranked threats with the identified countermeasures first.

An example of threat modeling can be referred to in Appendix E.
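The following is a minimal Python sketch of the DREAD averaging described above. The threat names, scores and 1-10 scale are illustrative only; an organization would use the scale and threat list from its own threat model.

```python
# Minimal DREAD ranking sketch: each category is scored (here 1-10) and
# the threat's risk rank is the average of the category scores.
DREAD_CATEGORIES = ("damage", "reproducibility", "exploitability",
                    "affected_users", "discoverability")

def dread_rank(scores: dict) -> float:
    return sum(scores[c] for c in DREAD_CATEGORIES) / len(DREAD_CATEGORIES)

threats = {
    "SQL injection on login form": {"damage": 9, "reproducibility": 8,
                                    "exploitability": 7, "affected_users": 9,
                                    "discoverability": 6},
    "Verbose error messages":      {"damage": 3, "reproducibility": 9,
                                    "exploitability": 4, "affected_users": 5,
                                    "discoverability": 8},
}

# Sort threats from highest to lowest risk to prioritise mitigation.
for name, scores in sorted(threats.items(),
                           key=lambda t: dread_rank(t[1]), reverse=True):
    print(f"{dread_rank(scores):.1f}  {name}")
```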

4 STRIDE means Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
5 DREAD means Damage, Reproducibility, Exploitability, Affected Users, and Discoverability

7. Phase 3: Security Development

Objective
1) To apply the technology and process aspects of writing secure code.
2) To ensure the security controls defined in the security requirements are put in place (in the code).
3) To ensure threats identified from threat modeling are avoided during coding.

There are three (3) types of security tasks to accomplish the security development phase: i) common software vulnerabilities and controls; ii) secure software processes; and iii) securing build environments. The details are provided in the following sub-sections:

7.1 Common software vulnerabilities and controls

Objective
1) To identify the most common vulnerabilities that result from insecure coding.
2) To apply security controls that must be put in place (in the code) to resist and frustrate the actions of threat agents.

Implementation guide

7.1.1 Vulnerability databases
To consult vulnerability databases or repositories of known vulnerabilities that have been found to result from deficiencies and defects in implemented software (e.g., flaws and bugs).
The most common software security vulnerabilities and risks have been listed in the SSDLC checklist to ease the developer's work during development. However, the organization can also refer to current lists of software vulnerabilities found throughout the software development industry, such as the OWASP Top 10 Project or the CWE/SANS Top 25 Common Weakness Enumeration (a project maintained by MITRE in partnership with the SANS Institute).

7.1.2 Defensive coding practices
To recognize, evaluate and reduce the attack surface of the software code. The attack surface potentially increases every time a single line of code is written. Some examples of attack surface reduction related to code are:
i) reducing the amount of code and services that are executed by default.
ii) reducing the volume of code that can be accessed by untrusted users.
iii) limiting the damage when the code is exploited.
The most common defensive coding practices and techniques have been listed in the SSDLC checklist to ease the developer's work during development (an input validation sketch is given below).
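As one illustration of a defensive coding practice from the checklist, the following minimal Python sketch shows allow-list input validation. The pattern and error handling are illustrative assumptions, not requirements of this guideline.

```python
import re

# Allow-list validation sketch: accept only input that matches a strict
# pattern, rather than trying to filter out known-bad characters.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")  # reject, do not try to "fix"
    return value

validate_username("dev_user01")        # accepted
# validate_username("admin'; --")      # would raise ValueError
```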


7.2 Secure software processes

Objective
To assure the security of software by conducting certain processes in addition to writing secure code.

Implementation guide

7.2.1 Source code versioning
To ensure that the development team is working with the correct version of code. Versioning gives the ability to roll back to a previous version should there be a need to, and additionally provides the ability to track ownership and changes of code and to manage updated versions.

7.2.2 Code analysis
To perform an automated process of inspecting the code for quality and for weaknesses that can be exploited. It is primarily accomplished by two means: static and dynamic.
Static code analysis involves inspection of the code without executing the code (or software program). This analysis is normally done by a third party.
Dynamic code analysis is the inspection of the code while it is being executed (run as a program). This analysis is normally done by the developer.

7.2.3 Code review
To perform a manual, systematic evaluation of the source code with the goal of finding syntax issues and weaknesses in the code that can impact the performance and security of the software. This is normally done among the developers. Semantic issues such as business logic and design flaws are usually not detected in a code review, but a code review can be used to validate the threat model generated in the design phase of the software development project.

7.2.4 Developer testing
To perform the developers' intentional and systematic employment of testing tools and techniques in order to create testable and maintainable software with as few defects as possible. Developer testing requires a structured approach and mastery of several core competencies, of which understanding testability drivers, fundamental testing techniques and unit testing are the most important.

The common types of tests that developers can write for applications are (a unit test sketch is given after this list):
i) Unit tests
   To execute a section of code or a small program in isolation from the more complete software. Tested in isolation means not calling the implementation of code not under test, e.g. database, web service calls or other code dependencies. This concept of isolation is why mocking frameworks are commonly used for unit tests. Developers would write these during the development of a feature.

ii) Integration tests
   To execute sections of code in combination. In these types of tests, you would hit the database, make web service calls or call other code dependencies. Developers would write these during the development of a feature.
iii) Regression tests
   To repeat previously executed test cases for the purpose of finding defects in software that previously passed the same set of tests. Such tests would commonly be used before shipping code to a new environment or as part of a build process. It is common to see tools such as Selenium used to write these types of tests, where a web browser is launched and user input automated. Human testers can also perform regression tests by using an application directly.
iv) Software tests
   To execute the software in its final configuration, including integration with other software and systems. It tests for security, performance, resource loss, timing problems, and other issues that cannot be tested at lower levels of integration. As with regression testing, automated tools such as Selenium can be used for this process, as well as human testers.
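The following minimal sketch, using Python's standard unittest and unittest.mock modules, illustrates the point made under unit tests above: the dependency (here a hypothetical repository object) is mocked so the unit is exercised in isolation. The function and field names are illustrative.

```python
import unittest
from unittest.mock import Mock

def get_customer_status(customer_id, repository):
    """Code under test: depends on a repository (e.g. a database DAO)."""
    record = repository.find(customer_id)
    return "active" if record and record.get("active") else "inactive"

class GetCustomerStatusTest(unittest.TestCase):
    def test_active_customer(self):
        # The real database is replaced by a mock, so the unit is tested
        # in isolation from its dependencies.
        repo = Mock()
        repo.find.return_value = {"active": True}
        self.assertEqual(get_customer_status(42, repo), "active")

if __name__ == "__main__":
    unittest.main()
```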

7.3 Securing build environments

Objective
1) To protect the source code from being accessed by persons unrelated to the project during the development phase.
2) To ensure the integrity of the source code developed.

Implementation guide

7.3.1 Physically securing access to the systems that build code.
7.3.2 Using access control lists (ACLs) that prevent access by unauthorized users.
7.3.3 Using version control software to assure that the code built is of the right version.
7.3.4 Build automation is the process of scripting or automating the tasks that are involved in the build process. It takes the manual activities performed by the build team members daily and automates them. Some of these build activities include compiling source code into machine code, packaging dependencies, deployment and installation. When build scripts are used for the build automation process, it is important to make sure that security controls and checks are not circumvented when using these build scripts.
7.3.5 Code signing routines are used to ensure the integrity of the source code being developed (a small sketch is given below).
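As a simplified illustration of verifying build integrity, the sketch below uses an HMAC over a build artifact. This is an assumption-laden stand-in: a production code-signing routine would normally use asymmetric signatures and certificates rather than a shared key, and the key name below is purely illustrative.

```python
import hashlib
import hmac

# Minimal integrity-check sketch: the build step records an HMAC over the
# package; deployment recomputes and compares it before accepting the build.
BUILD_SIGNING_KEY = b"replace-with-key-from-a-protected-store"  # illustrative only

def sign_artifact(data: bytes) -> str:
    return hmac.new(BUILD_SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_artifact(data), signature)

artifact = b"...compiled package bytes..."
tag = sign_artifact(artifact)
assert verify_artifact(artifact, tag)             # untampered build
assert not verify_artifact(artifact + b"x", tag)  # modification detected
```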


8. Phase 4: Security Testing

Objective
1) To validate and verify the functionality and security of software using quality assurance testing.
2) To ensure the code developed runs as intended.

There are two (2) types of security tasks to accomplish the security testing phase: i) attack surface validation; and ii) test data management. The details are provided in the following sub-sections:

8.1 Attack surface validation

Objective
To verify the presence and effectiveness of the security controls that are designed and implemented in the software.

Implementation guide

8.1.1 Post-development testing
To ensure the correctness of the developed software. This step executes code analysis, specifically dynamic code analysis, which is the inspection of the code while it is being executed (run as a program).

8.1.2 Perform security testing using security testing methods
The methods or approaches used to accomplish security testing are as below:
i) White box testing – to test the structure of the software, performed based on knowledge of how the software is designed and implemented. It is broadly known as full knowledge assessment, because the tester has complete knowledge of the software. It can be used to test both the use case (intended behavior) and the misuse case (unintended behavior) of the software and can be conducted at any time after development of the code, although it is best advised to do so while conducting unit tests. Data/information flow, control flow, interfaces, trust boundaries (entry and exit points), configuration, error handling, etc. are methodically and structurally analyzed for security.

ii) Black box testing – to test the behavior of the software; also known as zero knowledge assessment, because the tester has very limited to no knowledge of the internal workings of the software being tested. Architectural or design documents, configuration information or files, use and misuse cases, and the source code of the software are not available to or known by the testing team.


Black box testing is performed using different tools. The common methodologies by which black box testing is accomplished with tools are fuzzing, scanning, and penetration testing.

iii) Cryptographic validation testing – to ensure that applications employing cryptographic mechanisms use sound cryptographic algorithms and best practices for key management. Some of these requirements are available from Federal Information Processing Standards (FIPS) 140-2 and related Special Publications, and from the MySEAL (National Trusted Cryptographic Algorithm List) website, for standard algorithms such as AES, RSA, DSA, etc., whenever applicable (a small configuration-check sketch is given after the list below).

8.1.3 Perform software security testing for quality assurance
To apply different types of tests, and the ways they can be performed, to attest the security of code developed in the development phase of the SSDLC.
For software revisions, regression testing must be conducted, and for all versions, new or revised, the following security tests must be performed, if applicable, to validate the strength of the security controls. Using a categorized list of threats as a template for security testing is effective in ensuring comprehensive coverage of the varied threats to software. Ideally, the same threat list that was used when threat modeling the software will be the threat list used for conducting security tests as well. This way security testing can be used to validate the threat model.

The security tests are as follows:
i) Testing for input validation
ii) Testing for injection flaws controls
iii) Testing for scripting attacks controls
iv) Testing for non-repudiation controls
v) Testing for spoofing controls
vi) Testing for error and exception handling controls (failure testing)
vii) Testing for privilege escalation controls
viii) Anti-reversing protection testing
ix) Stress testing
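Relating to cryptographic validation testing, the following minimal Python sketch checks an application's configured algorithms against an approved list. The list below is illustrative only and is not taken from FIPS 140-2 or MySEAL; the authoritative algorithm and key-length requirements must come from those sources.

```python
# Minimal sketch of a cryptographic validation check: assert that the
# configuration only uses algorithms and key sizes from an approved list.
APPROVED = {"AES": 128, "RSA": 2048, "ECDSA": 256}  # algorithm -> minimum key bits (illustrative)

def check_crypto_config(algorithm: str, key_bits: int) -> bool:
    return algorithm in APPROVED and key_bits >= APPROVED[algorithm]

assert check_crypto_config("AES", 256)       # passes
assert not check_crypto_config("DES", 56)    # legacy algorithm rejected
assert not check_crypto_config("RSA", 1024)  # key too short rejected
```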

8.2 Test data management

Objective
To identify input test data and the data that is expected to be output after normal operations of the software.

Implementation guide

8.2.1 Identify output test data to confirm software requirements


Identifying expected output test data helps to confirm whether the software is meeting the requirements. The quality of the test data is directly related to the quality of the test itself, so test data needs to be managed. Production data must never be imported into and processed in test environments. It is advisable to use dummy data created from scratch in the test or simulated environment (a small sketch is given at the end of this sub-section).

8.2.2 Apply testing with synthetic transactions
To perform transactions that serve no business value and relate only to the dummy data. Synthetic transactions can be passive or active. Passive synthetic transactions are not stored (or maintained) and do not have any residual impact on the software itself. However, if the query for finding orders of a 'dummy' customer is processed and stored within the software application, it would constitute an active synthetic transaction. The usage of active synthetic transactions requires attention to setting up the data and environment in such a manner that they do not impact the production environment.

8.2.3 Test data management solutions can aid in the creation of referentially whole data subsets of production data.
To reduce some of the concerns that come with the creation of quality test data and its management in test environments, these solutions automatically discover data relationships by analyzing and capturing table attributes. Ensure that the extraction rules consider the storage space available in the test environment, so that the extraction process does not end up extracting a subset of data too large to be imported into the test environment due to size limitations.

8.2.4 Defect reporting and tracking
To report defects/flaws and then track the coding bugs, design flaws, behavioral anomalies (logic flaws), errors, faults and vulnerabilities of the software. Defect reports must be comprehensive and detailed enough to provide the software development teams with the information necessary to determine the root cause of the issue, so that they can address it.
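The sketch below illustrates creating dummy test data from scratch, as suggested in 8.2.1, so that no production records ever enter the test environment. The field names, value formats and well-known test card number are illustrative assumptions.

```python
import random
import string
import uuid

# Sketch: generate dummy customer records for the test environment instead
# of importing production data.
def dummy_customer() -> dict:
    return {
        "id": str(uuid.uuid4()),
        "name": "TEST_" + "".join(random.choices(string.ascii_uppercase, k=8)),
        "email": f"test.{random.randint(1000, 9999)}@example.test",
        "card_number": "4111111111111111",  # publicly known test card number
    }

test_customers = [dummy_customer() for _ in range(10)]
print(test_customers[0])
```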

9. Phase 5: Security Deployment

Objective
1) To ensure completion of the Operation Plan and Application Documentation.
2) To ensure management approval and risk acceptance for deployment.
3) To ensure the application meets its functionality and is secured.
4) To ensure a secured environment and configuration for deployment.

There are four (4) types of security tasks to accomplish the security deployment phase: i) software acceptance considerations; ii) verification and validation (V&V); iii) certification and accreditation (C&A); and iv) installation. The details are provided in the following sub-sections:


9.1 Software acceptance considerations

Objective
1) To ensure the application documentation and operation plan are ready.
2) To obtain approval and risk acceptance.

Implementation guide

9.1.1 Completion criteria
To validate and verify that all functional and security requirements have been completed as expected. Completion criteria for functionality and software security, with explicit milestones, must be defined well in advance. Some examples of security related milestones include, but are not limited to, the following:
i) generation of the security requirements besides functional requirements in the requirements phase;
ii) completion of the threat model during the design phase;
iii) review and sign-off on the security architecture at the end of the design phase;
iv) review of code for security vulnerabilities after the development phases;
v) completion of security testing at the end of the application testing phase; and
vi) completion of documentation before the deployment phase commences.

9.1.2 Change management
To ensure a process is in place to handle change requests.
Change management is a subset of configuration management. Changes to the computing environment and redesign of the security architecture can potentially introduce new security vulnerabilities, thereby increasing risk. The necessary support queues and processes for the software that is to be deployed/released should be established.

9.1.3 Approval to deploy or release
To ensure all the required authorities have signed off.
Without approvals, no change should be allowed to the production computing environment. Before any new installation of software, a risk analysis needs to be conducted and the residual risk determined. The results of the risk analysis, along with the steps taken to address it (mitigate or accept), must be communicated to the business owner. The authorizing official must be informed of the residual risk. The approval or rejection to deploy/release must include recommendations and support from the security team. Ultimately it is the authorizing official (AO) who is responsible for change approvals.

9.1.4 Risk acceptance and exception policy
To ensure the residual risk is acceptable and/or tracked as an exception if it is not within the threshold.
The risk that remains after the implementation of security controls (residual risk) needs to be determined first.

The best option to address total risk is to mitigate it so that the residual risk falls below the business-defined threshold, in which case the residual risk can be accepted. Risk must be accepted by the business owner and not by officials in the IT department.

9.1.5 Documentation of software
To ensure all necessary documentation is in place.
Documenting what the software is supposed to do, how it is architected, how it is to be installed, what configuration settings need to be preset, and how to use and administer it is extremely important for effective, secure and continued use of the software. Some of the primary objectives of documentation are to make the software deployment process easy and repeatable, and to ensure that operations are not disrupted and that the impact of changes to the software is understood.

9.2 Verification and validation (V&V)

Objective
To ensure the application is functional and secured for the production environment.

Implementation guide

9.2.1 Reviews
To conduct a review at the end of each phase to ensure that the software performs as expected and meets business specifications. This can be done informally or formally.

9.2.2 Testing
To demonstrate that the software truly meets the requirements and to determine any variances or deviations from what is expected, using the actual results from the tests. The different kinds of tests conducted as part of V&V are:
i) Error detection tests – unit and component level testing. Errors may be flaws (design issues) or bugs (code issues).
ii) Acceptance tests – used to demonstrate whether the software is ready for its intended use or not. Software that is deemed ready should not only be validated for all functional requirements but also be validated to ensure that it meets assurance (security) requirements.
iii) Independent (third party) tests – testing of software functionality and assurance in which the software is reviewed, verified and validated by someone other than the developer of the software. This is also commonly referred to as Independent Verification & Validation (IV&V).


9.3 Certification and accreditation (C&A)

Objective
1) To obtain technical verification of the application's functionality.
2) To obtain overall acceptance for deployment into the production environment.

Implementation guide

9.3.1 Obtain certification
To achieve certification as the technical verification of the software's functional and assurance levels. Security certification considers the software in its operational environment. At a minimum, it will include assurance evaluation of the following:
i) User rights, privileges and profile management
ii) Sensitivity of data and the application, and appropriate controls
iii) Configuration of the software, facility and locations
iv) Interconnectivity and dependencies, and
v) Operational security mode
Organizations can obtain Common Criteria certification for ICT products through the Malaysian Common Criteria Evaluation and Certification (MyCC) Scheme, and also Malaysia Trustmark certification for e-business in the private sector, both of which are provided by CyberSecurity Malaysia (CSM).

9.3.2 Obtain accreditation
To achieve accreditation, which is management's formal acceptance of the software after an understanding of the risks of operating that software in the computing environment; verification from the participant/target users regarding the software is needed.
Accreditation is management's official decision to operate the software in the operational security mode for a stated period, and is the formal acceptance of the identified risk associated with operating the software.

9.4 Installation

Objective
To secure the application's production environment and configuration.

Implementation guide

9.4.1 Hardening
To lock down the software to the most restrictive level so that it is secure. These minimum (or most restrictive) security levels are usually published as a baseline that all software in the computing environment must comply with. This baseline is commonly referred to as a minimum security baseline and is created based on the operating system in use.
It is important to harden the host operating system using the baseline, updates and patches.

It is also critically important to harden the applications and software that run on top of these operating systems. Hardening of software involves setting the necessary and correct configuration settings and architecting the software to be secure.

9.4.2 Environment configuration
To ensure that the parameters required for the software to run are appropriately configured, using pre-installation checklists (a small baseline-check sketch is given at the end of this sub-section). However, since it is not always possible to statically identify dynamic issues, checklists provide no guarantee that the software will function without violating the security principles with which it was designed and built. Therefore, it is crucial to ensure that the development and test environments match the configuration makeup of the production environment, and that simulation testing identically emulates the settings (including the restrictive settings) of the environment in which the software will be deployed post-acceptance.

9.4.3 Release management
To ensure the proper release of software into the operating computing environment once hardware and software resources are hardened and the environment is configured for secure operations.
Release management is the process of ensuring that all changes made to the computing environment are planned, documented, thoroughly tested and deployed with least privilege, without negatively impacting any existing business operations, customers, end-users or user support teams.

9.4.4 Bootstrapping and secure startup
To determine that the software startup processes do not in any way adversely impact the confidentiality, integrity or availability of the software upon installation. The power-on self-test (POST) is the first step in an initial program load (IPL) and is an event that needs to be protected from tampering so that the Trusted Computing Base (TCB) is maintained.
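As a simplified illustration of a pre-installation configuration check, the sketch below compares an environment's settings against a minimum baseline. The baseline keys and values are illustrative assumptions; an organization's actual minimum security baseline would define the real settings.

```python
# Sketch of a pre-installation checklist: compare the target environment's
# settings against a minimum security baseline (keys/values illustrative).
MINIMUM_BASELINE = {
    "debug_mode": False,             # no verbose errors in production
    "tls_required": True,            # transport encryption enforced
    "default_accounts_removed": True,
}

def check_baseline(environment: dict) -> list:
    """Return the list of settings that deviate from the baseline."""
    return [key for key, required in MINIMUM_BASELINE.items()
            if environment.get(key) != required]

production = {"debug_mode": True, "tls_required": True,
              "default_accounts_removed": True}
deviations = check_baseline(production)
if deviations:
    print("Deployment blocked, non-compliant settings:", deviations)
```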

10. Phase 6: Security Maintenance

Objective
1) To monitor and guarantee that the software will continue to function in a reliable, resilient and recoverable manner.
2) To identify the software and the conditions under which the software needs to be disposed of or replaced.

There are five (5) types of security tasks to accomplish the security maintenance phase: i) operations, monitor and maintenance; ii) incident management; iii) problem management; iv) change management; and v) disposal. The details are provided in the following sub-sections:


10.1 Operations, monitor and maintenance

Objective
1) To provide services to the business or end users for the needs of software operations and maintenance.
2) To ensure the assurance aspects of reliable, resilient and recoverable processing of the software.

Implementation guide

10.1.1 Carry out the operations security
Operations security is about staying secure, or keeping the resiliency levels of the software above the acceptable risk levels. It is the assurance that the software will continue to function as expected, in a reliable fashion for the business, without compromising its state of security, by monitoring, managing and applying the needed controls to protect resources (assets).
The different types of operations security controls are as follows:
i) Detective controls are those that can be used to build historical evidence of user and software/process actions.
ii) Preventive controls are those which make the attacker's success difficult, as their goal is to prevent the attack actively or proactively.
iii) Deterrent controls are those which don't necessarily prevent an attack, nor are they merely passive in nature.
iv) Corrective controls are those which aim to provide recoverability of software assurance.
v) Compensating controls are those controls that must be implemented when the prescribed software controls, as mandated by a security policy or requirement, cannot be met due to legitimate technical or documented business constraints.

10.1.2 Continuous monitoring
To perform monitoring of any system, software or their processes. It is important to first determine the monitoring requirements before implementing a monitoring solution. Monitoring requirements need to be solicited from the business early in the software development life cycle.
Along with the requirements, associated metrics that measure actual performance and operations should be identified and documented.
Continuous security tests should be conducted at planned intervals, or when changes occur, based on needs or requirements.

10.1.3 Audit for monitoring
To provide an independent review and examination of software records and activities. An audit is conducted by an auditor whose responsibilities include the selection of events to be audited on the software, setting up the audit flags which enable the recording of those events, and analysing the trail of audit events. Audits must be conducted periodically and can give insight into the presence and effectiveness of security and privacy controls.


10.2 Incident Management

Objective
To detect and monitor incidents of security breach.

Implementation guide

10.2.1 Determine events, alerts, and incidents
To determine whether a security incident has truly occurred or not, by first defining what constitutes an incident. Failure to do so can lead to potential misclassification of events and alerts as incidents, and this could be costly.
i) An event is any action directed at an object which attempts to change the state of the object.
ii) Alerts are flagged events that need to be scrutinized further to determine if the event occurrence is an incident.
iii) Alerts can be categorized into incidents, and adverse events can be categorized into security incidents, if they violate or threaten to violate the security policy of the network, system or software applications.

10.2.2 Identify types of incidents
To identify the several types of incidents; the main security incidents include the following:
i) Denial of Service (DoS) – an attack that prevents or impairs an authorized user from using the network, systems or software applications by exhausting resources.
ii) Malicious Code – code-based malicious entities such as viruses, worms and Trojan horses that can successfully infect a host.
iii) Unauthorized Access – access control related incidents refer to those wherein a person gains logical or physical access to the network, system or software applications, data or any other IT resource, without being granted the explicit rights to do so.
iv) Inappropriate Usage – comprises incidents in which a person violates the acceptable use of software resources or company policies.
v) Multiple Component – incidents which encompass two or more incidents.

10.2.3 Incident response process
To enable and assure the operations security of the organization so that it remains in business. The major phases of the incident response process are preparation, detection and analysis, containment, eradication and recovery, and post-incident analysis. Brief explanations are as follows:
i) Preparation – the organization aims to limit the number of incidents by implementing controls that were deemed necessary from the initial risk assessments.
ii) Detection and analysis – an organization able to detect security breaches will be aware of incidents before or when they occur, and if the incident is disruptive and unknown, the appropriate actions must be taken.


iii) Containment, eradication and recovery – upon the detection and validation of a security incident, the first course of action is to contain the incident in order to limit any further damage or additional risk. After containment, the steps necessary to remove and eliminate components of the incident (eradication) must be undertaken. Eradication steps may be performed during recovery, and any fixes or steps to eradicate the incident should be carried out only after appropriate authorization is granted. Recovery mechanisms aim to restore the resource (network, system or software application) back to its normal working state.
iv) Post-incident analysis – lessons learned activities produce a set of objective and subjective data regarding each incident.

10.3 Problem management

Objective
To determine and eliminate the root cause of the problem (an unknown incident) and to improve the service that the software provides to the business, so that the problem is not repeated.

Implementation guide

10.3.1 Incident notification
To identify and notify the unknown incident or problem that has happened to the software.

10.3.2 Root cause analysis
To determine the reason for the problem by implementing the root cause analysis (RCA) steps. RCA is performed to determine 'why' the problem occurred, repeatedly and systematically, until there are no more reasons (or causes) left to be answered.

10.3.3 Solution determination
To determine the solution, which may be a temporary workaround or a permanent fix to be implemented.

10.3.4 Request for change
To include workarounds (to support existing users), known errors, updated problem information, management information and requests for changes.

10.3.5 Implement solution
To implement the identified solution after initiating a request for change.

10.3.6 Monitor and report
To monitor the solution to the problem and prepare the report.


10.4 Change management

Objective
1) To determine recovery and resolution of the problem after the root cause is identified.
2) To track the vulnerability and monitor the problem resolution to ensure that it was effective and that the problem does not happen again.

Implementation guide

10.4.1 Patch and vulnerability management
To update or fix existing software with patches. A patch is a piece of code used so that the software is no longer susceptible to a given bug; patching is the process of applying these updates or fixes. Patches can be used to address security problems in software or simply to provide additional functionality. Patching is a subset of hardening.
Some of the necessary steps that need to be taken as part of the patching process include:
i) Notifying the users of the software or systems about the patch.
ii) Testing the patch in a simulated environment so that there are no backward compatibility or dependency (upstream or downstream) issues.
iii) Documenting the change along with the rollback plan. The estimated time to complete the installation of the patch, the criteria to determine the success of the patch and the rollback plan must be included as part of the documentation.
iv) Identifying maintenance windows, i.e. the time when the patch is to be installed. The best time to install the patch is when there is minimal disruption to the normal operations of the business, but with most software operating in a global economy setting, identifying the best time for patch application is a challenge today.
v) Installing the patch.
vi) Testing the patch post-installation in the production environment. Sometimes a reboot or restart of the software where the patch was installed is necessary for newer configuration settings and fixes to be read or loaded. Validation of backward compatibility and dependencies also needs to be conducted.
vii) Validating that the patch did not regress the state of security and that it leaves the systems and software in compliance with the minimum security baseline.
viii) Monitoring the patched systems so that there are no unexpected side effects after the installation of the patch.
ix) Conducting post-mortem analysis in case the patch had to be rolled back, and using the lessons learned to prevent future issues. If the patch was successful, the minimum security baseline needs to be updated accordingly.
x) Conducting virtual patching to cover legacy systems. Virtual patching refers to establishing an immediate security policy enforcement layer that prevents the exploitation of a known vulnerability without modifying the application's source code, making binary changes, or restarting the application. Virtual patching can be used to add temporary protection, giving the development teams time to deploy physical patches according to their own update schedules. Virtual patches can also be used permanently for legacy systems that may not be patchable.


10.4.2 Backups, recovery and archiving
To assure uninterrupted business operations and continuity. The continuity of the business without disruption is an important factor of secure software operations. Not only must the data be available but also the system itself.

In addition to regularly scheduled backups, when patches and software updates are made, it is advisable to perform a full backup of the software that is being changed. It is also crucial that the integrity and restorability of the backup (especially if it is a data backup) is verified.

Additionally, when software has been infected by malware such as Trojan horses and rootkits, the only option left for assuring continued integrity may be to completely format and reinstall the software, accompanied by restoring the data from a secure, trusted and verified backup.
Archives can come in handy in user support, especially for past customers. Integrity of archives can be accomplished using hashing, and proper key management needs to be in place to make cryptographically protected data in archives usable upon recovery.
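The following is a minimal Python sketch of the hashing idea mentioned above: a digest is recorded when the backup or archive is created and verified again before a restore is trusted. The file name is illustrative, and this addresses only integrity, not the key management or encryption concerns noted in the text.

```python
import hashlib

# Sketch: record a SHA-256 digest when a backup/archive is created, and
# verify it again before the archive is trusted for a restore.
def file_digest(path: str) -> str:
    sha = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

# At backup time:    stored_digest = file_digest("backup-2019-11.tar.gz")
# Before restoring:  assert file_digest("backup-2019-11.tar.gz") == stored_digest
```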

10.5 Disposal

Objective
To identify software that is no longer operational and is to be disposed of, in order to reduce the residual risk.

Implementation guide

10.5.1 End-of-Life policies
To perform risk management activities for system components that will be disposed of or replaced, to ensure that the hardware and software are properly disposed of.
In order to manage risk during the disposal phase, it is essential that an End-of-Life (EOL) policy be developed and followed. The EOL policy is the first requirement in the secure disposal of software and its related data and documents.

10.5.2 Sun-setting criteria
To dispose of or replace a product, such as software or the hardware on which the software runs, by following the sun-setting criteria as guidance.

10.5.3 Sun-setting processes
To dispose of software and related technologies that are deemed insecure. Those for which there is no means to mitigate the risk to the acceptable levels of the organization must be sun-set as soon as possible. In compliance with the organization's EOL policy, appropriate EOL processes must be established. EOL processes are the series of technical and business milestones and activities which, when complete, make the hardware or software obsolete and no longer produced, sold, improved, repaired, maintained or supported.


10.5.4 Information disposal and media sanitization
To ensure that software assurance is maintained by following the software disposal steps. An important part of that process is also to ensure that the media that stored the information is sanitized or destroyed appropriately. Sanitization is the process of removing information from media such that data recovery and disclosure are not possible. It also includes the removal of classified labels, markings and activity logs related to the information (an illustrative overwrite sketch is given below).
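As a simplified illustration only, the sketch below overwrites a file's contents before deleting it. This is an assumption-laden example, not a sanitization procedure endorsed by this guideline: on SSDs, journaling file systems and cloud storage, overwriting does not guarantee the data is unrecoverable, so the disposal decision should follow a recognised media sanitization standard appropriate to the media and classification.

```python
import os

# Illustrative sketch: overwrite a file's contents before deletion.
def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(os.urandom(size))  # replace contents with random bytes
            fh.flush()
            os.fsync(fh.fileno())       # force the write to the device
    os.remove(path)
```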


11. SSDLC Checklists

This guideline for the Secure Software Development Life Cycle (SSDLC) has identified several security tasks involved in each phase. Accordingly, checklists are provided as a quick guide for the target audience. These checklists assist the audience in tracking and monitoring the security tasks that apply to the development of secure software following the SSDLC phases. The checklists in the following sub-sections are divided according to the phases of the SSDLC.

11.1 Phase 1: Security Requirements (5)

Action No. | Security Tasks | (Done ☑ / Not ☒)

5.1 Sources for security requirement
  5.1.1 Identify core security requirements
        Confidentiality requirements
        Integrity requirements
        Availability requirements
        Authentication requirements
        Authorization requirements
        Accountability requirements
  5.1.2 Identify general requirements
        Session Management requirements
        Errors & Exceptions Management requirements
        Configuration Parameters Management requirements
  5.1.3 Identify operational requirements
        Deployment environment requirements
        Archiving requirements
        Anti-piracy requirements
  5.1.4 Identify other requirements
        Sequencing and timing requirements
        International requirements
        Procurement requirements
5.2 Data classification
  5.2.1 Types of data
  5.2.2 Labeling the data
  5.2.3 Data ownership and roles
  5.2.4 Data lifecycle management (DLM)
  5.2.5 Privacy requirements
        Data anonymization
        Disposition
        Security models
        Pseudonymization

5.3 Use case and misuse case modeling
  5.3.1 Analyze the use case scenarios
  5.3.2 Analyze the misuse case scenarios
  5.3.3 Creating attack model
  5.3.4 Select mitigation control
5.4 Risk management
  5.4.1 Risk assessment
  5.4.2 Risk mitigation
        Risk mitigation options
        Risk mitigation strategy
        Approach for control implementation
        Control categories
        Cost-benefit analysis
        Residual risk
  5.4.3 Evaluation and assessment

11.2 Phase 2: Secure Design (6)

Action No. | Security Tasks | (Done ☑ / Not ☒)

6.1 Core Security Design Considerations
  6.1.1 Confidentiality design
  6.1.2 Integrity design
  6.1.3 Availability design
  6.1.4 Authentication design
  6.1.5 Authorization design
  6.1.6 Accountability design
6.2 Additional Design Considerations
  6.2.1 Programming languages
  6.2.2 Data type, format, range and length
  6.2.3 Database security
  6.2.4 Interface design
  6.2.5 Interconnectivity
6.3 Threat modeling
  6.3.1 Step 1: Decompose the software
  6.3.2 Step 2: Determine and rank threats
  6.3.3 Step 3: Determine countermeasures and mitigation


11.3 Phase 3: Security Development (7)

Action No. | Security Tasks | (Done ☑ / Not ☒)

7.1 Common software vulnerabilities and controls
  7.1.1 Vulnerability databases
        The most common software security vulnerabilities and risks are:
        buffer overflow
        stack overflow
        heap overflow
        injection flaws
        broken authentication and session management
        cross-site scripting (XSS)
        insecure direct object references
        security misconfiguration
        sensitive data exposure
        missing function level checks
        Cross-Site Request Forgery (CSRF)
        using known vulnerable components
        unvalidated redirects and forwards
        file attacks
        race condition
        side channel attacks
  7.1.2 Defensive coding practices
        The most common defensive coding practices and techniques are:
        input validation
        canonicalization
        sanitization
        error handling
        safe application programming interfaces (API)
        memory management
        exception management
        session management
        configuration parameters management
        secure startup
        cryptography
        concurrency
        tokenization
        sandboxing
        anti-tampering


7.2 Secure software processes
  7.2.1 Source code versioning
  7.2.2 Code analysis
  7.2.3 Code review
  7.2.4 Developer testing
        Unit tests
        Integration tests
        Regression tests
        Software tests
7.3 Securing build environments
  7.3.1 Physically securing access to the systems that build code
  7.3.2 Using access control lists (ACLs) that prevent access by unauthorized users
  7.3.3 Using version control software to assure that the code built is of the right version
  7.3.4 Build automation (scripting or automating the tasks involved in the build process)
  7.3.5 Code signing routine

11.4 Phase 4: Security Testing (8)

Action No. | Security Activities | (Done ☑ / Not ☒)

8.1 Attack surface validation
  8.1.1 Post-development testing
  8.1.2 Perform security testing using security testing methods
        White box testing
        Black box testing
        Cryptographic validation testing
  8.1.3 Perform software security testing for quality assurance
8.2 Test data management
  8.2.1 Identify output test data to confirm software requirements
  8.2.2 Apply testing with synthetic transactions
  8.2.3 Test data management solutions can aid in the creation of referentially whole data subsets of production data
  8.2.4 Defect reporting and tracking


11.5 Phase 5: Security Deployment (9)

Action No. | Security Tasks | (Done ☑ / Not ☒)

9.1 Software acceptance considerations
  9.1.1 Completion criteria
  9.1.2 Change management
  9.1.3 Approval to deploy or release
  9.1.4 Risk acceptance and exception policy
  9.1.5 Documentation of software
9.2 Verification and validation (V&V)
  9.2.1 Reviews
  9.2.2 Testing
9.3 Certification and accreditation (C&A)
  9.3.1 Obtain certification
  9.3.2 Obtain accreditation
9.4 Installation
  9.4.1 Hardening
  9.4.2 Environment configuration
  9.4.3 Release management
  9.4.4 Bootstrapping and secure startup

11.6 Phase 6: Security Maintenance (10)

Action No. | Security Tasks | (Done ☑ / Not ☒)

10.1 Operations, monitor and maintenance
  10.1.1 Carry out the operations security
  10.1.2 Continuous monitoring
  10.1.3 Audit for monitoring
10.2 Incident management
  10.2.1 Determine events, alerts, and incidents
  10.2.2 Identify types of incidents
  10.2.3 Incident response process
10.3 Problem management
  10.3.1 Incident notification
  10.3.2 Root cause analysis
  10.3.3 Solution determination
  10.3.4 Request for change
  10.3.5 Implement solution

  10.3.6 Monitor and report
10.4 Change management
  10.4.1 Patch and vulnerability management
  10.4.2 Backups, recovery and archiving
10.5 Disposal
  10.5.1 End-of-Life policies
  10.5.2 Sun-setting criteria
  10.5.3 Sun-setting processes
  10.5.4 Information disposal and media sanitization


12 References

[1] Chris, R., "How to put the S (for security) into your IoT development," 18 July 2017. [Online]. Available: https://techbeacon.com/security/how-put-s-security-your-iot-development. [Accessed 7 May 2019].
[2] Giannakidis, A., "Overview and Evolution of Virtual Patching", 25 September 2018. [Online]. Available: https://dzone.com/articles/introduction-to-virtual-patching. [Accessed 28 August 2019].
[3] ISO/IEC 27000:2014(E), Information technology – Security techniques – Information security management systems – Overview and vocabulary.
[4] ISO/IEC 27002:2013(E), Information technology – Security techniques – Code of practice for information security controls.
[5] Kanda Software, "Software Security Services". [Online]. Available: https://www.kandasoft.com/softwaredevelopment/software-security-services.html.
[6] Ernst & Young Global Limited and The Associated Chambers of Commerce and Industry of India (ASSOCHAM), "Cybersecurity for Industry 4.0: Cybersecurity implications for government, industry and homeland security," Ernst & Young LLP, Kolkata, India, 2018.
[7] Malaysia Common Criteria (MyCC). [Online]. Available: https://www.cybersecurity.my/mycc/about.html. [Accessed 28 August 2019].
[8] Malaysia Trustmark. [Online]. Available: https://mytrustmark.cybersecurity.my/. [Accessed 28 August 2019].
[9] Mano Paul, 2014, Official (ISC)2® Guide to the CSSLP® CBK®, Second Edition, CRC Press, Taylor & Francis Group.
[10] McGraw, G., 2005, "Software Security: Building Security In", Addison-Wesley Software Security Series.
[11] MySEAL (National Trusted Cryptographic Algorithm List). [Online]. Available: https://myseal.cybersecurity.my/en/about.html. [Accessed 28 August 2019].
[12] NIST Special Publication 800-115, Technical Guide to Information Security Testing and Assessment, 2008.
[13] NIST Special Publication 800-64 Revision 2, Security Considerations in the System Development Life Cycle, 2008.
[14] Null, C., "3 big IoT security fears, and how developers can tackle them," 31 March 2016. [Online]. Available: https://techbeacon.com/app-dev-testing/3-big-iot-security-fears-how-developers-can-tackle-them. [Accessed 7 May 2019].
[15] OWASP Code Review Guide, Version 2.0, 2017.
[16] OWASP Secure Coding Practices Quick Reference Guide, Version 2.0, 2010.
[17] Pfleeger, C.P., Pfleeger, S.L., & Margulies, J., 2015, Security in Computing, Fifth Edition, Pearson Education, Inc.: Upper Saddle River, NJ.
[18] Prasad, K.V.M. & Kishore, J.K., 2013, "Security for The Development of Secure GEOSCHEMACS", International Journal of Engineering Research & Technology (IJERT), 2(8): 308-321.
[19] Rangka Kerja Keselamatan Siber Sektor Awam (RAKKSSA), Version 1.0, April 2016.


[20] Rodriguez, J., 2006, "Common Security Requirement Language for Procurements & Maintenance Contracts", Homeland Security, 8 December 2006.
[21] Silva, M.A. & Danziger, M., 2015, "The importance of security requirements elicitation and how to do it". Paper presented at PMI® Global Congress 2015 - EMEA, London, England. Newtown Square, PA: Project Management Institute. [Online]. Available: https://www.pmi.org/learning/library/importance-of-security-requirements-elicitation-9634.
[22] Software Assurance Maturity Model, Version 1.0 (SAMM-1.0), A guide to building security into software development.
[23] Srivastava, S., "Cybersecurity and IoT Industry Facing Skill Shortage Leading to Development Issues," 25 April 2019. [Online]. Available: https://www.analyticsinsight.net/cyber-security-and-iot-industry-facing-skill-shortage-leading-to-development-issues/. [Accessed 7 May 2019].


Appendix A

Reference on Examples for Secure Software Architectures

1. Figure A-1 below shows an example of a secure software architecture.

Figure A-1: Example of Secure Software Architecture.

The example given in Figure A-1 is a suggested secure software architecture that can be applied to a secure software system. Organizations may, however, decide which controls to apply, and how many, according to the estimated cost of the development project.

Referring to Figure A-1, the security of the proposed software is assured by dividing the environment into three (3) main segments: public, demilitarized zone (DMZ), and secure local area network (LAN). Traffic entering the DMZ segment is controlled, filtered and monitored by the firewall, intrusion detection system (IDS) or intrusion prevention system (IPS). The management LAN is intended for staff and developers who are authorized to access the devices or applications. Similarly, access from the secure LAN segment to the databases is controlled, filtered and monitored by the firewall, IDS or IPS. (A minimal sketch of how such a segmentation policy might be expressed in code is given after this paragraph.)
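As an illustration only, the following is a minimal, hypothetical sketch of how the three segments in Figure A-1, and the flows permitted between them, might be captured as a simple policy table (for example, when reviewing firewall rule sets). The segment names and allowed flows are assumptions for illustration; they are not values prescribed by this guideline.

# Hypothetical sketch: representing the segmentation in Figure A-1 as a policy table.
# Segment names and allowed flows are illustrative assumptions, not prescribed values.

ALLOWED_FLOWS = {
    # (source segment, destination segment): controls expected on the path
    ("public", "dmz"): ["firewall", "ids/ips"],
    ("dmz", "secure_lan"): ["firewall", "ids/ips"],
    ("management_lan", "dmz"): ["firewall", "authentication"],
    ("management_lan", "secure_lan"): ["firewall", "authentication"],
}

def is_flow_allowed(src: str, dst: str) -> bool:
    """Return True if traffic from src to dst is permitted by the policy table."""
    return (src, dst) in ALLOWED_FLOWS

if __name__ == "__main__":
    # Direct access from the public segment to the secure LAN should be rejected.
    print(is_flow_allowed("public", "secure_lan"))   # False
    print(is_flow_allowed("public", "dmz"))          # True

A table of this kind makes it straightforward to check a proposed network change against the intended segmentation before firewall rules are modified.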


2. Figure A-2 below shows an example of a secure software architecture with protection controls.

Figure A-2: Example of Secure Software Architecture with Protection Controls

Referring to Figure A-2, the protection controls start at the end-user side, where authentication is required before the secure software can be accessed. An authorized end-user communicates over a secure channel and, once authorized, can pass through the firewall.

Further protection controls can be provided at each tier. At the web server, possible controls include input and data validation, user authorization, secure exception management and secure configuration. At the web application server, possible controls include authentication and authorization of identities, and secure auditing and logging. At the database, secure database configuration is an example of a protection control. A minimal code sketch of some of these web-tier controls is given at the end of this appendix.

The example given in Figure A-2 is a suggested sample; organizations can decide which of the possible controls to apply based on the estimated cost of the development project.
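As an illustration only, the following is a minimal sketch of how the web-tier controls named above (input validation, user authorization, secure exception management, and logging) might look in application code. The function, role and field names are hypothetical assumptions, not part of the reference architecture.

# Hypothetical sketch of web-tier protection controls from Figure A-2:
# input validation, user authorization, secure exception management, and logging.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("secure_app")

ITEM_ID_PATTERN = re.compile(r"^[A-Za-z0-9-]{1,20}$")  # whitelist validation

class AppError(Exception):
    """Generic error returned to the client; internal details stay in the log."""

def handle_view_item(user_roles: set, item_id: str) -> str:
    # 1. Input and data validation: reject anything outside the expected format.
    if not ITEM_ID_PATTERN.match(item_id):
        log.warning("Rejected malformed item_id: %r", item_id)
        raise AppError("Invalid request")

    # 2. User authorization: only permitted roles may view items.
    if "viewer" not in user_roles and "admin" not in user_roles:
        log.warning("Authorization failure for roles %s", user_roles)
        raise AppError("Access denied")

    # 3. Secure exception management: never leak stack traces or internals to users.
    try:
        return f"item {item_id} details"   # placeholder for the real lookup
    except Exception:
        log.exception("Unexpected failure while fetching item %s", item_id)
        raise AppError("Internal error") from None

if __name__ == "__main__":
    print(handle_view_item({"viewer"}, "BK-1021"))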


Appendix B

Reference on Example of Security Requirements

The Requirements Analyst can describe the application using different tools, such as informal drawings, pictures, sketches, etc. The Requirements Analyst can also use high-level risk tables to support the definition of security requirements. Table B-1 and Table B-2 give an example of core security requirements (security goals), the security measures that can be used to achieve the protection or mitigation, and the mechanisms that may be used to help. The content should be reviewed during the initial analysis and whenever the software is modified. Table B-3 presents an example of a security requirement table with a suggested priority level.

Table B-1 Core Security Requirements (Security Goals) and Associated Security Measures

Confidentiality: Access control; Physical protection; Security policy
Integrity: Access control; Non-repudiation; Physical protection; Attack detection
Availability: System recovery; Physical protection; Attack detection
Accountability: Non-repudiation; Attack detection

Table B-2 Security Measures and Associated Security Mechanisms

Access control: Biometrics; Certificates; Multilevel security; Passwords and keys; Reference monitor; Registration; Time limits; User permissions; VPN
Security policy: Administrative privileges; Malware detection; Multilevel security; Reference monitor; Secure channels; Security session; Single access point; Time limits; User permissions; VPN
Non-repudiation: Administrative privileges; Logging and auditing; Reference monitor
Physical protection: Access cards; Alarms; Equipment tagging; Locks; Off-site storage; Secured rooms; Security personnel
System recovery: Backup and restoration; Configuration management; Connection service agreement; Disaster recovery; Off-site storage; Redundancy
Attack detection: Administrative privileges; Alarms; Incident response; Intrusion detection systems; Logging and auditing; Malware detection; Reference monitor
Boundary protection: DMZ (Demilitarized Zone); Firewalls; Proxies; Single access point; VPN
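As an illustration of how the mappings in Tables B-1 and B-2 might be applied during requirements analysis, the following is a minimal, hypothetical sketch that derives candidate security mechanisms for a chosen security goal. It encodes an abbreviated subset of the two tables as dictionaries; it is not a tool prescribed by this guideline.

# Hypothetical sketch: deriving candidate mechanisms for a security goal
# by chaining Table B-1 (goal -> measures) and Table B-2 (measure -> mechanisms).
# Only a subset of each table is encoded here for brevity.

GOAL_TO_MEASURES = {
    "Confidentiality": ["Access control", "Physical protection", "Security policy"],
    "Integrity": ["Access control", "Non-repudiation", "Physical protection", "Attack detection"],
    "Availability": ["System recovery", "Physical protection", "Attack detection"],
    "Accountability": ["Non-repudiation", "Attack detection"],
}

MEASURE_TO_MECHANISMS = {
    "Access control": ["Biometrics", "Certificates", "Passwords and keys", "User permissions", "VPN"],
    "Security policy": ["Administrative privileges", "Malware detection", "Secure channels"],
    "Non-repudiation": ["Administrative privileges", "Logging and auditing", "Reference monitor"],
    "Physical protection": ["Access cards", "Alarms", "Locks", "Secured rooms"],
    "System recovery": ["Backup and restoration", "Disaster recovery", "Redundancy"],
    "Attack detection": ["Alarms", "Incident response", "Intrusion detection systems", "Logging and auditing"],
}

def candidate_mechanisms(goal: str) -> dict:
    """Return, for each measure supporting the goal, the mechanisms that realise it."""
    return {m: MEASURE_TO_MECHANISMS.get(m, []) for m in GOAL_TO_MEASURES.get(goal, [])}

if __name__ == "__main__":
    for measure, mechanisms in candidate_mechanisms("Integrity").items():
        print(measure, "->", ", ".join(mechanisms))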


Table B-3 Security Requirements with Priority Level
(Columns: Security Requirement Type; Description; Comments; Priority)

Authentication
• The system shall have authentication measures at all entry points, front panel or inbound network connection. Comment: to avoid unauthorized access. Priority: 1.
• The system shall support Windows domain authentication; when authenticating to AD (Active Directory), it shall support NTLMv2 as well as the Kerberos protocol, and shall avoid transmitting the username/password on the wire when authenticating to the AD. Comment: currently many systems use Single Sign-On (SSO) using Kerberos and a Windows domain. Priority: 2.
• The system shall support Smartcard and USB token (two-factor) authentication. Comment: improves security using smartcard technologies and facilitates physical access when SSO is used. Priority: 1.
• The system shall support proximity card and magnetic card authentication. Comment: to accommodate technologies such as NFC (Near Field Communication). Priority: 3.
• The system shall support authentication based on the device's local authority. Comment: in case of physical access. Priority: 1.
• The system shall support authentication to an external third-party authentication server and protect the user credential while authenticating to the external server. Comment: network security aspects must receive special attention. Priority: 1.
• The system shall support multiple authentication approaches at the same time. Comment: strong monitoring and auditing is necessary. Priority: 1.
• The system shall support multiple concurrent login sessions, and sessions shall be protected in relation to each other. Comment: to avoid session hijacking. Priority: 1.
• The system shall support outbound authentication as a Single Sign-On (SSO) scheme. Comment: network security aspects must receive attention. Priority: 1.
• The system shall support multi-level system access. Comment: security issues such as permission bypass are to be mitigated. Priority: 1.
• The user accounts shall possess privileges within the application to perform their signing activities; however, the privileges must be limited. Comment: avoid an authorized user executing activities as another user. Priority: 1.
• The system shall ensure business user accounts cannot be administrator accounts and vice versa. Comment: helps prevent attackers escalating user accounts to access administrator features. Priority: 1.
• The system shall ensure system-level accounts have limited privileges. Comment: helps prevent attackers escalating user accounts to access administrator features. Priority: 1.


• The system shall ensure database access is performed using parameterized stored procedures to allow all table access to be revoked. Comment: apply database security principles. Priority: 1.
• The system shall avoid storing sensitive static content in web-accessible directory paths; the content shall be stored in non-accessible directories, with proxy access to this content through a handler that implements authorization, logging, and other security functions. Comment: prevents attackers from gaining access to the directory paths and discovering important information. Priority: 1.

Availability
• The backup system shall store the recovery sources in a network system. Comment: to help in case of failure or intruder action. Priority: 1.
• The system shall perform mirroring to allow data and software to be available at physically separated locations (a separate site when the application is on the Web). Comment: helps minimize the risk of a single point of failure. Priority: 3.
• The system shall apply a high-availability solution such as clustering. Comment: helps minimize the impact of potential system failures. Priority: 3.

Integrity
• The system shall ensure all data provided by the software has consistency (it either creates a new and valid state of data, or returns all data to its state before the transaction was started). Comment: to prevent an authorized person or system altering data inadvertently or intentionally. Priority: 1.

Auditing
• The system shall keep historical records (logging) of events and processes executed in or by an application. Comment: define more specific security loggings to allow recreating a clear picture of security events. Priority: 1.

Non-Repudiation
• The system shall implement cryptographic methods such as generating digital signatures or digital fingerprinting. Comment: helps the application and system avoid repudiation. Priority: 1.

Notes: Priority depicted with 1-High, 2-Medium, 3-Low.
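To illustrate the database-access requirement in Table B-3 (parameterized access so that direct table access can be revoked), the following is a minimal sketch using Python's standard sqlite3 module. The table and column names are hypothetical; in production the same idea would typically be realised with parameterized stored procedures on the organization's database platform.

# Hypothetical sketch of the parameterized database access requirement in Table B-3.
# Parameters are bound by the driver, so user input is never concatenated into SQL.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder is filled by the driver; input such as "' OR '1'='1"
    # is treated as data, not as SQL, which mitigates SQL injection.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))
    print(find_user(conn, "alice"))            # (1, 'alice')
    print(find_user(conn, "' OR '1'='1"))      # None: the payload is just a literal string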


Appendix C

Reference on Procurement Requirements

In gathering requirements, the organization should have the basic specification needed and an idea of the structure of the secure software to be developed. The organization should state the common security requirements for the secure software to be developed.

The targets for the secure software to be developed are as follows:
i) Risk reduction, which should focus on reducing vulnerabilities and minimizing the severity of cyber-attacks.
ii) Software assurance, which the organization should take as a strategic initiative to promote integrity, security and reliability in software.
iii) Procurement specification for control software, where the organization takes the initiative to develop a procurement document for control software, including hardware and software.

The goal of preparing the procurement requirements is to ensure that the secure software to be developed has the complete set of available security features. The organization is responsible for:
i) applying the security requirements to the secure software project and allocating the financial, technical and human resources required to meet the security requirements of the project;
ii) ensuring that the security controls are tested and validated during the acceptance test phase;
iii) maintaining the security controls throughout the life cycle of the secure software.

The procurement specification can describe the general idea of the secure software to be developed as shown in Figure A-1 (refer to Appendix A), which presents the structure of a secure software architecture in a web environment. The diagram shows the essential elements or components in the secure software and their relationship with users and the Internet. The importance of this architecture is:
i) to give guidance to the organization for its new secure software project development;
ii) to clearly define the possible elements involved, such as assets and controls;
iii) to define the scope of the secure software to be developed as in-house software, as presented in Figure A-1.

Referring to Figure A-1, the organization can provide a detailed specification as follows: identify assets to determine how the software needs to be protected. Asset identification can be done through different viewpoints: the customer's, the software owner's, and the attacker's. After identifying the assets, further analysis should be done on each to assign a protection priority. This will ensure that the most valuable assets receive the most attention in threat mitigation. Assets include data, software and hardware components, and communication services.

Before proposing specific controls for the components of the software, the organization should pay attention to critical risk management. During the risk analysis process, the organization should consider the following (a simple risk-scoring sketch is given at the end of this appendix):
i) the threats that are likely to want to attack the software;

ii) the risks present in each component of the software environment;
iii) the kinds of vulnerabilities that might exist in each component, as well as in the data flow;
iv) the business impact of such technical risks, were they to be realized;
v) the probability of such a risk being realized;
vi) any feasible countermeasures that could be implemented at each tier, taking into account the full range of protection mechanisms available.

The continuing steps are to define and explore the available security controls that the organization should suggest to ensure security. This document provides a guideline for the specific security tasks of each phase in the Secure Software Development Life Cycle (SSDLC), which will assist the organization in providing security controls for the secure software to be developed. Following the phases from beginning to end will guide the organization in producing a complete procurement document.

The mechanism for the procurement and accreditation of a system resilient against malicious activity encompasses the entire life cycle of an asset. Procurement can be accomplished through tenders, quotes and direct negotiations in accordance with current regulations. The steps involved are:
i) Identify Needs – The organization shall identify its needs before any procurement, whether through a vendor or in-house development.
ii) Procurement Specifications – The procurement specifications shall contain specific clauses regarding security requirements, product security certifications, source code availability, data disposal requirements, preference for local technologies and expertise, as well as requirements on development team competencies.
iii) Vendor Management – Vendor management includes the management of vendors providing hardware and software, consultancy services and managed services.
iv) Footprint of Resources – The footprint of resources refers to the complete history of movement of the assets from their origin to the organization. The acquisition process shall ensure a complete record of the footprint of resources throughout the system life cycle. The footprint of resources shall include the complete hardware and software supply chain.
v) System Life Cycle – Security shall be incorporated into all phases of the system development cycle, including software conceptualization, requirements gathering, design, implementation, testing, acceptance, deployment, maintenance and disposal. It is recommended that planning for the Information Security Management Plan follow the stages described in most models of the ICT System Life Cycle.
vi) Commissioning Process – Establish the administrator role and conduct a security posture assessment prior to system commissioning, at regular intervals during implementation, and when there are changes to the environment.
vii) Decommissioning Process – Conduct a backup and recovery test, and perform the data migration, before decommissioning. Change management shall be conducted to inform the relevant parties of the decommissioning of the system.
viii) Disposal – Disposal can be in the form of physical destruction and/or data sanitization.
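The following is a minimal, hypothetical sketch of the risk analysis considerations listed earlier in this appendix, combining business impact and probability into a simple ranking of components. The component names, scores and scales are illustrative assumptions only, not values prescribed by this guideline.

# Hypothetical sketch: ranking components by risk = business impact x probability,
# following the risk analysis considerations listed in this appendix.
# All component names and scores are illustrative assumptions.

components = [
    # (component, business impact 1-5, probability 1-5, candidate countermeasure)
    ("Web server",       4, 4, "Input validation; hardened configuration"),
    ("Web application",  5, 3, "Authentication and authorization; secure session handling"),
    ("Database",         5, 2, "Parameterized access; encryption of sensitive data"),
    ("Management LAN",   3, 2, "Firewall rules; administrator access control"),
]

def risk_score(impact: int, probability: int) -> int:
    """Simple qualitative risk score on a 1-25 scale."""
    return impact * probability

ranked = sorted(components, key=lambda c: risk_score(c[1], c[2]), reverse=True)

if __name__ == "__main__":
    for name, impact, prob, countermeasure in ranked:
        print(f"{name}: risk {risk_score(impact, prob):2d} -> {countermeasure}")

Ranking of this kind helps ensure that procurement clauses and countermeasures are specified first for the components carrying the highest risk.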

Appendix D

Reference on Example of Use Case and Misuse Case

A use case models the intended behavior of the software. This behavior describes the sequence of actions and events to be taken to address a business need. From use cases, misuse cases can be developed. Misuse cases, also known as abuse cases, help identify security requirements by modeling negative scenarios. Meanwhile, security use cases should be used to specify requirements that the software application shall meet in order to protect against or mitigate the misuse cases.

Figure D-1 below shows an example of a use case and misuse case diagram.
The use cases identified are manage item, register item, edit/modify item, view data, and borrow item.
The misuse cases identified are spoof user, invade privacy and manipulate item.
The security use cases identified are ensure authorization for access control, ensure privacy, ensure integrity and ensure non-repudiation.

Figure D-1: Example of Use Case and Misuse Case


Appendix E

Reference on Example of Threat Modeling

The Microsoft Threat Modeling (MTM) tool allows software architects to identify and mitigate potential security issues early. The tool provides structured, guided analysis of threats and mitigations based on the STRIDE taxonomy. STRIDE is an acronym for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The tool also provides reporting capabilities which can be used later in the secure code review and security testing/verification phases.
The tool enables anyone to:
• Communicate about the security design of their systems.
• Analyze designs for potential security issues using a proven methodology.
• Suggest and manage mitigations for identified security issues.

The figure above shows a data flow diagram (DFD) of a standard architecture. Based on the architecture diagram provided, the tool then generates a set of possible threats. However, the results generated by the tool may contain false positives, so an analysis is required to choose the possible threats that are relevant to the application being developed. The results may also vary depending on the settings chosen for each component in the diagram. A minimal sketch of the kind of threat enumeration the tool performs is given below.
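The following is a minimal, hypothetical sketch of the kind of rule-based threat enumeration a tool such as MTM performs over the interactions in a DFD. The interactions and the per-category rules are simplified assumptions loosely based on the table that follows; this is not the tool's actual rule set.

# Hypothetical sketch: enumerating STRIDE threat categories for DFD interactions.
# The rules below are simplified illustrations, not MTM's real rule set.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Very coarse rules: which STRIDE categories to raise for each kind of interaction.
RULES = {
    ("browser", "web_server"): ["E", "T", "I"],
    ("web_server", "web_application"): ["E", "T", "I"],
    ("web_application", "sql_database"): ["S", "T", "I"],
    ("sql_database", "web_application"): ["S", "T", "I", "D"],
}

def enumerate_threats(interactions):
    """Yield (source, destination, STRIDE category name) for each matched rule."""
    for src, dst in interactions:
        for code in RULES.get((src, dst), []):
            yield src, dst, STRIDE[code]

if __name__ == "__main__":
    dfd = [("browser", "web_server"), ("web_application", "sql_database")]
    for src, dst, threat in enumerate_threats(dfd):
        print(f"{src} -> {dst}: {threat}")

Each generated entry is only a candidate; as noted above, an analyst still has to decide which threats are relevant to the application being developed.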


The table below shows all the possible threats generated by MTM, with the STRIDE category for each threat.

A. Interaction: Web Server → Web Application
1. Elevation using impersonation (E)
2. Cross-site scripting (T)
3. Replay attacks (T)
4. Collision attacks (T)
5. Weak authentication scheme (I)

B. Interaction: Web Application → Web Server
1. Elevation using impersonation (E)
2. Cross-site scripting (T)
3. Web server process memory tampered (T)
4. Collision attacks (T)
5. Replay attacks (T)
6. Weak authentication scheme (I)

C. Interaction: SQL Database → Web Application
1. Potential excessive resource consumption for Web Application or SQL Database (D)
2. Potential SQL injection vulnerability for SQL Database (T)
3. Spoofing of destination data store SQL Database (S)
4. Authorization bypass (I)
5. Weak credential storage (I)
6. Authenticated data flow compromised (T)

D. Interaction: Web Application → SQL Database
1. Weak access control for a resource (I)
2. Persistent cross-site scripting (T)
3. Cross-site scripting (T)
4. Spoofing of source data store SQL Database (S)

E. Interaction: Browser → Web Server
1. Elevation using impersonation (E)
2. Web server process memory tampered (T)
3. Collision attacks (T)
4. Replay attacks (T)
5. Weak authentication scheme (I)

F. Interaction: Web Server → Browser
1. Elevation using impersonation (E)
2. Cross-site scripting (T)


Appendix F

FEEDBACK FORM

CYBER SECURITY GUIDELINES FOR SECURE SOFTWARE DEVELOPMENT LIFE CYCLE (SSDLC)

File name: MyVAC-3-FRM-0719-GD_SSDLC-V1
Document classification: Confidential

Name

Organisation

Please return this document to: [email protected]


Instructions to fill out this feedback form are as follows:

Field: Description

Section: Indicate the section to which your comment refers. Please choose General in this column if your comment: i. refers to the whole document, or ii. does not refer to any section of the document.

Line number: Indicate the line number to which your comment refers.

Figure/Table: Indicate the figure or table to which your comment refers.

Type of comment: Choose the type most relevant for your comment. The following types are available: general (ge), technical (te) and editorial (ed). Only enter the short form for the type: ge, te or ed.

Comment: Enter your comment in this column and explain the reason for the comment. If you wish to submit figures or complex objects in addition to the textual comments on the particular section, insert them as separate pages.

Proposed change: If appropriate, enter a modified version of the section or sentence in this column.


Table columns: Section (e.g. 3.1); Line number (e.g. 17); Figure/Table (e.g. Table 1); Type of comment (ge = general, te = technical, ed = editorial); Comments; Proposed changes


Other comments (if any):

By signing this form, I hereby acknowledge that I have read and understood the "Cyber Security Guidelines for Secure Software Development Life Cycle (SSDLC)". My comments on this document are as mentioned above.

Signature:

………………………………… Name:

Date:

THANK YOU FOR YOUR TIME. HAVE A PLEASANT DAY.

---END OF DOCUMENT---