DEVELOPMENT OF A QUALITY MANAGEMENT ASSESSMENT TOOL TO EVALUATE SOFTWARE USING SOFTWARE QUALITY MANAGEMENT BEST PRACTICES


Authors Erukulapati, Kishore

Publisher Cunningham Memorial Library, Terre Haute, Indiana State University


Link to Item http://hdl.handle.net/10484/12347

DEVELOPMENT OF A QUALITY MANAGEMENT ASSESSMENT

TOOL TO EVALUATE SOFTWARE USING SOFTWARE

QUALITY MANAGEMENT BEST PRACTICES

______

A Dissertation

Presented to

The College of Graduate and Professional Studies

College of Technology

Indiana State University

Terre Haute, Indiana

______

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy

______

by

Kishore Erukulapati

December 2017

© Kishore Erukulapati 2017

Keywords: Software Quality, Technology Management, Software Quality Management, Bibliometric techniques, Best Practices

VITA

Kishore Erukulapati

EDUCATION
Ph.D., Technology Management, Indiana State University (ISU), Terre Haute, IN, 2017.
M.S., Quality Assurance, California State University (CSU), Dominguez Hills, CA, 2011.
B.Tech., Computer Science and Engineering, National Institute of Technology (NIT), Calicut, Kerala, India, 1996.

INDUSTRIAL EXPERIENCE
2011-Present: Hawaii Medical Service Association. Position: Senior IT Consultant.
2008-2010: Telvent. Position: Operations Manager.
2008: ACG Tech Systems. Position: Test Manager.
2000-2008: IBM Global Services. Position: Project Manager.
1996-2000: Tata Consultancy Services. Position: System Analyst.

PUBLICATIONS/PRESENTATIONS
Erukulapati, K., & Sinn, J. W. (2017). Better Intelligence. Quality Progress, 50(9), 34-40.

Erukulapati, K. (2016). What Can Software Quality Engineering Contribute to Cybersecurity? Software Quality Professional, 19(1).

Erukulapati, K. (2015, Dec 2). Information Technology in Healthcare: Current Status and Future Perspectives. Presented at IEEE Hawaii Chapter - Computer Society Meeting, Honolulu, Hawaii.

Erukulapati, K. (2013, Nov 21). Integration of COTS (Commercial Off-The-Shelf) Software Products – A Quality Management Roadmap. Presented at 2013 ATMAE Annual Conference, New Orleans, Louisiana.

Erukulapati, K. (2013, Nov 21). Closing the Quality Gap in Healthcare through Systems Thinking. Presented at 2013 ATMAE Annual Conference, New Orleans, Louisiana.

Erukulapati, K. (2013, July 5). Advanced Analytics – The next frontier for innovation and STEM jobs. Presented at IEEE Hawaii Chapter - Computer Society Meeting, Honolulu, Hawaii.

Erukulapati, K. (2013, Mar 4). Integrated Lean Six Sigma. Presented at 2013 ASQ Lean Six Sigma Conference, Phoenix, Arizona.

Erukulapati, K. (2012, Nov 14). Software Quality Affects us all. Presented at ASQ Hawaii Meeting, Honolulu, Hawaii.

PROFESSIONAL AFFILIATIONS
ASQ: Senior Member; Section Membership Chair.
IEEE: Senior Member; Section Vice-Chair; Section Computer Society Chair.


COMMITTEE MEMBERS

Committee Chair: M. Affan Badar, Ph.D.

Professor, Department of Applied Engineering and Technology Management

Indiana State University

Committee Member: Merwan B. Mehta, Ph.D.

Professor, Department of Technology Systems

East Carolina University

Committee Member: Jeff Ulmer, Ph.D.

Professor, Department of Industrial Management and Engineering Technology

University of Central Missouri


ABSTRACT

Organizations are constantly searching for competitive advantage in today’s complex global marketplace through improved quality, better affordability, and quicker delivery of products and services. This is especially true for software, both as a product and as a service. Other things being equal, the quality of software impacts consumers, organizations, and nations.

Deficiencies in the quality and efficiency of the processes used to create and deploy software can result in cost and schedule overruns, cancelled projects, loss of revenue, loss of market share, and loss of consumer confidence. Hence, it behooves us to constantly explore quality management strategies for delivering high-quality software quickly and at an affordable price.

This research identifies software quality management best practices derived from the scholarly literature using bibliometric techniques in conjunction with a literature review; synthesizes these best practices into an assessment tool for industrial practitioners; refines the assessment tool based on academic expert review; further refines it based on a pilot test with industry experts; and undertakes industry practitioner validation. Key elements of this software quality assessment tool address people, organizational environment, process, and technology best practices. Additionally, weights were assigned to the people, organizational environment, process, and technology best practices based on their relative importance, so that organizations can calculate an overall weighted score to evaluate where they stand with respect to their peers in producing quality software. This study indicates that people best practices carry 40% of the overall weight, organizational environment best practices 30%, process best practices 15%, and technology best practices 15%. The assessment tool that is developed will be valuable to organizations that seek to take advantage of rapid innovations in pursuing higher software quality. These organizations can use the assessment tool to implement best practices based on the latest management strategies, which can lead to improved software quality and other competitive advantages in the global marketplace.

This research contributed to the current academic literature in software quality by presenting a quality assessment tool based on software quality management best practices, contributed to the body of knowledge on software quality management, and expanded the knowledgebase on quality management practices. It also contributed to current professional practice by incorporating software quality management best practices into a quality management assessment tool for evaluating software.
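For illustration only, the weighted-score calculation described in the abstract can be sketched as follows. The 40/30/15/15 weights are the factor weights reported by this study; the category scores and the 0-100 scale are hypothetical assumptions introduced here for the example.

```python
# Illustrative weighted-score calculation for the assessment tool.
# WEIGHTS reflects the final factor weights reported in this study;
# the category scores below are hypothetical, on an assumed 0-100 scale.

WEIGHTS = {
    "people": 0.40,
    "organizational_environment": 0.30,
    "process": 0.15,
    "technology": 0.15,
}

def overall_weighted_score(scores: dict) -> float:
    """Combine per-category scores into one overall weighted score."""
    return sum(WEIGHTS[category] * score for category, score in scores.items())

# Hypothetical assessment results for one organization:
example_scores = {
    "people": 80,
    "organizational_environment": 70,
    "process": 60,
    "technology": 90,
}

# 0.40*80 + 0.30*70 + 0.15*60 + 0.15*90 = 75.5
print(overall_weighted_score(example_scores))
```

An organization could compare such a score against peer scores to see where it stands, which is the intended use of the tool described above.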


ACKNOWLEDGEMENTS

I uncovered the existence of this unique and excellent doctor of philosophy in Technology Management program by serendipity. I am very fortunate to have met so many wonderful people throughout the doctoral journey who helped me become an independent research scholar, a critical thinker, a life-long learner, and, most importantly, a better person. Thanks to the power of the Internet and software in the 21st century!

I am indebted to the following professors and mentors for their time, dedication, commitment, enthusiasm, encouragement, motivation, advice, knowledge, experience, expertise, guidance, insights, perspectives, contribution, patience, support, and understanding.

• Dr. M. Affan Badar: Dissertation Committee Chair, Program Planning Committee Member, and instructor for a Quality Systems Specialization core course (MET 612: Reliability, Maintainability, and Service).

• Dr. Merwan B. Mehta: Dissertation Committee Member, Program Planning Committee Member.

• Dr. Jeff Ulmer: Dissertation Committee Member, an alumnus of the doctor of philosophy in Technology Management program, and instructor for the first course that I took in the program, a Technology Management core course (INDM 5015: Legal Aspects of Industry).

• Dr. John W. Sinn: Program Planning Committee Chair, co-author of the first scholarly article that I published in ASQ’s Quality Progress, instructor for a Technology Management core course (TECH 6820: Technology Systems Assessment and Innovation), instructor for two Quality Systems Specialization core courses (QS 7270: Documentation Based Process Improvement; QS 7260: Quality Change Culture and Systems), and instructor for Technology Management professional studies core courses (TECH 7880: Internship #1 in Technology Management; TECH 7880: Internship #2 in Technology Management).

I am thankful to the following professors and staff who helped me during this doctoral journey.

• Dr. A. Mehran Shahhosseini: Director of the Ph.D. in Technology Management program.

• Dr. Barry Duvall: Instructor for a Technology Management core course (ITEC 6050: Strategies for Technology Management and Communication).

• Dr. Craig Rhodes: Instructor for a Technology Management core course (TECH 708-5A: Impacts of Technology).

• Dr. David Beach: Instructor for a Research core course (COT 702: Advanced Technological Research Methods).

• Dr. George Maughan: Former director of the Ph.D. in Technology Management program and instructor for two Research core courses (COT 710: Research Residency Orientation Seminar; COT 711: Research Residency Seminar).

• Dr. Michael Hayden: Instructor for three Quality Systems Specialization core courses (TMGT 813: Quality Standards Leadership; TMGT 814: Quality Systems Seminar; MET 611: Experimental Design and Process Analyses) and one Research core course (COT 703: Advanced Statistical Analysis in Technology).

• Mary Griffy: Former Administrative Specialist supporting the Ph.D. in Technology Management program.

• Marti Mix: Administrative Specialist supporting the Ph.D. in Technology Management program.

• Nicholas Aballi: Technology Specialist supporting the Ph.D. in Technology Management program.

I am thankful to the academic experts, industry experts, and industry practitioners for their invaluable feedback, which helped in refining the software quality management assessment tool. I am also thankful to the alumni and students of the Ph.D. in Technology Management program who interacted with me and shared their insights and perspectives during this journey.


TABLE OF CONTENTS

COMMITTEE MEMBERS ...... iii

ABSTRACT ...... iv

ACKNOWLEDGEMENTS ...... vi

LIST OF TABLES ...... xii

LIST OF FIGURES ...... xiv

INTRODUCTION ...... 1

Chapter Overview ...... 1

Background ...... 1

Statement of the Problem ...... 10

Research Questions ...... 11

Statement of the Purpose ...... 11

Statement of the Need ...... 12

Statement of the Assumptions ...... 17

Statement of the Limitations ...... 19

Statement of the Delimitations...... 22

Statement of the Methodology ...... 25

Definition of the Key Terms ...... 25

Summary ...... 28

REVIEW OF LITERATURE ...... 29

Chapter Overview ...... 29

Survey of Prior Scholarly work on Software Quality Management ...... 29

Survey of Prior Scholarly Work on Bibliometric Techniques ...... 65

Chapter Summary ...... 77

METHODOLOGY ...... 78

Chapter Overview ...... 78

Research Design...... 79

Academic Expert Reviews ...... 81

Industry Expert Pilot Test ...... 81

Industry Practitioner Validation ...... 82

Data Analysis Tools ...... 82

Data Collection and Analysis Approach ...... 83

Institutional Review Board (IRB) ...... 88

Validity ...... 88

Chapter Summary ...... 91

RESULTS ...... 92

Chapter Overview ...... 92

Results ...... 92

DISCUSSION, CONCLUSIONS, AND RECOMMENDATIONS ...... 126

REFERENCES ...... 138

APPENDIX A: LISTING OF STANDARDS ...... 173

APPENDIX B: IRB TRAINING COMPLETION REPORT ...... 175

APPENDIX C: PRODUCTIVE AND INFLUENTIAL JOURNALS ...... 176

APPENDIX D: IRB DETERMINATION OF EXEMPT STATUS ...... 178

APPENDIX E: INVITATIONAL EMAIL TO PARTICIPANTS ...... 179

APPENDIX F: INFORMED CONSENT ...... 180

APPENDIX G: INDUSTRY EXPERTS PILOT TEST ...... 181

G.1 Industry Experts Pilot Test: Organizational Environment ...... 181

G.2 Industry Experts Pilot Test: People ...... 183

G.3 Industry Experts Pilot Test: Process ...... 184

G.4 Industry Experts Pilot Test: Technology ...... 186

APPENDIX H: REVISED BEST PRACTICES...... 187

H.1 Revised Best Practices: People ...... 187

H.2 Revised Best Practices: Organizational Environment ...... 189

H.3 Revised Best Practices: Process ...... 191

H.4 Revised Best Practices: Technology ...... 195

APPENDIX I: FINAL ASSESSMENT TOOL ...... 197

APPENDIX J: PRACTITIONER VALIDATION RESULTS ...... 205

J.1 Practitioner Validation Results: People ...... 205

J.2 Practitioner Validation Results: Organizational Environment ...... 207

J.3 Practitioner Validation Results: Process ...... 209

J.4 Practitioner Validation Results: Technology ...... 215

LIST OF TABLES

Table 1. Digital Disruptions ...... 3

Table 2. Agile and Non-agile practices...... 45

Table 3. Productive Journals ...... 94

Table 4. Productive Organizations ...... 98

Table 5. Productive Authors ...... 99

Table 6. Top Funding Agencies ...... 99

Table 7. Productive Countries...... 100

Table 8. Influential Authors ...... 102

Table 9. Influential Countries ...... 103

Table 10. Influential Journals ...... 103

Table 11. Author Co-citation Clusters ...... 106

Table 12. Software Quality Cluster Themes 1 ...... 108

Table 13. Software Quality Cluster Themes 2 ...... 109

Table 14. Organizational Environment - Best Practices ...... 110

Table 15. People - Best Practices ...... 111

Table 16. Process - Best Practices ...... 112

Table 17. Technology - Best Practices ...... 115

Table 18. Factor Weights ...... 117

Table 19. Academic Expert Feedback On Best Practices ...... 119

Table 20. Academic Expert Feedback On Factor Weight ...... 120

Table 21. Revised Factor Weight Based On Academic Expert Feedback ...... 120

Table 22. Scoring Table ...... 121

Table 23. Industry Expert Pilot Test Results – Weighted Score...... 121

Table 24. Industry Expert Feedback On Best Practices ...... 122

Table 25. Industry Expert Feedback On Factor Weight ...... 123

Table 26. Revised Factor Weights Based on Industry Expert Feedback ...... 123

Table 27. Industry Practitioner Validation Results – Weighted Score ...... 125


LIST OF FIGURES

Figure 1. Holistic View of Software Production ...... 15

Figure 2. Research Design ...... 80

Figure 3. Data Analysis for Identifying Software Quality Management Best Practices ...... 83

Figure 4. Journal Count by Year ...... 96

Figure 5. Journal Citation Count by Year ...... 97

Figure 6. Network Visualization Author Co-citation ...... 105

Figure 7. Density Visualization Author Co-citation ...... 106

Figure 8. Network Visualization Co-word...... 107

Figure 9. Density Visualization Co-word ...... 108


CHAPTER 1

INTRODUCTION

Chapter Overview

The purpose of this chapter is to provide the foundation for the proposed research. First, background information is presented. This includes the critical role of software and software quality, the need to treat software quality management in a broader and more integrated context, and the use of bibliometric techniques to study areas of science, engineering, technology, and management.

Next, the proposed research is outlined in terms of the problem statement, the research questions, the purpose of the study, a justification of why this study is needed, the assumptions and limitations related to the proposed methodology, and the statement of the methodology.

Finally, the chapter concludes with definitions of the key terms that are important for the proposed research.

Background

Software has changed and continues to change the world beyond human imagination (DeMarco, 2011). Technological advances and innovations in software occur more rapidly than in most other fields, and software plays a central and growing role in society (Tockey, 2015; Selby, 2007; Smith, 2002). Software is part of daily life in areas such as commerce, e-business, e-government, e-education, e-health, e-learning, embedded systems, cloud, mobile services, smart grids, smart finance, smart agriculture, smart homes, smart devices, intelligent personal assistants, intelligent transportation systems, smart cities, and so on (Kanellopoulos & Yu, 2015; Huang, 2006). Software enables individuals to lead richer lives in the ways they learn about the world and communicate with one another (Hsu, Marinucci, & Voas, 2015). Software also plays a highly visible role in the economy, security, societal well-being, international competitiveness, and defense of nations (Misa, 2015; Osterweil, 1996).

Because software is simpler to update or enhance than hardware, there is a continuing trend to move functionality gradually from hardware to software (Holzmann, 2017). Traditionally hardware-controlled and hardware-focused systems such as cars, robots, and power systems are being transformed into software-driven, software-intensive systems that are now run by software (Gorschek, Fricker, Palm, & Kunsman, 2010).

Software is transforming all industries, such as agriculture, aeronautics, automotive, construction, defense, energy, entertainment, finance, healthcare, manufacturing, nuclear, railways, telecommunications, media, transportation, printing, and publishing, and all of these industries center increasingly on software (Ebert, Hofner, & Mani, 2015). Software provides competitive differentiation and improves the effectiveness and efficiency of business processes through digital transformation, and it plays an important and strategic role in organizations’ ability to survive and succeed in a hyper-competitive, volatile, and uncertain global marketplace (Rodriguez, Piattini, & Fernandez, 2015; Dass, 2012; Kitchenham & Pfleeger, 1996). Software is increasingly accepted globally as a key driver of social and economic growth that enables organizations to retain and expand market share and stay competitive in a rapidly changing global marketplace (Osterweil, Ghezzi, Kramer, & Wolf, 2008). Examples of digital disruptions are listed in Table 1 (Demirkan, Spohrer, & Welser, 2016).

Table 1

Digital Disruptions

Organization | Disruption | Technology

Uber | The world's largest taxi company, but owns no taxis | Cloud-enabled mobile location-based services

Airbnb | The largest accommodation provider, but owns no real estate | Cloud-enabled lodging services

Skype | One of the largest phone companies, but owns no telecommunications infrastructure | Cloud-enabled communication services

Alibaba | The world's biggest online commerce company, but has no inventory | Cloud-enabled retail services

Facebook | The most popular media owner, but creates no content | Cloud-enabled social network services

Netflix | The largest movie house, but owns no cinemas | Cloud-enabled entertainment services

Amazon | The largest retailer, but produces no products or services | Cloud-enabled retail services

Massive open online courses (MOOCs) | The largest schools, but have no campuses | Cloud-enabled education services

Apple and Google | The largest purveyors of software applications, but don't write apps | Cloud-enabled mobile app services

IBM Watson | The largest cognitive solutions platform, but does not require customers to hand over their proprietary and unique datasets | Cloud-enabled cognition as a service

Organizations view software quality management as the basis for achieving competitive advantage and enhancing business agility (Thomas, Hurley, & Barnes, 1996). Software quality management becomes increasingly important as more organizations outsource their functions, and there is an increasing trend toward the adoption of software quality management within organizations globally (Rodriguez et al., 2015). Software quality management is an important research discipline (Bourque & Fairley, 2014; Selby, 2007). Due to continuous and rapid advances in technology and the critical strategic role of software as a competitive differentiator for businesses, software quality management is a hot topic for practitioners and researchers (Richardson, Reid, Seidman, Pattinson, & Delaney, 2011; Rodríguez, Oviedo, and

Piattini, 2016).

A Gartner report forecasts that the number of connected things will reach 20.4 billion in

2020 (Gartner, n.d.). As software becomes more pervasive, ubiquitous, and distributed, and as daily personal and work lives increasingly depend on it, resilience, security, trustworthiness, privacy, and safety play a dominant role, because the data of individuals and critical infrastructures such as water, communications, power, and transportation are closely interconnected on a large scale in highly complex cyber-physical-social systems (Department of Homeland Security [DHS], n.d.; Hsu et al., 2015). Software quality can be a key differentiator that an organization offers to customers, and innovation in software quality management is driven by the critical role of software within product innovation in a fast-changing technology landscape (Rodríguez et al., 2015). Software capabilities will strongly determine how well people will be able to come together to understand each other and find ways to make their local and global situations more mutually satisfactory and sustainable into the future (Boehm, 2008).

As software becomes increasingly complex, pervasive, transparent, connected, business-critical, safety-critical, mission-critical, and life-critical, software quality management continues to be a strategic issue for organizations and is very valuable for producing quality, resilient, safe,

Krishnan, 1999; Whittaker & Voas, 2002). Globalization requires managing software beyond the boundaries of organizations and geographies and adds new challenges such as cultural issues, language issues, intellectual property issues, regulatory issues, political issues, operating cost overhead issues, and outsourcing issues to software quality despite advantages such as time-zone effectiveness, reduced overall costs, resources, competencies, new markets, new products, local customization, and winning local business (Boehm, Chulani, Verner, & Wong, 2008; Ebert et al.,

2015).

The impact of poor software quality can range from minor inconveniences such as microwave not coming up when set to bake, or an email to a friend not sent to undesirable, dangerous, and catastrophic consequences such as failure of antilock brake in cars, or software in x-rays, or avionics in airplanes that could cause destruction and loss of lives (Liebowitz, 2015;

Daughtrey, 2014; Nindel-Edwards & Steinke, 2008). Even the smallest mistake in software can have large consequences such as a single misplaced break statement that caused the entire AT&T long distance calling network crash in the US and a single uninitialized variable that caused Mars

Polar Lander to crash (Holzmann, 2015b). Other examples of software failures are Deep Impact mission’s clock counter overflow, LightSail spacecraft’s snag, Curiosity rover’s glitches, Year

2000 problem, and Boeing Dreamliner’s overflow issue (Holzmann, 2015c). Software quality is of high importance for all software products and software quality is a non-negotiable characteristic for critical software products on which lives of many people depend (Axelsson &

Skoglund, 2016). Organizations incur costs, lose credibility, reputation, and revenue, cause excessive rework and customer dissatisfaction due to low quality software products or delayed software projects and legal liability risks (Arora, Caulkins & Telang, 2006; Rodriguez et al., 6

2015; Hinkle, 2007). Software failures can have unintended negative impacts on environment, people, and society (Gotterbarn & Miller, 2010).

Software security is one of the main concerns of most organizations and recent security breaches could be traced back to software quality issues (Flinders, 2015). The active participation of human users is an integral part of software systems that include social networks, cloud computing systems, crowdsourcing systems, open source systems, and systems supporting gig economy (Shaw, 2016). The usage of software systems is not limited to benign environment with compassionate, honest, friendly, caring, and gentle users (Hilburn & Humphrey, 2002).

Software products that enhance lifestyles also have the potential to cause harm as several recent studies show rise of cyberbullying, cyberstalking, and cyber harassment (Cleland-Huang, 2016).

Cyber criminals such as hackers use cyber-physical-social systems to commit cybercrimes such as fraud, theft of sensitive, personal, classified, and confidential data, destruction of data, damage of data, corruption of data, disruption of operations, destruction of infrastructure, damage of infrastructure, distributed denial-of-service, and other types of attacks (Hsu et al., 2015).

Malware costs more than $500 billion to victims annually worldwide and digital crime costs an average loss of $7.7 million to a company annually worldwide (Greengard, 2016). The

United State government’s health insurance marketplace website healthcare.gov built to provide health insurance enrollment through Patient Protection and Affordability Care Act experienced quality and cyber security issues and was considered a disastrous software rollout that caused loss of billions of dollars of tax payers money (Howles, 2014; Shull, 2014c). Poorly written software is the root of expensive and destructive cyber security problems and this caused security breaches in organizations such as Sony, Home Depot, JP Morgan, eBay, and Target impacting their customers negatively (Maughan, 2010; Houser, 2015). The majority of security 7 vulnerabilities reside in top operating system software and applications software, including those from Adobe, Apple, Google, and Microsoft (Denning & Denning, 2016, United States Computer

Emergency Readiness Team [US-CERT], n.d.). A bug in open source free software OpenSSL caused security vulnerability in several organizations such as Amazon, Facebook, Netflix, and

Google impacting their customers negatively (Goldsborough, 2014; Kamp, 2014). FDA’s analysis of recalls conducted between 2003 and 2012 concludes that software is one of the most frequent causes for recalls (Food and Drug Administration [FDA], n.d.-a).

Automotive industry faced several recalls recently due to software defects such as Toyota’s unintended acceleration bug, Volkswagen’s embedded defeat device, and Fiat Chrysler’s software security loophole (Hatton & van Genuchten, 2016). Software defects caused airport closures, personal data theft, plane malfunctions, and unreachable 911 centers (Shull, Carleton,

Carriere, Prikladnicki, & Zhang, 2016). Electronic health record software systems intended to replace paper charts and illegible handwriting to prevent medical errors face criticism from medical community due to quality, safety, security, and interoperability issues (Shull, 2014c).

The Office of Personnel Management reported that sensitive information of 21.5 million individuals was stolen and 5.6 million federal employees' fingerprints were compromised in the massive cyber security breach (Everett, 2015, Office of Personal Management [OPM], n.d.). The

Anthem health insurance system breach exposed personal data of 79 million consumers

(Denning & Denning, 2016). The complicated software supply chains and software defects in these chains become avenues for vulnerabilities and cyber intrusions that cyber criminals, malcontents, terrorists, and hostile nations can exploit (Nielson, 2015). Software quality issues impact consumers, organizations, and nations. Cost overruns, schedule overruns, technical debt, 8 loss of market share, cancelled projects, loss of revenue, and loss of consumer confidence are inevitable as a result of software quality issues (Dalal & Chhillar, 2012).

There is a gap between the state-of-the-art and state-of-the-practice (Arce et al., 2013).

Software security and software assurance have become concerns for many (Mead, 2009).

Security assurance is a challenge for software development organizations but is a necessity for consumers (Lipner, 2015). Researchers and scholars are continually producing new knowledge and insights on software quality management strategies in scholarly peer reviewed journals and the number of scholarly articles has increased. Producing high quality reliable software depends on whether and how fast humans work from certain ideas originating in the past (Whittakar and

Voas, 2002). Research on Software quality management that focuses on evaluating and improving software quality made progress in theories and applications but it is fragmented, disjointed, and incoherent that slows the growth of new theory building and there were no systematic and coherent theories evolved for software quality management (Rai, Song, & Troutt,

1999; Palmquist et al., 2013). More work is needed to integrate software quality management approaches that are widely used but rarely integrated in a coherent way (Pardo, Pino, García, &

Piattini, 2011). Software quality management must be treated in a broader, more integrated context than in the past (Breu, Kuntzmann-Combelles & Felderer, 2014). Traditional approaches of perimeter security through guns, gates, and guards are insufficient in virtual world and virtual world requires different approaches to ensure the confidentiality, integrity, and availability of products that rely on software (Chrissis, Konrad, & Moss, 2013). Software researchers and practitioners must break through traditional barriers and look at software quality problems and approaches in a broader way to overcome software quality challenges and provide solutions that can help humankind more effectively (Notkin, 2009). These indicate the need to have an 9 integrated and comprehensive approach to gain better understanding of software quality management research field that helps in improving software quality management theory and practices.

Single-dimensional and silo approaches such as focusing on processes without considering people, technology, and organizational environment aspects or focusing on technology without considering process, people, and organizational environment and so on are insufficient to overcome software quality challenges. Promoting a holistic approach and systems thinking that focuses on an integrated and comprehensive approach, breaking through traditional barriers to design, develop, and improve software based on multidimensional best practices is needed. In addition, collaboration is needed among disparate research areas to break through traditional barriers and look at software quality problems and approaches in a broader way to overcome software quality challenges and take advantage of these software technological innovations that improve the health, wealth, knowledge, quality of life, and well-being of all.

These indicate the need to have an integrated and comprehensive approach to evaluate and improve software and develop quality management strategies based on software quality management best practices to support industry practitioners and promote collaboration among disparate research areas in the discipline.

Successful codification of best practices closes the gap between state-of-the-art and state- of-the-practice and allows organizations to move forward and avoid the mistakes by learning from the mistakes and experiences of others (Grady, 1993). Insight into best practices can be gained by considering the works of a great number of researchers in the discipline over an extended period of time (Acedo, Barroso, and Galan, 2006). This is an iterative process that contributes to the growth and evolution of a discipline and these researchers have contributed to 10 the breadth of knowledge by using published scholarly articles (Ramos‐Rodríguez &Ruíz‐

Navarro, 2004). Scholarly knowledge must be organized, carefully managed, expanded, and optimized on ongoing basis. Scholarly journals are not published solely for the sake of academic communities. Industry experts and scholars use journals to identify current best practices and communicate results to industry practitioners, policy makers and regulators (Ramos‐Rodríguez

& Ruíz‐Navarro, 2004). Identifying seminal sources and understanding relationships among them helps in understanding a field and best practices (Dingsøyr, Nerur, Balijepally, & Moe,

2012). Identifying software quality management best practices in the literature as the foundation for creating an assessment tool, and then using that tool to assess software, helps identify gaps and develop strategies to improve software.
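One bibliometric technique behind the idea of surfacing seminal sources and the relationships among them is co-citation counting, which can be sketched in a few lines. The author names and reference lists below are hypothetical, and this is an illustration of the general technique, not this study's actual analysis pipeline.

```python
from itertools import combinations
from collections import Counter

# Toy sketch of co-citation counting: two sources are "co-cited" each
# time they appear together in the same article's reference list.
# Reference lists below are hypothetical.
reference_lists = [
    ["Deming", "Juran", "Humphrey"],   # references cited by article 1
    ["Deming", "Humphrey", "Boehm"],   # references cited by article 2
    ["Juran", "Humphrey"],             # references cited by article 3
]

cocitations = Counter()
for refs in reference_lists:
    # Count each unordered pair of sources cited together.
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# Higher counts suggest a closer intellectual relationship between sources.
print(cocitations.most_common(2))
```

Aggregated over a large body of literature, such pair counts form the input to the co-citation clustering and network visualizations discussed later in this work.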

This study identifies software quality management best practices, synthesizes them into an assessment tool for industrial practitioners, refines the tool based on academic expert review, further refines it based on pilot testing with industry experts, and validates it with industry practitioners. This research aims to fill a gap in the academic literature and in practice, adds to the knowledgebase of software quality management, and is expected to complement and enhance the findings of other research on software quality management. The comprehensive software quality assessment tool provides specific action items for improvement.

Statement of the Problem

The problem of this study was to develop a quality management assessment tool to evaluate software using software quality management best practices.

Research Questions

RQ1. What software quality management best practices pertaining to people can be used to evaluate the software that organizations produce?

RQ2. What software quality management best practices pertaining to organizational environment can be used to evaluate the software that organizations produce?

RQ3. What software quality management best practices pertaining to process can be used to evaluate the software that organizations produce?

RQ4. What software quality management best practices pertaining to technology can be used to evaluate the software that organizations produce?

RQ5. What are the key elements of a software quality management assessment tool used to evaluate the software that organizations produce?

Statement of the Purpose

The purpose of the study is to develop a quality management assessment tool to evaluate software using software quality management best practices. Organizations search for competitive advantage in the complex global marketplace by improving the quality, affordability, and delivery of products and services (McAfee & Brynjolfsson, 2008). Organizations and nations continue to invest a great deal of time, money, and effort in software (Kanellopoulos & Yu, 2015). Quality management strategies must be continuously improved to deliver high quality software quickly and at an affordable price. This research presents a best-practices-based assessment tool to organizations that are taking advantage of rapid innovations in software, to help them develop quality management strategies for competitive advantage.

This research contributes to the current academic literature in software quality management by presenting a quality management assessment tool based on software quality management best practices, contributes to the body of knowledge on software quality management, and expands the knowledgebase on quality management practices. It also provides a new and rich view of the history of the software quality management discipline through a holistic view of the discipline.

This research also contributes to current professional practice by incorporating software quality management best practices into a quality management assessment tool that evaluates software and continually improves the knowledgebase. Practitioners can utilize software quality management best practices to differentiate their organization's performance within the industry and within their respective market.

Many technology management disciplines could benefit in a similar fashion by developing appropriate quality management best practices. Educators, researchers, engineers, journal editors, publishers, professional associations, policy makers, commercial and industrial communities, business and information technology management professionals, and information technology consultants can all benefit from a quality management assessment tool that incorporates software quality management best practices, helping them keep up with rapid innovation and the needs of business.

Statement of the Need

Despite rapid innovations in software, poor software quality continues to be an issue. The United States government's health insurance marketplace website, healthcare.gov, built to provide health insurance enrollment under the Patient Protection and Affordable Care Act, experienced quality issues and was considered a disastrous software rollout (Howles, 2014; Shull, 2014c).

Poorly written software is at the root of cybersecurity problems and has caused security breaches at organizations such as Sony, Home Depot, JP Morgan, eBay, and Target, negatively impacting their customers (Maughn, 2010; Houser, 2015). A bug in the free, open source software OpenSSL caused a security vulnerability at several organizations, including Amazon, Facebook, Netflix, and Google, negatively impacting their customers (Goldsborough, 2014; Kamp, 2014).

The FDA's analysis of medical device recalls conducted between 2003 and 2012 concluded that software is one of the most frequent causes of recalls (FDA, n.d.-a). The automotive industry recently faced several recalls due to software defects, such as Toyota's unintended acceleration bug, Volkswagen's embedded defeat device, and Fiat Chrysler's software security loophole (Hatton & van Genuchten, 2016). Complicated software supply chains have become avenues for cyber intrusions (Nielson, 2015). Software quality issues impact consumers, organizations, and nations. Cost overruns, schedule overruns, cancelled projects, technical debt, loss of market share, and loss of consumer confidence are inevitable results of software quality issues (Dalal & Chhillar, 2012).

There is a gap between the state-of-the-art and the state-of-the-practice (Arce et al., 2013). Software security and software assurance have become concerns for many (Mead, 2009). Security assurance is a challenge for software development organizations but a necessity for consumers (Lipner, 2015). Researchers and scholars continually produce new knowledge and insights on software quality management strategies in scholarly peer-reviewed journals, and the number of scholarly articles has increased. Producing high quality, reliable software depends on whether and how fast humans build on ideas originating in the past (Whittaker and Voas, 2002). Research on software quality management, which focuses on improving software quality, has made rapid progress in theories and applications, but it is fragmented, incoherent, and disjointed; this slows new theory building, and no systematic and coherent theories have evolved for software quality management (Rai, Song, & Troutt, 1999; Palmquist et al., 2013). More work is needed to integrate software quality management approaches that are widely used but rarely integrated in a coherent way (Pardo et al., 2011).

Software quality management must be treated in a broader, more integrated context than in the past (Breu, Kuntzmann-Combelles, & Felderer, 2015). Traditional approaches to perimeter security through guns, gates, and guards are insufficient in the virtual world, which requires different software quality management approaches to ensure the confidentiality, integrity, and availability of products that rely on software (Chrissis et al., 2013). Software researchers and practitioners must break through traditional barriers and view software quality problems and approaches more broadly to overcome software quality challenges and provide solutions that help humankind more effectively (Notkin, 2009). These observations indicate the need for an integrated and comprehensive approach that provides a better understanding of the software quality management research field and helps improve software quality management theory and practice.

A holistic view sees software production at the intersection of people, process, technology, and organizational environment. This is illustrated in Figure 1.

[Figure 1 diagram omitted: Software shown at the intersection of People, Process, Technology, and Organizational Environment.]

Figure 1. Holistic View of Software Production

A quality management assessment tool based on software quality management best practices provides several advantages, listed below.

• Provides an objective assessment of organizations based on industry best practices.

• Helps organizations, individuals, and governments be confident about the software they procure.

• Helps software producers find deficiencies in organizational practices and develop roadmaps to rectify those deficiencies and improve the practices.

• Helps software producers provide confidence in their software products.

• Helps software producers demonstrate the application of industry best practices.

• Helps organizations foster learning, sharing of best practices, and improved decision making across the organization.

• Helps organizations apply industry best practices and avoid delivering defective software to consumers.

• Helps organizations bring efficiencies to global supply chain systems.

• Can be used for benchmarking.

• Can be used to choose the best software from multiple available products.

• Can be used to assess products delivered by outsourced vendors and their subcontractors.

• Can be used to determine whether software is ready for reuse in other software.

• Can be used to determine whether software is ready to integrate with other software.

Statement of the Assumptions

The following assumptions were made. The existing knowledgebase on software quality management includes books, dissertations, theses, journals, blogs, encyclopedias, course materials, reading lists, magazine articles, conference proceedings papers, newspapers, newsletters, white papers, trade magazines, intellectual capital, wikis, and congressional records. The data used for this research are scholarly, peer-reviewed journal articles, with no time limit set for retrieval. This represents a subset of the journal articles published on software quality management and a smaller subset of the existing knowledgebase on software quality management. Sources other than scholarly, peer-reviewed journals are not considered, to minimize unnecessary noise introduced by these sources.

Information in scholarly peer-reviewed journals, irrespective of the nationality of the authors, is accurate, reliable, and complete (Van Eck & Waltman, 2010). Bibliometric techniques provide impartial, unbiased, unobtrusive, objective, and precise scholarly methods and a powerful form of communication (Greenberg, 2009). Best practices can be derived from the scholarly literature through bibliometric techniques. All citations indicate scholarly influence and are appropriately referenced. Citations jointly constitute the intellectual knowledge base of scholarly articles, and analysis of the scholarly documents representing a field at a given time can be used to detect the cognitive structure of that field (Soos, Kampis, & Gulyás, 2013). All published journal articles report high quality research of equal quality, value, and impact, and together constitute certified knowledge. Titles, keywords, and abstracts of scholarly articles represent the full contents, including key concepts. Titles, keywords, and abstracts are more focused than full texts and therefore more suitable for automated analysis (Heersmink, van den Hoven, van Eck, & van den Berg, 2011).

Software does not function in isolation; optimal software needs optimal hardware to provide the best possible consumer value. Industrial practitioners desire to improve software quality based on best practices. The questionnaire appropriately represents the software quality management best practices used to assess software. Academic experts, industry experts, and industry practitioners provide timely, accurate, honest, objective, unbiased, candid, and complete feedback on the software quality assessment tool, and they have the appropriate knowledge, skills, experience, and expertise to provide that feedback. The software quality assessment tool is reviewed by academic experts, pilot tested by industry experts, and validated by industry practitioners from different industries, geographies, and organization sizes; this approach considers both the theoretical and the practical aspects of the assessment tool. The self-administered questionnaire used to obtain feedback on the assessment tool is interpreted by all participants in the same way, irrespective of academic or industry background, prior competencies, geography, industry, or organization size.

Statement of the Limitations

Not all journals publish keywords. To overcome this limitation, document titles and abstracts, along with keywords, were used to search the Science Citation Index (SCI), Social Sciences Citation Index (SSCI), and Emerging Sources Citation Index (ESCI) compiled in the Institute for Scientific Information (ISI) Web of Knowledge databases, using selected keywords.

The quality of the titles, abstracts, and keywords used to identify best practices is a limitation. However, the journals listed in the SCI, SSCI, and ESCI are top scholarly journals, and the titles, abstracts, and keywords published in these journals are of adequate quality for analysis.

Time lag is another limitation: the most recent scholarly articles may have been omitted due to an insufficient number of citations or to delays in publication caused by the stringent review standards of the scholarly journals compiled in the ISI Web of Science databases (Appio, Cesaroni, & Di Minin, 2014).

One of the leading objectives of scholarly research is to influence the thinking of other scholars (Tahai and Meyer, 1999). Citations are not the only measure of influence; they capture only a subset of scholarly influence and may not measure quality or intellectual rigor. Sometimes works are cited not because of their direct intellectual influence but because of the legitimacy they lend to certain topics (Mizruchi and Fein, 1999). Some high quality research articles may not be widely cited, whereas many relatively low quality articles may be extensively cited (Mizruchi and Fein, 1999). Articles that focus on theory tend to receive more citations than empirical articles (Li and Tsui, 2002). The age of an article may also influence its citations: very old or brand new articles may not receive citations (Li and Tsui, 2002). Scholarly articles may be cited for several reasons, including critique, praise, neutral inclusion, or familiarity with the topic or publication (Parnas, 2007). Critics of citation analysis argue that quality or impact, measured in terms of novelty, importance, and contribution, should be analyzed rather than quantity. However, this approach introduces its own challenges, including subjective assessment, reviewer bias, reviewer competence, and limits on the number of articles that can be thoroughly reviewed (Meyer, Choppy, Staunstrup, & van Leeuwen, 2009). Citation analysis is based on the premise that heavily cited articles are likely to have exerted greater influence on a subject than those less frequently referenced and are thus indicators of activity or importance to a field. There are weaknesses to this premise, but adequate screening and a sufficiently large sample can overcome them. Despite all its limitations, citation analysis remains the most widely used method for evaluating scholarly impact (Li and Tsui, 2002).

A bibliometric map provides a better understanding of a field through summarization and a simplified visual representation of the structure inherent in a set of scholarly publications, and it is a powerful tool for studying the structure and dynamics of a field. The visualizations provided by bibliometric maps can help examine past research, identify potential future directions and collaboration opportunities, provide insights, and give shape and structure to the underlying pattern of intellectual activity as it develops and evolves (Di Stefano, Peteraf, & Verona, 2010). Simplified visual representation generally implies some loss of information. Expert knowledge and traditional literature reviews are alternatives to bibliometric techniques; however, the knowledge of an expert and traditional literature reviews of a field are also limited in various ways (Heersmink et al., 2011). Expert judgments and traditional literature reviews are expensive, difficult to gather from a diverse and large sample of researchers, and introduce subjective and local biases (Nerur, Rasheed, and Natarajan, 2008). Despite their limitations, bibliometric techniques provide an unbiased, impartial, unobtrusive, accurate, and objective view, and bibliometric mapping is a useful tool for producing visual representations of a field of interest (Rodrigues, Van Eck, Waltman, & Jansen, 2014). This research uses a hybrid methodology that augments bibliometric techniques with academic expert reviews, industry expert pilot testing, and industry practitioner validation.

Experts may be restrained from volunteering to review or test the assessment tool due to company or client policies, lack of motivation, lack of monetary compensation, other higher priorities, or time limitations. To overcome this limitation, the editorial boards of top productive and highly influential journals are used to seek participation from more than one academic expert and more than one industry expert. Expert review and testing are self-reported and may be subject to subjective and local bias.

Industry practitioners may likewise be restrained from volunteering to participate in validation of the assessment tool due to company or client policies, lack of motivation, lack of monetary compensation, other higher priorities, or time limitations. To overcome this limitation, the authors of top productive and highly influential journals are used to seek participation from more than one industry practitioner. Practitioner validation is self-reported and may be subject to subjective and local bias.

Experts and practitioners may also be restrained from volunteering due to fear that personal information would be compromised or that feedback identifying volunteers by name would be published. To overcome this limitation, experts and practitioners participating in this study are informed of the research goals and background, and are informed that the information they provide will remain anonymous. University affiliation is included in the communication to provide greater confidence that responses will remain anonymous.

Even though a web or Internet survey offers several advantages over traditional paper-based surveys, such as real-time data collection, convenience, higher response rates, and real-time availability of results, unsolicited email may be treated as spam or junk, or may be deleted or ignored, especially given the phishing scams occurring around the globe. To overcome this limitation, University affiliation is included in the communication.

Statement of the Delimitations

The focus of the study is on the identification of software quality management best practices; quality management best practices applied in service industries and in manufacturing industries, including the manufacture of computer hardware, are not considered. The databases used for this study are limited to the Science Citation Index (SCI), Social Sciences Citation Index (SSCI), and Emerging Sources Citation Index (ESCI) compiled in the Institute for Scientific Information (ISI) Web of Knowledge databases. These are the most comprehensive data sources and are frequently used by scholars for reviewing high-quality publications and assessing scientific accomplishment (Braun, Schubert, & Kostoff, 2000; Tan, Fu, & Ho, 2014; Yin & Chiang, 2010). The alternative data sources, Google Scholar and Scopus, are considered inferior to the ISI Web of Science databases: Google Scholar cites many sources other than strictly scholarly journals (Thomson & Walker, 2015), and Scopus is a late addition to bibliometric source databases and does not provide coverage as broad as the Web of Science databases (Thomson & Walker, 2015; Olijnyk, 2015). Another contributing factor in selecting the ISI Web of Science database for this research is that it is freely accessible for research purposes through the Indiana State University Library.

Identification of software quality management best practices is based on scholarly articles, a subset of the existing knowledgebase on software quality management. Books, dissertations, theses, blogs, encyclopedias, course materials, reading lists, magazine articles, white papers, trade magazines, intellectual capital, wikis, congressional records, patents, research reports, and journals published in languages other than English are not considered in this analysis. The search filter is set to include scholarly articles written in English and to exclude documents not in English and non-scholarly documents such as reviews, proceedings papers, meeting abstracts, editorial material, letters, notes, discussions, book reviews, book chapters, and bibliographies. Limiting the research to scholarly articles provides an unambiguous scholarly view of the discipline and reduces unnecessary noise (Olijnyk, 2015). This research does not consider the Arts & Humanities Citation Index (A&HCI), available as part of the ISI Web of Knowledge databases, because this database does not compile scholarly work on software quality. This research also does not consider newer data sources related to scholarly documents, such as clicks, views, and downloads, because this information is not relevant to this research.

The terms used to retrieve scholarly articles for this study are “Software quality*” or “Software assurance” or “Software management” or “Software improvement” or “Software process improvement” or “Software process assessment” or “Software process innovation” or “Software process metrics” or “Software process engineering” or “Software process technology” or “Software process evaluation” or “Software process management” or “Software process appraisal” or “Software process certification” or “Software process audit” or “Software product assessment” or “Software product evaluation” or “Software product management” or “Software product appraisal” or “Software product certification” or “Software product audit” or “Software product metrics” or “Software product improvement”. “Software quality*” covers keywords such as “Software quality”, “Software quality plan”, “Software quality metrics”, “Software quality engineering”, “Software quality science”, “Software quality technology”, “Software quality assurance”, “Software quality control”, and “Software quality management”. This study does not consider any other scholarly articles. These terms were selected based on an extensive literature review to identify software quality management best practices. The selected terms do not include any specific frameworks or models such as Capability Maturity Model Integration (CMMI), Software Process Improvement and Capability Determination (SPICE), Agile, Lean, Six Sigma, or the Scaled Agile Framework.
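For illustration only, the retrieval terms above can be assembled into a single boolean topic query. The `TS=(...)` field tag used here is Web of Science topic-search syntax; the term list in this sketch is abbreviated and is not the exact query string used in the study.

```python
# Illustrative sketch: assemble a Web of Science style topic query
# from the retrieval terms listed above (abbreviated list shown).
terms = [
    "Software quality*",
    "Software assurance",
    "Software management",
    "Software improvement",
    "Software process improvement",
    # ... remaining "Software process ..." and "Software product ..." terms
]

def build_topic_query(terms):
    """Join quoted terms with OR inside a TS=(...) topic search."""
    quoted = " OR ".join(f'"{t}"' for t in terms)
    return f"TS=({quoted})"

query = build_topic_query(terms)
print(query)
```

Building the query programmatically keeps the term list in one place, so adding or removing a retrieval term cannot leave the boolean expression malformed.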

Review of the quality management assessment tool is limited to academic experts, drawn from the editorial boards of productive and influential scholarly journals, who volunteered. Pilot testing of the tool is limited to industry experts, drawn from the same editorial boards, who volunteered. These editorial boards are diverse, representing different geographies and industries, subject to experts' willingness to participate. Validation of the tool is limited to industry practitioners, drawn from the authors of productive and influential scholarly journals, who volunteered. These authors are likewise diverse, representing different geographies and industries, subject to practitioners' willingness to participate.

This study is limited to seeking voluntary participation from a small sample: more than one academic expert, more than one industry expert, and more than one industry practitioner. This allows the necessary focus and diversity for obtaining feedback from thought leaders and leading industry practitioners.

Statement of the Methodology

This research applies bibliometric techniques to scholarly peer-reviewed journals to derive software quality management best practices and to distinguish important best practices from trivial ones. The important best practices are analyzed and reviewed to construct and refine the quality management assessment tool. Academic expert verification is conducted to further refine the tool. Pilot testing with industry experts refines the tool again and yields a final quality management assessment tool for industry practitioners. Finally, validation is conducted with industry practitioners to further refine the tool.
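As a minimal, hypothetical sketch of the kind of computation bibliometric techniques perform (not the tooling used in this study), the snippet below counts co-citations: two references are considered related when they appear together in an article's reference list, and heavily co-cited pairs anchor the clusters that bibliometric maps display. The reference labels are invented for illustration.

```python
from itertools import combinations
from collections import Counter

# Each article is represented by the set of references it cites
# (labels are invented for illustration).
articles = [
    {"Grady1993", "Boehm1981", "Humphrey1989"},
    {"Grady1993", "Humphrey1989", "Paulk1993"},
    {"Boehm1981", "Humphrey1989"},
]

def co_citation_counts(articles):
    """Count how often each pair of references is cited together."""
    counts = Counter()
    for refs in articles:
        for pair in combinations(sorted(refs), 2):
            counts[pair] += 1
    return counts

counts = co_citation_counts(articles)
# Grady1993 and Humphrey1989 appear together in two reference lists.
print(counts[("Grady1993", "Humphrey1989")])  # → 2
```

In practice such counts feed a clustering or mapping step (for example, the distance-based networks defined later in this chapter) rather than being read directly.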

Definition of the Key Terms

Agile sweet spot: The ideal conditions in which agile software development practices originated and in which they are most likely to succeed (Kruchten, 2013).

Agility: The ability of an organization to react to changes faster than the rate of change (Kruchten, 2013).

Anti-patterns: Common practices that may look convenient, appropriate, and reasonable but are harmful in the long run and must be avoided (Eloranta, Koskimies, & Mikkonen, 2016).

Assurance: Building systems that can resist attack (Lipner, 2015).

Business and Information Technology alignment: Enterprise Information Technology planning and execution that align with the organization's vision, goals, and objectives (Heston & Phifer, 2011).

Citation: An acknowledgement by one published article that its content has drawn upon the knowledge within another article (Zhuge, 2006). In the context of citation analysis, citation refers to both reference and citation (Olijnyk, 2015).

Co-author analysis: Co-author analysis connects authors when they co-author a scholarly document (Zupic & Čater, 2015).

Cluster: Clusters formed in bibliographic coupling networks and co-citation networks represent research topics in a field; the number of scholarly documents in each topic and the impact of those documents are represented in the map (Belter, 2012). A group of researchers who share the same interests and coincide in citing the same references is represented in an author co-citation network (Neeley, 1981).

Competencies: Competencies are skills, knowledge, and experiences (Lin, Lee, & Tai, 2012).

Cyber-Physical-Social systems: The ever-growing network of uniquely addressable physical objects (things); every real-world physical object could have a unique identifying address, like an IP address. Alternative names include Internet of Things, Industrial Internet, Industry 4.0, Cyber-Physical systems, Smart systems, Embedded systems, Internet of Everything, Pervasive Computing, Ubiquitous Computing, Brilliant Machines, Intelligent Machines, Machine to Machine Communication, Sensor networks, and Multi-Sensor Fusion (Guo, Yu, & Zhou, 2015; Lasi, Fettke, Kemper, Feld, & Hoffmann, 2014; Sowe, Simmon, Zettsu, de Vaulx, & Bojanova, 2016).

Distance-based bibliometric network: A network in which the distance between nodes indicates their relatedness; nodes that are closer together are more highly related. Multidimensional scaling and visualization of similarities are examples of distance-based bibliometric techniques (Van Eck, Waltman, Dekker, & Van den Berg, 2010).

I-shaped: An I-shaped person is one who has a great deal of depth in software technology but little understanding of the other disciplines involved in areas such as business, medicine, transportation, or the Internet of Things (Boehm & Mobasser, 2015).

Multidimensional scaling: A set of statistical techniques by which researchers obtain quantitative estimates of similarity among groups of items, reducing the complexity of a data set and permitting visual appreciation of the underlying relational structures (Hout, Papesh, & Goldinger, 2013).
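A toy sketch of classical (Torgerson) multidimensional scaling may make this definition concrete. The three-item distance matrix below is invented for illustration, for instance, dissimilarities between the keyword profiles of three hypothetical journals; this is a sketch of one standard MDS variant, not the specific technique used by the cited authors.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed items in k dimensions so that
    pairwise Euclidean distances approximate the input distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:k]    # keep the top-k eigenpairs
    scale = np.sqrt(np.maximum(eigvals[order], 0))
    return eigvecs[:, order] * scale

# Invented distance matrix for three items (three hypothetical journals).
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D, k=2)
# The embedded coordinates reproduce the input distances.
print(round(float(np.linalg.norm(X[0] - X[2])), 2))  # → 2.0
```

Because the toy distances are exactly Euclidean, the embedding reproduces them perfectly; with real bibliometric dissimilarities the reproduction is only approximate, which is the information loss noted in the limitations above.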

Organizational culture: A set of key features that an organization values, uses, and shares to achieve its objectives (Passos, Dias-Neto & da Silva Barreto, 2012).

Pattern: A pattern is a proven, best-practice solution to a problem that is derived from real systems, occurs repeatedly in a given context, and includes extensive discussion of when it is appropriate to use, covering the positive and negative consequences of its usage (Kircher & Volter, 2007).

Product backlog: An ordered list of features needed in the product, ranked by value, risk, and priority (Eloranta et al., 2016).

Sprint: A time-boxed period where the actual scrum takes place (Eloranta et al., 2016).

Software: Software is an intellectual product consisting of the computer programs, associated procedures, documentation, and data pertaining to the operation of a computer system (Daughtrey, 2013). Examples of application software are Customer Relationship Management (CRM) software, Enterprise Resource Planning (ERP) software, Supply Chain Management (SCM) software, and Human Resource Management (HRM) software.

Software Sustainability: The capability of software systems to endure different types of changes over their lifetime (Shull, 2013).

System Development Life Cycle: Systems development life cycle activities consist of requirements, analysis, architecture, design, implementation, validation, training, deployment, and maintenance (Heston & Phifer, 2011).

T-shaped: A T-shaped person is one who has technical depth in at least one aspect of the system's content, and a workable level of understanding of a fair number of the other system aspects (Boehm & Mobasser, 2015).

Zero day attack: A security threat that exploits vulnerable applications on the same day the vulnerability becomes known, requiring lightning-fast reaction (Spinellis, 2016b).

Summary

This chapter provided a conceptual foundation for the proposed research. Background information covered the critical role of software and software quality, the need to treat software quality management in a broader and more integrated context, and the need for a quality management assessment tool based on software quality management best practices. Next, the proposed research outline was presented in terms of the problem statement, the research questions, the purpose of and need for the study, the assumptions and limitations related to the proposed methodology, and the statement of the methodology. Finally, the chapter concluded with definitions of the key terms that are important for the proposed research.


CHAPTER 2

REVIEW OF LITERATURE

Chapter Overview

The purpose of this chapter is to review the main conceptual areas within the literature that are relevant to understanding the problem for this research. These areas are: 1) survey of prior scholarly work on software quality management, and 2) survey of prior scholarly work on bibliometric techniques.

Survey of Prior Scholarly Work on Software Quality Management

Software is designed, built, and maintained by people, for people, through people, and developing software, being an intellectual activity, is among the most difficult and complex activities performed by people (Capretz, 2014). Competent people produce great software as fast as the market needs it (Hermans et al., 2016). The primary asset of any organization developing software is people, not buildings, plants, or expensive machines (Rus & Lindvall, 2002). Work to develop software is performed by knowledge workers working in teams (Shull, 2013).

Software development involves interactions between people, processes, and technology via organizational structure (Nagappan, Murphy, & Basili, 2008). Organizations need people with the best competencies who also fit the team culture (Kaatz, 2012). Put the right people in place, surround them with a healthy organizational environment, manage them well, and collectively they will figure out how to build great software (Marasco, 2012). Organizations have a happier environment when people practice servant leadership and systems thinking to manage themselves and to lead by example through experimentation and learning (Prikladnicki, Lassenius, Tian, & Carver, 2016).

Human factors in software quality management are essential because knowledge workers cannot be effectively and efficiently managed through command and control (Spinellis, 2016b). Knowledge workers resist an emphasis on process-based software quality management that focuses on compliance and continual improvement only (Thomas, Hurley, & Barnes, 1996). Instead of focusing on overly process-driven, process-policing, end-of-cycle testing, and other adversarial approaches that demotivate knowledge workers, assurance activities enabled by process coaching must be well integrated into technology and processes and must occur throughout the software life cycle (Shull, 2012). The best software process is ineffective without competent professionals because of the human-centric and design-intensive nature of software development (Paulk, 2002). Rogue employees engaged in software development can introduce malicious code such as malware and Trojans, leading to private data leakage and more (Forte, Perez, Kim, & Bhunia, 2016). Processes can only help if the knowledge workers performing them take ownership of the processes and can create processes that help get the work done (Shull, 2013). Technology can help to perform processes more effectively, if technology is people driven (Heston & Phifer, 2011). Giving enough time and affording respect for delivering quality software is more important than processes or technology and is crucial to prevent software from becoming wasteful (Wirth, 2008). Processes or technology cannot replace knowledge workers who are empowered and who stand behind and take pride in their work through professional discipline (Shull, 2013). Processes or technology cannot overcome a lack of the competency required to build software (Spinellis, 2016b). Processes, technology, and the organizational environment must be designed with a focus on eliminating distractions and waste and on promoting the use of industry best practices across the organization, allowing knowledge workers to be intrinsically motivated and productive through individual and team strengths and to build great software (Spinellis, 2016b). This does not indicate that people are the only factor; people are an important factor, more important than technology and processes (Capretz, 2014).

Agile values, principles, practices, and concepts such as "Just-In-Time", Kanban, systems thinking, customer value focus, continuous improvement, and Lean have been known for years, and successful organizations realize that agility is necessary to respond to rapid changes and opportunities in the global marketplace (Rigby, Sutherland, & Takeuchi, 2016). Software development has sporadically used agile practices since at least the 1960s, and the roots of agile go beyond software development. Recent agile initiatives have re-packaged, re-branded, and combined all these ideas, with an overemphasis on two major ideas: being open to change rather than resisting change, and providing value to stakeholders by expanding communication and continuous feedback (Duncan & Peercy, 2012). Plan-Do-Study-Act (PDSA), invented by Shewhart and popularized by Deming, formed a basis for the Toyota Production System's lean thinking and the application of iterative, incremental, and evolutionary methods (Rigby et al., 2016). Attention to agile practices in software development increased after the agile manifesto was released in 2001, and agile continues to spread to improve innovation in every aspect of every industry far beyond software development (Dingsøyr et al., 2012).

Agile practices take an iterative, incremental, and evolutionary approach to software development, performed collaboratively by self-organizing, cross-functional, motivated, committed, competent, and empowered teams to produce quality software in a cost-effective and timely manner that meets the changing needs of its stakeholders in a rapidly changing environment (Hoda, Noble, & Marshall, 2013). Agile practices are considered lightweight, Internet-speed, people-based, customer-value-focused practices, and are considered an alternative to traditional lifecycle methodologies such as waterfall and spiral, which are considered heavyweight, plan-driven, predictive, process-focused, document-driven, bureaucratic, systematic, or Tayloristic methodologies (Silva et al., 2015). The sequential, document-driven waterfall process methodologies cause people to overpromise the capabilities of software before understanding risk implications (Boehm, 1991). Traditional methodologies work best in environments where user requirements and the technologies required to meet these requirements are well understood, and applying them in uncertain, immature, volatile, continuously and rapidly evolving environments such as Internet software is problematic (Hermans et al., 2016; MacCormack, 2001). Agile methodologies used in software development include Extreme Programming, Scrum, Kanban, Scrumban, Lean Scrum, Crystal, Dynamic Systems Development Method, Disciplined Agile Delivery, Scaled Agile Framework, Scaled Professional Scrum, Large Scale Scrum, Adaptive Software Development, Agile Modeling, Feature Driven Development, Test Driven Development, Agile Unified Process, and Lean Software Development (Stankovic, Nikolic, Djordjevic, & Cao, 2013; Stavru, 2014).

Traditional methodologies suffer from issues such as poor communication with users and stakeholders, overreliance on processes and technology, lack of or low user engagement, failure to manage user expectations, misunderstanding of user requirements, and failure to accommodate changes in requirements and scope (Papadopoulos, 2015; Sherer & Alter, 2004). Traditional practices of team members working on different initiatives or performing multiple roles concurrently may lead to disruptions, unsustainable pace, stakeholder expectation mismatches, context switching, overtime, burnout, or not delivering appropriate value to customers. If feedback loops are long or non-existent, this may cause re-work, waste, and non-value-add deliverables. Agile methodologies effectively and efficiently deal with these issues through collaboration with users and stakeholders, high user engagement, multidisciplinary skills, pluralistic decision making, small teams, and continuous communication and feedback (Lindsjørn, Sjøberg, Dingsøyr, Bergersen, & Dybå, 2016). Agile practices are more demanding due to the need to bring the customer close to the developers, ensure the software's frequent delivery, respond rapidly to changes, motivate individuals, empower teams to organize themselves, and promote a culture of technical excellence through common focus, mutual trust, and respect in a turbulent, fast-paced, and uncertain environment (Spinellis, 2016b). Agile practices have demonstrated promising outcomes in large teams, non-collocated teams, and distributed teams across the globe, beyond the agile sweet spots (Bavani, 2012). There are no external barriers to adopting agile methods when developing embedded software such as medical device software; the barriers that do exist are primarily in-house barriers within the organization, which can be overcome (McHugh, McCaffery, & Casey, 2014).

Human and social factors are essential to succeed. Reflective practice, continuous learning from reflecting on past practices, is a defining characteristic of professional practice (Maiden, 2012b). Agile practices unlock the intrinsic motivation and creativity of workers and call for reflection, the ability to learn from experience and experimentation, the ability to learn from mistakes, and adaptation, centering on the idea of lifelong continuous learning at the individual, team, and organizational levels (Dyba, Maiden, & Glass, 2014). Reflection during problem solving, called reflection-in-action, and reflecting back on past experience, called reflection-on-action, are valuable for improving the proficiency and performance of software professionals (Bull & Whittle, 2014). Opportunities for reflection-in-action in agile implementation are release planning, estimation and iteration planning, group programming, pair in-need, project wikis, and product demo and acceptance testing; opportunities for reflection-on-action in agile implementation are retrospectives, daily standups, and learning spikes (Babb, Hoda, & Norbjerg, 2014). Competency development and capturing, sharing, and applying knowledge from previous projects to new projects are essential so that each person can deliver more value with time (Cockburn & Highsmith, 2001; Liebowitz, 2015). Agile practices promote investing heavily in individual competencies rather than organizational rule sets (DeMarco & Boehm, 2002).

Development of human and organizational intellectual capital is not sufficient. Knowledge management elevates individual knowledge to the organizational level by capturing and sharing individual knowledge and turning it into knowledge the organization can access globally, independent of time and space (Rus & Lindvall, 2002). Strengthening social capital accumulation through the explicit cultivation of networking is at least as important as the development of human and organizational capital (Šmite, Moe, Šāblis, & Wohlin, 2017). These are consistent with concepts that Basili advocated: the Quality Improvement Paradigm, based on applying a continuous improvement philosophy to software development, and the Experience Factory model for software process improvement, based on effective feedback mechanisms from learning organizations (Shull, Seaman, & Zelkowitz, 2006). If there are no retrospectives or lessons learned, the team will not be able to understand what works and what does not, to learn and continuously improve, and to contribute to productivity and efficiency gains.

Poor software engineering economics is due to poor software quality, and the software community needs to take up-front quality practices such as defect prevention, defect removal, and reusability more seriously (Jones, 2015). Root cause analysis techniques such as the 5 whys help prevent the recurrence of defects. Despite the fact that the highest quality cost savings can be accomplished by doing things right and preventing defects in the first place, prevention costs have gained the least attention (Karg, Grottke, & Beckhaus, 2011).

Defect prevention and early removal of defects through Personal Software Process (PSP) practices, such as structured personal review and checklist-based personal review of artifacts such as design and code, is less expensive and more effective (Vallespir & Nichols, 2016; Nichols, 2012). Dynamic testing is necessary but not sufficient to provide confidence that software is fit for its intended use (FDA, n.d.-b). Static testing techniques, such as Fagan inspections of artifacts, walkthroughs, reviews, traceability, up-front acceptance criteria, and analysis of defect patterns, have proven to be more effective than dynamic testing (Hinkle, 2007; Phan, 2001). In addition to improving software quality, reviews, inspections, and walkthroughs provide a channel for experts to share domain-specific tacit knowledge with other team members and improve organizational learning (Guo, 2014).

Tools and testing supplement rather than replace effective and efficient human reviews (Vallespir & Nichols, 2016). Error removal practices include assuring adequate code coverage of unit testing through automated code coverage tools and automated regression testing (Hackbarth, Mockus, Palframan, & Sethi, 2016). Because humans excel at identifying design flaws but are much less reliable at mundane tasks such as rule compliance and avoiding common coding errors, integrating the output of automated static analyzers and other background checkers with human walkthroughs, reviews, and inspections is valuable (Holzmann, 2014). Automated static analysis is economical and complements verification and validation (Gleirscher, Golubitskiy, Irlbeck, & Wagner, 2014). It is more important to promote a culture of defect prevention and quality assurance across the organization than to promote a specific technology or tool (Louridas, 2011).
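To make this complementarity concrete, the following minimal sketch (constructed for this discussion, not taken from the cited studies) implements one automated static check using Python's standard `ast` module: it flags bare `except:` clauses, a mundane rule-compliance issue of the kind human reviewers often miss.

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return the line numbers of bare `except:` clauses in Python source."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        # An ExceptHandler whose `type` is None is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(node.lineno)
    return findings

sample = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # [4] — the bare except is on line 4
```

A checker like this runs in milliseconds over an entire code base, freeing reviewers to focus on design-level concerns.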

When software is built, enhanced, or modified without full care for structure and extensibility, the software incurs technical debt (Mohan, Greer, & McMullan, 2016). Technical debt is similar to financial debt and accumulates interest to be paid later (Buschmann, 2011). The metaphor of technical debt helps in understanding the consequences of poor software development practices and the trade-offs between long-term quality and short-term goals (Shull, 2013). Technical debt is a balance between software quality and business reality, and the choice of debt management strategy is context specific (Lim, Taksande, & Seaman, 2012). Other names used for technical debt are software maintenance, evolution, aging, decay, reengineering, sustainability, and so on (Avgeriou, Kruchten, Nord, Ozkaya, & Seaman, 2016). Technical debt has been gaining more attention due to its broad economic and technical impacts (Behutiye, Rodríguez, Oivo, & Tosun, 2017).
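The interest metaphor can be made concrete with a toy calculation (the figures and the per-release growth rate are illustrative assumptions, not drawn from the literature cited): if deferred cleanup imposes a recurring maintenance overhead, the cumulative cost of carrying the debt compounds across releases.

```python
def cumulative_debt_cost(principal_hours: float, interest_rate: float, releases: int) -> float:
    """Total extra effort if debt is carried for `releases` release cycles,
    with maintenance overhead compounding like financial interest."""
    cost = 0.0
    carried = principal_hours
    for _ in range(releases):
        interest = carried * interest_rate
        cost += interest     # effort lost to workarounds in this release
        carried += interest  # unaddressed debt grows harder to repay
    return cost

# Paying down 40 hours of debt now versus carrying it for 10 releases
# at an assumed 10% overhead per release:
print(round(cumulative_debt_cost(40, 0.10, 10), 1))  # 63.7 hours of extra effort
```

Under these assumptions the carried debt costs more than half again its original repayment price, which is the trade-off the metaphor is meant to expose.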

Coding standards provide benefits such as improved readability, increased automated tool usage, enhanced peer reviews, and enabling of customer audits (Spiewak & Mcritchie, 2017).

Keeping the code clean and simple through modern coding standards, and paying attention to the compiler's built-in checks and warnings from tools designed to prevent common mistakes and avoid risky coding patterns, are essential to protect against software failures (Holzmann, 2015b). Technical debt extends beyond coding practices and includes architecture, design, requirements, testing, documentation, security, and so on (Elbanna & Sarkar, 2016).

Safety-critical software requires more emphasis on excluding systematic errors through systematic and disciplined development processes such as requirements engineering, architecture, design, implementation, verification, validation, inspection, reviews, walkthroughs, testing, defect prevention, end-to-end traceability, supplier management, engineering management, configuration management, and quality management, implemented through standards, culture change, and professional change management (Ebert, 2015).

Providing a reuse platform helps software professionals to share potentially interesting and useful experiences and knowledge anywhere, anytime, by anyone across the organization (Rech, Bogner, & Haas, 2007). Software reuse improves productivity and software quality, and it is a win-win situation for the reusing organization and the solution delivery organization (Adams, Kavanagh, Hassan, & German, 2016). Lack of reuse considerations for reference architectures, architecture patterns, design patterns, business rules, use cases, user stories, or code leads to suboptimal software solutions and increases technical debt.

Architects need to be part of agile teams so that they can address the most significant architectural concerns, where economic impact, risk, and cost prove to be good indicators of architectural significance (Poort, 2014). Effective architectural oversight must be timely, objective, systematic, constructive, and pragmatic to ensure adherence to the original vision, concept, and architecture as software moves from vision, concept, and architecture to construction, testing, and deployment (Rozanski, 2015).

Regardless of the methodology used, estimating is very important due to the undesirable consequences of overestimation or underestimation. Underestimation causes schedule and effort overruns, leading to poor quality, rework, delayed projects, or cancelled projects (Minku & Yao, 2013). Overestimation causes non-acceptance of promising ideas, wasted resources, lowered productivity, lost customers, and threatens organizational competitiveness (Kocaguneli, Menzies, & Keung, 2012). Software organizations can improve estimation accuracy by developing and using simple estimation models tailored to local contexts in conjunction with expert opinions, using a team-based structured estimation process where independence is assured, avoiding exposure to misleading and irrelevant information, and using historical estimation error to set effort estimate intervals (Jorgensen, 2014). Traditional practices of top-down and inaccurate estimates, rather than bottom-up and realistic team estimates, lead to unsustainable pace, stakeholder expectation mismatches, overtime, burnout, or not delivering appropriate value to customers. Velocity requires adjusting when there are resource changes, such as inducting a new team member or transferring work to an existing team member.
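One way to operationalize the use of historical estimation error to set effort estimate intervals is sketched below (a minimal illustration under assumed data, not Jorgensen's published procedure): past actual-to-estimate ratios widen a new point estimate into an interval.

```python
def estimate_interval(point_estimate: float, past_actuals: list, past_estimates: list) -> tuple:
    """Widen a point estimate into an interval using historical
    relative estimation error (actual / estimate ratios)."""
    ratios = [actual / estimate for actual, estimate in zip(past_actuals, past_estimates)]
    return point_estimate * min(ratios), point_estimate * max(ratios)

# Hypothetical history: projects estimated at 100, 80, and 120 hours
# actually took 110, 100, and 132 hours.
lo, hi = estimate_interval(50, [110, 100, 132], [100, 80, 120])
print(round(lo, 1), round(hi, 1))  # 55.0 62.5
```

Quoting the interval (here, 55 to 62.5 hours for a 50-hour point estimate) communicates uncertainty to stakeholders instead of a single number that history shows to be optimistic.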

If sprints are too short, teams may not have sufficient time for innovative and creative solutions, implementation is rushed, and technical debt increases. Long or variable sprints cause disruption, predictability issues, and failure to deliver value to customers. If the sole aim of product development is accomplishing short-term benefits, such as meeting schedules at the cost of fixing defects later by cutting corners, instead of long-term benefits such as high-quality code, technical debt increases (Sneed & Verhoef, 2016). Taking shortcuts, such as not following coding standards from the beginning in order to speed up delivery, overcommitting beyond capabilities, not assigning the right people to the right roles at the right time, not considering repayment of legacy technical debt from past decisions, yielding to unrealistic business pressures, or violating vendor specifications for COTS software products, increases technical debt (Ramasubbu & Kemerer, 2015). Accumulation of technical debt can lead to software that is more complex, less understandable, and more difficult to maintain, leading to unexpected delays and lower software quality (Elbanna & Sarkar, 2016). Most organizations pay lip service to non-functional or quality requirements, but non-functional or quality requirements such as security, scalability, and resilience must be a priority throughout the life cycle (Basili et al., 2004). Focusing solely on functional requirements and ignoring or neglecting non-functional or quality requirements increases technical debt and results in software that is not resilient, secure, trustworthy, and safe. Focusing on fixing critical and high vulnerabilities while ignoring or procrastinating on medium and low vulnerabilities results in the accumulation of security debt and increases the risk of security breaches (Elbanna & Sarkar, 2016).
Writing reliable code requires understanding bounds such as finite memory, finite storage, finite time to compute, limited stacks, limited queues, limited file system capacity, limited numbers, and so on, and taking appropriate preventive actions to prevent software failures (Holzmann, 2015c).
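The point about finite queues and preventive action can be illustrated with a bounded buffer (a generic sketch constructed for this discussion, not an example from the cited source): overflow is handled explicitly at the point of insertion instead of allowing memory use to grow without bound.

```python
from collections import deque

class BoundedQueue:
    """A fixed-capacity queue that refuses overflow explicitly,
    rather than growing until memory is exhausted."""
    def __init__(self, capacity: int):
        self._items = deque()
        self._capacity = capacity

    def offer(self, item) -> bool:
        """Enqueue `item`; return False if the queue is full so the
        caller must decide how to handle the overflow."""
        if len(self._items) >= self._capacity:
            return False
        self._items.append(item)
        return True

    def poll(self):
        """Dequeue the oldest item, or return None if empty."""
        return self._items.popleft() if self._items else None

q = BoundedQueue(2)
print(q.offer("a"), q.offer("b"), q.offer("c"))  # True True False
```

Forcing the producer to observe the refused third `offer` is the preventive action: the failure mode is a visible, recoverable condition instead of a silent resource exhaustion.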

Establishing technical debt management as a core engineering management practice helps organizations get ahead of the quality and innovation curve (Avgeriou et al., 2016). Making technical debt visible, evaluating tradeoffs, and systematically controlling technical debt need to be carefully applied with sound engineering management sense (Gat, 2012). A few technical debt management strategies are conducting audits with the entire software development team and tracking debt using wikis, the backlog, or a task board to make it visible and explicit; managing the expectations of stakeholders by making them equal partners and facilitating open communication about the consequences of technical debt; and using a risk management approach to evaluate and prioritize technical debt's cost and value (Lim et al., 2012). Refactoring improves the structure of software without impacting functionality or external behavior, typically to improve software design (Murphy-Hill, Roberts, Sommerlad, & Opdyke, 2015). Refactoring is useful to remove bad smells and keep technical debt low (Mohan et al., 2016). Refactoring, test-driven development, binary tracking, and burndown charts are useful in technical debt management regardless of the software development lifecycle methodology used (Fairley & Willshire, 2017). If visual representations of progress such as burn-down charts are not available, stakeholders may not be aware of the progress and blockers. Effort saved through defect-prevention-oriented quality management strategies allows software development teams to succeed in the global marketplace by re-orienting from scrap, rework, and playing defense toward delivering new features early, improving performance, implementing adaptive changes, and innovating (Cantor & Royce, 2014).
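Refactoring's defining property, improving structure while preserving external behavior, can be shown with a small before/after pair (illustrative code written for this discussion, not from the cited sources); a regression check confirms that the behavior is unchanged.

```python
# Before: duplicated branching logic that is harder to read and extend.
def shipping_cost_before(weight_kg: float, express: bool) -> float:
    if express:
        if weight_kg <= 1:
            return 10.0
        else:
            return 10.0 + (weight_kg - 1) * 4.0
    else:
        if weight_kg <= 1:
            return 5.0
        else:
            return 5.0 + (weight_kg - 1) * 2.0

# After: the duplicated structure is factored into data; behavior is identical.
def shipping_cost_after(weight_kg: float, express: bool) -> float:
    base, per_kg = (10.0, 4.0) if express else (5.0, 2.0)
    return base + max(0.0, weight_kg - 1) * per_kg

# Regression check: a refactoring must not change external behavior.
for weight in (0.5, 1.0, 2.5):
    for express in (False, True):
        assert shipping_cost_before(weight, express) == shipping_cost_after(weight, express)
```

The automated equivalence check is what makes the restructuring safe to perform repeatedly, which is why refactoring and regression testing are paired in the literature above.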

Organizations get trapped in traditional organizational processes and culture, and a waterfall approach of testing at the end or in the next sprint, or too much reliance on manual testing, causes disruptions, waiting, context switching, code configuration, velocity, coordination, and communication issues (Prikladnicki et al., 2016). Testing methods used in traditional lifecycle methodologies such as waterfall and spiral are too late and ineffective for early removal of defects. Shift-left approaches such as continuous and proactive upfront validation are essential to provide assurance (Michael, Drusinsky, Otani, & Shing, 2011). The persistence to measure outcomes beyond simple pass-or-fail test cases, caring about the quality of work, and communicating issues to others directly or indirectly impacted by software matter regardless of whether software is developed individually, in small teams, or in large teams (Eldh & Murphy, 2015). Quality must be designed into the product and not left to testing at the end (Basili et al., 2004). Continuous integration and continuous delivery practices, sometimes called DevOps, SecDevOps, or DevSecOps practices, integrate systems administrators, network engineers, infrastructure specialists, and security specialists into agile teams; tighten release cycles to days or sometimes hours; and integrate security testing, especially based on misuse and abuse cases, into a constantly running regression test suite (McGraw, 2017). Lack of cross-functional teams with representatives from each functional area, such as development, architecture, design, testing, release management, operations, product owner, and scrum master, causes disruptions, waiting, context switching, code configuration, velocity, coordination, and communication issues.

To contribute to society at large, software professionals need to develop awareness of wider issues and of the influence of individual decision making on outcomes, contribute specialist technical advice to policy discourse, provide voluntary specialist services to communities, and contribute to professional societies' public-interest initiatives (Hall, 2009). A few of the ethical considerations for software development are avoiding unintended negative impacts of software on society, people, and the environment; looking for human values in technical decisions; thinking early and often about impacted stakeholders, clients, developers, users, and anyone affected by use; and identifying adverse impacts of software using professional judgement guided by an ethical code in conjunction with stakeholder identification and understanding of context (Gotterbarn & Miller, 2010). An ethical testing standard calls for a transparent culture of sharing bugs openly to allow customers to evaluate the impact of known bugs (Nindel-Edwards & Steinke, 2008).

Improving coordination between business users, software professionals, and finance to design, develop, and maintain software systems within scope, time, and cost is essential to prevent software and information technology failures (Liebowitz, 2015). In agile environments, precise understanding of requirements emerges from different forms of communication and collaboration between diverse stakeholders to deliver safe systems and support innovation and change through continuous learning (Maiden, 2012a; Cleland-Huang, 2013). A formal stakeholder management process helps to understand and manage the stated and unstated needs of stakeholders in order to satisfy stakeholder expectations (Dudash, 2016). Continuous prioritization and trade-offs are necessary in a world of finite resources (Shull, 2011). If requirements, use cases, or user stories in product backlogs are not ordered based on value, risk, dependencies, and priority, features that add value may not be delivered to customers in a timely manner. Choosing to work on easy, trivial, and unimportant stuff rather than the right stuff does not deliver value to consumers. Every system needs requirements to capture business and user needs, and suggesting that requirements are not necessary in an agile environment increases risks dynamically (Orr, 2004). Insufficient requirements that are not mapped to value streams and business cases, and a lack of standardized business processes across the company, with slow and inefficient decision making and ad hoc agreements, are obstacles to success (Ebert, 2015). Not validating requirements with stakeholders before delivering in sprints causes re-work, waste, and non-value-add deliverables. Poor product management causes insufficient planning, continuous changes to scope and requirements, configuration issues, and defects; empowering competent product managers is a key success factor for a product over its life cycle (Ebert, 2014). Product owners must be able to handle conflict and manage the expectations of stakeholders through mutual trust (McHugh, Conboy, & Lang, 2012). Requirements and architecture are highly interdependent, and projects without early architecture evaluation will likely overlook requirements, especially non-functional requirements, and will most likely fail in prioritizing requirements, leading to budget overruns and rework due to inappropriate and late design changes (Ebert et al., 2015). It is essential to take a disciplined approach to ensure investments lead to sustainable benefits for the producing organization and stakeholders (Jacobson, Spence, & Seidewitz, 2016). Ethics must be injected into design thinking through a socially responsible requirements process that protects the needs of vulnerable and potentially adversely impacted stakeholders (Cleland-Huang, 2016).
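The backlog-ordering concern discussed above can be sketched as a simple scoring function (the equal-weight scoring scheme and the item names are illustrative assumptions; real teams use richer models that also account for dependencies):

```python
def order_backlog(items):
    """Sort backlog items so that high-value, high-risk (risk-reducing),
    high-priority work comes first. Each item is (name, value, risk, priority),
    with each dimension scored 1-10."""
    return sorted(items, key=lambda item: item[1] + item[2] + item[3], reverse=True)

backlog = [
    ("nice-to-have report", 2, 1, 1),
    ("payment integration", 9, 8, 9),
    ("login hardening",     7, 9, 8),
]
print([name for name, *_ in order_backlog(backlog)])
# ['payment integration', 'login hardening', 'nice-to-have report']
```

Even a crude scored ordering makes the trade-off explicit: the "easy, trivial, and unimportant" report sinks to the bottom rather than being picked first because it is convenient.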

Sequential development processes, silos, stovepipes, and dysfunctional culture contribute to most of the causes of software development waste, such as unnecessary features, lost knowledge, partially done work, handovers, communication overhead, late decision making, multitasking, and late defect finding and fixing (Poppendieck & Cusumano, 2012). In dysfunctional organizations, requirements analysts throw work over the wall to developers and testers, developers throw it over the wall to testers and operations, and testers throw it over the wall to developers and operations, leading to quality issues, loss of reputation, and loss of market share (Parnin et al., 2017). Poor quality software is the result of deeper organizational problems and deeply inherent organizational habits of staff development, recruitment, retention, and motivation (Middleton, 2001). Dysfunctional organizations with unclear responsibilities and siloed work, resulting in continuously changing focus and schedules, lack of strategy or unclear strategy, roadmaps with unclear dependencies, and fuzzy technical requirements and impacts, are obstacles to the success of organizations (Ebert, 2015). A lack of engaged, empowered, committed, motivated, and competent product owners, scrum masters, and development, architecture, design, testing, release management, and operations team members does not deliver value to consumers. Engaged, empowered, committed, motivated, and competent staff at all levels increase the likelihood of delivering value to consumers through a high-performance, quality-oriented culture and create competitive advantage.

Lean philosophy focuses on waste elimination using principles of value, value stream, flow, pull, and perfection (Maglyas, Nikula, & Smolander, 2012). Causes of waste are more easily exposed and addressed through value stream maps (Poppendieck & Cusumano, 2012).
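The waste-exposure idea behind value stream maps can be made concrete with a small calculation: list each delivery step with its hands-on (value-added) time and its queue time, then compute process-cycle efficiency as value-added time over total lead time. The Python sketch below uses invented step names and durations; it illustrates the arithmetic, not data from the cited studies.

```python
# Minimal value-stream-map sketch: each step records hands-on (value-added)
# time and waiting/queue time, both in hours. All numbers are hypothetical.
steps = [
    ("requirements analysis", 8, 40),   # (name, value_added_h, wait_h)
    ("development",          24, 16),
    ("code review",           2, 24),
    ("testing",               8, 48),
    ("deployment",            1,  8),
]

value_added = sum(v for _, v, _ in steps)
lead_time = sum(v + w for _, v, w in steps)
efficiency = value_added / lead_time  # process-cycle efficiency

print(f"value-added time: {value_added} h")
print(f"total lead time : {lead_time} h")
print(f"process-cycle efficiency: {efficiency:.1%}")

# Steps with the largest waiting time are the first candidates for
# waste elimination (handovers, queues, late decisions).
worst = max(steps, key=lambda s: s[2])
print(f"largest queue: {worst[0]} ({worst[2]} h waiting)")
```

Low efficiency combined with long queues at handovers is exactly the pattern the lean literature attributes to silos and sequential processes.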

Agile practices streamline interfaces and reduce rework and inefficiencies (Ebert et al., 2015). It is essential to combine the concepts of value engineering and risk management to negotiate and gain acceptance of product quality assurance activities and acceptable quality targets with the stakeholders (Poth & Sunyaev, 2014).

Experts analyzed the epic failure of Healthcare.gov, which followed waterfall methodologies (Moore, O'Reilly, Nielsen, & Fall, 2016); the major reasons for failure are listed below.

• Poor planning (Cundiff et al., 2014)

• Lack of employee engagement (Cundiff et al., 2014)

• Lack of buy-in from stakeholders (Cundiff et al., 2014)

• Inadequate requirements (Freund, 2014)

• Lack of coordination, collaboration, and communication among contractors and stakeholders (Cundiff et al., 2014)

• Silos and dysfunctional culture (Petrie, 2014)

• Misaligned incentives (Petrie, 2014)

• Architectural problems (Petrie, 2014; Cleland-Huang, 2014)

o No monitoring

o No capacity planning

o Not investing time early in the process to understand requirements, especially non-functional requirements such as performance, reliability, usability, dependability, portability, availability, and so on

• Inappropriate or missing design elements (Freund, 2014)

• No small competent team (Petrie, 2014)

• Testing against the software that was developed rather than the software that should have been built (Freund, 2014)

• Production as an afterthought (Petrie, 2014)

• Lack of independent verification and validation with full life-cycle experience (Freund, 2014)

• Poor IT procurement practices (Petrie, 2014)

Differences between agile and non-agile practices synthesized from prior research (Conboy, Coyle, Wang, & Pikkarainen, 2011; Ebert, Abrahamsson, & Oza, 2012; Laanti, Salo, & Abrahamsson, 2011; Mandal & Pal, 2015; McHugh et al., 2012; Misra, Singh, & Bisui, 2016; Poppendieck & Cusumano, 2012; Prikladnicki et al., 2016; Rigby et al., 2016; Shull, 2014b; Spinellis, 2011; Spinellis, 2016a) are listed in Table 2.

Table 2

Agile and Non-agile practices

Non-agile Practices | Agile Practices
Sequential, phased, and non-iterative approach | Iterative, incremental, and evolutionary approach
Relay race approach | Rugby approach
Process and technology focused, not people focused (dehumanizing or anti-people) | People and interactions focused
Contract negotiation and stringent service level agreements | Customer and stakeholder collaboration
Functional silos, stovepipes, and dysfunctional culture | Cross-functional collaboration and coordination
Detailed upfront and predictive planning driven (arbitrary, unrealistic, and top-down) | Adaptive and continuous planning driven by rapid changes (realistic and bottom-up)
Heavyweight (formal and heavy documentation) | Lightweight (document what is absolutely necessary)
Formal and need-basis communication | Informal, face-to-face, and open communication
Command and control | Leading, coaching, teaching, mentoring, facilitating, relationship building, and collaboration
Task and activity oriented | Feature and customer value oriented
Formal, rigid, hierarchical, mechanistic, authoritarian, and bureaucratic organization structure | Informal, flexible, flat, participative, cooperative, and adaptive organization structure
Micromanagement (microscopic approach) | Macro management (telescopic approach)
Specialized individuals | Self-organizing and coherent teams
Large teams | Multiple small teams
No or minimal feedback | Continuous feedback
Culture of blame, finger pointing, and individual focus | Culture of trust, mutual respect, and team focus
Lack of motivated, empowered, engaged, committed, and competent workforce | Motivated, empowered, engaged, committed, and competent workforce
Milestone and output focus | Customer value and outcome focus
Infrequent interaction with customers | Frequent interaction with customers
Final delivery at the end of the project | Continuous integration; early and continuous delivery of valuable software
Fixed set of requirements | Rapid evolution of requirements
Long time spans | Rapid delivery of value to customers
No "lean" mindset | Dominant "lean" mindset to minimize unnecessary and wasteful work
Legacy technology | Modern technology
Late and manual testing phase at the end | Continuous, automated, and rigorous testing
Learning at the end through end-of-project post mortems | Experimentation, continuous and rapid learning through reflection
Overtime, burnout, and work-life balance issues | Sustainable pace, fun environment, work-life balance, quality of life
Top-down and hierarchical decision making | Bottom-up and team decision making

The greatest risk to agile practices is likely to be misuse and abuse of agile quality management practices (Paulk, 2013). Even after huge investments in education and coaching, many organizations struggle to broaden agile adoption to the whole organization or to sustain it (Jacobson et al., 2016). Some organizations adopting agile practices don't understand what agile practices are and equate them with ad hoc, chaotic software development (Paulk, 2002).

Organizational environment consists of organizational cultural values such as commitment to quality policies; clear vision, goals, and objectives; commitment to deadlines and goals; monitoring of planned activities; engagement; capacity to work in groups; capacity to self-organize for change; cooperation and collaboration; strategic management planning; and information-based decision making (Passos et al., 2012). Recruiting a competent workforce is not enough. A professional attitude that combines relationship building, respect for others, and personal humility is essential (Radziwill, 2005). Elements of professionalism include competence, accountability, integrity, responsibility, and obligation to the public (Walrad, 2017).

Staffing agile teams with people who have faith in their ability, combined with excellent interpersonal skills and trust, is important (Dyba & Dingsoyr, 2009). Open and frequent communication, knowledge sharing, and feedback in agile teams facilitate trust by promoting a sense of belonging and goodwill (McHugh et al., 2012). The major problems in the software industry are sociological, not technological, and organizations should devote more attention to retaining good people (Yourdon, 2007). Turnover of software professionals is expensive, and the reasons for turnover in most cases are lack of cohesive and supportive teams, lack of rewarding job assignments, work-life balance issues, and lack of management appreciation (Humphrey, 2001). Organizational environment must support people practices that include developing, exciting, and retaining competent staff through motivation, empowerment, respect, and trust (Spinellis, 2016b).

If the organizational environment encourages individualism and does not foster a sharing and cooperative culture, knowledge workers might not be willing to share knowledge due to fear, lack of motivation, and so on (Rus & Lindvall, 2002). The organizational environment must support team effectiveness and efficiency through the formation of more efficient software development teams by taking personality factors into consideration, controlling the software development team environment, having people feel safe about putting forward their opinions or ideas, and encouraging all team members to do a good job (Acuña, Gómez, Hannay, Juristo, & Pfahl, 2015).

Organizations must have an appropriate reward and recognition system to not only encourage knowledge sharing but also reward and recognize employees who share knowledge and reuse existing knowledge (Rus & Lindvall, 2002).

Overall life-cycle costs are negatively impacted by limiting agile practices to teams or projects with too much short-term focus, such as reducing documentation and unnecessary features (Ebert et al., 2012). Agile practices promote documenting what is absolutely necessary and nothing more. Agile adopters that don't understand or misinterpret agile practices equate eliminating wasteful, non-value-added, and non-essential documentation with not needing any documentation at all (Paulk, 2002). Lack of necessary documentation makes it difficult to evolve the software over time, induct new team members, transfer work to other team members, reuse or refactor code, meet regulations in high-risk environments, and so on. This is especially challenging when team members move to different assignments within the organization or leave the organization. Waterfall thinking of producing large upfront documentation, resistance to change, organizational culture, failure to define or modify legacy processes such as requirements engineering and management, or not using modern technology consistent with agile practices may dictate huge requirements documentation, causing comprehension and wastage issues. Organizations need to take a more holistic approach and move from craft to engineering: to look beyond small-scale agile, beyond independent competitive islands of agile excellence, beyond individual craftsmanship and heroic teams, beyond the culture of entitlement, beyond short-term thinking, and beyond short-term instant gratification (Jacobson et al., 2016). Individual autonomy must be balanced with team autonomy and social responsibility (Dyba & Dingsoyr, 2009). Organizations need both agility and governance: agility without governance cannot scale, and governance without agility cannot compete (Cantor & Royce, 2014). An appropriate governance model that complements the dynamics of agile practices is a key factor differentiating organizations that significantly improve productivity from organizations that flounder (Cantor & Royce, 2014). Strong governance and ethics in the design and implementation of software such as Artificial Intelligence are paramount to prevent exploitation of Artificial Intelligence for evil purposes that could have far-reaching consequences of unknown dimensions (Hurlburt, 2017).

Incorporating the practices of change management into operational plans increases the probability of success of software quality improvement (Allison & Narciso, 2015). Ineffective change management can overwhelm an organization and its members (Mathiassen, Ngwenyama, & Aaen, 2005). Unprofessional change management is a common reason for failing efficiency initiatives (Ebert et al., 2015). Change management goes beyond simply selecting a method and trying to practice it; sustainable change management needs to be managed professionally (Ebert et al., 2012). Lack of communication across the organization causes uncertainty, anxiety, and fear, leading to motivation issues, productivity losses, and adoption issues.

Self-directed team work requires a major cultural change and can only be accomplished through top executive management support (Humphrey, 2007). Executive sponsorship and leadership commitment, collaboration, and transparency through organizational portfolio management and business-IT alignment provide an opportunity to implement agile practices across the organization and are essential for the success of agile practices (Stettina & Hörz, 2015). Lack of executive sponsorship and leadership commitment, and misalignment between team backlogs and strategic portfolio backlogs (such as agile teams working on low-priority backlog items in the portfolio rather than high-priority items that deliver value to consumers), cause agile implementation initiative failures.

Software provides unique challenges, such as lack of awareness, education, and training to adopt best practices (McConnell, 2002). The organizational environment must also support the workforce with implementation of process and technology best practices across the organization so that people have access to the technology and processes they need to do their best work effectively and efficiently. Lacking knowledge of best practices, or ignoring known best practices, affects the security, safety, and reputation of products. If staff and management lack awareness, education, and training on best practices, misinterpretations, expectation mismatches, or adoption issues can result. Lack of common terminology leads to confusion, misinterpretations, and adoption issues. It is critical for staff and management to receive awareness, training, and education on process and technology practices so that they recognize their importance and incorporate them into daily work.

When new classes of vulnerabilities or techniques for building more secure software are identified, providing knowledge about security vulnerabilities, understanding how these vulnerabilities occur throughout the software lifecycle, and continuously improving and institutionalizing processes and technology are essential so that security is designed into software (Lipner, 2015). Integrating security well into processes and technology results in a significant reduction in the number of vulnerabilities (Denning & Denning, 2016). Product liability is not a substitute for a secure software development process and may lead to developers practicing legally defensive coding, slowing down digital innovation (Lipner, 2015). There is a misconception that open source software is better than commercial software because there are more eyes on it (Wolf, 2016). Open source is not a substitute for a secure software development process and may leave code exposed to malicious attacks, as the GHOST and Heartbleed bugs, which impacted millions of users and websites, showed (Forte, Perez, Kim, & Bhunia, 2016; Lipner, 2015).

Organizations are taking advantage of the complementary nature of quality management practices and frameworks (Baldassarre, Caivano, Pino, Piattini, & Visaggio, 2012). The Software Engineering Institute combined quality management practices with software development and led the development of prescriptive software quality management frameworks such as the Capability Maturity Model (CMM), Personal Software Process (PSP), Team Software Process (TSP), People Capability Maturity Model (PCMM), and Capability Maturity Model Integration (CMMI) (Shull, Basili, Booch, & Vogel, 2011). These frameworks are based on the quality management principles of Shewhart, Deming, Juran, and Crosby (Humphrey, 2000). Other variations of CMMI are Trillium, developed by the North American telecommunications industry; Bootstrap, developed by the European software industry; and the Software Technology Diagnostic, developed by Scottish industry (Thomas, Hurley, & Barnes, 1996). A number of empirical studies involving individual case studies, multiple-case studies, surveys, and correlational studies show organizational performance improvements associated with CMMI process maturity (Butler, 1995; Dion, 1993; Hayes & Zubrow, 1995; Krishnan, 1997; Software Engineering Institute [SEI], 2010; Swinarski, Parente, & Kishore, 2012; McManus & Wood-Harper, 2007; Falessi, Shaw, & Mullen, 2014). The basic premise of these models is that higher-maturity processes lead to increased productivity, reduced cycle time, and improved stakeholder satisfaction (Paulk, 2005). CMM-based models provide focus on process performance and improvement by significantly complementing medical device standards in medical device manufacturing (Walker, 2012). Organizations that have achieved CMMI level 5 implement a quality-centric culture and use hands-on quality management focusing on both process quality and product quality (Honda & Yamada, 2012). Agile practices are combined with frameworks such as CMMI (Cohen & Glazer, 2009; Garzás & Paulk, 2013; Jamieson & Fallah, 2012; Łukasiewicz & Miler, 2012). Organizations are taking advantage of agile practices to reach higher CMMI maturity levels quickly (Silva et al., 2015). CMMI brings a systematic, disciplined, rigorous, and quantitative approach to agile practices implementation (Łukasiewicz & Miler, 2012).

Infrastructure is important for institutionalizing best practices (Paulk, 2002). CMMI helps organizations formalize and institutionalize agile methods (Silva et al., 2015). Using pilot projects to clearly define, refine, and stabilize processes and technology helps people learn and apply the practices before deployment of changes and institutionalization (Margarido, Faria, Vidal, & Vieira, 2013). It is necessary to blend the best features of agile and disciplined approaches such as CMMI to develop capabilities quickly and deliver those capabilities with quality assurance (Moore et al., 2016). The people-focused, autonomous, and value-driven nature of agile practices works very well with CMMI (Bass, Allison, & Banerjee, 2013). Both organizations that use CMMI practices and organizations that want to use them benefit from agile practices (Marçal et al., 2008). Critics of process-based models argue that conformance to processes and software process improvement programs guarantees uniformity of output but does not guarantee high-quality software (Rodriguez et al., 2015; Passos et al., 2012).

The Malcolm Baldrige National Quality Award is both a quality standard and a way of comparing various organizations’ quality practices. Organizations submit applications describing their practices in seven categories: leadership, strategic planning, customer and market focus, information and analysis, human resources focus, process management, and business results. The criteria evolve each year, but the categories are fairly stable.

A well-structured measurement program is needed for software organizations to understand, control, and improve the current status of software processes, products, and resources. Organizations must implement a quality-centric culture to define the aspects of quality tied to their business goals and then decide how to measure them (Kitchenham & Pfleeger, 1996). Measurement programs lose support and eventually fail if they don't align with business goals that add value. Organizations can integrate a measurement program across all levels of the enterprise using GQM+Strategies, measuring the success or failure of goals and strategies and adding enterprise-wide support for determining action on the basis of measurement results (Basili et al., 2010).

Organizations realize that relying on quality of processes is not enough; the quality of products developed and acquired is also important (Rodriguez et al., 2015). Module stability, defect density ratio, and unit test coverage are a few of the metrics that can be used to improve software development processes (Kandt, 2013). The cost of software quality metric, which consists of prevention costs, appraisal costs, internal failure costs, and external failure costs, can be used to improve software development processes (Laporte, Berrhouma, Doucet, & Palza-Vargas, 2012). Organizations that use data-driven, metrics-based decision making succeed, and organizations that have no metrics, wrong metrics, or narrow metrics suffer (Mithas & McFarlan, 2017).
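The cost-of-software-quality metric described above reduces to a simple roll-up: prevention, appraisal, internal-failure, and external-failure costs sum to the total cost of quality, which can then be tracked as a share of total project cost. The dollar figures in this sketch are hypothetical.

```python
# Cost-of-software-quality roll-up (all figures hypothetical, in $K).
costs = {
    "prevention":       120,  # training, standards, process definition
    "appraisal":        200,  # reviews, inspections, testing
    "internal_failure": 150,  # rework and re-testing before release
    "external_failure":  80,  # field defects, support, patches
}

total_coq = sum(costs.values())
conformance = costs["prevention"] + costs["appraisal"]
nonconformance = costs["internal_failure"] + costs["external_failure"]

development_cost = 1_100  # total project cost ($K), hypothetical
coq_ratio = total_coq / development_cost

print(f"total cost of quality: ${total_coq}K ({coq_ratio:.0%} of project cost)")
print(f"conformance vs nonconformance: ${conformance}K vs ${nonconformance}K")
# A falling nonconformance share over successive releases is the usual
# sign that prevention and appraisal spending is paying off.
```

Tracking the conformance/nonconformance split release over release turns the metric into the process-improvement signal the cited work describes.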

ISO/IEC 15939 is an international standard that defines a measurement process for software development and systems engineering. ISO/IEC published the ISO/IEC 25000 family of standards, known as SQuaRE (Software Product Quality Requirements and Evaluation), for the evaluation of software product quality, replacing ISO/IEC 9126 and ISO/IEC 14598 (Rodriguez et al., 2015). This is an extension of the software quality models proposed by McCall, Boehm, Dromey, and Grady (Curcio, Malucelli, Reinehr, & Paludo, 2016).
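Product-quality evaluation in the spirit of these quality models often comes down to a weighted aggregation of characteristic scores. The sketch below borrows commonly cited characteristic names, but the weights, scores, and the aggregation itself are illustrative assumptions, not the procedure the standard defines.

```python
# Weighted quality-score sketch. Characteristic names echo common
# software quality models; weights and 0-10 scores are hypothetical.
weights = {"functional suitability": 0.25, "reliability": 0.20,
           "performance efficiency": 0.15, "usability": 0.15,
           "security": 0.15, "maintainability": 0.10}
scores = {"functional suitability": 8, "reliability": 7,
          "performance efficiency": 6, "usability": 9,
          "security": 5, "maintainability": 7}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

overall = sum(weights[c] * scores[c] for c in weights)
print(f"overall quality score: {overall:.2f} / 10")

# The smallest weighted contribution flags where improvement effort
# buys the most under this (hypothetical) weighting.
weakest = min(weights, key=lambda c: weights[c] * scores[c])
print(f"weakest contribution: {weakest}")
```

An assessment tool built on such a model would derive the weights from stakeholder priorities rather than fixing them by hand as done here.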

Standards at the team level improve communication, resulting in fewer misinterpretations, and standards at the organizational level help reduce learning time, improve productivity, and speed delivery. ISO 9001:2015 is an international standard that specifies requirements for a quality management system in any domain. ISO, in collaboration with IEEE and PMI, developed ISO/IEC/IEEE 24765:2010 to provide common terminology for systems and software engineering work (International Organization for Standardization [ISO], n.d.). ISO/IEC 90003:2014 provides guidance for applying ISO 9001 to computer software. ISO/IEC 12207 is an international standard for software lifecycle processes (Savolainen, Ahonen, & Richardson, 2012). The international committee on software and systems engineering standards, ISO/IEC JTC1/SC7, developed ISO/IEC 15504, also known as SPICE (Software Process Improvement and Capability Determination), to guide process and life-cycle assessments (von Wangenheim, Anacleto, & Salviano, 2006). The standard for avionics software development is DO-178C. The standard for medical device software development is IEC 62304. The standard for automotive electronic and electrical subsystems, including software development, is ISO 26262. The standard for safety-related parts of control systems, including design of software, is ISO 13849. Appendix A lists relevant standards from IEEE (Institute of Electrical and Electronics Engineers [IEEE], n.d.). Software failures such as the London ambulance system service interruption and the Therac-25 radiation overdoses indicate that standards are necessary but not sufficient to produce safety-critical software (Ebert, 2015).

Agility and discipline are not opposites. Rather than treating agile practices and disciplined frameworks such as CMMI as opposites, it is important to focus efforts on synthesizing the best of agile practices and frameworks such as CMMI to address the future challenges of simultaneously achieving high software dependability, agility, and scalability (Beck & Boehm, 2003). Lack of processes supporting agile practices causes confusion and adoption issues. Lack of standardized technology leads to loss of negotiation power with technology vendors and loss of potential economies of scale in licensing. Standardization improves knowledge sharing across teams and the organization (Elbanna & Sarkar, 2016). Organizations willing to spend money on certifications such as CMMI or ISO 9000 are less interested in the value of agile practices, while those needing agility for business reasons are less interested in adopting CMMI or ISO 9000 certification (Williams & Cockburn, 2003).

Preparing future software professionals will have a profound impact on society. The challenge of developing quality software with the proper confidentiality, integrity, and availability requires personnel with substantial education, training, and experience (Davis, Humphrey, Redwine, Zibulski, & McGraw, 2004). To deliver safe, secure, resilient, dependable, and reliable software, the education of software professionals requires a substantial shift, and there is a greater need for close collaboration between industry and educational institutions to close the gap (Tockey, 2015). Continuing to seek silver bullets in isolation causes more issues; a holistic and interdisciplinary approach with close collaboration among educational institutions, public sector organizations, and businesses, frequently referred to as the Triple Helix, is needed to take advantage of software technological innovations. Tight collaboration with public sector organizations and businesses could include an interdisciplinary, cross-faculty center with professional scientists and engineers in charge of technology development, technology transfer, and management (Briand, 2012). There is a clear knowledge gap and reliance on on-the-job learning for topics pertaining to people skills, process skills, and human-computer interaction (Lethbridge, 2000). Software professionals lack the appropriate knowledge and skills to effectively build security into software. One way to spread software quality management and best practices among organizations that produce software is to put greater emphasis on quality management in the basic education of software developers; well-educated software developers will give younger and smaller software organizations a better starting point in their quest for quality (Kautz & Åby Larsen, 2000). Better education of current and future software leaders is essential to prevent IT failures (Liebowitz, 2015). These recommendations are in line with Humphrey's recommendation of teaching quality frameworks through the educational system, as in other professional disciplines (Humphrey, 2000).

SEI developed a software assurance competency model based on the Master of Software Assurance reference curriculum approved by the IEEE Computer Society and ACM to help individual professionals improve their software assurance skills, help universities align course content with skills needed in industry, and help industry promote employee professional growth as well as screen prospective employees (Mead & Hilburn, 2013). The Guide to the Software Engineering Body of Knowledge (SWEBOK), ISO/IEC TR 19759, provides a foundation for curriculum development, university program accreditation, professional certification, and professional licensing of software professionals; it was not only developed and used for academic programs but can also be used by industry to enhance practitioners' knowledge (Alarifi, Zarour, Alomar, Alshaikh, & Alsaleh, 2016). The Software Engineering 2004 (SE2004) Computing Curricula Series, developed by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, presents curriculum guidelines for undergraduate SE degree programs and acknowledges the importance of the human aspects of SE (Hazzan, 2010). An updated version, the Software Engineering 2014 (SE2014) Computing Curricula Series, incorporates agile practices, provides visibility of requirements and security, includes recent advances in teaching technologies such as Massive Open Online Courses, and provides example courses and curricula (IEEE & Association for Computing Machinery [ACM], n.d.). Software professionals need to be taught software engineering skills such as divergent thinking and collaborative learning rather than a narrow focus on convergent thinking and individual learning (Offutt, 2013). As most people learn best by doing, the best training and education incorporate hands-on activities and supplement them with mentoring as appropriate (Payne, 2010). Reflective activities such as constant questioning, open-ended problems, complex problems, rapid iterations of design solutions, a culture of critique, teamwork, collaborative learning, interactive problem solving, student presentations, peer review, coaching, and mentoring need to be more explicitly considered in software engineering education; a studio-based approach supports these better than traditional methods such as lecture-based and project-based teaching, even though traditional methods can be designed to support reflective activities (Bull & Whittle, 2014). Hill, Johnson, and Port (2016) propose an athletic pedagogical approach to help students efficiently master the mechanics of software engineering and creative problem solving.

It is essential to educate software professionals to be T-shaped, prepared to apply multi-discipline systems thinking in software development, rather than I-shaped engineers who have technical depth but little understanding of other disciplines such as business (Boehm & Mobasser, 2015). T-shaped professionals, and more forward-looking Pi-shaped (π) professionals, combine deep disciplinary knowledge with a keen ability to communicate across social, cultural, and economic boundaries and to plunge into and adapt to different situations (Kruchten, 2015).

It is important for educational institutions to partner with other universities and industry partners and educate students on global software engineering (Sommerville, 2016). The bodies of knowledge of many technical fields, such as Information Technology, are dynamic and are changing more rapidly than curricula (Denning, 2014). Niazi (2015) points out the need to close the gap between the software industry and academia by incorporating up-to-date research knowledge in educating students. McQuaid and Banerjee (2015) recommend long-term relationships between academia and industry so that they can work together and produce graduates who understand the challenges of software development and can propose solutions, in addition to mastering the theory and practice of software engineering and process improvement. New strategies, in addition to the established strategies listed below, are needed to promote industry-academia collaboration, and it is essential to understand the environments in which strategies succeed (Mead, 2015).

• Participate in advisory boards

• Make donations

• Provide discounts on equipment and software

• Encourage employees to work as adjunct faculty or guest lecturers

• Support faculty development workshops

• Provide grants to develop new programs

• Provide scholarships to students

• Provide internships to students

• Support capstone projects

• Update employee position descriptions with inputs from academia

• Create endowed chair positions to support assurance programs

To increase awareness of human factors in software development, organizations can establish and promote a culture in which human aspects are cultivated, acknowledged, and considered; develop professional programs that legitimize and elicit attention to human aspects; and communicate the importance of human aspects to policy makers in academia and curriculum developers (Hazzan, 2010). In addition to educating and training software professionals on human factors, software research must be broadened to encourage research on human factors in software engineering rather than a narrow focus on tools and processes (Hazzan, 2010).

Continuous education opportunities and academic training throughout their careers are critical for keeping up with rapid changes in the computing field (Walrad, 2016). People who design, code, and test must be trained and motivated to rigorously adapt and adopt secure development processes, and the technology that supports those processes, to affect the software (Lipner, 2015). Proactive approaches such as thorough awareness, education, and a security-centric culture, with focus on the entire life cycle by designing and coding security into software, prevent the software defects that cause security vulnerabilities (DHS, n.d.; Howles & McQuaid, 2012; Rakitin, 2016). Ways to improve software competency development include engaging in professional societies, professional user groups, and social media networking communities; obtaining and maintaining professional licenses and certifications; mentoring; attending and presenting at conferences and workshops; reading and contributing to scholarly publications; and organizing knowledge for easy retrieval (Kruchten, 2015; Mead, 2009). Professional licensing of software engineering professionals, such as the licensing that IEEE developed in conjunction with the National Society of Professional Engineers and the Texas Board of Professional Engineers, offered by the National Council of Examiners for Engineering and Surveying, helps promote the interests of the public by providing another avenue to enhance the quality of products and services with software components, leading to safer, more secure, and more reliable software through commitment to professional and ethical practices (Laplante, 2012; Thornton, 2012). Software competency development can make use of modern methodologies such as agile methods; modern technologies such as advanced analytics and open source; and modern learning methods such as virtual learning environments, social learning environments, e-learning, e-mentoring, experiential learning, webinars, and massive open online courses such as Coursera, Udacity, edX, the Khan Academy, Codecademy, and Code School, to make the whole learning process more effective and efficient and help students and software professionals adapt to a dynamic field (Fox & Patterson, 2012).

Artificial intelligence technology driven modern advanced analytics makes use of structured, semi-structured, and unstructured data and provides descriptive, diagnostic, predictive, and prescriptive analytics that continue to transform all industries beyond human imagination (Porter & Heppelmann, 2015). Such analytics revolves around advances in cyber-physical-social systems, high-quality big data, data mining, sensor mining, crowdsourcing, cloud computing, storage systems, advanced statistical methods, supervised, unsupervised, and hybrid machine learning, pattern recognition and matching, cognitive computing, neural networks, natural language processing, speech synthesis, computer vision, object synthesis, deep learning, and so on (Booch, 2016; Chang, 2016; Earley, 2016; Louridas & Ebert, 2013; O'Leary, 2013; Maalej, Nayebi, Johann, & Ruhe, 2016). Analytical methods have long been important to quality management, and modern advanced analytics opens many new opportunities for quality management (Evans, 2015). The application of real-time modern advanced analytics in software is limited, and there are tremendous opportunities for exploiting real-time advanced analytics to improve software quality management practices (Menzies & Zimmerman, 2013a). Software analytics is an area of explosive growth, and there is a gold rush to gain insights using software analytics (Menzies & Zimmerman, 2013b). Software analytics can be used to support learning, understanding, evaluating, decision making, software defect and fault prediction, software change proneness prediction, bad smell prediction in software artifacts, clone prediction, effort prediction, cost prediction, development process health prediction, reliability modeling, optimization, automated regression testing, code recommendation, resource recommendation, bug assignment recommendation, new feature recommendation, prioritization recommendation, proactive maintenance recommendation, consumer-oriented summarization, task-oriented summarization, and so on (Batarseh & Gonzalez, 2015; Czerwonka, Nagappan, Schulte, & Murphy, 2013; Hermans et al., 2016; Malhotra, 2013; Malhotra & Pritam, 2012; Selby, 2009). Software analytics driven quality management enables the efficient allocation and usage of finite resources (Spinellis, 2016c).

Software development is not just source code, even though source code is an important component of software development. Modern and advanced software analytics is not limited to using source code. Software analytics makes use of rich internal, external, traditional, and consumer-generated data produced during software development, operations, and maintenance to provide insightful, intelligent, meaningful, useful, and actionable information in real time for improving software quality. Data used may include use cases, user stories, feature requests, architecture, design, test suites, test plans, test cases, source code, source code comments, check-in information, build commit messages, artifact review comments, bug reports, discussion comments on issue trackers, release notes, runtime data, real-time system logs, real-time application usage, user data, feature usage, survey data, emails, blogs, wikis, videos, user manuals, user reviews, user ratings, real-time user feedback on social media, call center recordings, consumer call logs, chat logs, user forums, feedback systems, user experiences, crash reports, and so on (Maalej et al., 2016; Zhang et al., 2013). Integrating biometric sensing into human work and leveraging real-time analytics to reduce the number of interruptions at inopportune moments, prevent errors from entering artifacts, provide support when cognitive load reaches a critical level, keep people off safety-critical tasks when they are tired, and avoid disturbing a developer who is in a state of high concentration boosts productivity and quality (Hermans et al., 2016).

Software analytics consists of both quantitative analytics that highlight high-level data trends and qualitative analytics that enable real-time decision making for daily tasks through visual dashboards (Baysal, Holmes, & Godfrey, 2013). Software analytics empowers software development individuals and teams to share real-time insights from data to make better decisions and to help evaluate current practices (Menzies & Zimmerman, 2013a). Regardless of the size of the company, software analytics supports evidence-based decision making at strategic, tactical, and operational levels, rather than making decisions based on intuition, experience, judgement, and gut feelings, and helps organizations transform from a reactive mode to a proactive one (Robbes, Vidal, & Bastarrica, 2013). Software analytics helps in getting the right information, in the right amounts, to the right people, at the right time. As artificial intelligence and machine learning become more pervasive components of future software applications, software professionals need to enhance their development and testing approach knowledge in mission-critical environments where safety, security, and regulatory concerns are most important (Radziwill, Benton, Boadu, & Perdomo, 2015). Sentiment analysis of software reviews can be used to deal with challenges in evaluating software quality in use (Atoum & Bong, 2015). New strategies are needed to blend quality control techniques such as traditional testing with new advances in formal methods, modeling, simulation, machine learning for autonomy and control, automated testing, assertive testing, and real-time field data collection to address the challenges of building larger, more complex, and more connected systems (Moore et al., 2016; Holzmann, 2015a). As release engineering has been revolutionized toward shorter cycle times and roll-your-own release strategies, enabling new paradigms such as microservices to bring features to users faster and obtain feedback quickly, research is needed to automatically predict and resolve integration conflicts, identify backward-compatibility problems, profile and optimize the build system and tests, or determine the best deployment and release strategy for a certain feature (Hermans et al., 2016).

Software analytics is a natural extension and evolution of analytical methods applied to software quality management. Software analytics must serve the various stakeholders involved, such as marketing, sales, customer support, and legal, and not just software professionals (Hassan et al., 2013). Software analytics can be used to understand when and how end-users actually use the product, which features they prefer, which features bring in the most revenue, the paths followed to perform user tasks, how users deal with new features, and which future features are anticipated (Maalej et al., 2016; Pachidi, Spruit, & Van de Weerd, 2014). Software analytics is moving into a new phase of technology transfer in which implementation is occurring in an increasing number of organizations while research and development of new prototypes and technologies continue (Shull, 2014a).

Software analytics is one of the technologies available to improve software quality, productivity, and stakeholder satisfaction, not the only one. Software analytics approaches and solutions do not replace people and must not be implemented in isolation. The critical success factors in deploying software analytics include an organizational environment that provides senior management commitment; alignment with organizational vision, mission, goals, and objectives; mature development processes; a measurement and metrics culture; a continuous improvement culture; an information-based decision making culture; and competent people (Misirli, Caglayan, Bener, & Turhan, 2013). Overhead, invasiveness, and privacy aspects must be considered when determining the tradeoff between easily obtained but limited-use software analytics and richer, higher-impact software analytics (Johnson, 2013). Data quality, ethics, and security are other considerations (Otero & Peter, 2015). Software analytics technology must be implemented and continuously improved in conjunction with organizational environment, people, process, and technology best practices to gain real-time insights and enable smart, accurate, effective, and timely decisions and actions.

Survey of Prior Scholarly Work on Bibliometric Techniques

The knowledge that underlies the best practices is unfortunately scattered throughout the literature (Zarour, Abran, Desharnais, & Alarifi, 2015). There are different approaches to deriving best practices. One is a qualitative approach that uses a more traditional theoretical literature review; the second is a novel approach that uses quantitative bibliometric techniques (Fernandez‐Alles & Ramos‐Rodríguez, 2009).

The term bibliometrics refers to the mathematical and statistical analysis of patterns that appear in the publication and use of large volumes of documents (Diodato, 1994). Bibliometrics is defined as a research method that uses elements of bibliographies and other literary logistical elements to conduct qualitative and quantitative studies (Tsay, 2004). Examples of elements of bibliographies are the author(s), the place of publication, the associated subject keywords, and the citations. The qualitative and quantitative exploration of characteristics of a bibliography, as well as elements of the literature such as subject and content, can be studied in a bibliometric analysis. Bibliometrics is used synonymously with technometrics, scientometrics, informetrics, webometrics, netometrics, cybermetrics, librametrics, and altmetrics (Tsay & Yang, 2005; De Bellis, 2009; Jackson, 2013; Thomson & Walker, 2015).

Bibliometrics was selected as the methodology for this dissertation. The usage of bibliometrics goes beyond information and library science (Khalil & Crawford, 2015). Based on the extensive literature review, bibliometric techniques have been used to address similar research questions concerning the best practices, status, trends, strategies, and future directions of science, technology, engineering, and management disciplines. Online tool availability makes it easy for researchers who use bibliometric techniques to retrieve and analyze data (Thomson & Walker, 2015). The effectiveness of bibliometric techniques is based on the premise that scholars cite scholarly documents they consider to be important in the development of their research.

Frequently cited documents exert greater influence on the discipline than those less frequently cited (Tahai & Meyer, 1999). Scholars in a specific field extend, test, and refine the research contributions of other scholars. Theories advance and compete in a field until paradigms emerge. The emergence of paradigms indicates a discipline's maturity. Patterns of citation indicate exchanges between members of the network (Culnan, 1987). As Ramos‐Rodríguez and Ruíz‐Navarro (2004) state, articles published in scholarly peer-reviewed journals constitute certified knowledge that has been reviewed and approved by fellow researchers. The use of citations from articles in research journals is a standard practice that enhances the reliability of research results (Ramos‐Rodríguez & Ruíz‐Navarro, 2004). Bibliometric methods involve large amounts of bibliographic data and are therefore more objective (Vogel & Güttel, 2012).

Citation data of publications from the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI), compiled in the Institute for Scientific Information (ISI) Web of Science databases, are included because SCI and SSCI are two of the best sources for citation data (Kajikawa & Takeda, 2009). These are the most comprehensive data sources, frequently used by scholars for reviewing high-quality publications and assessing scientific accomplishment (Braun et al., 2000; Tan et al., 2014; Yin & Chiang, 2010). The alternative data sources, Google Scholar and Scopus, are considered inferior to the ISI Web of Science databases: Google Scholar cites many sources other than strictly scholarly journals (Thomson & Walker, 2015), and Scopus is a late addition to bibliometric source databases and does not provide broad coverage compared to the Web of Science databases (Thomson & Walker, 2015; Olijnyk, 2015). Another contributing factor in selecting the ISI Web of Science database for this research is that it is freely accessible through the Indiana State University Library for research purposes. Freeware called VOSviewer is used for constructing and viewing bibliometric maps; VOSviewer provides graphical representations of large bibliometric maps in a way that can be easily interpreted (Van Eck & Waltman, 2010).

Bibliometric studies are mainly classified into descriptive studies and structural studies. Descriptive studies provide a single-dimension view of a discipline by using bibliometric analysis methods such as citation analysis and content analysis combined with descriptive statistics based on information from individual units such as authors, articles, words, or journals. Structural studies provide a multidimensional view of a discipline by using bibliometric analysis methods such as bibliographic coupling, author co-citation analysis, document co-citation analysis, and co-word analysis, combining these with network analysis based on the relationships among individual units such as authors, documents, keywords, or journals (Olijnyk, 2015).

Citation analysis provides data relating to the influence or impact of research efforts. Citation analysis can provide a view of a field, but it does not reveal the structure of influence within the field. It offers useful insight into individual units such as journals, papers, words, and authors, for example the identification of influential journals, papers, themes, and authors. Citation analysis is biased toward older publications and against newer publications that have not had time to be cited (Zupic & Čater, 2015). Citation analysis also lacks the ability to identify the links and interactions among individual units such as journals, papers, words, and authors (Gmür, 2003).
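As a concrete illustration, basic citation analysis can be reduced to counting how often each document appears in the reference lists of a corpus. The sketch below is a minimal example in Python; the document identifiers and reference lists are hypothetical, not data from this study:

```python
from collections import Counter

# Hypothetical corpus: each citing document maps to its reference list.
references = {
    "paper_A": ["doc_1", "doc_2"],
    "paper_B": ["doc_1", "doc_3"],
    "paper_C": ["doc_1", "doc_2"],
}

# Citation analysis: tally how often each cited document appears.
citation_counts = Counter(
    cited for refs in references.values() for cited in refs
)

# doc_1 is cited by all three papers, making it the most influential here.
print(citation_counts.most_common(1))  # [('doc_1', 3)]
```

The resulting counts identify influential documents, but, as noted above, they reveal nothing about the links among those documents.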

Co-author analysis connects authors when they co-author a scholarly document and provides evidence of collaboration (Zupic & Čater, 2015). Co-author analysis is useful in revealing the social structure of a field (Li, Liao, & Yen, 2013).

Content analysis is an analysis of text that involves the systematic, objective, and quantitative examination of texts. It provides another approach to uncovering the underlying structure of a field as it is taking shape. While the object of research consists of words written by others, many scholars separate it from secondary data because it is used to generate new knowledge about a phenomenon using written words chosen to represent that phenomenon (Bakulina, 2008; Cowton, 1998; Jackson, 2013). Examples are keyword analysis and lexicographic analysis (Di Stefano et al., 2010). Keywords can be extracted from the title, abstract, or author-supplied keyword list of a document, or from the full text of a scholarly document. Keyword extraction may require additional analysis tools that use natural language processing and text mining techniques if the source databases do not include keywords as part of the document metadata. Keywords may consist of single words or multiple words.

Co-citation analysis is among the most used, validated, and reliable structuring techniques in bibliometrics (Zupic & Čater, 2015). Co-citation analysis reveals the intellectual structure of the underlying field. This technique supplements citation analysis, traces the links and interactions among authors, topics, journals, documents, keywords, or research methods, and illustrates how such groupings relate to each other. Two documents are co-cited if there is a third scholarly document that cites both. The larger the number of scholarly documents by which two scholarly documents are co-cited, the stronger the co-citation relation between the two documents and the more likely they belong to the same research front. Document co-citation analysis connects documents on the basis of joint appearance in reference lists. Author co-citation analysis connects the authors who produced documents on the basis of joint appearances in reference lists and is useful for identifying the most important authors. Co-citations are useful in detecting shifts in paradigms and schools of thought when examined over a period of time (Pasadeos, Phelps, & Kim, 1998).
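The co-citation strength described above can be sketched as a pairwise count over reference lists. The following is an illustrative Python sketch with hypothetical documents, not data from this study:

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists: citing document -> documents it cites.
references = {
    "paper_A": ["doc_1", "doc_2", "doc_3"],
    "paper_B": ["doc_1", "doc_2"],
    "paper_C": ["doc_2", "doc_3"],
}

# Two documents are co-cited when a third document cites both;
# the number of such citing documents is the co-citation strength.
co_citation = Counter()
for refs in references.values():
    for pair in combinations(sorted(set(refs)), 2):
        co_citation[pair] += 1

# doc_1 and doc_2 are co-cited by paper_A and paper_B: strength 2.
print(co_citation[("doc_1", "doc_2")])  # 2
```

Tools such as VOSviewer perform this counting at scale and cluster the resulting network into research fronts.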

Co-word analysis provides relationships between keywords based on their co-occurrence in the abstracts, titles, and keywords of selected scholarly articles (He, 1999). The number of co-occurrences of two keywords is the number of publications in which both keywords occur together in the full text, title, abstract, or keyword list of a scholarly document. Co-word analysis uses the actual contents of documents and provides an alternative method for revealing the cognitive structure of a field. The key difference between co-word analysis and the co-citation method is that citations in co-citation analysis recognize past scholarly work, while keywords in co-word analysis reflect current scholarly work, thereby revealing growth, ongoing trends, emerging trends, and emerging topics in a field (Soos et al., 2013; Khan & Wood, 2015). Words provide a more direct view of scholarly work than citations (Olijnyk, 2015). The main criticism of co-word analysis is the risk of misinterpretation and unreliable semantics, because words appear in different forms, have different meanings, and are associated without context.

Bibliographic coupling is the opposite of co-citation. Bibliographic coupling is about the overlap between reference lists in terms of shared scholarly documents, authors, or journals. Two scholarly documents are bibliographically coupled if there is a third scholarly document that is cited by both. The larger the number of references two scholarly documents have in common, the stronger the bibliographic coupling relation between these documents. Co-citation is considered forward looking, and bibliographic coupling is considered backward looking.
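Whereas co-citation counts the documents that cite a given pair, bibliographic coupling measures the overlap of the pair's own reference lists. A minimal Python sketch, again using hypothetical documents rather than data from this study:

```python
from itertools import combinations

# Hypothetical reference lists: citing document -> documents it cites.
references = {
    "paper_A": ["doc_1", "doc_2", "doc_3"],
    "paper_B": ["doc_2", "doc_3", "doc_4"],
    "paper_C": ["doc_4", "doc_5"],
}

# Two documents are bibliographically coupled when their reference
# lists overlap; the size of the overlap is the coupling strength.
coupling = {}
for (a, refs_a), (b, refs_b) in combinations(references.items(), 2):
    shared = set(refs_a) & set(refs_b)
    if shared:
        coupling[(a, b)] = len(shared)

# paper_A and paper_B share doc_2 and doc_3: coupling strength 2.
print(coupling[("paper_A", "paper_B")])  # 2
```

Because a document's reference list is fixed at publication, coupling strengths never change over time, which is why the technique is described as backward looking.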

Bibliometric mapping provides an innovative and exciting approach for presenting the impact of a discipline (Alfonzo, Sakraida, & Hastings-Tolsma, 2014). Bibliometric mapping aims to produce visual representations of the relations between certain units of interest. The units of interest can be documents, authors, or keywords, and the relations between the units can be based on citations, co-citations, co-authorships, or co-occurrences of keywords. Three basic types of maps that can be created are collaboration networks, semantic networks, and citation networks. In addition to showing what has already been accomplished in a discipline, research topic, or cluster, bibliometric maps also identify opportunities and directions for future research through blank spaces on the map and among topics, clusters, and groups of clusters (Belter, 2012).

A citation network is a graph in which nodes are published scholarly documents and edges occur when one scholarly document references, or cites, another (Neeley, 1981). A citation network displays the relationships among scholarly documents based on their citations and represents the structure of scholarly research in the underlying field. Three kinds of citation networks are the direct citation network, the bibliographic coupling network, and the co-citation network.

A direct citation network is a graph where nodes represent scholarly documents and edges represent a direct citation from the citing document to the cited document. Direct citation networks represent the pattern of information flow and address longitudinal dynamic content or historical aspects based on the citation network of scholarly articles over a given timescale or a given topic in the underlying field (Soos et al., 2013). These are also called citation-flow maps or algorithmic historiography. Direct citation networks are simple and the least effective in representing scholarly research (Belter, 2012). A bibliographic coupling network is a graph in which an edge is drawn between two nodes if the scholarly documents represented by the nodes both cite the same scholarly document (Belter, 2012). A co-citation network is a graph in which an edge is drawn between two documents that are co-cited, that is, when a third scholarly document cites both (Belter, 2012). A journal title co-citation network provides the links and relations among disciplines. A document co-citation network provides the links and relations among underlying themes. An author co-citation network reveals the most important individuals and their roles in developing a field. Clusters formed in an author co-citation network are research communities around a specific research topic, forming the cognitive structure based on the thematic composition of the underlying field (Soos et al., 2013). Clusters formed in a document co-citation network represent the global map of scientific paradigms at a given time slice (Soos et al., 2013).

A collaboration network is a social network that shows the collaboration of authors, affiliated institutions, or countries. The most basic type of collaboration network is the co-author network. This can be used to create institutional and country collaboration networks. Name disambiguation issues and authority control issues are known limitations across all of these networks. Despite these limitations, collaboration networks can be used to identify the most prominent, influential, and highly cited authors, institutions, and countries in a field. Clusters of nodes represent research communities that collaborate on a specific topic and the relationships among these research communities. A lack of edges between adjacent nodes or clusters may represent future collaboration opportunities (Belter, 2012). A co-author network is a graph where nodes represent authors and edges represent scholarly documents on which the authors have collaborated. Institutional and country collaboration networks can be created from co-author networks based on institutional affiliations and geographic locations (Belter, 2012).
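Building a co-author network follows the same pairwise counting pattern as the citation-based networks: each document's author list contributes one edge per author pair. A minimal sketch with hypothetical author names:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author lists, one per scholarly document.
documents = [
    ["Smith", "Jones", "Lee"],
    ["Smith", "Jones"],
    ["Lee", "Patel"],
]

# Edge weight = number of documents the two authors wrote together.
edges = Counter()
for authors in documents:
    for pair in combinations(sorted(set(authors)), 2):
        edges[pair] += 1

# Smith and Jones collaborated on two documents.
print(edges[("Jones", "Smith")])  # 2
```

Replacing author names with their institutional affiliations or countries in the same loop yields the institutional and country collaboration networks described above; the name disambiguation limitations apply before this counting step.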

Semantic networks analyze the occurrence of certain words in a set of publications. The most common form of semantic network is the word co-occurrence network. A pre-processing cleanup step is normally applied prior to network generation, cleaning the text through truncation and the removal of stop words. These networks have limitations due to issues with synonymy and homonymy. Despite these limitations, semantic networks can identify the most common words used, the centrality of these words, and major themes, and can be used to study the underlying field. Words mapped at the center of the network represent words common to all of the scholarly documents, and words near the edges of the network represent those used by a smaller number of scholarly documents. Major research directions can be identified based on the clusters of related words at the edges of the network (Belter, 2012).

Co-word networks, or keyword co-occurrence networks, are formed when keywords co-appear, forming an association. Nodes in a co-word network represent keywords, and edges represent the co-occurrence of those words in the same scholarly document. Words can be retrieved from the full text of the scholarly documents, or from the titles, keywords, and/or abstracts. Visualization maps produced using this approach are called term maps, co-word maps, or co-occurrence maps.
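A co-word network can be sketched by counting, for each keyword pair, the number of documents in which both keywords appear. The keyword lists below are hypothetical and assume the cleanup step (lowercasing, stop-word removal) has already been applied:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-supplied keyword lists, one per document.
keyword_lists = [
    ["software quality", "best practices", "analytics"],
    ["software quality", "analytics"],
    ["best practices", "bibliometrics"],
]

# Nodes are keywords; edge weight = number of documents in which
# the two keywords co-occur.
co_word = Counter()
for keywords in keyword_lists:
    for pair in combinations(sorted(set(keywords)), 2):
        co_word[pair] += 1

# "analytics" and "software quality" co-occur in two documents.
print(co_word[("analytics", "software quality")])  # 2
```

The resulting weighted edge list is the input that mapping tools such as VOSviewer lay out as a term map, with frequently co-occurring keywords drawn closer together.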

Several research articles report the usage of bibliometric techniques to study areas of science, technology, engineering, and management research. Zhi et al. (2015) used keyword analysis to review the trends and developments of carbon cycling research during the period 1993–2013. Singh, Perdigones, García, Cañas-Guerrero, and Mazarrón (2014) used descriptive studies to provide a global overview of the research activity carried out in the field of cybernetics based on 15 years of data from 1997 to 2011. Jackson (2013) used a non-participant, descriptive bibliometric analysis of content, trends, and patterns found in three journals from 2007 to 2011 (Journal of Green Building, International Journal of Sustainable Development and World Ecology, and Sustainable Development) in an effort to understand literary usefulness, composition, and direction.

Tahai and Meyer (1999) applied journal citation analysis to the Strategic Management Journal in order to identify the most influential journals in the field of management. Radziwill (2009) modeled the evolution of knowledge flows in quality management using journal citation network analysis of the Quality Management Journal. Kajikawa and Takeda (2009) analyzed a citation network of organic light-emitting diodes and used a topological clustering method to detect emerging research trends. Peng and Zhou (2006) analyzed the Journal of International Management using citation analysis to identify the most cited articles and authors in global strategy research.

Neeley (1981) used an interdisciplinary cross-citation analysis to study the management and social sciences literatures. Calero-Medina and Noyons (2008) combined bibliometric mapping and citation network analysis to investigate the process of creation and knowledge transfer through scientific publication in the absorptive capacity field.

Culnan (1986) performed two studies to assess the development of the Management Information Systems (MIS) field using bibliometric techniques. The primary objective of these studies was to provide an understanding of the intellectual structure of MIS. In the first study, Culnan (1986) mapped the intellectual development of MIS research by conducting an author co-citation analysis in the MIS discipline between 1972 and 1982. In the second study, building upon the first, Culnan (1987) mapped the intellectual structure of MIS research by conducting an author co-citation analysis of authors in the MIS discipline between 1980 and 1985. Culnan et al. (1990) explored the structure of organizational behavior research by conducting an author co-citation analysis of citations that appeared in published research from 1972 to 1984. White and McCain (1998) used an author co-citation analysis to analyze the information science discipline. White and Griffith (1981) used an author co-citation analysis to study the intellectual structure of information science. Hoffman and Holbrook (1993) conducted a bibliometric study of the Journal of Consumer Research using an author co-citation analysis and examined the intellectual structure of consumer research. Kuo and Yang (2014) employed document co-citation analysis to model the intellectual structure of the activity-based costing method. Ramos‐Rodríguez and Ruíz‐Navarro (2004) used document co-citation analysis to explore the intellectual structure of the strategic management field. Nerur, Rasheed, and Natarajan (2008) supplemented this study using author co-citation analysis to trace the evolution of the intellectual structure of the strategic management field. Shiau, Dwivedi, and Tsai (2015) employed document co-citation analysis to explore the intellectual structure of supply chain management. Chen and Lien (2011) used an author co-citation analysis to examine the intellectual structure of e-learning. Using co-citation analysis to study the intellectual structure of a scholarly field has a long history and wide usage, and the technique has been found to be valid and reliable (Gartner, Davidsson, & Zahra, 2006).

Assefa and Rorissa (2013) used a co-word analysis method to analyze the STEM education field, to identify visual insights that would have policy and curricular development implications, and to identify core knowledge areas that constitute the domain of STEM education. This study analyzed titles, author-provided keywords, and abstracts of tens of thousands of scholarly works from the Web of Science and ERIC databases. Chao et al. (2015) used a co-word analysis method to summarize the scientific trends in phytoplankton studies between 1991 and 2013. This study analyzed scholarly articles containing the keyword "phytoplankton" in the title, abstract, and keywords fields, published between 1991 and 2013, from Web of Science (Thomson Reuters). Heersmink et al. (2011) used a bibliometric mapping technique, a co-word map of the field of computer and information ethics, to provide a visual representation of that field.

Estabrooks, Winther, and Derksen (2004) used an author co-citation analysis to identify the structure of the scientific community, including the network of researchers in nursing, and co-word analysis to map research utilization as a field of study in nursing. Rorissa and Yuan (2012) used bibliometric maps (co-authorship network, highly productive authors, highly cited journals and papers, author-assigned keywords, and active institutions) to study, visualize, and map the intellectual structure of information retrieval. Fernandez-Alles and Ramos-Rodríguez (2009) used a combination of bibliometric techniques (citation analysis, document co-citation analysis, and author co-citation analysis) to study the intellectual structure of human resources management research. Zhang, Xie, and Ho (2010) used the frequency of author keywords, words in titles, words in abstracts, and keywords plus to identify world volatile organic compounds research trends.

Zheng et al. (2015) used a comprehensive bibliometric analysis and descriptive statistics in industrial wastewater research from 1991 to 2014 to analyze document types, languages, categories, journals, countries, territories, institutions, and publication patterns. They analyzed author keywords across different periods to identify research trends and used word cluster analysis to trace industrial wastewater research hotspots. Xie (2015) performed a bibliometric analysis of international anticancer research references during the years 2000–2014 to study anticancer research trends and identify hot research topics based on bibliometric maps: a publication output map based on date, a collaborative map of country and institution, a journal co-citation map, an author co-citation map, a document co-citation map, a keyword co-occurrence map, and a keyword co-occurrence timeline map. Koseoglu, Akdeve, Gedik, and Bertsch (2015) researched the pattern of strategic management literature in healthcare management research by evaluating articles published in academic health management journals based on their journal years, authors, authors' institutions, research methodology, co-authorship situation, and research themes. Using citation and co-citation analysis on cloud computing literature published in scholarly journals, Wang et al. (2016) examined the knowledge structure of the cloud computing discipline in the Information Systems field.

The number of scholarly publications using bibliometric analysis for science studies has been increasing steadily in recent years. Bibliometric analysis is gradually being accepted as a useful and valuable tool for evaluating scientific production, not only for the academic community but for the professional community as well. This increase is due to a number of factors, including the availability, accessibility, affordability, and wide usage of tools to treat large datasets; the publication of a sufficient number of articles within a field to evoke a bibliometric investigation; and politicians and funding agencies looking for evaluation of research and productivity within many scientific communities (Ellegaard & Wallin, 2015).

Chapter Summary

The purpose of this chapter was to review the main conceptual areas within the literature relevant to understanding the problem for this research. First, prior scholarly work on software quality management was reviewed. Then, prior work that used bibliometric techniques in other fields was surveyed and summarized.


CHAPTER 3

METHODOLOGY

Chapter Overview

The purpose of this chapter is to present the research methodology that will be used to develop a quality management assessment tool to evaluate software based on software quality management practices. The research questions were developed based on significant course work and an extensive literature review. The research questions are:

RQ1. What software quality management best practices pertaining to people can be used to evaluate the software that organizations produce?

RQ2. What software quality management best practices pertaining to organizational environment can be used to evaluate the software that organizations produce?

RQ3. What software quality management best practices pertaining to process can be used to evaluate the software that organizations produce?

RQ4. What software quality management best practices pertaining to technology can be used to evaluate the software that organizations produce?

RQ5. What are the key elements of a software quality management assessment tool to evaluate the software that organizations produce?

Research Design

As in any research, research design is the most important activity in dissertation research. Research design focuses on mapping appropriate research methods to carefully selected research questions in order to conduct rigorous research (Zupic & Čater, 2015). The software quality assessment tool to evaluate software using software quality management best practices takes an integrated and holistic approach that evaluates software from multiple dimensions by combining organizational environment, process, people, and technology perspectives. This research applies bibliometric techniques to scholarly peer-reviewed journals to derive software quality management best practices and to differentiate important best practices from trivial ones. Weights are assigned to organizational environment, people, process, and technology best practices. These best practices are analyzed and reviewed to refine and construct the quality management assessment tool. An academic expert review is conducted to further refine the tool. Pilot testing is conducted with industry experts to refine it further and develop the final quality management assessment tool. Validation is conducted with industry practitioners to further verify the quality management assessment tool.

The assessment tool consists of a series of questions organized into organizational environment, process, people, and technology dimensions, answered on a Likert scale. The response options for each question are: Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree.
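As a concrete illustration, the five response options can be mapped to numeric scores when tabulating responses. The sketch below is illustrative only; the numeric mapping shown is a conventional five-point coding and is an assumption, not part of the final instrument.

```python
# Illustrative sketch: mapping Likert response labels to numeric scores for
# tabulation. The 1-5 coding is a conventional assumption, not the final tool.
LIKERT_SCORES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def score_responses(responses):
    """Convert a list of Likert response labels to numeric scores."""
    return [LIKERT_SCORES[r] for r in responses]

# Example: three responses to questions in one dimension.
print(score_responses(["Agree", "Neutral", "Strongly Agree"]))  # [4, 3, 5]
```

Coding responses numerically in this way allows per-dimension means to be computed during analysis.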

Figure 2 shows all the steps of the methodology.

Identify the best practices

Assign weight to best practices

Review best practices

Refine best practices

Develop assessment tool

Academic expert review of assessment tool

Refine assessment tool

Industry expert pilot testing of assessment tool

Refine and finalize assessment tool

Industry practitioner validation of assessment tool

Figure 2. Research Design

Research Data Sources

Databases used for this study are limited to the Science Citation Index (SCI) and the Social Sciences Citation Index (SSCI) compiled in the Institute for Scientific Information (ISI) Web of Science databases. This research applies bibliometric techniques to articles published in scholarly peer-reviewed journals. Citation data of publications from SCI and SSCI are included because these indexes are two of the best sources for citation data (Kajikawa & Takeda, 2009). They are the most comprehensive and authoritative scientific and technology literature data sources, frequently used by scholars for reviewing high-quality publications and assessing scientific and technological accomplishments (Braun et al., 2000; Tan et al., 2014; Yin & Chiang, 2010).

Alternative data sources, Google Scholar and Scopus, are considered inferior to the ISI Web of Science databases. Google Scholar cites many sources other than strictly scholarly journals (Thomson & Walker, 2015). Scopus is a late addition to bibliometric source databases and does not provide broad coverage compared to the Web of Science databases (Thomson & Walker, 2015; Olijnyk, 2015). Another contributing factor for selecting the ISI Web of Science databases for this research is that they are freely accessible through the Indiana State University Library for research purposes.

Academic Expert Reviews

Inputs and feedback from academic experts' reviews are invaluable in refining the software quality management assessment tool. Academic expert reviewers are selected based on the following criteria.

• Academic scholars with professional doctoral degrees in Technology Management, Engineering Management, or related disciplines.

• Academic scholars with experience in research and teaching in a university setting.

• Academic scholars who serve on the editorial board of at least one highly productive and highly influential scholarly journal that publishes scholarly literature on software quality management.

Industry Expert Pilot Test

Inputs and feedback from industry experts' pilot testing are invaluable in refining the software quality management assessment tool. Industry experts for validation and pilot testing are selected based on the following criteria.

• Industry practitioners or industry researchers with professional doctoral degrees in Technology Management, Engineering Management, or related disciplines.

• Industry practitioners or industry researchers with experience in the software industry.

• Industry practitioners or industry researchers who serve on the editorial board of at least one highly productive and influential scholarly journal that publishes scholarly literature on software quality management.

Industry Practitioner Validation

Inputs and feedback from industry practitioners' validation are invaluable in further verifying the software quality management assessment tool. Industry practitioners are selected based on the following criteria.

• Industry practitioners who work in the software industry.

• Industry practitioners who are recent authors of at least one article in a highly productive and influential scholarly journal that publishes scholarly literature on software quality management.

• Industry practitioners who are responsible for technology management of software, with titles such as delivery manager, director, vice president, president, chief information officer, and chief technology officer.

Data Analysis Tools

VOSviewer, a freeware tool, is used for constructing and viewing bibliometric maps. VOSviewer provides graphical representations of large bibliometric maps in a way that can be easily interpreted (Van Eck & Waltman, 2010). The tool was developed by Leiden University's Center for Science and Technology Studies and has been made freely available to the bibliometric research community (Cobo, López-Herrera, Herrera-Viedma, & Herrera, 2011). VOSviewer uses mapping and clustering techniques based on visualization of similarities (Heersmink et al., 2011). The VOS mapping technique is a distance-based bibliometric network technique and an alternative to multidimensional scaling (Van Eck & Waltman, 2010). The VOS clustering technique is a distance-based bibliometric technique and a variant of the modularity function of Newman and Girvan (Waltman, van Eck, & Noyons, 2010).

VOSviewer version 1.6.4, released on April 7, 2016, was downloaded for conducting this research. This version includes new features and improvements to citation networks, keyword co-occurrence networks, the automated user interface, and so on.

Data Collection and Analysis Approach

Steps involved in identifying software quality management best practices are depicted in Figure 3.

Step 1: Retrieve data

Step 2: Pre-processing data

Step 3: Export data for analysis

Step 4: Perform analysis

Step 5: Interpret findings

Figure 3. Data Analysis for Identifying Software Quality Management Best Practices

Step 1 (Retrieve Data)

Keywords selected were “Software quality*” or “Software assurance” or “Software management” or “Software improvement” or “Software process improvement” or “Software process assessment” or “Software process innovation” or “Software process metrics” or “Software process engineering” or “Software process technology” or “Software process evaluation” or “Software process management” or “Software process appraisal” or “Software process certification” or “Software process audit” or “Software product assessment” or “Software product evaluation” or “Software product management” or “Software product appraisal” or “Software product certification” or “Software product audit” or “Software product metrics” or “Software product improvement”. “Software quality*” covers keywords such as “Software quality” or “Software quality plan” or “Software quality metrics” or “Software quality engineering” or “Software quality science” or “Software quality technology” or “Software quality assurance” or “Software quality control” or “Software quality management”. Not all journals publish keywords. To overcome this limitation, document titles and abstracts along with keywords were used to search the Science Citation Index (SCI), Social Sciences Citation Index (SSCI), and Emerging Sources Citation Index (ESCI) compiled in the Institute for Scientific Information (ISI) Web of Knowledge databases using the selected keywords. The Arts & Humanities Citation Index (A&HCI), which is part of the ISI Web of Knowledge databases, was excluded. The search filter was set to include scholarly articles written in English and to exclude any documents not in English or any non-scholarly documents such as reviews, proceedings papers, meeting abstracts, editorial material, letters, notes, discussions, book reviews, book chapters, and bibliographies.
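The Boolean keyword search described above can be sketched programmatically. The snippet below is illustrative only: the `TS=` topic field tag (Web of Science's title/abstract/keyword search) and the truncated keyword list are assumptions about how such a query string might be assembled, not the exact query submitted for this research.

```python
# Illustrative sketch: assembling a Boolean topic query from the keyword list.
# TS= is Web of Science's topic field tag (titles, abstracts, keywords); the
# exact query submitted for this research may have differed in detail.
keywords = [
    'Software quality*', 'Software assurance', 'Software management',
    'Software improvement', 'Software process improvement',
    'Software process assessment', 'Software product assessment',
    'Software product evaluation',
    # ... remaining phrases from the keyword list above, omitted for brevity
]

# Quote each phrase and join with OR inside the topic field tag.
query = 'TS=(' + ' OR '.join(f'"{k}"' for k in keywords) + ')'
print(query)
```

Quoting each phrase keeps multi-word terms intact, and the trailing `*` in “Software quality*” acts as a truncation wildcard covering the variant keywords listed above.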

Step 2 (Pre-processing Data)

To verify and improve the validity of the results, a multidimensional verification approach based on the activities listed below was utilized.

1. Retrieved results were ranked using the “document type” field. This resulted in a record count with document type “Article”. This count was used to verify that the results did not include conference proceedings papers or book chapters.

2. Retrieved results were ranked using the “languages” field. This resulted in a record count with language “English”. This count was used to verify that the results included articles written in English only.

3. Retrieved results were ranked using the “Source Titles” field. This listing was used to verify that the results came from scholarly journals, including publications produced by eminent professional organizations such as IEEE, ACM, and IET.

4. Retrieved results were ranked using the “research area” field. Research areas that account for at least 5% of the total results were reviewed. This information was used to verify the research areas in which software quality management literature gets published.

5. Retrieved results were ranked using the “Web of Science Categories” field. Web of Science Categories that account for at least 5% of the total results were reviewed. This information was used to verify the Web of Science Categories in which software quality management literature gets published.

6. Retrieved results were ranked using the “Publication Years” field. This information was used to verify that the results represent contemporary thinking and practices while taking all available results into account.

7. Retrieved results were ranked using the “Organizations - Enhanced” field. This was used to verify the diversity of organizations that produced the source data, which include higher educational research institutions and multinational organizations across the globe.

8. Retrieved results were ranked using the “Author” field. This was used to verify the diversity of authors that produced the source data.

9. Retrieved results were ranked using the “Funding agency” field. This was used to verify the diversity of funding agencies behind the data.

10. Retrieved results were ranked using the “Countries/Territories” field. This was used to verify that the results represent researchers across the globe.

11. Abstracts of retrieved documents were randomly examined to verify that the search returned documents related to the software quality management discipline. Any documents not related to software quality management were excluded.

Step 3 (Export Data for Analysis)

No filtering was set on the output, so that all records, including cited references, could be examined and retrieved for export. The full record, including cited references, was retrieved. Due to the download limit of 500 publications at a time from the Web of Science databases, downloading was performed in several batches, resulting in multiple export files.
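Given the 500-record download limit, the record range for each export batch can be computed as sketched below. This is only an illustration of the batching arithmetic; the actual export was performed manually through the database interface.

```python
# Illustrative sketch: computing export batch ranges under a 500-record
# download limit. The actual export was performed through the database UI.
def batch_ranges(total_records, batch_size=500):
    """Return (start, end) record ranges, 1-indexed and inclusive."""
    return [(start, min(start + batch_size - 1, total_records))
            for start in range(1, total_records + 1, batch_size)]

# Example with a hypothetical result set of 1613 records.
print(batch_ranges(1613))  # [(1, 500), (501, 1000), (1001, 1500), (1501, 1613)]
```

The final batch is simply whatever remains after the full batches of 500.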

Step 4 (Perform Analysis)

To overcome the limitations of individual bibliometric techniques, hybrid techniques appropriate to answering the research questions were used (Soos et al., 2013; Zupic & Čater, 2015). A combination of advanced bibliometric techniques, document co-citation analysis, and co-word analysis was used to reveal the intellectual structure of the software quality management discipline, to identify best practices of the discipline, and to differentiate between trivial and non-trivial ones. Based on the data obtained in Step 3, VOSviewer software was used to perform document co-citation analysis and co-word analysis and to provide visualization maps. These visualization maps were analyzed to reveal the best practices of the software quality management discipline.

Step 5 (Interpret Findings)

Results were described and findings were interpreted to derive software quality management best practices. Weightings are assigned to reflect the relative importance of organizational environment, people, process, and technology in software development. The initial draft of the self-administered questionnaire is reviewed with committee members and refined based on their feedback and comments before being administered to all identified sample participants.
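The weighting step can be illustrated with a small sketch. All weight and score values below are hypothetical placeholders, since the actual weights are derived from the bibliometric analysis described in this chapter.

```python
# Illustrative sketch: combining per-dimension assessment scores with weights.
# The equal weights and the dimension scores are hypothetical placeholders;
# actual weights are derived from the bibliometric analysis.
weights = {"organizational environment": 0.25, "people": 0.25,
           "process": 0.25, "technology": 0.25}

def overall_score(dimension_scores, weights):
    """Weighted average of mean dimension scores (weights sum to 1)."""
    return sum(weights[d] * s for d, s in dimension_scores.items())

# Hypothetical mean Likert scores per dimension.
scores = {"organizational environment": 4.0, "people": 3.5,
          "process": 4.5, "technology": 3.0}
print(overall_score(scores, weights))  # 3.75
```

Unequal weights would let a dimension that the literature identifies as more important contribute proportionally more to the overall assessment.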

The assessment tool is constructed, reviewed by academic experts, and refined. It is then pilot tested by industry experts from organizations of different geographies and sizes and further refined. The final assessment tool is validated by industry practitioners from organizations of different geographies and sizes. This approach considers both the theoretical and practical aspects of the assessment tool.

A self-administered online questionnaire using Qualtrics software, referred to as a web survey or Internet survey, is utilized to obtain feedback from academic experts, industry experts, and industry practitioners. Indiana State University approved Qualtrics as the tool for conducting surveys. A web survey offers several advantages over traditional paper-based surveys, such as real-time data collection, convenience, higher response rates, and real-time availability of results. An invitation-only survey email is sent to each participant from Qualtrics. Requiring participants to create unique credentials to take part in a survey could discourage participation; instead, Qualtrics provides a unique link that each participant can use to participate in the survey without creating credentials. Respondents could withdraw their results from the survey at any time. Individual feedback or responses are not published. Access to the survey, the list of participants, and the results in Qualtrics is available to the researcher only.

Feedback from academic experts and industry experts is used to refine the questionnaire and develop a final quality management assessment tool for industrial practitioners. Validation of the quality management assessment tool is conducted by industry practitioners.

Institutional Review Board (IRB)

As the research involves human subjects (academic experts, industry experts, and industry practitioners), the IRB guidelines provided by the Indiana State University Institutional Review Board are followed. The mandatory training provided by the IRB is taken and approval to proceed is received prior to interaction with experts. Research material, including the draft assessment tool questionnaire, is reviewed with committee members, refined based on feedback, and submitted to the IRB.

Experts and participants in this study are informed of the research goals and background information. They are also informed that the information provided would remain anonymous.

Validity

There are four types of validity threats: statistical conclusion validity, internal validity, construct validity, and external validity.

Internal validity or Credibility

Internal validity pertains to the issue of whether the research results consistently represent reality (França, Da Silva, de LC Felix, & Carneiro, 2014). Internal validity threats arise from instrumentation, selection, or maturation.

Instrumentation threat: a badly designed self-administered questionnaire can lead to misinterpretation regarding the research and weaken the research results. To minimize this threat, the initial draft of the self-administered questionnaire was reviewed with committee members and refined based on their feedback and comments before being administered to all identified sample participants.

Maturation threat can be caused by changes in participants' behavior, such as losing interest or acquiring new knowledge. To minimize this threat, the self-administered questionnaire is designed precisely and accurately so that it can be completed within one hour.

Selection threat is caused by human performance and potential bias of participants. To minimize this threat, the sample consists of academic and industry experts with several years of experience who serve on editorial boards of top journals, and industry practitioners with several years of experience who have authored at least one scholarly article in a top journal. Using academic experts for assessment tool review, industry experts for pilot testing, and industry practitioners for validation improves validity and credibility. Not all participants may have the same competencies; however, this is considered an advantage for refining the assessment tool from different perspectives rather than a weakness.

External validity or Transferability

External validity threat is caused if the sample selected is not representative of the population, or if the user or reader is unable to decide to what extent the findings can be applied to specific situations.

To minimize the threat of the sample experts not being representative of the population, academic and industry experts who represent editorial boards of top journals, are competent in software quality management, and represent different geographies and business domains are selected. These represent thought leaders on software quality management. To minimize the threat of the sample practitioners not being representative of the population, industry practitioners who have authored at least one scholarly article in a top journal are selected. These represent leading industrial practitioners of software quality management. Together, these selections consider both the theoretical and practical aspects of the assessment tool.

This study uses a type of purposive sampling known as criterion sampling to select experts and industry practitioners based on defined criteria. Some researchers label this judgment sampling. Even though the sample size (more than one academic expert for review, more than one industry expert for pilot testing, and more than one industry practitioner for validation) is small, it provides the required focus and diversity to obtain feedback on the assessment tool from participants who have the necessary skills, knowledge, experience, and expertise. As the sample volunteers recruited are diverse enough to fulfill the stated purpose of developing the assessment tool, the small sample size is generally adequate for purposeful sampling (Koerber & McMichael, 2008).

To minimize the threat of the user or reader being unable to decide to what extent the findings can be applied to specific situations, a detailed description of the research methodology, the context in which the research was performed, and the results are presented. The defined search terms in the scholarly journals resulted in primary sources all written in the English language; studies written in other languages are excluded. The scholarly journals contained sufficient information to represent the knowledge reported by previous researchers and professionals.

Construct validity

Construct validity threat is caused by mono-operation bias, mono-method bias, or evaluation apprehension. Mono-operation bias is introduced by a single subject; mono-method bias is introduced by a single method. To minimize these threats, three groups (academic experts, industry experts, and industry practitioners) with different experiences and backgrounds, with more than one participant in each group, are selected. Evaluation apprehension is introduced when subjects feel that they are being evaluated and do not provide honest responses. Experts and practitioners were informed that responses are anonymous and are used by the researcher for research purposes only.

Conclusion Validity

Unterkalmsteiner et al. (2015) identify three threats to conclusion validity: random heterogeneity of subjects, random irrelevancies in the experimental setting, and searching for a certain result. To minimize the threat caused by searching for a certain result, all responses from the academic expert review, industry expert pilot testing, and industry practitioner validation, whether positive or negative, are utilized. To minimize the threat caused by random heterogeneity of subjects, experts are selected based on competencies in software quality, engineering management, and technology management. To minimize the threat caused by random irrelevancies in the experimental setting, there are no prior discussions or information sharing with subjects before sending out the self-administered questionnaire.

Chapter Summary

This chapter explained the research design and data sources. Next, it presented the data collection tools and the data analysis approach. Finally, it presented the Institutional Review Board process and the validity of the research.


CHAPTER 4

RESULTS

Chapter Overview

The purpose of this chapter is to provide the results of the research to develop a quality management assessment tool to evaluate software based on software quality management practices, using the research methodology described in Chapter 3.

Results

Data Retrieval

Using the pre-defined terms identified in Chapter 3, document titles and abstracts along with keywords were used to search the Science Citation Index (SCI), Social Sciences Citation Index (SSCI), and Emerging Sources Citation Index (ESCI) compiled in the Institute for Scientific Information (ISI) Web of Knowledge databases. The Arts & Humanities Citation Index (A&HCI), which is part of the ISI Web of Knowledge databases, was excluded. The initial search returned 2242 results that included scholarly articles, proceedings papers, book chapters, and articles written in different languages. The search filter was set to exclude proceedings papers and book chapters and to include only scholarly articles written in English. This returned 1613 results. The total number of records searched was 54,806,269. The search was performed on August 14, 2016 and returned a result count of 1613; it was performed again on August 16, 2016 and again returned a result count of 1613.

Data Pre-processing

To verify the validity of the retrieved results, the results were analyzed by performing the tasks listed below.

1. Retrieved results were ranked using the “document type” field. This resulted in a record count of 1613 with document type “Article”. This count was used to verify that the results did not include conference proceedings papers or book chapters.

2. Retrieved results were ranked using the “languages” field. This resulted in a record count of 1613 with language “English”. This count was used to verify that the results included articles written in English only.

3. Retrieved results were ranked using the “Source Titles” field. This retrieved 386 journals with record counts that vary between 1 and 143. The previous versions of the Journal of Software: Evolution and Process (Software Engineering Journal, Journal of Software Maintenance: Research and Practice, and Journal of Software Maintenance and Evolution: Research and Practice) are listed as different line items in the listing; these were combined and counted under Journal of Software: Evolution and Process. This listing was used to verify that the results included scholarly journals, including publications produced by eminent professional organizations such as IEEE, ACM, and IET. The top 20 source titles are listed in Table 3.

Table 3

Productive Journals

Source Titles Records

SOFTWARE QUALITY JOURNAL 143
INFORMATION AND SOFTWARE TECHNOLOGY 129
JOURNAL OF SYSTEMS AND SOFTWARE 117
JOURNAL OF SOFTWARE EVOLUTION AND PROCESS 102
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING 84
IEEE SOFTWARE 55
INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING 45
EMPIRICAL SOFTWARE ENGINEERING 45
COMPUTER 30
IET SOFTWARE 23
SOFTWARE PRACTICE EXPERIENCE 18
IEEE TRANSACTIONS ON RELIABILITY 18
LECTURE NOTES IN COMPUTER SCIENCE 17
COMPUTER STANDARDS INTERFACES 17
EXPERT SYSTEMS WITH APPLICATIONS 16
INFORMATION MANAGEMENT 15
COMMUNICATIONS OF THE ACM 15
INFORMATION SCIENCES 13
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT 13
MIS QUARTERLY 12
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS 11

4. Retrieved results were ranked using the “research area” field. Research areas that account for at least 5% of the total results were reviewed. The research areas retrieved, based on the Web of Knowledge classification, were Computer Science, Engineering, and Business Economics. This information was used to verify the research areas in which software quality management literature gets published.

5. Retrieved results were ranked using the “Web of Science Categories” field. Web of Science Categories that account for at least 5% of the total results were reviewed. These are Computer Science Software Engineering, Computer Science Information Systems, Engineering Electrical Electronic, Computer Science Theory Methods, Computer Science Hardware Architecture, Computer Science Artificial Intelligence, and Management.

6. Retrieved results were ranked using the “Publication Years” field. 80% of the results retrieved were produced after 1996. This indicated that the software quality management literature grew tremendously during the last 20 years. It also verified that the results represent contemporary thinking and practices while taking all available results since 1974 into consideration. Retrieved results organized by publication year are depicted in Figure 4.

Figure 4. Journal Count by Year

Retrieved results get cited in scholarly articles. Journal citation counts of the retrieved results were reviewed; these are depicted in Figure 5.

Figure 5. Journal Citation Count by Year

7. Retrieved results were ranked using the “Organizations - Enhanced” field. This was used to verify the diversity of organizations that produced the source data, which include higher educational research institutions and multinational organizations across the globe. The top 25 contributing organizations are listed in Table 4.

Table 4

Productive Organizations

Organizations-Enhanced Records

FLORIDA STATE UNIVERSITY SYSTEM 78
FLORIDA ATLANTIC UNIVERSITY 61
CARNEGIE MELLON UNIVERSITY 29
BLEKINGE INST TECHNOL 29
UNIVERSITY OF MICHIGAN SYSTEM 24
UNIVERSITY OF MICHIGAN 24
UNIVERSITY SYSTEM OF MARYLAND 22
INTERNATIONAL BUSINESS MACHINES IBM 20
POLYTECHNIC UNIVERSITY OF MADRID 19
UNIVERSITY OF MARYLAND COLLEGE PARK 18
PENNSYLVANIA COMMONWEALTH SYSTEM OF HIGHER EDUCATION PCSHE 18
UNIVERSITY SYSTEM OF GEORGIA 17
KING FAHD UNIVERSITY OF PETROLEUM MINERALS 17
UNIVERSITY OF TECHNOLOGY SYDNEY 16
UNIVERSIDAD DE CASTILLA LA MANCHA 16
UNIVERSITY OF OULU 15
UNIVERSITY OF NEW SOUTH WALES 15
UNITED STATES DEPARTMENT OF DEFENSE 15
AT T 15
UNIVERSITY OF ALBERTA 14
EINDHOVEN UNIVERSITY OF TECHNOLOGY 14
UNIVERSITY OF PITTSBURGH 13
UNIVERSITY OF CALIFORNIA SYSTEM 13
QUEENS UNIVERSITY CANADA 12
CHALMERS UNIVERSITY OF TECHNOLOGY 12

8. Retrieved results were ranked using the “Author” field. The top 25 productive authors are depicted in Table 5. These include authors from across the globe.

Table 5

Productive Authors

Author Count

KHOSHGOFTAAR TM 61
SELIYA N 19
PIATTINI M 16
ALLEN EB 15
GORSCHEK T 14
WOHLIN C 12
KEMERER CF 12
HASSAN AE 10
HALL T 10
GARCIA F 10
GAO KH 10
BRIAND LC 10
PEDRYCZ W 9
NIAZI M 9
SLAUGHTER SA 8
MATHIASSEN L 8
KRISHNAN MS 8
EL EMAM K 8
ALSHAYEB M 8
NIELSEN PA 7
LIU H 7
KITCHENHAM B 7
JUNG HW 7
ADAMS B 7
TURHAN B 6

9. Retrieved results were ranked using the “Funding agency” field. The top 25 funding agencies are depicted in Table 6. These include funding agencies from across the globe.

Table 6

Top Funding Agencies

Funding Agency Count

NATIONAL NATURAL SCIENCE FOUNDATION OF CHINA 41
SCIENCE FOUNDATION IRELAND 11
NSF 9
EUROPEAN COMMISSION 8
NATIONAL BASIC RESEARCH PROGRAM OF CHINA 7
NSERC 6
KING FAHD UNIVERSITY OF PETROLEUM AND MINERALS 6
CNPQ 6
AUSTRALIAN RESEARCH COUNCIL 6
SPANISH MINISTRY OF SCIENCE AND TECHNOLOGY 5
NATIONAL SCIENCE FOUNDATION 5
LERO THE IRISH SOFTWARE ENGINEERING RESEARCH CENTRE 5
CAPES 5
SFI PRINCIPAL INVESTIGATOR PROGRAMME 4
SCIENCE FOUNDATION IRELAND SFI 4
NATIONAL SCIENCE COUNCIL TAIWAN 4
MINISTRY OF EDUCATION SCIENCE AND TECHNOLOGY 4
KUWAIT UNIVERSITY 4
ACADEMY OF FINLAND 4
SFI 3
SCIENTIFIC AND TECHNOLOGICAL RESEARCH COUNCIL OF TURKEY 3
SCIENCE FOUNDATION IRELAND SFI STOKES LECTURESHIP PROGRAMME 3
PROGRAM FOR NEW CENTURY EXCELLENT TALENTS IN UNIVERSITY 3
NATURAL SCIENCES AND ENGINEERING RESEARCH COUNCIL OF CANADA 3
NATIONAL SCIENCE COUNCIL NSC OF TAIWAN 3

10. Retrieved results were ranked using the “Countries/Territories” field. This verified that the results represent researchers across the globe. The top 25 contributing countries based on productivity are depicted in Table 7.

Table 7

Productive Countries

Countries/Territories Records

USA 518
ENGLAND 136
CANADA 117
PEOPLES R CHINA 99
GERMANY 83
SPAIN 82
TAIWAN 64
AUSTRALIA 64
ITALY 63
SWEDEN 53
FINLAND 46
NETHERLANDS 45
SOUTH KOREA 41
FRANCE 41
JAPAN 39
TURKEY 37
BRAZIL 37
NORWAY 35
INDIA 35
IRELAND 29
DENMARK 26
SAUDI ARABIA 25
AUSTRIA 25
GREECE 22
POLAND 16

11. Abstracts of retrieved documents were randomly examined to verify that the search returned documents related to the software quality management discipline. Based on the verification activities performed, no documents were excluded during this pre-processing step.

Data export

No filtering was set on the output, so that all records, including cited references, could be examined and retrieved for export. The full record, including cited references, was retrieved. Due to the download limit of 500 publications at a time from the Web of Knowledge databases, downloading was performed in four batches, resulting in multiple export files. Batch 1 contained records 1 through 500, Batch 2 records 501 through 1000, Batch 3 records 1001 through 1500, and Batch 4 the remaining records, 1501 through 1613.

Analysis

The top 25 most cited and influential authors are depicted in Table 8.

Table 8

Influential Authors

Authors Records
KHOSHGOFTAAR TM 310
ALLEN EB 148
SELIYA N 142
HALL T 84
NIAZI M 80
HUDEPOHL J 61
KRISHNAN MS 56
GAO KH 55
SLAUGHTER SA 53
KEMERER CF 52
RAINER A 52
JONES WD 49
GORSCHEK T 48
MATHIASSEN L 47
PIATTINI M 46
GARCIA F 44
RAMASUBBU N 42
WOHLIN C 40
NIELSEN PA 35
ZHOU YM 34
EL EMAM K 31
ESCALONA MJ 30
MEJIAS M 30
NAPOLITANO A 30
BRIAND LC 29

The top 25 influential and highly cited countries are depicted in Table 9.

Table 9

Influential Countries

Countries/Territories Citations
USA 978
ENGLAND 464
CANADA 320
AUSTRALIA 257
SPAIN 250
GERMANY 228
DENMARK 218
SWEDEN 207
PEOPLES R CHINA 203
IRELAND 185
TAIWAN 183
NORWAY 180
TURKEY 150
ITALY 141
SAUDI ARABIA 126
FINLAND 125
NETHERLANDS 99
BRAZIL 99
MEXICO 92
SOUTH KOREA 89
INDIA 78
COLUMBIA 73
BELGIUM 72
FRANCE 58
PAKISTAN 52

The top 25 influential and highly cited journals are depicted in Table 10.

Table 10

Influential Journals

Source Titles Records
JOURNAL OF SYSTEMS AND SOFTWARE 385
INFORMATION AND SOFTWARE TECHNOLOGY 373
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING 360
SOFTWARE QUALITY JOURNAL 255
JOURNAL OF SOFTWARE EVOLUTION AND PROCESS 212
IEEE SOFTWARE 184
EMPIRICAL SOFTWARE ENGINEERING 128
INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING 94
MANAGEMENT SCIENCE 68
COMPUTER 55
IET SOFTWARE 53
COMMUNICATIONS OF THE ACM 53
MIS QUARTERLY 45
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT 44
IEEE TRANSACTIONS ON RELIABILITY 41
INFORMATION MANAGEMENT 38
ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY 35
COMPUTER STANDARDS INTERFACES 33
INFORMATION SYSTEMS RESEARCH 32
SOFTWARE PRACTICE EXPERIENCE 31
EXPERT SYSTEMS WITH APPLICATIONS 31
INFORMATION SCIENCES 31

Author co-citation analysis

VOSviewer was configured to perform author co-citation analysis. The type of analysis was set to co-citation, the counting method to full counting, and the unit of analysis to “cited authors (first author only)”. The minimum number of citations of an author was raised from the default value of 20 to 80, which returned the top 20 co-cited authors out of 20,448 authors. The file thesaurus.txt was used to merge names of authors that appeared under different variants in articles: “boehm, b” was replaced with “boehm, bw”, and “fenton, ne” was replaced with “fenton, n”, after verifying that these name variations referred to the same authors. The default minimum cluster size of 1 was changed to 5. Visualization maps were then generated. The network visualization map contained two clusters and is depicted in Figure 6.
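The thesaurus-based name merging amounts to a simple lookup. VOSviewer applies its thesaurus file internally; the sketch below only mimics the effect of the two entries described above and is an illustrative assumption, not part of the study's tooling:

```python
# Sketch: merge cited-author name variants the way a VOSviewer
# thesaurus entry would (label -> replacement).
thesaurus = {
    "boehm, b": "boehm, bw",   # variants verified in the articles
    "fenton, ne": "fenton, n",
}

def normalize(author: str) -> str:
    """Return the canonical author name, merging known variants."""
    key = author.lower()
    return thesaurus.get(key, key)

cited = ["boehm, bw", "boehm, b", "fenton, ne", "fenton, n"]
print([normalize(a) for a in cited])
# ['boehm, bw', 'boehm, bw', 'fenton, n', 'fenton, n']
```

With the variants merged, citation counts for each canonical name can be accumulated before thresholding.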

Figure 6. Network Visualization Author Co-citation

The density visualization map is depicted in Figure 7.

Figure 7. Density Visualization Author Co-citation

Clusters formed based on author co-citations are depicted in Table 11.

Table 11

Author Co-Citation Clusters

Cluster 1: Banker, rd; Boehm, bw; Curtis, b; Dyba, t; Humphrey, ws; Jones, c; Kitchenham, b; Mccall, ja; Niazi, m; Paulk, m
Cluster 2: Basili, vr; Briand, lc; Chidamber, sr; El emam, k; Fenton, n; Khoshgoftaar, tm; Mccabe, tj; Musa, jd; Wohlin, c

VOSviewer software was used to create maps based on bibliographic data from the four batches of files downloaded from the Web of Knowledge databases. VOSviewer was configured to perform co-word analysis. The type of analysis was set to co-occurrence, and the counting method was set to full counting. The unit of analysis was set to all keywords so that both author keywords and KeyWords Plus were taken into consideration. With the default threshold of 5 occurrences per keyword, 286 of 4,460 words met the threshold. The minimum number of occurrences of a keyword was then raised from the default of 5 to 10, and 126 of 4,460 words met the threshold. The default minimum cluster size was unchanged, and visualization maps were generated. The maps contained two clusters: Cluster 1 contained 68 words and Cluster 2 contained 58 words. The network visualization map is depicted in Figure 8.
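The occurrence thresholding works like a simple frequency filter over the keyword counts. The counts below are hypothetical stand-ins for the actual 4,460-keyword data:

```python
from collections import Counter

# Hypothetical keyword occurrence counts standing in for the real
# co-occurrence data extracted by VOSviewer.
occurrences = Counter({
    "software quality": 120,
    "software process improvement": 34,
    "defect prediction": 12,
    "grounded theory": 7,
    "fuzzy logic": 4,
})

def meets_threshold(counts, min_occurrences):
    """Keywords occurring at least min_occurrences times."""
    return [kw for kw, n in counts.items() if n >= min_occurrences]

print(meets_threshold(occurrences, 5))   # drops 'fuzzy logic'
print(meets_threshold(occurrences, 10))  # also drops 'grounded theory'
```

Raising the threshold from 5 to 10 shrinks the keyword set, which is exactly how the study reduced 286 qualifying words to 126.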

Figure 8. Network Visualization Co-word

The density visualization map is depicted in Figure 9.

Figure 9. Density Visualization Co-word

Results analysis

Two clusters were formed based on the co-word analysis. The software quality cluster themes for Cluster 1 are depicted in Table 12.

Table 12

Software Quality Cluster Themes 1

People practices: Adoption, Coordination, Innovation, Experiences, Improvement, Perspectives
Management processes: Management, Project management, Product management, Quality management, Risk management, Knowledge management
Organizational environment: Standards, Technology, Tools, Systems, Models and Frameworks, Methodologies, Processes, Motivators
Best practice models: CMM, CMMI, SPICE, Agile software development
Software engineering processes: Requirements, Requirements engineering, Design, Software development, Software engineering, Implementation
Software quality control processes: Inspections
Process engineering: Software process definition, Software process assessment, Software process improvement
Metrics: Performance, Productivity, Security, Quality, Success factors
Research methods: Empirical study, Empirical analysis, Fuzzy logic, Grounded theory, Simulation

The software quality cluster themes for Cluster 2 are depicted in Table 13.

Table 13

Software Quality Cluster Themes 2

People: Human factors
Software engineering processes: Software architecture, Software quality assurance, Maintenance, Software evolution, Measurements, Metrics, Models
Software quality control processes: Verification, Validation, Software testing, Regression testing
Technology and Development tools: Algorithms, Genetic algorithms, Classification, Classification trees, Neural networks, Machine learning, Code, Modules, Data mining, Defect prediction, Fault prediction, Refactoring, Design patterns, Optimization, Software reuse, Unified Modeling Language, Object oriented design, Object oriented software systems, Open source software
Metrics: Usability, Reliability, Complexity, Cost, Design metrics, Maintainability, Object oriented metrics
Research methods: Case study, Empirical software engineering, Empirical validation, Case-based reasoning, Controlled experiment

Trivial Practices

Keywords representing the same concept or having the same meaning were combined. Words such as “software quality” that were used to retrieve the scholarly journals and articles on software quality management were ignored. Keywords with a low number of occurrences and low total link strength were not considered.
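These pruning rules can be sketched as a filter over (occurrences, total link strength) pairs. The thresholds, query-term list, and counts here are illustrative assumptions only, not the study's actual values:

```python
# Sketch: prune the keyword list before thematic clustering.
# keyword -> (occurrences, total link strength); values are hypothetical.
keywords = {
    "software quality": (120, 800),   # query term, excluded below
    "inspections": (15, 90),
    "regression testing": (12, 70),
    "motivators": (3, 8),             # low occurrence / link strength
}

QUERY_TERMS = {"software quality"}          # terms used to retrieve the articles
MIN_OCCURRENCES, MIN_LINK_STRENGTH = 10, 50  # assumed cut-offs

kept = [
    kw for kw, (occ, links) in keywords.items()
    if kw not in QUERY_TERMS
    and occ >= MIN_OCCURRENCES
    and links >= MIN_LINK_STRENGTH
]
print(kept)  # ['inspections', 'regression testing']
```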

Assessment Tool

Best practices pertaining to organizational environment included in the survey are listed in Table 14.

Table 14

Organizational Environment - Best Practices

Organizational environment best practices (each rated: 1. Strongly Disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly Agree):

• The management at all levels demonstrates strong and determined leadership and commitment to shared vision, mission, values, business strategy, goals, objectives, quality policy, and quality culture.
• The management at all levels respects, trusts, and empowers people across the organization and provides meaningful oversight.
• The organization attracts, recruits, develops, motivates, organizes, excites, and retains competent staff at all levels.
• The top management leads the strategic change management and provides the necessary infrastructure (People, Technology, Processes) to support diffusion and adoption of best practices across the organization.
• The organization provides a work environment that is safe, secure, stress free, and contributes to productivity, work-life balance, quality of work-life, and employee health and well-being.
• The organization provides appropriate compensation, career development paths, rewards, and a recognition system based on contribution and value.

People best practices included in the survey are listed in Table 15.

Table 15

People - Best Practices

People best practices (each rated: 1. Strongly Disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly Agree):

• Staff is competent, self-motivated, and empowered.
• Teams are coherent, cross-functional, and self-organizing, and cooperate, coordinate, communicate, and collaborate for a shared vision.
• Staff is engaged in continuous learning, continuous improvement, competency development, and career development through reflection, problem solving, experimentation, training, coaching, and mentoring.
• Staff embraces the culture of agility and innovation, and contributes to waste elimination, value stream optimization, and adding value to consumers.
• Staff embraces change, adopts, and adapts best practices across the organization.

Process best practices included in the survey are listed in Table 16.

Table 16

Process - Best Practices

Process best practices (each rated: 1. Strongly Disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly Agree):

• Formal and well-defined software engineering (requirements, architecture, design, development, and maintenance) processes are consistently applied across the organization.
• Formal and well-defined software quality control engineering (verification, reviews, inspections, validation, testing, regression testing, and audit) processes are consistently applied across the organization.
• Formal and well-defined software quality assurance engineering (process definition, process assessment, process improvement, process deployment, process quality assurance, product quality assurance, and metrics) processes are consistently applied across the organization.
• Formal and well-defined software configuration management, product integration, software deployment, and software release management processes are consistently applied across the organization.
• A formal and well-defined fact-driven decision making process is consistently applied across the organization.
• A formal and well-defined problem management and root cause analysis process is consistently applied across the organization.
• Formal and well-defined human capital management, workforce planning, competency development, and knowledge management processes are consistently applied across the organization.
• Formal and well-defined engineering management (project management, product management, program management, portfolio management, supplier management, and risk management) processes are consistently applied across the organization.

Technology best practices included in the survey are listed in Table 17.

Table 17

Technology - Best Practices

Technology best practices (each rated: 1. Strongly Disagree, 2. Disagree, 3. Neutral, 4. Agree, 5. Strongly Agree):

• Well-defined engineering standards for requirements, architecture, design, code, and testing are consistently used across the organization.
• Appropriate tools for requirements, architecture, design, code, and testing are consistently used across the organization.
• Continuous integration and continuous delivery practices are applied through automated builds, deployment, and build verification across the organization.
• Reusability, architectural patterns, design patterns, and refactoring are incorporated across the organization.
• Appropriate static analysis tools are used across the organization.
• Appropriate tools are used for knowledge management, project management, program management, portfolio management, quality management, risk management, and product management across the organization.
• Appropriate artificial intelligence based software analytics tools are used across the organization.

The following questions were presented to experts at the end of the survey.

1. In your opinion, does each question and its options represent the intention of assessing software quality using software quality management best practices? Response options are: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree.

2. Do you have any specific feedback on adding, removing, or updating these questions and options? Response is requested in a free-text format.

Factor Weights

Weightings were assigned to reflect the relative importance of organizational environment, people, process, and technology in software development. These are similar to the weightings used in an integrated model for systems engineering-based development at small enterprises (Pittman, 2016), slightly modified, and are represented in Table 18.

Table 18

Factor Weights

Factor Weight
Organizational environment 50%
People 25%
Process 15%
Technology 10%

The following questions were presented to experts at the end of the survey.

1. In your opinion, do the factor weights represent the relative importance of organizational environment, people, process, and technology in software development? Response options are: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree.

2. Do you have any specific feedback on updating the factor weights? Response is requested in a free-text format.

Institutional Review Board

As the research involved human subjects (academic experts, industry experts, and industry practitioners), the IRB guidelines provided by the Indiana State University Institutional Review Board were followed. The mandatory training provided by the IRB was completed, as listed in Appendix B. Academic and industry experts serving on the editorial boards of productive and influential journals were reviewed to verify that the boards were diverse, representing academic and industry experts across different industries and geographies. Industry practitioners who are the most recent authors in productive and influential journals were reviewed to verify that the authors were diverse, covering different industries and geographies. An email listing of academic experts, industry experts, and industry practitioners was prepared. Duplicates were removed so that experts serving on more than one editorial board were not included more than once in the email listing. Productive and influential journals are listed in Appendix C.

The initial draft of the self-administered questionnaire survey was reviewed with committee members and refined based on feedback and comments. The Indiana State University Institutional Review Board reviewed the research material and determined that this research is exempt from IRB review according to federal guidelines, as listed in Appendix D.

Academic Experts’ Review

Upon completion of the IRB review and receipt of the exempt decision, the email listed in Appendix E was sent to academic experts for validation using Qualtrics software. Academic experts participating in this study were informed of the research goals and background information. They were also informed that the information provided would remain anonymous. University affiliation was included in the communication to experts to provide greater confidence that responses would remain anonymous. The informed consent listed in Appendix F was included as part of the Qualtrics survey.

Three academic experts volunteered to take the survey. Two of the three agreed that each question and its options represent the intention of assessing software quality using software quality management best practices; one disagreed. None of the experts provided specific feedback on adding, removing, or updating the questions and options. Since the participant who disagreed provided no specific feedback, no changes were incorporated into the assessment tool questions and options. Results of academic expert feedback on best practices are summarized in Table 19.

Table 19

Academic Expert Feedback On Best Practices

Response Count Percentage

Strongly disagree - -

Disagree 1 33%

Neutral - -

Agree 2 67%

Strongly Agree - -

Two of the three academic experts disagreed that the factor weights represent the relative importance of organizational environment, process, people, and technology in software development. One of them suggested reducing the weight on organizational environment; the other did not provide feedback on updating the factor weights. The third expert responded as neutral and suggested that people should have more impact. Results are summarized in Table 20.

Table 20

Academic Expert Feedback On Factor Weight

Response Count Percentage

Strongly disagree - -

Disagree 2 67%

Neutral 1 33%

Agree - -

Strongly Agree - -

Based on the analysis of feedback from academic experts, the factor weights were adjusted to accommodate the feedback, and no changes were made to the best practices. Revised factor weights are represented in Table 21.

Table 21

Revised Factor Weight Based On Academic Expert Feedback

Factor Weight
People 40%
Organizational environment 35%
Process 15%
Technology 10%

Industry Experts’ Pilot Test

The revised survey, based on feedback obtained from academic experts, was reviewed with committee members and sent to industry experts for a pilot test via the email listed in Appendix E using Qualtrics software. Industry experts participating in this study were informed of the research goals and background information. They were also informed that the information provided would remain anonymous. University affiliation was included in the communication to experts to provide greater confidence that responses would remain anonymous. The informed consent listed in Appendix F was included as part of the Qualtrics survey.

Three industry experts volunteered to take the survey. Overall score is calculated based on the scoring listed in Table 22.

Table 22

Scoring Table

Response Score

Strongly Disagree 1 out of 5

Disagree 2 out of 5

Neutral 3 out of 5

Agree 4 out of 5

Strongly Agree 5 out of 5
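The 30-point maximum reported in Table 23 is consistent with a scheme in which each factor's 1-5 Likert item scores are summed and multiplied by the factor's weight (item counts from Tables 14-17, revised weights from Table 21). The text does not state the formula explicitly, so the sketch below is an inference from the reported maximums, not the study's documented computation:

```python
# Sketch of one weighted-scoring scheme consistent with the reported
# "out of 30" maximum: factor weight * sum of 1-5 Likert item scores.
# Item counts taken from Tables 14-17; weights from Table 21 (assumed).
weights = {"people": 0.40, "organizational environment": 0.35,
           "process": 0.15, "technology": 0.10}
item_counts = {"people": 5, "organizational environment": 6,
               "process": 8, "technology": 7}

def weighted_score(scores_by_factor):
    """Overall weighted assessment score from per-factor Likert scores."""
    return sum(weights[f] * sum(scores) for f, scores in scores_by_factor.items())

# Maximum possible score: every item rated 5 (Strongly Agree).
max_score = weighted_score({f: [5] * n for f, n in item_counts.items()})
print(round(max_score, 2))  # 30.0
```

Under this reading, the later 30.25-point maximum in Table 27 would follow from the revised weights and the changed item counts of the final tool in Appendix I.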

Pilot test results from these three industry experts are listed in Appendix G. Overall weighted assessment score is listed in Table 23.

Table 23

Industry Expert Pilot Test Results – Weighted Score

Industry Expert Overall assessment score (Weighted)

Expert #1 22.8 out of 30

Expert #2 14.55 out of 30

Expert #3 23.15 out of 30


Two of the three industry experts agreed that each question and its options represent the intention of assessing software quality using software quality management best practices; one expert strongly disagreed. Additional feedback from an industry expert on adding, removing, or updating the questions and options was as follows: “You seem to be equating formally defined practices that are standardized across the organization with software quality. The two have a very weak link and in my experience localized well defined practices are much more effective. We have an overall well defined approach that all of our projects use, but they define their own interpretation of software engineering practice in their local context.”

Results are summarized in Table 24.

Table 24

Industry Expert Feedback On Best Practices

Response Count Percentage

Strongly disagree 1 33%

Disagree - -

Neutral - -

Agree 2 67%

Strongly Agree - -

Two of the three industry experts provided a neutral response regarding whether the factor weights represent the relative importance of organizational environment, process, people, and technology in software development. The third industry expert agreed that the factor weights represent this relative importance. One of these experts provided additional feedback on updating the factor weights as follows: “It's very difficult to attribute these factors to Delivery quality in any widespread way. If anything I'd say the "technology" part is a bit low - modern analysis tools are quite effective and good testing practice (e.g. BDD) appears to have improved delivery quality significantly.” Results are summarized in Table 25.

Table 25

Industry Expert Feedback On Factor Weight

Response Count Percentage

Strongly disagree - -

Disagree - -

Neutral 2 67%

Agree 1 33%

Strongly Agree - -

Based on the analysis of feedback from industry experts, the process and technology best practices were modified; the revised best practices are included in Appendix H. Additionally, the factor weights were adjusted to accommodate the feedback. Revised factor weights are represented in Table 26.

Table 26

Revised Factor Weights Based on Industry Expert Feedback

Factor Weight
People 40%
Organizational environment 30%
Process 15%
Technology 15%

Industry Practitioners’ Validation

The revised best practices based on feedback obtained from industry experts were reviewed with committee members, and the final assessment tool for industry practitioners was prepared. At the end of the survey, the raw score and weighted score were presented to survey participants, and feedback on strengths and opportunities for improvement was requested from industry practitioners.

Final assessment tool listed in Appendix I was reviewed with committee members and sent to industry practitioners for validation via email as listed in Appendix E using Qualtrics software.

Industry practitioners participating in this study were informed of research goals and background information. Industry practitioners were also informed that information provided would remain anonymous. University affiliation was included in the communication to industry practitioners to provide greater confidence that responses would remain anonymous. The informed consent as listed in Appendix F was included as part of the Qualtrics survey.

Three industry practitioners volunteered to take the survey. One of the participants provided feedback on the strengths and added value of the assessment tool and on opportunities for improvement.

Strengths and value add of assessment tool: “There is an implication that a higher score would yield higher quality software - Since the assessment mechanism is not well understood (by me), I would be suspect that there is such a correlation. The assessment seems highly subjective.”

Opportunities for improvements: “You would need to determine whether the qualities you are asking about ARE actually correlated with improved software quality. I would be skeptical that your lists highly correlate.”

Validation results from these three industry practitioners are listed in Appendix J. Overall weighted assessment score is listed in Table 27.

Table 27

Industry Practitioner Validation Results – Weighted Score

Industry Practitioner Overall assessment score (Weighted)

Practitioner #1 21.95 out of 30.25

Practitioner #2 23.6 out of 30.25

Practitioner #3 26.05 out of 30.25

Chapter Summary

This chapter provided the results of the research to develop a quality management assessment tool to evaluate software based on software quality management best practices, following the research methodology.


CHAPTER 5

DISCUSSION, CONCLUSIONS, AND RECOMMENDATIONS

Chapter Overview

The purpose of this chapter is to discuss how the quality management assessment tool to evaluate software based on software quality management practices was developed, reviewed, pilot tested, refined, and validated; to provide general conclusions from this study; and to recommend directions for further study. The results of this study provide valuable feedback from academic experts, industry experts, and industry practitioners.

Discussion of Results

The purpose of the study is to develop a quality management assessment tool to evaluate software using software quality management best practices.

Assessment Tool – Initial Version

The initial version of the assessment tool was developed based on best practices derived from the scholarly literature using bibliometric techniques. Weightings were assigned to reflect the relative importance of organizational environment, people, process, and technology in software development. The initial version of the assessment tool was reviewed with committee members and revised.

Assessment Tool Revised – Academic Experts’ Review

The assessment tool was sent to academic experts for review using a Qualtrics survey. Three academic experts volunteered to take the survey. They were asked whether each question and its options represent the intention of assessing software quality using software quality management best practices and whether the factor weights needed to be updated. Two of the three agreed that each question and its options represent this intention; one disagreed. None of the experts provided specific feedback on adding, removing, or updating the questions and options. Since the participant who disagreed provided no specific feedback, no changes were incorporated into the assessment tool questions and options.

Two of the three academic experts disagreed that the factor weights represent the relative importance of organizational environment, process, people, and technology in software development. One of them suggested reducing the weight on organizational environment; the other did not provide feedback on updating the factor weights. The third expert responded as neutral and suggested that people should have more impact. The factor weights were adjusted to accommodate this feedback, and no changes were made to the best practices. This resulted in revising the weight assigned to organizational environment best practices from 50% to 35% of the overall weight, and the weight assigned to people best practices from 25% to 40% of the overall weight. The weights assigned to process and technology best practices were unchanged.

Assessment Tool Revised – Industry Experts’ Pilot Testing

The assessment tool, revised based on feedback obtained from academic experts, was reviewed with committee members and sent to industry experts for pilot testing using a Qualtrics survey. Industry experts were asked whether each question and its options represent the intention of assessing software quality using software quality management best practices and whether the factor weights needed to be updated. Two of the three industry experts agreed that each question and its options represent this intention. One expert strongly disagreed and provided feedback. This feedback was reviewed and incorporated into the assessment tool by updating the process best practices to allow tailoring of organizational process best practices and by adding behavior-driven development to the technology best practices.

Two of the three industry experts provided a neutral response regarding whether the factor weights represent the relative importance of organizational environment, process, people, and technology in software development. The third industry expert agreed that they do. One of these experts provided additional feedback on updating the factor weights, and the weights were adjusted to accommodate this feedback. This resulted in revising the weight assigned to organizational environment best practices from 35% to 30% of the overall weight, and the weight assigned to technology best practices from 10% to 15% of the overall weight. The weights assigned to people and process best practices were unchanged.

Assessment Tool Final – Industry Practitioner Validation

Three industry practitioners volunteered to take the survey. Participants were asked to provide feedback on strengths and added value, and on opportunities for improvement. Two of the three validated the tool, received their scores, and did not provide any specific feedback. One participant validated the tool, received the score, and provided feedback on the strengths and added value of the assessment tool and on opportunities for improvement. This participant commented that the best practices incorporated into the assessment tool are highly subjective and expressed skepticism that they highly correlate with higher-quality software. As future research, additional guidelines and an assessment tool guide can be incorporated into the tool to reduce or eliminate subjectivity in interpreting the questions. Additionally, empirical studies, longitudinal case studies, and benchmarking studies can be conducted to further assess the effectiveness of the best practices in improving software quality.

Results for the five research questions pertaining to this research are discussed below.

Research Question 1

Research question 1 sought to identify software quality management best practices pertaining to people that can be used to evaluate organizations that produce software. The best practices identified are listed below.

• Staff is competent, self-motivated, and empowered.

• Teams are coherent, cross-functional, and self-organizing, and they cooperate, coordinate, communicate, and collaborate for a shared vision.

• Staff is engaged in continuous learning, continuous improvement, competency development, and career development through reflection, problem solving, experimentation, training, coaching, and mentoring.

• Staff embraces a culture of agility and innovation, and contributes to waste elimination, value stream optimization, and adding value to consumers.

• Staff embraces change, and adopts and adapts best practices across the organization.

Research Question 2

Research question 2 sought to identify software quality management best practices pertaining to organizational environment that can be used to evaluate organizations that produce software. The best practices identified are listed below.

• The management at all levels demonstrates strong and determined leadership and commitment to a shared vision, mission, values, business strategy, goals, objectives, quality policy, and quality culture.

• The management at all levels respects, trusts, and empowers people across the organization and provides meaningful oversight.

• The organization attracts, recruits, develops, motivates, organizes, excites, and retains competent staff at all levels.

• The top management leads strategic change management and provides the necessary infrastructure (people, technology, processes) to support diffusion and adoption of best practices across the organization.

• The organization provides a work environment that is safe, secure, and stress free, and that contributes to productivity, work-life balance, quality of work-life, and employee health and well-being.

• The organization provides appropriate compensation, career development paths, and a rewards and recognition system based on contribution and value.

Research Question 3

Research question 3 sought to identify software quality management best practices pertaining to processes that can be used to evaluate organizations that produce software. The best practices identified are listed below.

• Formal and well-defined software engineering (requirements, architecture, design, development, and maintenance) processes are consistently used across the organization and tailored as appropriate.

• Formal and well-defined software quality control engineering (verification, reviews, inspections, validation, testing, regression testing, and audit) processes are consistently used across the organization and tailored as appropriate.

• Formal and well-defined software quality assurance engineering (process definition, process assessment, process improvement, process deployment, process quality assurance, product quality assurance, and metrics) processes are consistently used across the organization and tailored as appropriate.

• Formal and well-defined software configuration management, product integration, software deployment, and software release management processes are consistently used across the organization and tailored as appropriate.

• A formal and well-defined fact-driven decision-making process is consistently used across the organization and tailored as appropriate.

• A formal and well-defined problem management and root cause analysis process is consistently used across the organization and tailored as appropriate.

• Formal and well-defined human capital management, workforce planning, competency development, and knowledge management processes are consistently used across the organization and tailored as appropriate.

• Formal and well-defined engineering management (project management, product management, program management, portfolio management, supplier management, and risk management) processes are consistently used across the organization and tailored as appropriate.

Research Question 4

Research question 4 sought to identify software quality management best practices pertaining to technology that can be used to evaluate organizations that produce software. The best practices identified are listed below.

• Well-defined engineering standards for requirements, architecture, design, code, and testing are consistently used across the organization.

• Appropriate tools for requirements, architecture, design, code, and testing are consistently used across the organization.

• Continuous integration and continuous delivery practices are applied through behavior-driven development, test-driven development, automated builds, deployment, and build verification across the organization.

• Reusability, architectural patterns, design patterns, and refactoring are incorporated across the organization.

• Appropriate static analysis tools are used across the organization.

• Appropriate tools are used for knowledge management, project management, program management, portfolio management, quality management, risk management, and product management across the organization.

• Appropriate artificial intelligence based software analytics tools are used across the organization.

Research Question 5

Research question 5 sought to identify the key elements of a software quality management assessment tool to evaluate organizations that produce software. The key elements of the tool are the people, organizational environment, process, and technology best practices identified in research questions 1 through 4. Additionally, a weight was assigned to the people, organizational environment, process, and technology best practices based on their relative importance, and these weights are used to calculate an overall weighted score for organizations that produce software.
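As an illustration, the weighted scoring just described can be sketched in a few lines of Python. This is a minimal sketch, not the actual instrument: the function name and the per-factor scores (on an assumed 0-100 scale) are hypothetical, while the weights are those reported in this study (people 40%, organizational environment 30%, process 15%, technology 15%).

```python
# Factor weights from this study; together they sum to 100% of the overall weight.
FACTOR_WEIGHTS = {
    "people": 0.40,
    "organizational_environment": 0.30,
    "process": 0.15,
    "technology": 0.15,
}

def overall_weighted_score(factor_scores):
    """Combine per-factor scores (assumed 0-100) into one weighted score."""
    missing = set(FACTOR_WEIGHTS) - set(factor_scores)
    if missing:
        raise ValueError(f"missing factor scores: {sorted(missing)}")
    return sum(FACTOR_WEIGHTS[f] * factor_scores[f] for f in FACTOR_WEIGHTS)

# Hypothetical example: an organization strong on people, weaker on technology.
scores = {
    "people": 80,
    "organizational_environment": 70,
    "process": 60,
    "technology": 50,
}
print(overall_weighted_score(scores))  # single overall score for this example
```

For these hypothetical inputs the score works out to 0.40(80) + 0.30(70) + 0.15(60) + 0.15(50) = 69.5, illustrating how the people factor dominates the result.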

This assessment tool gives organizations the flexibility to designate a senior management representative, such as the organization's quality manager, to work with practitioners in their organizations to conduct periodic assessments quickly, understand strengths and weaknesses, and identify actionable items for improvement. Organizations may choose to take assistance from external consultants who are competent in software quality management best practices to perform objective and unbiased assessments and to help implement software quality management best practices.

Contribution

Quality management strategies must be continuously improved to deliver high-quality software quickly at an affordable price. The key contribution of this research is synthesizing fragmented, disjointed, and incoherent software quality management theory and practice to codify best practices and create an assessment tool that incorporates people, organizational environment, process, and technology best practices. This research fills a gap in both the academic literature and practice, and it complements and enhances the findings of other research on software quality management.

This assessment tool will be valuable to organizations taking advantage of rapid innovations in software. These organizations can use the tool to implement best-practice-based management strategies that lead to improved software quality and competitive advantage in the global marketplace. The comprehensive software quality assessment tool provides specific action items for improvement.

This research contributes to the current academic literature in software quality management by presenting a quality management assessment tool based on software quality management best practices, contributes to the body of knowledge on software quality management, and expands the knowledge base on quality management practices. It also provides a new and rich view of the history of the software quality management discipline by taking a holistic view of the discipline.

This research also contributes to current professional practice by incorporating software quality management best practices into a quality management assessment tool used to evaluate software and to continually improve the knowledge base.

Many other technology management disciplines could benefit in a similar fashion by developing appropriate quality management best practices. Educators, researchers, engineers, journal editors, publishers, professional associations, policy makers, commercial and industrial communities, business and information technology management professionals, and information technology consultants can benefit from a quality management assessment tool that incorporates software quality management best practices to keep up with rapid innovation and the needs of business.

Conclusions

The primary premise of this dissertation is that a holistic approach of incorporating people, organizational environment, process, and technology best practices in organizations that produce software improves software quality, and that these best practices are necessary to evaluate such organizations. This research study indicates that people best practices are more important than organizational environment, process, or technology best practices and carry 40% of the overall weight. Organizational environment best practices are more important than process or technology best practices and carry 30% of the overall weight. Process best practices and technology best practices are equally important and each carry 15% of the overall weight.

Recommendations for Future Research

There are tremendous opportunities for further research in rapidly changing technology management fields such as software and software-driven global industries. Research opportunities and recommendations include, but are not limited to, the following.

• This study utilized voluntary participation from a small sample of three academic experts, three industry experts, and three industry practitioners to allow the necessary focus and diversity for obtaining feedback on the assessment tool from thought leaders and leading industry practitioners who have the requisite skills, knowledge, experience, and expertise. The assessment tool can be further validated by a larger sample of practitioners to perform more robust analysis and improve validity.

• The search for best practices is constant. Following the continuous improvement philosophy of quality management, future research can focus on updating the assessment tool as best practices evolve, in conjunction with additional review by academic experts, additional pilot testing by industry experts, and additional validation by industry practitioners, to further improve the tool.

• Empirical studies and longitudinal case studies can be conducted using the assessment tool before and after implementation of best practices to further assess the effectiveness of the best-practices-based assessment tool in improving software quality.

• Empirical studies and benchmarking case studies can be conducted involving organizations that produce similar software or similar organizations within the same industry.

• Each of the best practices can be further researched while taking the holistic view of people, organizational environment, process, and technology best practices.

• Additional guidelines and an assessment tool guide can be incorporated into the assessment tool to reduce or eliminate subjectivity in interpreting the questions.

• Additional research studies can focus on tailoring the assessment for different organizations that produce software based on size, location, and so on.

• Additional research studies can focus on tailoring the assessment for different types of software.

• Even though this assessment tool has a narrow focus on the assessment of organizations that produce software, other technology management disciplines can reuse its concepts and ideas to develop similar assessment tools.

• Novel research methodologies to identify best practices in technology management disciplines can be explored and compared with existing and evolving research methodologies.

138

REFERENCES

Acedo, F. J., Barroso, C., & Galan, J. L. (2006). The resource‐based theory: dissemination and

main trends. Strategic Management Journal, 27(7), 621-636.

Acuña, S. T., Gómez, M. N., Hannay, J. E., Juristo, N., & Pfahl, D. (2015). Are team personality

and climate related to satisfaction and software quality? aggregating results from a twice

replicated experiment. Information and Software Technology, 57, 141-156.

doi:10.1016/j.infsof.2014.09.002

Adams, B., Kavanagh, R., Hassan, A. E., & German, D. M. (2016). An empirical study of

integration activities in distributions of open source software. Empirical Software

Engineering, 21(3), 960-1001.

Alarifi, A., Zarour, M., Alomar, N., Alshaikh, Z., & Alsaleh, M. (2016). SECDEP: Software

engineering curricula development and evaluation process using SWEBOK. Information

and Software Technology, 74, 114-126. doi:10.1016/j.infsof.2016.01.013

Allison, I., & Narciso, H. (2015). Managing Change in SPI: A Framework for Addressing

Resistance. Software Quality Professional, 17(4).

Alfonzo, P. M., Sakraida, T. J., & Hastings-Tolsma, M. (2014). Bibliometrics: Visualizing the

Impact of Nursing Research. Online Journal Of Nursing Informatics, 18(1), 3-17 15p.

Appio, F. P., Cesaroni, F., & Di Minin, A. (2014). Visualizing the structure and bridges of the

intellectual property management and strategy literature: a document co-citation

analysis. Scientometrics, 101(1), 623-661. 139

Arce, J. F., Daughtrey, T., Duncan, S., Ehrhorn, T., Jowers, L., McQuaid, P., ... & Zubrow, D.

(2013). Are Process Improvement Frameworks Worth the Dysfunction?. Software

Quality Professional Magazine, 15(3).

Arora, A., Caulkins, J. P., & Telang, R. (2006). Research note-sell first, fix later: Impact of

patching on software quality. Management Science, 52(3), 465-471.

Assefa, S. G., & Rorissa, A. (2013). A bibliometric mapping of the structure of STEM education

using co‐word analysis. Journal of the American Society for Information Science and

Technology, 64(12), 2513-2536.

Atoum, I., & Bong, C. H. (2015). Measuring Software Quality in Use: State-of-the-Art and

Research Challenges. Software Quality Professional, 17(2), 4-15.

Avgeriou, P., Kruchten, P., Nord, R. L., Ozkaya, I., & Seaman, C. (2016). Reducing friction in

software development. IEEE Software, 33(1), 66-73. doi:10.1109/MS.2016.13

Axelsson, J., & Skoglund, M. (2016). Quality assurance in software ecosystems: A systematic

literature mapping and research agenda. Journal of Systems and Software, 114, 69-81.

doi:10.1016/j.jss.2015.12.020

Babb, J., Hoda, R., & Norbjerg, J. (2014). Embedding Reflection and Learning into Agile

Software Development. IEEE Software, 31(4), 51-57.

Bakulina, M. P. (2008). Application of the Zipf law to text compression. Journal of Applied and

Industrial Mathematics, 2(4), 477-483.

Baldassarre, M. T., Caivano, D., Pino, F. J., Piattini, M., & Visaggio, G. (2012). Harmonization

of ISO/IEC 9001: 2000 and CMMI-DEV: from a theoretical comparison to a real case

application. Software Quality Journal, 20(2), 309-335. 140

Basili, V. R., Boehm, B., Davis, A., Humphrey, W. S., Leveson, N., Mead, N. R.. . Weyuker, E.

(2004). New year's resolutions for software quality. IEEE Software, 21(1), 12.

doi:10.1109/MS.2004.1259165

Basili, V. R., Lindvall, M., Regardie, M., Seaman, C., Heidrich, J., Münch, J., ... & Trendowicz,

A. (2010). Linking software development and business strategy through

measurement. Computer, 43(4), 57-65.

Bass, J. M., Allison, I. K., & Banerjee, U. (2013). Agile method tailoring in a CMMI level 5

organization: Addressing the paradox. Journal of International Technology and

Information Management, 22(4), 77.

Batarseh, F. A., & Gonzalez, A. J. (2015). Predicting failures in agile software development

through data analytics. Software Quality Journal, 1-18.

Baysal, O., Holmes, R., & Godfrey, M. W. (2013). Developer dashboards: The need for

qualitative analytics. IEEE Software, 30(4), 46-52. doi:10.1109/MS.2013.66

Bavani, R. (2012). Distributed Agile, Agile Testing, and Technical Debt. IEEE Software, 29(6),

28-33.

Beck, K., & Boehm, B. (2003). Agility through discipline: A debate. Computer, 36(6), 44-46.

Belter, C. (2012). Visualizing networks of scientific research. Online-Medford, 36(3), 14.

Behutiye, W. N., Rodríguez, P., Oivo, M., & Tosun, A. (2017). Analyzing the concept of

technical debt in the context of agile software development: A systematic literature

review. Information and Software Technology, 82, 139-158.

Boehm, B. W. (1991). Software risk management: Principles and practices. IEEE Software, 8(1),

32-41. doi:10.1109/52.62930 141

Boehm, B. (2008). Making a difference in the software century. Computer, 41(3), 32-38.

doi:10.1109/MC.2008.91

Boehm, B., Chulani, S., Verner, J., & Wong, B. (2008). Sixth workshop on software quality.

Paper presented at the 1035-1036. doi:10.1145/1370175.1370234

Boehm, B., & Mobasser, S. K. (2015, May). System thinking: educating t-shaped software

engineers. In Proceedings of the 37th International Conference on Software Engineering-

Volume 2 (pp. 333-342). IEEE Press.

Booch, G. (2016). It is cold. and lonely. IEEE Software, 33(3), 7-9. doi:10.1109/MS.2016.85

Bourque, P. & Fairley, R. E. (eds.) (2014). SWEBOK: Guide to the Software Engineering Body

of Knowledge. Los Alamitos, CA: IEEE Computer Society. ISBN: 978-0-7695-5166-1

Braun, T., Schubert, A. P., & Kostoff, R. N. (2000). Growth and trends of fullerene research as

reflected in its journal literature. Chemical reviews, 100(1), 23-38.

Breu, R., Kuntzmann-Combelles, A., & Felderer, M. (2014). New Perspectives on Software

Quality. IEEE Software, 31(1), 32-38. doi:10.1109/MS.2014.9

Briand, L. (2012). Embracing the engineering side of software engineering. IEEE

software, 29(4), 96-96.

Bull, C. N., & Whittle, J. (2014). Supporting reflective practice in software engineering

education through a studio-based approach. IEEE Software, 31(4), 44-50.

doi:10.1109/MS.2014.52

Butler, K. (1995). The economic benefits of software process improvement. Crosstalk, 8(7), 14-

17.

Buschmann, F. (2011). To Pay or Not to Pay Technical Debt. IEEE Software, 28(6), 29-31. 142

Calero-Medina, C., & Noyons, E. C. (2008). Combining mapping and citation network analysis

for a better understanding of the scientific development: The case of the absorptive

capacity field. Journal of Informetrics, 2(4), 272-279. doi:10.1016/j.joi.2008.09.005

Capretz, L. F. (2014). Bringing the human factor to software engineering. IEEE Software, 31(2),

104-104.

Cantor, M., & Royce, W. (2014). Economic governance of software delivery. IEEE

Software, 31(1), 54-61. doi:10.1109/MS.2013.102

Chao, W., Yang, L., Xinhui, L., Zini, L., Michèle, T., & Sovan, L. (2015). A bibliometric

analysis of scientific trends in phytoplankton research. Annales De Limnologie, 51(3),

249-259. doi:10.1051/limn/2015019

Chang, C. K. (2016). Situation analytics: A foundation for a new software engineering

paradigm. Computer, 49(1), 24-33. doi:10.1109/MC.2016.21

Chen, L. C., & Lien, Y. H. (2011). Using author co-citation analysis to examine the intellectual

structure of e-learning: A MIS perspective. Scientometrics, 89(3), 867-886.

Chrissis, M. B., Konrad, M., & Moss, M. (2013). Ensuring Your Development Processes Meet

Today’s Cyber Challenges. CrossTalk, 29.

Cleland-Huang, J. (2013). Are Requirements Alive and Kicking?. IEEE Software, 30(3), 13-15.

Cleland-Huang, J. (2014). Don't Fire the Architect! Where Were the Requirements?. IEEE

Software, 31(2), 27-29. doi:10.1109/MS.2014.34

Cleland-Huang, J. (2016). Requirements That Reflect Social Responsibility. IEEE

Software, 33(1), 109-111.

143

Cobo, M. J., López-Herrera, A. G., Herrera-Viedma, E., & Herrera, F. (2011). Science mapping

software tools: Review, analysis, and cooperative study among tools. Journal Of The

American Society For Information Science & Technology, 62(7), 1382-1402.

doi:10.1002/asi.21525

Cockburn, A., & Highsmith, J. (2001). Agile software development, the people

factor. Computer, 34(11), 131-133. doi:10.1109/2.963450

Cohan, S., & Glazer, H. (2009). An agile development team's quest for CMMI® maturity level 5.

In Agile Conference, 2009. AGILE'09. (pp. 201-206). IEEE.

Conboy, K., Coyle, S., Wang, X., & Pikkarainen, M. (2011). People over process: Key

challenges in agile development. IEEE Software, 28(4), 48.

Cowton, C. J. (1998). The use of secondary data in business ethics research. Journal of Business

Ethics, 17(4), 423-434. doi:10.1023/A:1005730825103

Cundiff, J., McCallum, T., Rich, A., Truax, M., Ward, T., & Havelka, D. (2014). Healthcare.gov:

Opportunity out of Disaster. Journal Of Information Systems Education, 25(4), 289-293.

Culnan, M.J. (1986). The intellectual development of management information systems, 1972–

1982: A co-citation analysis. Management Science, 32, 156–172.

Culnan, M.J. (1987). Mapping the intellectual structure of MIS, 1980–1985: A co-citation

analysis. MIS Quarterly, 11(3), 341–353.

Culnan, M.J., O’Reilly, C.A., & Chatman, J.A. (1990). Intellectual structure of research in

organizational behavior, 1972–1984: A co-citation analysis. Journal of the American

Society for Information Science, 41(6), 453.

144

Curcio, K., Malucelli, A., Reinehr, S., & Paludo, M. A. (2016). An analysis of the factors

determining software product quality: A comparative study. Computer Standards &

Interfaces, 48, 10-18. doi:10.1016/j.csi.2016.04.002

Czerwonka, J., Nagappan, N., Schulte, W., & Murphy, B. (2013). CODEMINE: Building a

software development data analytics platform at microsoft. IEEE Software, 30(4), 64-71.

doi:10.1109/MS.2013.68

Dalal, S., & Chhillar, R. S. (2012). Case studies of most common and severe types of software

system failure. International Journal of Advanced Research in Computer Science and

Software Engineering, 2(8).

Dass, K. R. (2012). Value of ISO 90003 Customer Requirement Guidelines to Improve IT

Project Success. Software Quality Professional, 15(1).

Daughtrey, T. (2013). Software Quality Costs. Software Quality Professional, 15(2), 4-15.

Daughtrey, T. (2014). Liability for Bad Software? A Discussion. Software Quality

Professional, 17(1), 14-16.

Davis, N., Humphrey, W., Redwine, S. T., Zibulski, G., & McGraw, G. (2004). Processes for

producing secure software. IEEE Security & Privacy, 2(3), 18-25.

doi:10.1109/MSP.2004.21

Demirkan, H., Spohrer, J. C., & Welser, J. J. (2016). Digital Innovation and Strategic

Transformation. IT Professional, 18(6), 14-18.

DeMarco, T. (2011). All Late Projects Are the Same. IEEE Software, 28(6), 104.

DeMarco, T., & Boehm, B. (2002). The agile methods fray. Computer, 35(6), 90-92.

Denning, P. (2014). Learning for the new digital age. Communications of the ACM.

doi:10.1145/2644230 145

Denning, P. J., & Denning, D. E. (2016). Cybersecurity is harder than building

bridges. American Scientist, 104(3), 154-157.

Department of Homeland Security (n.d.). The Vision – Software Assurance Marketplace A

transformative force in the software ecosystem. Retrieved December 29, 2015 from

https://www.dhs.gov/sites/default/files/publications/SWAMP-VISION-10.28.13.pdf

Di Stefano, G., Peteraf, M., & Verona, G. (2010). Dynamic capabilities deconstructed: a

bibliographic investigation into the origins, development, and future directions of the

research domain. Industrial and Corporate Change, 19(4), 1187-1204.

Diodato V. (1994). Dictionary of Bibliometrics. Haworth Press: Binghamton, NY.

Dion, R. (1993). Process improvement and the corporate balance sheet. IEEE Software, 10(4),

28-35.

Dingsøyr, T., Nerur, S., Balijepally, V., & Moe, N. B. (2012). A decade of agile methodologies:

Towards explaining agile software development. The Journal of Systems and

Software, 85(6), 1213-1221. doi:10.1016/j.jss.2012.02.033

Dudash, R. (2016). Software Stakeholder Management: It's Not All It's Coded Up to

Be. Software Quality Professional, 18(2), 25-30.

Duncan, S., & Peercy, D. E. (2012). Understanding Agile Concepts and the Role of Quality

(Assurance). Software Quality Professional, 14(4), 13-20.

Dyba, T., & Dingsoyr, T. (2009). What do we know about agile software development?. IEEE

software, 26(5), 6-9.

Dyba, T., Maiden, N., & Glass, R. (2014). The Reflective Software Engineer: Reflective

Practice. IEEE Software, 31(4), 32-36.

146

Earley, S. (2016). There is no AI without IA. IT Professional, 18(3), 58-64.

doi:10.1109/MITP.2016.43

Ebert, C. (2014). Software product management. IEEE Software, 31(3), 21-24.

doi:10.1109/MS.2014.72

Ebert, C. (2015). Looking into the Future. IEEE Software, 32(6), 92-97.

Ebert, C., Abrahamsson, P., & Oza, N. (2012). Lean Software Development. IEEE

Software, 29(5), 22-25.

Ebert, C., Hoefner, G., & Mani, V. S. (2015). What Next? Advances in Software-Driven

Industries. IEEE Software, 32(1), 22-28. doi:10.1109/MS.2015.21

Elbanna, A., & Sarker, S. (2016). The Risks of Agile Software Development: Learning from

Adopters. IEEE Software, 33(5), 72-79.

Eldh, S., & Murphy, B. (2015). Code Ownership Perspectives. IEEE Software, 32(6), 18-19.

Eloranta, V., Koskimies, K., & Mikkonen, T. (2016). Exploring ScrumBut—An empirical study

of scrum anti-patterns. Information and Software Technology, 74, 194-203.

Ellegaard, O., & Wallin, J. A. (2015). The bibliometric analysis of scholarly production: How

great is the impact? Scientometrics, 105(3), 1809-1831. doi:10.1007/s11192-015-1645-z

Estabrooks, C. A., Winther, C., & Derksen, L. (2004). Mapping the field: a bibliometric analysis

of the research utilization literature in nursing. Nursing research, 53(5), 293-303.

Evans, J. R. (2015). Modern Analytics and the Future of Quality and Performance

Excellence. Quality Management Journal, 22(4), 6-17.

Everett, C. (2015). Cyberwar and protecting critical national infrastructure. Computer Fraud &

Security, 2015(11), 11-14.

147

Fox, A., & Patterson, D. (2012). Crossing the software education chasm. Communications of the

ACM, 55(5), 44-49.

Fairley, R. E., & Willshire, M. J. (2017). Better Now Than Later: Managing Technical Debt in

Systems Development. Computer, 50(5), 80-87.

Falessi, D., Shaw, M., & Mullen, K. (2014). Achieving and Maintaining CMMI Maturity Level 5

in a Small Organization. IEEE Software, 31(5), 80-86.

Food and Drug Administration. (n.d.-a). Medical Devices Recall Report: Retrieved January 28,

2016 from

http://www.fda.gov/downloads/aboutfda/centersoffices/officeofmedicalproductsandtobac

co/cdrh/cdrhtransparency/ucm388442.pdf

Food and Drug Administration. (n.d.-b). General Principles of Software Validation; Final

Guidance for Industry and FDA Staff: Retrieved January 28, 2016 from

http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocument

s/ucm085281.htm#_Toc517237942

Fernandez‐Alles, M., & Ramos‐Rodríguez, A. (2009). Intellectual structure of human resources

management research: A bibliometric analysis of the journal Human Resource

Management, 1985–2005. Journal of the American Society for Information Science and

Technology, 60(1), 161-175.

Flinders, K. (2015). Digital business transformation will force software quality

assurance. Computer Weekly, 7.

França, A. C. C., Da Silva, F. Q., de LC Felix, A., & Carneiro, D. E. (2014). Motivation in

software engineering industrial practice: A cross-case analysis of two software

organisations. Information and Software Technology, 56(1), 79-101. 148

Freund, E. (2014). The Importance of Independent Verification and Validation. Software Quality

Professional, 16(4), 50-52.

Forte, D., Perez, R., Kim, Y., & Bhunia, S. (2016). Supply-Chain Security for

Cyberinfrastructure [Guest editors' introduction]. Computer, 49(8), 12-16.

García, R., Gil, R., Gimeno, J., Granollers, T., López, J., Oliva, M., & Pascual, A. (2010).

Semantic wiki for quality management in software development projects. IET

Software, 4(6), 386-395. doi:10.1049/iet-sen.2010.0044

Garzás, J., & Paulk, M. C. (2013). A case study of software process improvement with CMMI-

DEV and Scrum in Spanish companies. Journal Of Software: Evolution &

Process, 25(12), 1325-1333.

Gartner (n.d.). Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017, Up 31

Percent From 2016. Retrieved March 9, 2017 from

http://www.gartner.com/newsroom/id/3598917

Gartner W. B., P. Davidsson and S. A. Zahra (2006), ‘Are you talking to me? The nature of

community in entrepreneurship scholarship’. Entrepreneurship Theory and Practice, 30

(3), 321-331.

Gat, I. (2012). Technical debt as a meaningful metaphor for code quality. IEEE Software, 29(6),

52.

Gleirscher, M., Golubitskiy, D., Irlbeck, M., & Wagner, S. (2014). Introduction of static quality

analysis in small- and medium-sized software enterprises: Experiences from technology

transfer. Software Quality Journal, 22(3), 499-542. doi:10.1007/s11219-013-9217-z

Gmür, M. (2003). Co-citation analysis and the search for invisible colleges: A methodological

evaluation. Scientometrics, 57(1), 27-57. doi:10.1023/A:1023619503005 149

Goldsborough, R. (2014). Dealing with the Heartbleed Bug. Teacher Librarian, 41(5), 68.

Gorschek, T., Fricker, S., Palm, K., & Kunsman, S. (2010). A lightweight innovation process for

software-intensive product development. IEEE software, 27(1), 37.

Gotterbarn, D., & Miller, K. W. (2010). Unmasking your software's ethical risks. IEEE

software, 27(1).

Grady, R. B. (1993). PRACTICAL RESULTS FROM MEASURING SOFTWARE

QUALITY. Communications Of The ACM, 36(11), 62-68. doi:10.1145/163359.163369

Greenberg, S. A. (2009). How citation distortions create unfounded authority: analysis of a

citation network. BMJ: British Medical Journal, 339.

Greengard, S. (2016). Cybersecurity gets smart. Communications of the ACM, 59(5), 29-31.

Guo, B., Yu, Z., & Zhou, X. (2015). A data-centric framework for cyber-physical-social

systems. IT Professional, 17(6), 4-7. doi:10.1109/MITP.2015.116

Guo, P. (2014). Refining students' coding and reviewing skills. Communications of the

ACM, 57(9), 10-11.

Hackbarth, R., Mockus, A., Palframan, J., & Sethi, R. (2016). Improving software quality as

customers perceive it. IEEE Software, 33(4), 40-45. doi:10.1109/MS.2015.76

Hall, D. (2009). The ethical software engineer. IEEE Software, 26(4).

Hassan, A. E., Hindle, A., Runeson, P., Shepperd, M., Devanbu, P., Kim, S., . . . Departments at

LTH. (2013). Roundtable: What's next in software analytics. IEEE Software, 30(4), 53-

56. doi:10.1109/MS.2013.85

Hatton, L., & van Genuchten, M. (2016). When software crosses a line. IEEE Software, 33(1),

29-31. doi:10.1109/MS.2016.6

150

Hayes, W., & Zubrow, D. (1995). Moving On Up: Data and Experience Doing CMM-Based

Software Process Improvement (No. CMU/SEI-95-TR-008). CARNEGIE-MELLON

UNIV PITTSBURGH PA SOFTWARE ENGINEERING INST.

Hazzan, O. (2010). Putting human aspects of software engineering in university curricula. IEEE

Software, 27(4), 90-91.

He, Q. (1999). Knowledge discovery through co-word analysis. Library Trends, 48(1), 133.

Hermans, F., Siegmund, J., Fritz, T., Bavota, G., Nagappan, M., Hindle, A., ... & Adams, B. (2016). Leaders of tomorrow on the future of software engineering: A roundtable. IEEE Software, 33(2), 99-104.

Heston, K. M., & Phifer, W. (2011). The multiple quality models paradox: How much ‘best practice’ is just enough? Journal of Software Maintenance and Evolution: Research and Practice, 23(8), 517-531.

Heersmink, R., van den Hoven, J., van Eck, N. J., & van den Berg, J. (2011). Bibliometric mapping of computer and information ethics. Ethics & Information Technology, 13(3), 241-249. doi:10.1007/s10676-011-9273-7

Hilburn, T. B., & Humphrey, W. S. (2002). The impending changes in software education. IEEE Software, 19(5), 22.

Hill, E., Johnson, P. M., & Port, D. (2016). Is an athletic approach the future of software engineering education? IEEE Software, 33(1), 97-100. doi:10.1109/MS.2016.15

Hoffman, D. L., & Holbrook, M. B. (1993). The intellectual structure of consumer research: A bibliometric study of author co-citations in the first 15 years of the Journal of Consumer Research. Journal of Consumer Research, 19, 505-517.

Holzmann, G. (2014). Mars code. Communications of the ACM, 57(2), 64-73. doi:10.1145/2560217.2560218

Holzmann, G. J. (2015a). Assertive testing. IEEE Software, 32(3), 12-15.

Holzmann, G. J. (2015b). To code is human. IEEE Software, 32(1).

Holzmann, G. J. (2015c). Out of bounds. IEEE Software, 32(6), 24-26.

Holzmann, G. J. (2017). Code craft. IEEE Software, 34(2), 18-21.

Hoda, R., Noble, J., & Marshall, S. (2013). Self-organizing roles on agile software development teams. IEEE Transactions on Software Engineering, 39(3), 422-444. doi:10.1109/TSE.2012.30

Honda, N., & Yamada, S. (2012). Success factors to achieve excellent quality: CMMI Level 5 organizations research report. Software Quality Professional, 14(4), 21-32.

Hout, M. C., Papesh, M. H., & Goldinger, S. D. (2013). Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 93-103.

Houser, W. (2015). Could what happened to Sony happen to us? IT Professional, 17(2), 54-57.

Howles, T. (2014). What can we learn from HealthCare.gov? Software Quality Professional, 16(2), 27-34.

Howles, T., & McQuaid, P. (2012). Challenges in building secure software. Software Quality Professional, 14(3), 4-13.

Hurlburt, G. (2017). Superintelligence: Myth or pressing reality? IT Professional, 19(1), 6-11.

Hsu, D. F., Marinucci, D., & Voas, J. M. (2015). Cybersecurity: Toward a secure and sustainable cyber ecosystem. Computer, 48(4), 12. doi:10.1109/MC.2015.103

Huang, L. (2006). Software quality analysis: A value-based approach (Doctoral dissertation, University of Southern California). Available from ProQuest Dissertations and Theses database.

Humphrey, W. (2007). Software process improvement—A personal view: How it started and where it is going. Software Process: Improvement and Practice, 12(3), 223-227. doi:10.1002/spip.324

Humphrey, W. S. (2001). Engineers will tolerate a lot of abuse. IEEE Software, 18(5), 13-15.

Humphrey, W. S. (2000). Guest editor's introduction: The Personal Software Process-status and trends. IEEE Software, (6), 71-75.

Institute of Electrical and Electronics Engineers. (n.d.). Software and systems engineering standards. Retrieved December 19, 2016 from http://standards.ieee.org/cgi-bin/lp_index?type=standard&coll_name=software_and_systems_engineering&status=active

Institute of Electrical and Electronics Engineers & Association for Computing Machinery. (n.d.). Software Engineering 2014: Curriculum guidelines for undergraduate degree programs in software engineering. Computing Curriculum Series. Retrieved August 7, 2016 from http://www.acm.org/education/se2014.pdf

ISO. (n.d.). ISO/IEC/IEEE 24765:2010 Systems and software engineering – Vocabulary. International Organization for Standardization. Retrieved December 19, 2016 from https://www.iso.org/standard/50518.html

Jackson, M. (2012). A bibliometric analysis of green building literature. Northcentral University.

Jacobson, I., Spence, I., & Seidewitz, E. (2016). Industrial-scale agile: From craft to engineering. Communications of the ACM, 59(12), 63-71.

Jamieson, J., & Fallah, M. (2012). Agile quality management techniques. Software Quality Professional, 14(2), 12-21.

Johnson, P. M. (2013). Searching under the streetlight for useful software analytics. IEEE Software, 30(4), 57-63. doi:10.1109/MS.2013.69

Jones, C. (2015). Wastage: The impact of poor quality on software economics. Software Quality Professional, 18(1).

Jorgensen, M. (2014). What we do and don't know about software development effort estimation. IEEE Software, 31(2), 37-40. doi:10.1109/MS.2014.49

Kaatz, T. (2014). Hiring in the software industry. IEEE Software, 31(6), 96.

Kamp, P. (2014). Quality of software costs money—Heartbleed was free. Communications of the ACM, 57(8), 49-51.

Kandt, R. (2013). Using metrics to assess software quality. Software Quality Professional, 15(3), 39-40.

Kanellopoulos, Y., & Yu, Y. (2015). Guest editorial: Special section: Software quality and maintainability. Software Quality Journal, 23(1), 77-78. doi:10.1007/s11219-015-9270-x

Karg, L. M., Grottke, M., & Beckhaus, A. (2011). A systematic literature review of software quality cost research. Journal of Systems and Software, 84(3), 415-427.

Kajikawa, Y., & Takeda, Y. (2009). Citation network analysis of organic LEDs. Technological Forecasting and Social Change, 76(8), 1115-1123.

Kuo, H. K., & Yang, C. (2014). An intellectual structure of activity-based costing: A co-citation analysis. The Electronic Library, 32(1), 31-46.

Kautz, K., & Åby Larsen, E. (2000). Diffusion theory and practice: Disseminating quality management and software process improvement innovations. Information Technology & People, 13(1), 11-26.

Khalil, G. M., & Crawford, C. A. G. (2015). A bibliometric analysis of US-based research on the Behavioral Risk Factor Surveillance System. American Journal of Preventive Medicine, 48(1), 50-57.

Khan, G., & Wood, J. (2015). Information technology management domain: Emerging themes and keyword analysis. Scientometrics, 105(2), 959-972. doi:10.1007/s11192-015-1712-5

Kircher, M., & Volter, M. (2007). Guest editors' introduction: Software patterns. IEEE Software, 24(4), 28-30.

Kitchenham, B., & Pfleeger, S. L. (1996). Software quality: The elusive target [special issues section]. IEEE Software, 13(1), 12-21. doi:10.1109/52.476281

Kocaguneli, E., Menzies, T., & Keung, J. W. (2012). On the value of ensemble effort estimation. IEEE Transactions on Software Engineering, 38(6), 1403-1416. doi:10.1109/TSE.2011.111

Koerber, A., & McMichael, L. (2008). Qualitative sampling methods: A primer for technical communicators. Journal of Business and Technical Communication, 22(4), 454-473.

Koseoglu, M. A., Akdeve, E., Gedik, İ., & Bertsch, A. (2015). A bibliometric analysis of strategic management articles in healthcare management literature: Past, present, and future. International Journal of Healthcare Management, 8(1), 27-33. doi:10.1179/2047971914Y.0000000089

Krishnan, M. S. (1997). Cost and quality considerations in software product management (Order No. 9802548). Available from ProQuest Dissertations & Theses A&I. (304333812).

Kruchten, P. (2015). Lifelong learning for lifelong employment. IEEE Software, 32(4), 85-87.

Kruchten, P. (2013). Contextualizing agile software development. Journal of Software: Evolution and Process, 25(4), 351-361. doi:10.1002/smr.572

Laanti, M., Salo, O., & Abrahamsson, P. (2011). Agile methods rapidly replacing traditional methods at Nokia: A survey of opinions on agile transformation. Information and Software Technology, 53(3), 276-290. doi:10.1016/j.infsof.2010.11.010

Laplante, P. A. (2012). Safe and secure software systems: The role of professional licensure. IT Professional, 14(6), 51-53. doi:10.1109/MITP.2012.111

Laporte, C. Y., Berrhouma, N., Doucet, M., & Palza-Vargas, E. (2012). Measuring the cost of software quality of a large software project at Bombardier Transportation: A case study. Software Quality Professional, 14(3), 14.

Larrucea, X., Combelles, A., & Favaro, J. (2013). Safety-critical software. IEEE Software, 30(3), 25-27.

Lasi, H., Fettke, P., Kemper, H., Feld, T., & Hoffmann, M. (2014). Industry 4.0. Business & Information Systems Engineering, 6(4), 239-242. doi:10.1007/s12599-014-0334-4

Lethbridge, T. C. (2000). What knowledge is important to a software professional? Computer, 33(5), 44-50.

Li, E. Y., Liao, C. H., & Yen, H. R. (2013). Co-authorship networks and research impact: A social capital perspective. Research Policy, 42(9), 1515-1530. doi:10.1016/j.respol.2013.06.012

Li, J. T., & Tsui, A. S. (2002). A citation analysis of management and organization research in the Chinese context: 1984-1999. Asia Pacific Journal of Management, 19, 87-107.

Liebowitz, J. (2015). IT project failures: What management can learn. IT Professional, 17(6), 8-9. doi:10.1109/MITP.2015.110

Lim, E., Taksande, N., & Seaman, C. (2012). A balancing act: What software practitioners have to say about technical debt. IEEE Software, 29(6), 22-27.

Lin, H. C., Lee, Y. D., & Tai, C. (2012). A study on the relationship between human resource management strategies and core competencies. International Journal of Organizational Innovation (Online), 4(3), 153.

Lipner, S. B. (2015). Security assurance. Communications of the ACM, 58(11), 24-26. doi:10.1145/2822513

Lindsjørn, Y., Sjøberg, D. I., Dingsøyr, T., Bergersen, G. R., & Dybå, T. (2016). Teamwork quality and project success in software development: A survey of agile development teams. Journal of Systems and Software, 122, 274-286.

Louridas, P. (2011). Test management. IEEE Software, 28(5), 86-91. doi:10.1109/MS.2011.111

Louridas, P., & Ebert, C. (2013). Embedded analytics and statistics for big data. IEEE Software, 30(6), 33-39. doi:10.1109/MS.2013.125

Łukasiewicz, K., & Miler, J. (2012). Improving agility and discipline of software development with the Scrum and CMMI. IET Software, 6(5), 416-422. doi:10.1049/iet-sen.2011.0193

Maalej, W., Nayebi, M., Johann, T., & Ruhe, G. (2016). Toward data-driven requirements engineering. IEEE Software, 33(1), 48-54. doi:10.1109/MS.2015.153

MacCormack, A. (2001). Product-development practices that work: How internet companies build software. MIT Sloan Management Review, 42(2), 75.

Maglyas, A., Nikula, U., & Smolander, K. (2012). Lean solutions to software product management problems. IEEE Software, 29(5), 40-46. doi:10.1109/MS.2012.108

Maiden, N. (2012a). Cherishing ambiguity. IEEE Software, 29(6), 16-17.

Maiden, N. (2012b). Framing requirements work as learning. IEEE Software, 29(3), 8-9.

Malhotra, R. (2013). Prediction of high-, medium-, and low-severity faults using software metrics. Software Quality Professional, 15(4).

Malhotra, R., & Pritam, N. (2012). Assessment of code smells for predicting class change proneness. Software Quality Professional, 15(1).

Mandal, A., & Pal, S. C. (2015). Achieving agility through BRIDGE process model: An approach to integrate the agile and disciplined software development. Innovations in Systems and Software Engineering, 11(1), 1-7.

Marasco, J. (2012). Silver bullets: No secret ingredients. IEEE Software, 29(3), 102-104.

Marçal, A. S. C., de Freitas, B. C. C., Soares, F. S. F., Furtado, M. E. S., Maciel, T. M., & Belchior, A. D. (2008). Blending Scrum practices and CMMI project management process areas. Innovations in Systems and Software Engineering, 4(1), 17-29.

Margarido, I. L., Faria, J. P., Vidal, R. M., & Vieira, M. (2013). Challenges in implementing CMMI® high maturity: Lessons learned and recommendations. Software Quality Professional, 16(1).

Mathiassen, L., Ngwenyama, O. K., & Aaen, I. (2005). Managing change in software process improvement. IEEE Software, 22(6), 84-91.

Maughan, D. (2010). The need for a national cybersecurity research and development agenda. Communications of the ACM, 53(2), 29-31.

McAfee, A., & Brynjolfsson, E. (2008). Investing in the IT that makes a competitive difference. Harvard Business Review, 86(7/8), 98-107.

McConnell, S. (2002). Closing the gap. IEEE Software, 19(1), 3-5.

McGraw, G. (2017). Six tech trends impacting software security. Computer, 50(5), 100-102.

McHugh, O., Conboy, K., & Lang, M. (2012). Agile practices: The impact on trust in software project teams. IEEE Software, 29(3), 71-76.

McHugh, M., McCaffery, F., & Casey, V. (2014). Adopting agile practices when developing software for use in the medical domain. Journal of Software: Evolution and Process, 26(5), 504-512.

McManus, J., & Wood-Harper, T. (2007). Software engineering: A quality management perspective. The TQM Magazine, 19(4), 315-327.

McQuaid, P., & Banerjee, R. (2015). Establishing long-lasting relationships between industry and academia. Software Quality Professional, 18(1), 4-10.

Mead, N. R. (2009). Software engineering education: How far we've come and how far we have to go. The Journal of Systems & Software, 82(4), 571-575. doi:10.1016/j.jss.2008.12.038

Mead, N. R., & Hilburn, T. B. (2013). Building security in: Preparing for a software security career. IEEE Security & Privacy, 11(6), 80-83.

Mead, N. R. (2015). Industry/university collaboration in software engineering education: Refreshing and retuning our strategies. In Proceedings of the 37th International Conference on Software Engineering-Volume 2 (pp. 273-275). IEEE Press.

Menzies, T., & Zimmermann, T. (2013a). Software analytics: So what? IEEE Software, 30(4), 31-37. doi:10.1109/MS.2013.86

Menzies, T., & Zimmermann, T. (2013b). The many faces of software analytics. IEEE Software, 30(5), 28-29.

Meyer, B., Choppy, C., Staunstrup, J., & van Leeuwen, J. (2009). Viewpoint: Research evaluation for computer science. Communications of the ACM, 52(4), 31-34.

Michael, J. B., Drusinsky, D., Otani, T. W., & Shing, M. (2011). Verification and validation for trustworthy software systems. IEEE Software, 28(6), 86-92. doi:10.1109/MS.2011.151

Middleton, P. (2001). Lean software development: Two case studies. Software Quality Journal, 9(4), 241-252.

Minku, L. L., & Yao, X. (2013). Ensembles and locality: Insight on improving software effort estimation. Information and Software Technology, 55(8), 1512-1528. doi:10.1016/j.infsof.2012.09.012

Misa, T. J. (2015). Computing is history. Communications of the ACM, 58(10), 35-37. doi:10.1145/2814845

Misirli, A. T., Caglayan, B., Bener, A., & Turhan, B. (2013). A retrospective study of software analytics projects: In-depth interviews with practitioners. IEEE Software, 30(5), 54-61. doi:10.1109/MS.2013.93

Misra, S. C., Singh, V., & Bisui, S. (2016). Characterization of agile ERP. Software Quality Professional, 18(3).

Mithas, S., & McFarlan, F. W. (2017). What is digital intelligence? IT Professional, 19(4), 3-6.

Mizruchi, M., & Fein, L. (1999). The social construction of organizational knowledge: A study of the uses of coercive, mimetic, and normative isomorphism. Administrative Science Quarterly, 44, 653-683.

Mohan, M., Greer, D., & McMullan, P. (2016). Technical debt reduction using search based automated refactoring. The Journal of Systems and Software, 120, 183-194. doi:10.1016/j.jss.2016.05.019

Moore, A., O'Reilly, T., Nielsen, P. D., & Fall, K. (2016). Four thought leaders on where the industry is headed. IEEE Software, 33(1), 36-39. doi:10.1109/MS.2016.1

Murphy-Hill, E., Roberts, D., Sommerlad, P., & Opdyke, W. F. (2015). Refactoring. IEEE Software, 32(6), 27-29.

Nagappan, N., Murphy, B., & Basili, V. (2008). The influence of organizational structure on software quality. In Software Engineering, 2008. ICSE'08. ACM/IEEE 30th International Conference on (pp. 521-530). IEEE.

Neeley, J. D. (1981). The management and social sciences literatures: An interdisciplinary cross-citation analysis. Journal of the American Society for Information Science, 32, 217-224.

Nerur, S. P., Rasheed, A. A., & Natarajan, V. (2008). The intellectual structure of the strategic management field: An author co-citation analysis. Strategic Management Journal, 29(3), 319-336.

Niazi, M. (2015). Teaching global software engineering: Experiences and lessons learned. IET Software, 9(4), 95-102. doi:10.1049/iet-sen.2014.0042

Nichols, W. R. (2012). Plan for success, model the cost of quality. Software Quality Professional, 14(2), 4-11.

Nielsen, P. D. (2015). Software engineering and the persistent pursuit of software quality. CrossTalk, 5.

Nindel-Edwards, J., & Steinke, G. (2008). Ethical issues in the software quality assurance function. Communications of the IIMA, 8(1), 53.

Notkin, D. (2009). Software, software engineering and software engineering research: Some unconventional thoughts. Journal of Computer Science and Technology, 24(2), 189-197. doi:10.1007/s11390-009-9217-4

Office of Personnel Management (n.d.). Cyber security incidents. Retrieved August 7, 2016 from https://www.opm.gov/cybersecurity/cybersecurity-incidents/

Offutt, J. (2013). Putting the engineering into software engineering education. IEEE Software, 30(1), 96. doi:10.1109/MS.2013.12

Olijnyk, N. V. (2014). Information security: A scientometric study of the profile, structure, and dynamics of an emerging scholarly specialty (Order No. 3619074). Available from ProQuest Dissertations & Theses A&I; ProQuest Dissertations & Theses Global. (1530479755).

O'Leary, D. E. (2013). Artificial intelligence and big data. IEEE Intelligent Systems, 28(2), 96-99. doi:10.1109/MIS.2013.39

Orr, K. (2004). Agile requirements: Opportunity or oxymoron? IEEE Software, 21(3), 71-73. doi:10.1109/MS.2004.1293075

Osterweil, L. (1996). Strategic directions in software quality. ACM Computing Surveys (CSUR), 28(4), 738-750. doi:10.1145/242223.242288

Osterweil, L. J., Ghezzi, C., Kramer, J., & Wolf, A. L. (2008). Determining the impact of software engineering research on practice. Computer, 41(3), 39-49. doi:10.1109/MC.2008.85

Otero, C. E., & Peter, A. (2015). Research directions for engineering big data analytics software. IEEE Intelligent Systems, 30(1), 13-19. doi:10.1109/MIS.2014.76

Pachidi, S., Spruit, M., & Van de Weerd, I. (2014). Understanding users' behavior with software operation data mining. Computers in Human Behavior, 30, 583. doi:10.1016/j.chb.2013.07.049

Palmquist, M. S., Lapham, M. A., Miller, S., Chick, T., & Ozkaya, I. (2013). Parallel worlds: Agile and waterfall differences and similarities (No. CMU/SEI-2013-TN-021). Carnegie Mellon University, Software Engineering Institute.

Papadopoulos, G. (2015). Moving from traditional to agile software development methodologies also on large, distributed projects. Procedia-Social and Behavioral Sciences, 175, 455-463.

Pardo, C., Pino, F. J., García, F., & Piattini, M. (2011). Harmonizing quality assurance processes and product characteristics. Computer, 44(6), 94-96.

Parnas, D. L. (2007). Stop the numbers game. Communications of the ACM, 50(11), 19-21.

Parnin, C., Helms, E., Atlee, C., Boughton, H., Ghattas, M., Glover, A., ... & Stumm, M. (2017). The top 10 adages in continuous deployment. IEEE Software, 34(3), 86-95.

Pasadeos, Y., Phelps, J., & Kim, B. H. (1998). Disciplinary impact of advertising scholars: Temporal comparisons of influential authors, works and research networks. Journal of Advertising, 27(4), 53-70.

Passos, O. M., Dias-Neto, A. C., & da Silva Barreto, R. (2012). Organizational culture and success in SPI initiatives. IEEE Software, 29(3), 97-99.

Paulk, M. C. (1985). The ARC network: A case study. IEEE Software, 2(3), 62-69. doi:10.1109/MS.1985.230703

Paulk, M. C. (2002). Agile methodologies and process discipline. Institute for Software Research, 3.

Paulk, M. C. (2005). An empirical study of process discipline and software quality (Doctoral dissertation, University of Pittsburgh).

Paulk, M. C. (2013). A Scrum adoption survey. Software Quality Professional, 15(2), 27-34.

Payne, J. (2010). Integrating application security into software development. IT Professional, 12(2), 6-9.

Peng, M. W., & Zhou, J. Q. (2006). Most cited articles and authors in global strategy research. Journal of International Management, 12(4), 490-508.

Petrie, C. (2014). The failure of HealthCare.gov exposes Silicon Valley secrets. IEEE Internet Computing, 18(6), 85-86. doi:10.1109/MIC.2014.121

Phan, D. D. (2001). Software quality and management. Information Systems Management, 18(1), 56.

Pittman, R. M. (2016). An integrated process model for systems engineering-based development at small commercial enterprises (Order No. 10138187). Available from ProQuest Dissertations & Theses A&I. (1820072492).

Poort, E. R. (2014). Driving agile architecting with cost and risk. IEEE Software, 31(5), 20-23.

Poppendieck, M., & Cusumano, M. A. (2012). Lean software development: A tutorial. IEEE Software, 29(5), 26-32.

Porter, M. E., & Heppelmann, J. E. (2015). How smart, connected products are transforming companies. Harvard Business Review, 93(10), 96-114.

Poth, A., & Sunyaev, A. (2014). Effective quality management: Value- and risk-based software quality management. IEEE Software, 31(6), 79-85. doi:10.1109/MS.2013.138

Prahalad, C. K., & Krishnan, M. S. (1999). The new meaning of quality in the information age. Harvard Business Review, 77, 109-119.

Prikladnicki, R., Lassenius, C., Tian, E., & Carver, J. C. (2016). Trends in agile: Perspectives from the practitioners. IEEE Software, 33(6), 20-22.

Radziwill, N. (2005). An ethical theory for the advancement of professionalism in software engineering. Software Quality Professional, 7, 14-23.

Radziwill, N., Benton, M., Boadu, K., & Perdomo, W. (2015). A case-based look at integrating social context into software quality. Software Quality Professional, 18(1), 16-22.

Radziwill, N. (2009). Topology, evolution, and network-based continuous improvement of the Quality Management Journal (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3374728)

Rai, A., Song, H., & Troutt, M. (1999). Software quality assurance: An analytical survey and research prioritization. Computer Standards & Interfaces, 21(2), 160-160. doi:10.1016/S0920-5489(99)92154-2

Rakitin, S. R. (2016). What can software quality engineering contribute to cyber security? Software Quality Professional, 18(2).

Ramasubbu, N., & Kemerer, C. F. (2015). Technical debt and the reliability of enterprise software systems: A competing risks analysis. Management Science, 62(5), 1487-1510.

Ramos-Rodríguez, A. R., & Ruíz-Navarro, J. (2004). Changes in the intellectual structure of strategic management research: A bibliometric study of the Strategic Management Journal, 1980-2000. Strategic Management Journal, 25(10), 981-1004.

Rech, J., Bogner, C., & Haas, V. (2007). Using wikis to tackle reuse in software projects. IEEE Software, 24(6).

Richardson, I., Reid, L., Seidman, S. B., Pattinson, B., & Delaney, Y. (2011). Educating software engineers of the future: Software quality research through problem-based learning. In Software Engineering Education and Training (CSEE&T), 2011 24th IEEE-CS Conference on (pp. 91-100). IEEE.

Rigby, D. K., Sutherland, J., & Takeuchi, H. (2016). The secret history of agile innovation. Harvard Business Review.

Robbes, R., Vidal, R., & Bastarrica, M. C. (2013). Are software analytics efforts worthwhile for small companies? The case of Amisoft. IEEE Software, 30(5), 46-53. doi:10.1109/MS.2013.92

Rodriguez, M., Piattini, M., & Fernandez, C. M. (2015). A hard look at software quality. Quality Progress, 48(9), 30-36.

Rodríguez, M., Oviedo, J. R., & Piattini, M. (2016). Evaluation of software product functional suitability: A case study. Software Quality Professional, 18(3).

Rodrigues, S. P., Van Eck, N. J., Waltman, L., & Jansen, F. W. (2014). Mapping patient safety: A large-scale literature review using bibliometric visualisation techniques. BMJ Open, 4(3), e004468.

Rorissa, A., & Yuan, X. (2012). Visualizing and mapping the intellectual structure of information retrieval. Information Processing & Management, 48(1), 120-135.

Rozanski, N. (2015). The five properties of successful architectural oversight. IEEE Software, 32(1), 110-112.

Rus, I., & Lindvall, M. (2002). Knowledge management in software engineering. IEEE Software, 19(3), 26-38. doi:10.1109/MS.2002.1003450

Savolainen, P., Ahonen, J. J., & Richardson, I. (2012). Software development project success and failure from the supplier's perspective: A systematic literature review. International Journal of Project Management, 30(4), 458-469.

Selby, R. (2007). Software engineering: The legacy of Barry W. Boehm. Paper presented at the International Conference on Software Engineering Companion, 37-38. doi:10.1109/ICSECOMPANION.2007.6

Selby, R. W. (2009). Analytics-driven dashboards enable leading indicators for requirements and designs of large-scale systems. IEEE Software, 26(1), 41-49. doi:10.1109/MS.2009.4

Shaw, M. (2016). Progress toward an engineering discipline of software. In IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C) (pp. 3-4). ACM.

Sherer, S. A., & Alter, S. (2004). Information systems risks and risk factors: Are they mostly about information systems? Communications of the Association for Information Systems, 14(1), 2.

Shiau, W. L., Dwivedi, Y. K., & Tsai, C. H. (2015). Supply chain management: Exploring the intellectual structure. Scientometrics, 105(1), 215-230.

Shull, F. (2011). Perfectionists in a world of finite resources. IEEE Software, 28(2), 4-6. doi:10.1109/MS.2011.38

Shull, F. (2012). Disbanding the "process police": New visions for assuring compliance. IEEE Software, 29(3), 3-6.

Shull, F. (2013). A lifetime guarantee. IEEE Software, 30(6), 4-8.

Shull, F. (2014a). Data, data everywhere. IEEE Software, 31(5), 4-7. doi:10.1109/MS.2014.110

Shull, F. (2014b). Our best hope. IEEE Software, 31(4), 4-8.

Shull, F. (2014c). Progression, regression, or stasis? IEEE Software, 31(1), 4-8.

Shull, F., Basili, V., Booch, G., & Vogel, D. (2011). Watts Humphrey. IEEE Software, (1), 5.

Shull, F., Carleton, A., Carriere, J., Prikladnicki, R., & Zhang, D. (2016). The future of software engineering. IEEE Software, 33(1), 32-35. doi:10.1109/MS.2016.8

Shull, F., Seaman, C., & Zelkowitz, M. (2006). Victor R. Basili's contributions to software quality. IEEE Software, 23(1), 16-18.

Silva, F. S., Soares, F. S. F., Peres, A. L., de Azevedo, I. M., Vasconcelos, A. P. L., Kamei, F. K., & de Lemos Meira, S. R. (2015). Using CMMI together with agile software development: A systematic review. Information and Software Technology, 58, 20-43.

Singh, V., Perdigones, A., García, J. L., Cañas-Guerrero, I., & Mazarrón, F. R. (2014). Analysis of worldwide research in the field of cybernetics during 1997-2011. Biological Cybernetics, 108(6), 757-776.

Šmite, D., Moe, N. B., Šāblis, A., & Wohlin, C. (2017). Software teams and their knowledge networks in large-scale software development. Information and Software Technology, 86, 71-86.

Smith, B. L. (2002). Software development cost estimation for infrastructure systems. Journal of Management in Engineering, 18(3), 104.

Sneed, H. M., & Verhoef, C. (2016). From software development to software assembly. IEEE Software, 33(5), 80-85.

Software Engineering Institute. (2010). Benefits of CMMI within the defense industry. Retrieved from the Software Engineering Institute website: http://www.sei.cmu.edu/library/assets/presentations/CMMI%20Benefits.pdf

Sommerville, I. (2016). IEEE Software and professional development. IEEE Software, 33(2), 90-92. doi:10.1109/MS.2016.43

Soos, S., Kampis, G., & Gulyás, L. (2013). Large-scale temporal analysis of computer and information science. The European Physical Journal Special Topics, 222(6), 1441-1465.

Sowe, S. K., Simmon, E., Zettsu, K., de Vaulx, F., & Bojanova, I. (2016). Cyber-physical-human systems: Putting people in the loop. IT Professional, 18(1), 10-13. doi:10.1109/MITP.2016.14

Spiewak, R., & McRitchie, K. (2017). Apply the fundamentals of quality in software construction to reduce costs and prevent defects. Software Quality Professional, 19(3).

Spinellis, D. (2011). Agility drivers. IEEE Software, 28(4), 96-95.

Spinellis, D. (2016a). Being a DevOps developer. IEEE Software, 33(3), 4-5.

Spinellis, D. (2016b). The strategic importance of release engineering. IEEE Software, 32(2), 3-5.

Spinellis, D. (2016c). Reflecting on quality. IEEE Software, 33(4), 4-5. doi:10.1109/MS.2016.90

Stankovic, D., Nikolic, V., Djordjevic, M., & Cao, D. B. (2013). A survey study of critical success factors in agile software projects in former Yugoslavia IT companies. Journal of Systems and Software, 86(6), 1663-1678.

Stavru, S. (2014). A critical examination of recent industrial surveys on agile method usage. Journal of Systems and Software, 94, 87-97.

Stettina, C. J., & Hörz, J. (2015). Agile portfolio management: An empirical perspective on the practice in use. International Journal of Project Management, 33(1), 140-152. doi:10.1016/j.ijproman.2014.03.008

Swinarski, M., Parente, D. H., & Kishore, R. (2012). Do small IT firms benefit from higher process capability? Communications of the ACM, 55(7), 129-134.

Tahai, A., & Meyer, M. J. (1999). A revealed preference study of management journals' direct influences. Strategic Management Journal, 20, 279-296.

Tan, J., Fu, H. Z., & Ho, Y. S. (2014). A bibliometric analysis of research on proteomics in Science Citation Index Expanded. Scientometrics, 98(2), 1473-1490.

Thomas, S. A., Hurley, S. F., & Barnes, D. J. (1996). Looking for the human factors in software quality management. In Software Engineering: Education and Practice, 1996. Proceedings. International Conference (pp. 474-480). IEEE.

Thompson, D. F., & Walker, C. K. (2015). A descriptive and historical review of bibliometrics with applications to medical sciences. Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy, 35(6), 551-559.

Thornton, M. A. (2012). Professional licensure for software engineers: An update. Computing in Science & Engineering, 14(5), 85-87. doi:10.1109/MCSE.2012.104

Tockey, S. (2015). Insanity, hiring, and the software industry. Computer, (11), 96-101.

Tsay, M. Y. (2004). Literature growth, journal characteristics, and author productivity in subject indexing, 1977 to 2000. Journal of the Association for Information Science and Technology, 55(1), 64-73.

Tsay, M. Y., & Yang, Y. H. (2005). Bibliometric analysis of the literature of randomized controlled trials. Journal of the Medical Library Association, 93(4), 450.

Unterkalmsteiner, M., Gorschek, T., Islam, A. K. M., Cheng, C. K., Permadi, R. B., & Feldt, R. (2014). A conceptual framework for SPI evaluation. Journal of Software: Evolution and Process, 26(2), 251-279.

United States Computer Emergency Readiness Team (n.d.). Recent vulnerabilities. Retrieved August 7, 2016 from https://www.us-cert.gov/

Vallespir, D., & Nichols, W. (2016). Quality is free, personal reviews improve software quality at no cost. Software Quality Professional, 18(2).

Van Eck, N. J., & Waltman, L. (2010). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics, 84(2), 523-538.

Van Eck, N. J., Waltman, L., Dekker, R., & Van den Berg, J. (2010). A comparison of two techniques for bibliometric mapping: Multidimensional scaling and VOS. Journal of the American Society for Information Science and Technology, 61(12), 2405-2416.

Vogel, R., & Güttel, W. H. (2013). The dynamic capability view in strategic management: A bibliometric review. International Journal of Management Reviews, 15(4), 426-446.

von Wangenheim, C. G., Anacleto, A., & Salviano, C. F. (2006). Helping small companies assess software processes. IEEE Software, 23(1), 91-98.

Walker, D. W. (2012). Comparing CMMI for Development and medical device engineering. Software Quality Professional, 15(1).

Walrad, C. (2017). Standards for the enterprise IT profession. Computer, 50(3), 70-73.

Walrad, C. (2016). The IEEE Computer Society and ACM's collaboration on computing education. Computer, 49(3), 88-91. doi:10.1109/MC.2016.67

Wang, N., Liang, H., Jia, Y., Ge, S., Xue, Y., & Wang, Z. (2016). Cloud computing research in the IS discipline: A citation/co-citation analysis. Decision Support Systems, 86, 35-47.

Waltman, L., van Eck, N. J., & Noyons, E. C. M. (2010). A unified approach to mapping and clustering of bibliometric networks. Journal of Informetrics, 4(4), 629-635. doi:10.1016/j.joi.2010.07.002

Williams, L., & Cockburn, A. (2003). Guest editors' introduction: Agile software development: It's about feedback and change. Computer, 36(6), 39-43.

Wirth, N. (2008). A brief history of software engineering. IEEE Annals of the History of Computing, 30(3), 32-39.

Wolf, M. (2016). Embedded Software in Crisis. Computer, 49(1), 88-90.

White, H.D., & Griffith, B.C. (1981). Author co-citation: A literature measure

of intellectual structure. Journal of the American Society for Information

Science, 32, 163–171.

White, H. D., & McCain, K. W. (1998). Visualizing a discipline: An author co-citation analysis

of information science, 1972-1995. Journal of the American Society for Information

Science, 49(4), 327-355.

Whittaker, J., Voas, J. (2002). “Fifty Years at Software: Key Principles for Quality”, IEEE IT

Pro, 4(6): 28-35.

Xie, P. (2015). Study of international anticancer research trends via co-word and document co-

citation visualization analysis. Scientometrics, 105(1), 611-622.

Yin, C. Y., & Chiang, J. K. (2010). Bibliometric Analysis of Social Capital Research During

1956 to. Journal of Convergence Information Technology, 5(2).

Yourdon, E. (2007). Celebrating Peopleware's 20th Anniversary. IEEE Software, 24(5).

Zarour, M., Abran, A., Desharnais, J. M., & Alarifi, A. (2015). An investigation into the best

practices for the successful design and implementation of lightweight software process

assessment methods: A systematic literature review. Journal of Systems and

Software, 101, 180-192.

Zhang, D., Han, S., Dang, Y., Lou, J., Zhang, H., & Xie, T. (2013). Software analytics in

practice. IEEE Software, 30(5), 30-37. doi:10.1109/MS.2013.94

Zhang, G., Xie, S., & Ho, Y. S. (2009). A bibliometric analysis of world volatile organic

compounds research trends. Scientometrics, 83(2), 477-492.

Zheng, T., Wang, J., Wang, Q., Nie, C., Smale, N., Shi, Z., & Wang, X. (2015). A bibliometric

analysis of industrial wastewater research: current trends and future

prospects. Scientometrics, 105(2), 863-882.

Zhi, W., Yuan, L., Ji, G., Liu, Y., Cai, Z., & Chen, X. (2015). A bibliometric review on carbon

cycling research during 1993-2013. Environmental Earth Sciences, 74(7), 6065-6075.

doi:10.1007/s12665-015-4629-7

Zhuge, H. (2006). Discovery of knowledge flow in science. Communications of the ACM, 49(5),

101-107.

Zupic, I., & Čater, T. (2015). Bibliometric methods in management and

organization. Organizational Research Methods, 18(3), 429-472.


APPENDIX A: LISTING OF STANDARDS

Standard – Title

IEEE Standard 730-2014 – Software Quality Assurance
IEEE Standard 828-2012 – Configuration Management in Systems and Software Engineering
IEEE Standard 982.1-2005 – Dictionary of Measures of the Software Aspects of Dependability
IEEE Standard 1012-2016 – System, Software and Hardware Verification and Validation
IEEE Standard 1016-2009 – Information Technology – Systems Design – Software Design Descriptions
IEEE Standard 1028-2008 – Software Reviews and Audits
IEEE Standard 1044-2009 – Classification for Software Anomalies
IEEE Standard 1061-1998 – Software Quality Metrics Methodology
IEEE Standard 1062-2015 – Recommended Practice for Software Acquisition
IEEE Standard 1228-1994 – Software Safety Plans
IEEE Standard 1517-2010 – Information Technology – System and Software Life Cycle Processes – Reuse Processes
IEEE Standard 1633-2016 – Recommended Practice on Software Reliability
IEEE Standard 12207-2008 – Systems and Software Engineering – Software Life Cycle Processes
IEEE Standard 14471-2010 – Software Engineering – Guidelines for the Adoption of CASE Tools
ISO/IEC/IEEE Standard 14764-2006 – Software Engineering – Software Life Cycle Processes – Software Maintenance
IEEE Standard 15026-1-2011 – Systems and Software Engineering – System and Software Assurance – Part 1: Concepts and Vocabulary
IEEE Standard 15026-2-2011 – Systems and Software Engineering – System and Software Assurance – Part 2: Assurance Case
IEEE Standard 15026-3-2013 – Systems and Software Engineering – System and Software Assurance – Part 3: System Integrity Levels
IEEE Standard 15026-4-2013 – Systems and Software Engineering – System and Software Assurance – Part 4: Assurance in the Life Cycle
IEEE Standard 15288-2015 – Systems and Software Engineering – System Life Cycle Processes
IEEE Standard 15289-2017 – Systems and Software Engineering – Content of Life Cycle Information Items (Documentation)
IEEE Standard 15939-2017 – Systems and Software Engineering – Measurement Process
IEEE Standard 16085-2006 – Software Engineering – Software Life Cycle Processes – Risk Management
IEEE Standard 16326-2009 – Systems and Software Engineering – Life Cycle Processes – Project Management
IEEE Standard 24774-2012 – Systems and Software Engineering – Life Cycle Management – Guidelines for Process Description
ISO/IEC/IEEE Standard 24765-2017 – Systems and Software Engineering – Vocabulary
IEEE Standard 26515-2012 – Systems and Software Engineering – Developing User Documentation in an Agile Environment
ISO/IEC/IEEE Standard 29119-1-2013 – Software and Systems Engineering – Software Testing – Part 1: Concepts and Definitions
ISO/IEC/IEEE Standard 29119-2-2013 – Software and Systems Engineering – Software Testing – Part 2: Test Processes
ISO/IEC/IEEE Standard 29119-3-2013 – Software and Systems Engineering – Software Testing – Part 3: Test Documentation
ISO/IEC/IEEE Standard 29119-4-2013 – Software and Systems Engineering – Software Testing – Part 4: Test Techniques
ISO/IEC/IEEE Standard 29119-5-2013 – Software and Systems Engineering – Software Testing – Part 5: Keyword-Driven Testing
ISO/IEC/IEEE Standard 29148-2011 – Systems and Software Engineering – Life Cycle Processes – Requirements Engineering
ISO/IEC/IEEE Standard 42010-2011 – Systems and Software Engineering – Architectural Description
ISO/IEC Standard 90003-2015 – Software Engineering – Guidelines for the Application of ISO 9001:2008 to Computer Software


APPENDIX B: IRB TRAINING COMPLETION REPORT


APPENDIX C: PRODUCTIVE AND INFLUENTIAL JOURNALS

SOFTWARE QUALITY JOURNAL

INFORMATION AND SOFTWARE TECHNOLOGY

JOURNAL OF SYSTEMS AND SOFTWARE

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING

JOURNAL OF SOFTWARE EVOLUTION AND PROCESS

IEEE SOFTWARE

INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE

ENGINEERING

EMPIRICAL SOFTWARE ENGINEERING

COMPUTER

IET SOFTWARE

SOFTWARE PRACTICE EXPERIENCE

IEEE TRANSACTIONS ON RELIABILITY

COMPUTER STANDARDS INTERFACES

EXPERT SYSTEMS WITH APPLICATIONS

INFORMATION MANAGEMENT

COMMUNICATIONS OF THE ACM

INFORMATION SCIENCES

IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT

MIS QUARTERLY

IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS

MANAGEMENT SCIENCE

ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY

INFORMATION SYSTEMS RESEARCH


APPENDIX D: IRB DETERMINATION OF EXEMPT STATUS


APPENDIX E: INVITATIONAL EMAIL TO PARTICIPANTS

Development of a Software Quality Management Assessment Tool

You are being invited to participate in a research study about development of a software quality management assessment tool. Kishore Erukulapati from the Department of Applied Engineering and Technology Management at Indiana State University is conducting this study. Dr. Affan Badar is faculty sponsor for this project. The study is being conducted as part of a doctoral dissertation in technology management.

There are no known risks if you decide to participate in this research study. There are no costs to you for participating in the study. The information you provide will help develop a software quality management assessment tool to evaluate software using software quality management best practices. The questionnaire will take about 30 minutes to complete. The information collected may not benefit you directly, but the information learned in this study should provide more general benefits to organizations and communities.

This survey is anonymous. Do not write your name in any fields on the survey website. No one will be able to identify you or your answers, and no one will know whether or not you participated in the study. No individual information will be disclosed. The Institutional Review Board may inspect these records.

Your participation in this study is voluntary. By completing and submitting the survey, you are voluntarily agreeing to participate. If you agree to participate, there will be no future email contact as a result of participating in this study. You are free to exit the survey at any time or to decline to answer any particular question you do not wish to answer, for any reason.

If you have any questions about the study, please contact Kishore Erukulapati at [email protected] and Dr. Affan Badar at [email protected].

If you have any questions about your rights as a research subject or if you feel you’ve been placed at risk, you may contact the Indiana State University Institutional Review Board (IRB) by mail at Indiana State University, Office of Sponsored Programs, Terre Haute, IN, 47809, by phone at (812) 237-8217, or by e-mail at [email protected].

Thank you in advance for your participation.

Follow this link to the Survey:

Or copy and paste the URL below into your internet browser:

Follow this link to opt out of future emails:


APPENDIX F: INFORMED CONSENT


APPENDIX G: INDUSTRY EXPERTS PILOT TEST

G.1 Industry Experts Pilot Test: Organizational Environment

Organizational Environment Best Practices (scores: Expert#1 / Expert#2 / Expert#3 / Maximum Possible)

1. The management at all levels demonstrates strong and determined leadership and commitment to shared vision, mission, values, business strategy, goals, objectives, quality policy, and quality culture. (4 / 2 / 4 / 5)
2. The management at all levels respects, trusts, and empowers people across the organization and provides meaningful oversight. (4 / 2 / 5 / 5)
3. The organization attracts, recruits, develops, motivates, organizes, excites, and retains competent staff at all levels. (3 / 1 / 4 / 5)
4. The top management leads the strategic change management and provides the necessary infrastructure (People, Technology, Processes) to support diffusion and adoption of best practices across the organization. (4 / 2 / 4 / 5)
5. The organization provides work environment that is safe, secure, stress free, and contributes to productivity, work-life balance, quality of work-life, and employee health and well-being. (4 / 1 / 4 / 5)
6. The organization provides appropriate compensation, career development paths, rewards, and recognition system based on contribution and value. (4 / 1 / 4 / 5)

Total Score (Organizational Environment Best Practices): 23 / 9 / 25 / 30

Total Weighted Score (Organizational Environment Best Practices) = 35% of Total Score: 8.05 / 3.15 / 8.75 / 10.5


G.2 Industry Experts Pilot Test: People

People Best Practices (scores: Expert#1 / Expert#2 / Expert#3 / Maximum Possible)

1. Staff is competent, self-motivated, and empowered. (5 / 3 / 5 / 5)
2. Teams are coherent, cross-functional, and self-organizing that cooperate, coordinate, communicate, and collaborate for a shared vision. (4 / 2 / 4 / 5)
3. Staff is engaged in continuous learning, continuous improvement, competency development, and career development through reflection, problem solving, experimentation, training, coaching, and mentoring. (4 / 4 / 4 / 5)
4. Staff embraces the culture of agility and innovation, and contributes to waste elimination, value stream optimization, and adding value to consumers. (4 / 2 / 4 / 5)
5. Staff embraces change, adopts, and adapts best practices across the organization. (3 / 2 / 4 / 5)

Total Score (People Best Practices): 20 / 13 / 21 / 25

Total Weighted Score (People Best Practices) = 40% of Total Score: 8 / 5.2 / 8.4 / 10


G.3 Industry Experts Pilot Test: Process

Process Best Practices (scores: Expert#1 / Expert#2 / Expert#3 / Maximum Possible)

1. Formal and well-defined software engineering (requirements, architecture, design, development, and maintenance) processes are consistently applied across the organization. (4 / 4 / 2 / 5)
2. Formal and well-defined software quality control engineering (verification, reviews, inspections, validation, testing, regression testing, and audit) processes are consistently applied across the organization. (3 / 4 / 2 / 5)
3. Formal and well-defined software quality assurance engineering (process definition, process assessment, process improvement, process deployment, process quality assurance, product quality assurance, and metrics) processes are consistently applied across the organization. (4 / 4 / 2 / 5)
4. Formal and well-defined software configuration management, product integration, software deployment, and software release management processes are consistently applied across the organization. (4 / 4 / 2 / 5)
5. Formal and well-defined fact driven decision making process is consistently applied across the organization. (3 / 3 / 4 / 5)
6. Formal and well-defined problem management and root cause analysis process is consistently applied across the organization. (3 / 2 / 3 / 5)
7. Formal and well-defined human capital management, workforce planning, competency development, and knowledge management processes are consistently applied across the organization. (4 / 1 / 5 / 5)
8. Formal and well-defined engineering management (project management, product management, program management, portfolio management, supplier management, and risk management) processes are consistently applied across the organization. (4 / 4 / 4 / 5)

Total Score (Process Best Practices): 29 / 26 / 24 / 40

Total Weighted Score (Process Best Practices) = 15% of Total Score: 4.35 / 3.9 / 3.6 / 6


G.4 Industry Experts Pilot Test: Technology

Technology Best Practices (scores: Expert#1 / Expert#2 / Expert#3 / Maximum Possible)

1. Well defined engineering standards for requirements, architecture, design, code, and testing are consistently used across the organization. (4 / 4 / 2 / 5)
2. Appropriate tools for requirements, architecture, design, code, and testing are consistently used across the organization. (3 / 4 / 4 / 5)
3. Continuous integration and continuous delivery practices are applied through automated builds, deployment, and build verification across the organization. (4 / 3 / 4 / 5)
4. Reusability, architectural patterns, design patterns, and refactoring are incorporated across the organization. (5 / 4 / 4 / 5)
5. Appropriate static analysis tools are used across the organization. (3 / 4 / 4 / 5)
6. Appropriate tools are used for knowledge management, project management, program management, portfolio management, quality management, risk management, and product management across the organization. (3 / 3 / 5 / 5)
7. Appropriate artificial intelligence based software analytics tools are used across the organization. (2 / 1 / 1 / 5)

Total Score (Technology Best Practices): 24 / 23 / 24 / 35

Total Weighted Score (Technology Best Practices) = 10% of Total Score: 2.4 / 2.3 / 2.4 / 3.5
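The weighted totals reported in the pilot-test tables are simple products of each category's raw total and its pilot weight (Organizational Environment 35%, People 40%, Process 15%, Technology 10%). The Python sketch below reproduces that arithmetic from the expert totals in tables G.1 through G.4; the variable and function names are illustrative, not part of the instrument.

```python
# Reproduce the Appendix G pilot-test weighted category scores.
# Raw totals per expert come from tables G.1-G.4; the weighting is the
# one stated in each table's "Total Weighted Score" row.

PILOT_WEIGHTS = {
    "Organizational Environment": 0.35,
    "People": 0.40,
    "Process": 0.15,
    "Technology": 0.10,
}

# Category totals for Expert#1, Expert#2, and Expert#3, in that order.
EXPERT_TOTALS = {
    "Organizational Environment": [23, 9, 25],
    "People": [20, 13, 21],
    "Process": [29, 26, 24],
    "Technology": [24, 23, 24],
}

def weighted_totals(category):
    """Weighted score per expert for one category, rounded to 2 places."""
    weight = PILOT_WEIGHTS[category]
    return [round(weight * total, 2) for total in EXPERT_TOTALS[category]]

for category in PILOT_WEIGHTS:
    print(category, weighted_totals(category))
```

Rounding to two decimal places matches the precision reported in the tables (e.g. Organizational Environment yields 8.05, 3.15, and 8.75 for the three experts).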


APPENDIX H: REVISED BEST PRACTICES

H.1 Revised Best Practices: People

Each statement is rated on a five-point scale: 1. Strongly Disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly Agree.

1. Staff is competent, self-motivated, and empowered.
2. Teams are coherent, cross-functional, and self-organizing that cooperate, coordinate, communicate, and collaborate for a shared vision.
3. Staff is engaged in continuous learning, continuous improvement, competency development, and career development through reflection, problem solving, experimentation, training, coaching, and mentoring.
4. Staff embraces the culture of agility and innovation, and contributes to waste elimination, value stream optimization, and adding value to consumers.
5. Staff embraces change, adopts, and adapts best practices across the organization.


H.2 Revised Best Practices: Organizational Environment

Each statement is rated on a five-point scale: 1. Strongly Disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly Agree.

1. The management at all levels demonstrates strong and determined leadership and commitment to shared vision, mission, values, business strategy, goals, objectives, quality policy, and quality culture.
2. The management at all levels respects, trusts, and empowers people across the organization and provides meaningful oversight.
3. The organization attracts, recruits, develops, motivates, organizes, excites, and retains competent staff at all levels.
4. The top management leads the strategic change management and provides the necessary infrastructure (People, Technology, Processes) to support diffusion and adoption of best practices across the organization.
5. The organization provides work environment that is safe, secure, stress free, and contributes to productivity, work-life balance, quality of work-life, and employee health and well-being.
6. The organization provides appropriate compensation, career development paths, rewards, and recognition system based on contribution and value.


H.3 Revised Best Practices: Process

Each statement is rated on a five-point scale: 1. Strongly Disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly Agree.

1. Formal and well-defined software engineering (requirements, architecture, design, development, and maintenance) processes are consistently used across the organization and tailored as appropriate.
2. Formal and well-defined software quality control engineering (verification, reviews, inspections, validation, testing, regression testing, and audit) processes are consistently used across the organization and tailored as appropriate.
3. Formal and well-defined software quality assurance engineering (process definition, process assessment, process improvement, process deployment, process quality assurance, product quality assurance, and metrics) processes are consistently used across the organization and tailored as appropriate.
4. Formal and well-defined software configuration management, product integration, software deployment, and software release management processes are consistently used across the organization and tailored as appropriate.
5. Formal and well-defined fact driven decision making process is consistently used across the organization and tailored as appropriate.
6. Formal and well-defined problem management and root cause analysis process is consistently used across the organization and tailored as appropriate.
7. Formal and well-defined human capital management, workforce planning, competency development, and knowledge management processes are consistently used across the organization and tailored as appropriate.
8. Formal and well-defined engineering management (project management, product management, program management, portfolio management, supplier management, and risk management) processes are consistently used across the organization and tailored as appropriate.


H.4 Revised Best Practices: Technology

Each statement is rated on a five-point scale: 1. Strongly Disagree; 2. Disagree; 3. Neutral; 4. Agree; 5. Strongly Agree.

1. Well defined engineering standards for requirements, architecture, design, code, and testing are consistently used across the organization.
2. Appropriate tools for requirements, architecture, design, code, and testing are consistently used across the organization.
3. Continuous integration and continuous delivery practices are applied through behavior driven development, test driven development, automated builds, deployment, and build verification across the organization.
4. Reusability, architectural patterns, design patterns, and refactoring are incorporated across the organization.
5. Appropriate static analysis tools are used across the organization.
6. Appropriate tools are used for knowledge management, project management, program management, portfolio management, quality management, risk management, and product management across the organization.
7. Appropriate artificial intelligence based software analytics tools are used across the organization.


APPENDIX I: FINAL ASSESSMENT TOOL



Based on your responses, your organization's overall raw score is __ out of 130.

Using factor weighting (People = 40%; Organizational Environment = 30%; Process = 15%; Technology = 15%), your organization's overall weighted score is __ out of 30.25.
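The two scores above follow from straightforward arithmetic: the raw score sums all item ratings, and the weighted score scales each category total by its factor weight. A minimal Python sketch, assuming the final item counts shown in Appendix H (5 People items, 6 Organizational Environment items, 8 Process items, and 7 Technology items, each rated 1-5); function and variable names are illustrative, not part of the tool:

```python
# Scoring sketch for the final assessment tool. Weights are the ones
# stated above; item counts follow the Appendix H best-practice lists.

WEIGHTS = {
    "People": 0.40,
    "Organizational Environment": 0.30,
    "Process": 0.15,
    "Technology": 0.15,
}

def raw_score(ratings_by_category):
    """Sum of all 1-5 Likert ratings across the four categories."""
    return sum(sum(ratings) for ratings in ratings_by_category.values())

def weighted_score(ratings_by_category):
    """Weighted sum: each category's total scaled by its factor weight."""
    return sum(WEIGHTS[cat] * sum(ratings)
               for cat, ratings in ratings_by_category.items())

# An organization answering "Strongly Agree" (5) to every item reaches
# the maximums quoted in the tool: 130 raw and 30.25 weighted.
max_ratings = {
    "People": [5] * 5,
    "Organizational Environment": [5] * 6,
    "Process": [5] * 8,
    "Technology": [5] * 7,
}
print(raw_score(max_ratings))       # 130
print(weighted_score(max_ratings))  # 30.25
```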

In your opinion, what are the strengths and added value of this assessment tool?

In your opinion, what improvement opportunities exist to improve this assessment tool?


APPENDIX J: PRACTITIONER VALIDATION RESULTS

J.1 Practitioner Validation Results: People

People Best Practices (scores: Practitioner#1 / Practitioner#2 / Practitioner#3 / Maximum Possible)

1. Staff is competent, self-motivated, and empowered. (4 / 5 / 5 / 5)
2. Teams are coherent, cross-functional, and self-organizing that cooperate, coordinate, communicate, and collaborate for a shared vision. (4 / 4 / 5 / 5)
3. Staff is engaged in continuous learning, continuous improvement, competency development, and career development through reflection, problem solving, experimentation, training, coaching, and mentoring. (4 / 5 / 5 / 5)
4. Staff embraces the culture of agility and innovation, and contributes to waste elimination, value stream optimization, and adding value to consumers. (4 / 4 / 5 / 5)
5. Staff embraces change, adopts, and adapts best practices across the organization. (4 / 5 / 5 / 5)

Total Score (People Best Practices): 20 / 23 / 25 / 25

Total Weighted Score (People Best Practices) = 40% of Total Score: 8 / 9.2 / 10 / 10


J.2 Practitioner Validation Results: Organizational Environment

Organizational Environment Best Practices (scores: Practitioner#1 / Practitioner#2 / Practitioner#3 / Maximum Possible)

1. The management at all levels demonstrates strong and determined leadership and commitment to shared vision, mission, values, business strategy, goals, objectives, quality policy, and quality culture. (4 / 4 / 4 / 5)
2. The management at all levels respects, trusts, and empowers people across the organization and provides meaningful oversight. (4 / 4 / 5 / 5)
3. The organization attracts, recruits, develops, motivates, organizes, excites, and retains competent staff at all levels. (3 / 5 / 4 / 5)
4. The top management leads the strategic change management and provides the necessary infrastructure (People, Technology, Processes) to support diffusion and adoption of best practices across the organization. (3 / 4 / 3 / 5)
5. The organization provides work environment that is safe, secure, stress free, and contributes to productivity, work-life balance, quality of work-life, and employee health and well-being. (4 / 4 / 4 / 5)
6. The organization provides appropriate compensation, career development paths, rewards, and recognition system based on contribution and value. (4 / 5 / 3 / 5)

Total Score (Organizational Environment Best Practices): 22 / 26 / 23 / 30

Total Weighted Score (Organizational Environment Best Practices) = 30% of Total Score: 6.6 / 7.8 / 6.9 / 9


J.3 Practitioner Validation Results: Process

Process Best Practices (scores: Practitioner#1 / Practitioner#2 / Practitioner#3 / Maximum Possible)

1. Formal and well-defined software engineering (requirements, architecture, design, development, and maintenance) processes are consistently applied across the organization and tailored as appropriate. (3 / 2 / 4 / 5)
2. Formal and well-defined software quality control engineering (verification, reviews, inspections, validation, testing, regression testing, and audit) processes are consistently applied across the organization and tailored as appropriate. (3 / 2 / 5 / 5)
3. Formal and well-defined software quality assurance engineering (process definition, process assessment, process improvement, process deployment, process quality assurance, product quality assurance, and metrics) processes are consistently applied across the organization and tailored as appropriate. (3 / 2 / 4 / 5)
4. Formal and well-defined software configuration management, product integration, software deployment, and software release management processes are consistently applied across the organization and tailored as appropriate. (3 / 4 / 4 / 5)
5. Formal and well-defined fact driven decision making process is consistently applied across the organization and tailored as appropriate. (2 / 3 / 4 / 5)
6. Formal and well-defined problem management and root cause analysis process is consistently applied across the organization and tailored as appropriate. (3 / 4 / 5 / 5)
7. Formal and well-defined human capital management, workforce planning, competency development, and knowledge management processes are consistently applied across the organization and tailored as appropriate. (3 / 4 / 4 / 5)
8. Formal and well-defined engineering management (project management, product management, program management, portfolio management, supplier management, and risk management) processes are consistently applied across the organization and tailored as appropriate. (3 / 3 / 4 / 5)

Total Score (Process Best Practices): 23 / 24 / 34 / 40

Total Weighted Score (Process Best Practices) = 15% of Total Score: 3.45 / 3.6 / 5.1 / 6


J.4 Practitioner Validation Results: Technology

Technology Best Practices (scores: Practitioner#1 / Practitioner#2 / Practitioner#3 / Maximum Possible)

1. Well defined engineering standards for requirements, architecture, design, code, and testing are consistently used across the organization. (3 / 2 / 4 / 5)
2. Appropriate tools for requirements, architecture, design, code, and testing are consistently used across the organization. (3 / 2 / 4 / 5)
3. Continuous integration and continuous delivery practices are applied through behavior driven development, test driven development, automated builds, deployment, and build verification across the organization. (4 / 5 / 4 / 5)
4. Reusability, architectural patterns, design patterns, and refactoring are incorporated across the organization. (4 / 4 / 3 / 5)
5. Appropriate static analysis tools are used across the organization. (4 / 4 / 4 / 5)
6. Appropriate tools are used for knowledge management, project management, program management, portfolio management, quality management, risk management, and product management across the organization. (4 / 2 / 3 / 5)
7. Appropriate artificial intelligence based software analytics tools are used across the organization. (4 / 1 / 5 / 5)

Total Score (Technology Best Practices): 26 / 20 / 27 / 35

Total Weighted Score (Technology Best Practices) = 15% of Total Score: 3.9 / 3 / 4.05 / 5.25